date | nb_tokens | text_size | content |
|---|---|---|---|
2022/01/20 | 401 | 1,562 | <issue_start>username_0: I am running a model with fixed hyperparameters. To my surprise/shock, the model converged extremely fast with the least loss possible.
I want to know the causes of this phenomenon. I have the following guesses:
1. Underlying mapping is so simple.
2. Hyperparameters are apt.
3. Both.
Are there any other reasons for this phenomenon?<issue_comment>username_1: There might be several reasons for that:
* the data is easily understood by the model you are using
* the model you use is well suited to the problem
* the problem complexity is low
There are a lot more reasons to explain the convergence of an algorithm.
Upvotes: 0 <issue_comment>username_2: If a NN converges in a few steps to the absolute minimum of the loss function, it means the loss function has a very regular gradient (in the domain defined by the inputs), pointing toward the absolute minimum.
Upvotes: 0 <issue_comment>username_3: Send us your loss function plot over epochs (or steps); this will help us give better guidance (use a log scale for the loss axis). Sending more details of your learning process may help too.
But in this situation, I think you should decrease the learning rate and use a learning-rate decay method. This method helps you see a stepwise decrease of the loss toward the best losses. You can see more details of this method in this [link](https://medium.com/analytics-vidhya/learning-rate-decay-and-methods-in-deep-learning-2cee564f910b#:%7E:text=Learning%20rate%20decay%20is%20a,help%20both%20optimization%20and%20generalization.).
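To make the learning-rate-decay suggestion concrete, here is a minimal sketch on a toy convex problem (the schedule constants and the quadratic loss are illustrative, not taken from any particular framework):

```python
import numpy as np

def decayed_lr(lr0, step, decay_rate=0.5, decay_steps=100):
    # Exponential decay: the rate halves every `decay_steps` steps
    return lr0 * decay_rate ** (step / decay_steps)

# Gradient descent on the toy loss f(w) = w^2 with a decaying step size
w = 5.0
losses = []
for step in range(300):
    grad = 2 * w
    w -= decayed_lr(0.1, step) * grad
    losses.append(w ** 2)

print(losses[0], losses[-1])  # the loss decreases at every step on this convex toy
```

Plotting `losses` with a log-scale y axis makes the stepwise flattening described above easy to see.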
Upvotes: 1 |
2022/01/21 | 1,144 | 4,630 | <issue_start>username_0: I am reading *[Reinforcement Learning: An Introduction](http://incompleteideas.net/book/RLbook2020.pdf)* by Sutton & Barto. According to this textbook, as far as I understood, the authors claim that the policy and value iteration methods converge to an optimal stationary point. Actually, I now understand the procedure of these two iterative algorithms, but I can't accept why they converge to an optimal point.
In the textbook and many posts that I found by googling, many people say that "The value functions are monotonically increased as the iteration progresses. Thus, it will go to the optimal policy, as well as optimal value functions."
I strongly agree that "only if the algorithm's performance is monotonically improved and there exists an upper bound in terms of performance, the algorithm will converge to a stationary point." However, I cannot accept the word "Optimal." I think, to claim an algorithm converges to an optimal stationary point, we need to show not only *its monotonic improving property* but also "its locally non-stopping property." (Sorry, I made these words myself, but I believe you experts can understand what I mean.)
I believe that there must be some points that I was not able to understand. Can someone let me know why the policy and value iteration methods converge to an "OPTIMAL" solution?
P.S. Is either the policy or the value iteration method an optimal algorithm only if the system can be represented as a Markov decision process?<issue_comment>username_1: These two algorithms converge to the optimal value function because
1. they are instances of [generalized policy iteration](https://ai.stackexchange.com/a/20624/2444), so they iteratively perform one **policy evaluation (PE)** step followed by a **policy improvement (PI)** step
2. the PE step is an iterative/numerical implementation of the **Bellman expectation operator (BEO)** (i.e. it's a numerical algorithm equivalent to solving a system of equations); [here](https://ai.stackexchange.com/a/11133/2444) you have an explanation of what the Bellman operator is
3. the BEO is a contraction (proof [here](https://ai.stackexchange.com/a/22970/2444)), so the iterative application of the BEO makes the approximate value function closer to the optimal one, which is **unique**, i.e. PE converges to the optimal value function of the current policy (proof [here](https://ai.stackexchange.com/a/20327/2444))
4. Policy improvement is guaranteed to generate a policy that is better than the one in the previous iteration, unless the policy in the previous iteration was already optimal (see the **policy improvement theorem** in [section 4.2 of the RL bible](http://incompleteideas.net/book/RLbook2020.pdf#page=98))
One thing that may confuse you is that you don't exactly know or have in mind the definition of the value function. A value function $v\_\pi(s)$ is defined as the expected return that you will get starting in state $s$, then following policy $\pi$. So, if you have some policy $\pi$ and you perform PE until convergence, then you know that this value function is the optimal value function for $\pi$. Now, if $\pi\_{t+1}$ is guaranteed to be a strict improvement over $\pi\_{t}$, then it basically means that you will get more rewards with $\pi\_{t+1}$ (which is the goal).
If you read the linked proofs and chapter 4 of the bible, then you should understand why these algorithms converge.
To address your last point, yes, we assume that we have an MDP. That's an assumption that most famous DP and RL algorithms make.
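As a small numerical illustration of these guarantees (the two-state MDP below is made up for the example), value iteration started from zero increases monotonically and reaches the optimal values, which are known in closed form here:

```python
import numpy as np

gamma = 0.9
# transitions[s][a] = (next_state, reward) for a tiny deterministic MDP
transitions = {
    0: {0: (0, 0.0), 1: (1, 1.0)},
    1: {0: (1, 2.0), 1: (0, 0.0)},
}

V = np.zeros(2)
history = []
for _ in range(200):  # repeated application of the Bellman optimality operator
    V = np.array([
        max(r + gamma * V[s2] for (s2, r) in transitions[s].values())
        for s in (0, 1)
    ])
    history.append(V.copy())

# Closed form: V*(1) = 2 / (1 - gamma) = 20, and V*(0) = 1 + gamma * V*(1) = 19
print(V)  # ~ [19. 20.]
```

Each sweep of `history` dominates the previous one, which is the monotone-improvement property discussed above, and the limit is the unique fixed point.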
Upvotes: 3 [selected_answer]<issue_comment>username_2: Based on the fantastic answer by @username_1, let me add a small clue that may help in understanding policy iteration's convergence to optimality.
So Danny you said:
>
> However, I cannot accept the word "Optimal." I think, to claim an algorithm converges to an optimal stationary point, we need to show not only its monotonic improving property but also "its locally non-stopping property."
>
>
>
I think you actually worry that it may miss some possible cases, including the optimal one, in the process of iteration. However, it won't. Here is an excerpt from page 75 of *Reinforcement Learning: An Introduction* written by <NAME>:
>
> All the updates done in DP algorithms are called expected updates because they are based on an expectation over all possible next states rather than on a sample next state.
>
>
>
So the updates are based on all possible next states rather than on a sample next state, which means no possible cases are missed in the iteration.
Upvotes: 0 |
2022/01/22 | 2,110 | 7,149 | <issue_start>username_0: I'm learning about more advanced methods of hyperparameter optimization, such as the Bayesian methods in the `scikit-optimize` package. For those unfamiliar with the package, it can be used easily with model classes from `scikit-learn`, in this case the random forest classes such as `RandomForestClassifier`, and it provides more intelligent alternatives to traditional hyperparameter optimization methods like grid search.
I noticed that in some examples, the `n_estimators` hyperparameter (of the random forest) is included in the optimization, which I wouldn't expect. The `n_estimators` hyperparameter determines the number of component decision trees in the random forest, so I would expect that more estimators always results in a better model with respect to a single target variable (for clarity, I'm not referring to anything having to do with optimizing a custom objective function in `scikit-optimize`, only single variables).
Ignoring practical issues like training time as well as the potential effects of randomness (i.e., that different random seeds could lead to models with varying effectiveness), are there situations where fewer estimators could result in a more accurate model? If so, what is the rationale?<issue_comment>username_1: I would say that, in general, more estimators are better.
RandomForest fits a lot of estimators - decision trees that take a subset of the data (obtained by sampling with replacement) and a subset of the features (by default `sqrt(n_features)` in sklearn).
Each of these estimators is noisy and prone to overfitting, producing a complicated decision surface.
But when you take sufficiently many of them, noisy artifacts, produced by individual estimators, are smoothed and you can get pretty accurate classifier or regressor.
It can be the case that some added estimators are too noisy and worsen the ensemble, but overall, quality is expected to improve. At some point, the number of estimators becomes sufficient, and adding more won't change the result a lot.
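This plateau is easy to see in a quick sketch (the synthetic dataset, seeds, and tree counts are arbitrary illustrative choices):

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Synthetic classification task: compare test accuracy as trees are added
X, y = make_classification(n_samples=600, n_features=20, n_informative=5,
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

scores = {}
for n in (1, 10, 100):
    clf = RandomForestClassifier(n_estimators=n, random_state=0).fit(X_tr, y_tr)
    scores[n] = clf.score(X_te, y_te)

print(scores)  # accuracy typically rises with n_estimators, then flattens out
```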
Upvotes: 1 <issue_comment>username_2: For Random Forests in particular, you will find a nice collection of references regarding research on the choice of the number of trees in [BiauScornet2015](https://arxiv.org/pdf/1511.05741.pdf), section 2.4.
Further, a Random Forest can be considered an ensemble of decision trees. There is a bunch of literature on what, in fact, makes an ensemble effective. As far as I could gather, the gist of it seems to be that we need the right kind of "diversity" in the ensemble. *Diversity* is the keyword that seems to be used in the literature.
Coming from the practical side, you may find some research that investigates the relationship between ad-hoc/intuitive diversity measures (see also Zhou2012 sec 5.3) and ensemble performance for the classification task. The results: "it's complicated". [[1](https://dl.acm.org/doi/abs/10.1016/j.ijar.2011.12.011), [2](https://link.springer.com/article/10.1023/A:1022859003006), [3](https://scialert.net/fulltext/?doi=jse.2017.60.65)]
However, from the theoretical side, I feel like there is plenty of motivation for this notion. A very good read is "Zhou2012: Ensemble Methods: Foundations and Algorithms". Let me quote two points from there in a condensed manner.
Error-Ambiguity Decomposition
=============================
Assume that the task is to use an ensemble of $T$ individual learners $h\_1, \ldots, h\_T$ to approximate a function $f: R^d \mapsto R$, and the final prediction of the ensemble is obtained through weighted averaging (4.9), i.e.,
$$
H(\boldsymbol{x})=\sum\_{i=1}^T w\_i h\_i(\boldsymbol{x})
$$
where $w\_i$ is the weight for the learner $h\_i$, and the weights are constrained by $w\_i \geq 0$ and $\sum\_{i=1}^T w\_i=1$
Let the errors of an individual learner (submodel) and the error of the ensemble be, resp.
$$
\begin{aligned}
& \operatorname{err}\left(h\_i \mid \boldsymbol{x}\right)=\left(f(\boldsymbol{x})-h\_i(\boldsymbol{x})\right)^2 \\
& \operatorname{err}(H \mid \boldsymbol{x})=(f(\boldsymbol{x})-H(\boldsymbol{x}))^2
\end{aligned}
$$
Likewise, we can define the *ambiguity*, a measure of disagreement among individual learners on instance (point / sample) $x$:
$$
\operatorname{ambi}\left(h\_i \mid \boldsymbol{x}\right)=\left(h\_i(\boldsymbol{x})-H(\boldsymbol{x})\right)^2,
$$
The generalization error and the ambiguity of the individual learner $h\_i$ can be written respectively as
$$
\begin{aligned}
\operatorname{err}\left(h\_i\right) & =\int \operatorname{err}\left(h\_i \mid \boldsymbol{x}\right) p(\boldsymbol{x}) d \boldsymbol{x}, \\
\operatorname{ambi}\left(h\_i\right) & =\int \operatorname{ambi}\left(h\_i \mid \boldsymbol{x}\right) p(\boldsymbol{x}) d \boldsymbol{x} .
\end{aligned}
$$
The generalization error of the ensemble can be written as
$$
\operatorname{err}(H)=\int \operatorname{err}(H \mid \boldsymbol{x}) p(\boldsymbol{x}) d \boldsymbol{x}
$$
Based on the above notations, we can get the error-ambiguity decomposition [Krogh and Vedelsby, 1995]
$$
\operatorname{err}(H)=\overline{\operatorname{err}}(h)-\overline{\operatorname{ambi}}(h),
$$
where $\overline{\operatorname{err}}(h)=\sum\_{i=1}^T w\_i \cdot \operatorname{err}\left(h\_i\right)$ is the weighted average of individual generalization errors, and $\overline{\operatorname{ambi}}(h)=\sum\_{i=1}^T w\_i \cdot \operatorname{ambi}\left(h\_i\right)$ is the weighted average of ambiguities that is also referred to as the ensemble ambiguity.
This shows that the more accurate and the more diverse the individual learners, the better the ensemble.
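The decomposition can be checked numerically on a toy ensemble. In the sketch below (the target function, noise level, and weights are arbitrary), the ensemble error equals the weighted average error minus the weighted average ambiguity, exactly as the identity requires:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=500)                       # sample points standing in for p(x)
f = np.sin(x)                                  # target f(x)
h = [f + rng.normal(0, 0.3, size=x.shape) for _ in range(5)]  # 5 noisy learners
w = np.array([0.1, 0.2, 0.3, 0.25, 0.15])      # non-negative weights summing to 1

H = sum(wi * hi for wi, hi in zip(w, h))       # weighted-average ensemble
err_H = np.mean((f - H) ** 2)
mean_err = sum(wi * np.mean((f - hi) ** 2) for wi, hi in zip(w, h))
mean_ambi = sum(wi * np.mean((hi - H) ** 2) for wi, hi in zip(w, h))

print(bool(np.isclose(err_H, mean_err - mean_ambi)))  # True
```

The identity holds pointwise because the cross term vanishes when the weights sum to one, so the sample averages match up to floating-point error.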
Bias-Variance-Covariance Decomposition
======================================
The bias-variance-covariance decomposition of squared error of ensemble is
$$
\operatorname{err}(H)=\overline{\operatorname{bias}}(H)^2+\frac{1}{T} \overline{\operatorname{variance}}(H)+\left(1-\frac{1}{T}\right) \overline{\operatorname{covariance}}(H) .
$$
where
\begin{aligned}
& \overline{\operatorname{bias}}(H)=\frac{1}{T} \sum\_{i=1}^T\left(\mathbb{E}\left[h\_i\right]-f\right) \\
& \overline{\operatorname{variance}}(H)=\frac{1}{T} \sum\_{i=1}^T \mathbb{E}\left(h\_i-\mathbb{E}\left[h\_i\right]\right)^2, \\
& \overline{\operatorname{covariance}}(H)=\frac{1}{T(T-1)} \sum\_{i=1}^T \sum\_{j \neq i} \mathbb{E}\left[\left(h\_i-\mathbb{E}\left[h\_i\right]\right)\left(h\_j-\mathbb{E}\left[h\_j\right]\right)\right]
\end{aligned}
The smaller the covariance, the better the ensemble. It is obvious that if all the learners make similar errors, the covariance will be large, and therefore it is preferred that the individual learners make different errors. Thus, through the covariance term, (5.18) shows that the diversity is important for ensemble performance. Notice that the bias and variance terms are constrained to be positive, while the covariance term can be negative.
See Also
========
* <https://machinelearningmastery.com/ensemble-diversity-for-machine-learning/>
* <https://stats.stackexchange.com/questions/328798/defining-diversity-in-ensemble-learning>
* <https://stats.stackexchange.com/questions/576777/diversity-between-classifiers-in-ensemble-learning/576827#576827>
Upvotes: 0 |
2022/01/22 | 439 | 1,934 | <issue_start>username_0: I am specifically interested in data2vec, Meta's new model that can convert image, text, and sound data into a unified neural network representation. To my understanding, they did this through self-supervised learning by masking parts of the input and having the network predict the hidden states if the input hadn't been masked. This allows these modes to share a common representation.
However, I don't understand how the representations of different modes can be connected. For example, how are the hidden state representations of an image of a banana and the word banana trained to be similar, if they are at all?<issue_comment>username_1: It's not multimodal.
It's trying to be a standard way of training a model.
But it is not supposed to work across modalities.
Upvotes: -1 <issue_comment>username_2: I agree with the answer by @username_1. The corresponding section that clarifies this in the paper can be found on page three:
>
> **Multimodal pre-training.** [...] Our work does not perform multimodal training but aims to unify the learning objective for self-supervised
> learning in different modalities. We hope that this will enable better multimodal representations in the future.
> [[ArXiv]](https://arxiv.org/abs/2202.03555)
>
>
>
So what the authors contribute is a framework for learning rich data representations, given any type of data. The framework is the training pipeline that uses the same network in a teacher and student setup. The network first generates data representations from unmasked data (teacher mode) and then does the same with a masked version of the data (student mode). The training objective is minimizing the error of all latent representations between the network in student mode and in teacher mode. If you were to apply the approach, you would have to choose a model suitable for your data (e.g. the authors use a Vision Transformer for images).
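As a very rough sketch of that pipeline (a linear map stands in for the real network, and all sizes, rates, and the mask ratio are made-up numbers): the student predicts the teacher's representation of the full input from a masked input, while the teacher's weights track the student's via an exponential moving average (EMA):

```python
import numpy as np

rng = np.random.default_rng(0)
d_in, d_rep = 16, 8
W_student = rng.normal(scale=0.1, size=(d_rep, d_in))
W_teacher = rng.normal(scale=0.1, size=(d_rep, d_in))
tau, lr = 0.99, 0.1

losses = []
for step in range(800):
    x = rng.normal(size=d_in)
    mask = rng.random(d_in) > 0.4            # hide ~40% of the input
    target = W_teacher @ x                    # teacher mode: full input, no gradient
    pred = W_student @ (x * mask)             # student mode: masked input
    err = pred - target
    losses.append(np.mean(err ** 2))
    W_student -= lr * (2 / d_rep) * np.outer(err, x * mask)   # student gradient step
    W_teacher = tau * W_teacher + (1 - tau) * W_student       # teacher EMA update

print(float(np.mean(losses[:100])), float(np.mean(losses[-100:])))  # early vs late loss
```

In this toy version the student's representation loss shrinks as it learns to match the slowly moving teacher, which is the shape of the objective described above, not the paper's actual architecture.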
Upvotes: 0 |
2022/01/24 | 1,389 | 5,039 | <issue_start>username_0: I have heard the following argument being made regarding Neural Networks:
* A Neural Network is a composition of several Activation Functions
* Sigmoid Activation Functions are Non-Convex Functions
* The composition of Non-Convex Functions can produce a Non-Convex Function
* Thus, Loss Functions for Neural Networks that contain several Sigmoid Activation Functions can be Non-Convex
Using the R programming language, I plotted the second derivative of the Sigmoid Function and we can see that it fails the Convexity Test (i.e. the second derivative can take both positive and negative values):
```
# exp() instead of a truncated constant; plot against x so the sign change at 0 is visible
eq = function(x) { (-exp(-x)) * (1 + exp(-x))^-2 + exp(-x) * (-2 * (1 + exp(-x))^-3 * (-exp(-x))) }
x = seq(-10, 10, by = 0.1)
plot(x, eq(x), type = 'l', main = "Plot of Second Derivative of the Sigmoid Function")
```
[](https://i.stack.imgur.com/LGeum.png)
**My Question:** (If the above argument is in fact true) Can the same argument be extended to lack of Convexity of Loss Functions of Neural Networks containing several "RELU Activation Functions" ?
* On its own, the ReLU function is said to be Convex.
* Mathematically, we can show that compositions of Convex Functions can only produce a Convex Function.
However, Neural Networks that contain compositions of (only) ReLU Activation functions make it **unclear to me how a Loss Function that contains (only) "RELU Activation Functions" would be Non-Convex.**
[](https://i.stack.imgur.com/0U2jb.png)
Can someone please comment on this? **If compositions of Convex Functions can only produce Convex Functions - does this mean that the Loss Function of a Neural Network containing only containing ReLU Activation Functions can never be Non-Convex?**
Thanks!
* **References:**
<https://ml-cheatsheet.readthedocs.io/en/latest/activation_functions.html>
**Note:** Using some informal logic, I do not think that the Loss Functions of Neural Networks containing RELU Activation Functions are generally Convex. This is because RELU (style) Activation Functions are generally some of the most common types of activation functions being used - yet the same difficulties concerning non-convex optimization still remain. Thus, I would like to think that Neural Networks with RELU Activation Functions are still generally non-convex.<issue_comment>username_1: You're missing a couple of quite important concepts:
* [Universal approximation theorem](https://en.wikipedia.org/wiki/Universal_approximation_theorem): with enough parameters a neural network can approximate any function.
* Basically every loss function is non convex. (There is this little problem in machine learning called local minima about which we like to complain a lot :) )
But no need to trust me, just run a simple experiment and try yourself to approximate a non convex function, like **$sin(x)$** with relu:
```
from sklearn.neural_network import MLPRegressor
import numpy as np
import matplotlib.pyplot as plt
f = lambda x: [[x_] for x_ in x]
noise_level = 0.1
X_train_ = np.arange(0, 10, 0.2)
real_sin = np.sin(X_train_)
y_train = real_sin + np.random.normal(0, noise_level, len(X_train_))
nodes = 1000
layers = 4  # use the variable so the architecture below matches it
regr = MLPRegressor(hidden_layer_sizes=tuple([nodes] * layers), activation="relu").fit(f(X_train_), y_train)
predicted_sin = regr.predict(f(X_train_))
plt.plot(X_train_, real_sin, label="sin target")
plt.plot(X_train_, predicted_sin, label="sin predicted")
plt.legend()
plt.show()
```
You'll see it's not a task too hard to learn:
[](https://i.stack.imgur.com/5iVv7.png)
PS: of course this is just a toy example, and if you decrease the number of layers and hidden units the results will become crap, but it still shows that the activation surely affects, but does not constrain, the non-linearity of the final function learned by a neural network.
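A concrete counterexample makes the convexity point sharper: a fixed-weight two-layer "network" built only from ReLUs computes a hat-shaped, non-convex function, precisely because a negative second-layer weight breaks the convex-and-non-decreasing condition for compositions (the weights here are hand-picked for illustration):

```python
import numpy as np

relu = lambda z: np.maximum(z, 0.0)

# Two-layer ReLU net with fixed weights: f(x) = relu(x) - 2*relu(x - 1) + relu(x - 2).
# Each unit is convex, but the negative output weight (-2) makes f a "hat" function.
f = lambda x: relu(x) - 2 * relu(x - 1) + relu(x - 2)

# Convexity would require f(midpoint) <= average of the endpoint values.
a, b = -1.0, 3.0
mid = (a + b) / 2
print(f(a), f(b), f(mid))                  # 0.0 0.0 1.0
print(bool(f(mid) <= (f(a) + f(b)) / 2))   # False: convexity is violated
```

Since f(1) = 1 lies above the chord joining f(-1) = 0 and f(3) = 0, this function (and hence a loss built on it) cannot be convex.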
Upvotes: 2 <issue_comment>username_2: I think you are asking about Fully Input Convex Neural Networks as proposed in [1].
ReLU is in fact a convex function, and the sum of convex functions can only produce convex functions. However, unlike you said, the composition of convex functions can produce non-convex functions, unless the outer functions are non-decreasing.
With FICNNs you can only learn convex functions. For that, all weights $W$ must be non-negative and the activation functions $g$ convex and non-decreasing.
From [1] the interesting part is:
>
> The function f is convex in y provided that
> all $W\_i(z)\_{1:k}$ are non-negative, and all functions $g\_i$ are convex and non-decreasing.
> The proof is simple and follows from the fact that non-negative sums of convex functions are also convex and that
> the composition of a convex and convex non-decreasing
> function is also convex (see e.g. Boyd & Vandenberghe
> (2004, 3.2.4)).
>
>
>
[1] Amos, Brandon, <NAME>, and <NAME>. "Input convex neural networks." International Conference on Machine Learning. PMLR, 2017.
Upvotes: 1 |
2022/01/24 | 1,395 | 6,129 | <issue_start>username_0: As I understand, this is the general summary of the Regularization-Overfitting Problem:
* The classical "Bias-Variance Tradeoff" suggests that complicated models (i.e. models with more parameters, e.g. neural networks with many layers/weights) are able to well capture complicated patterns in data (i.e. low bias) but are unable to generalize well to unseen data (i.e. high variance). On the other hand, simpler models are able to generalize better to unseen data (i.e. low variance), but unable to capture complex patterns in data (i.e. high bias).
* Regularization tries to navigate this compromise by attempting to improve the ability of complicated models to generalize to unseen data. Regularization does this by making "complex models simpler", by strategically reducing the number of parameters in complex models such that they maintain their ability to capture complexity in the data but also generalize to unseen data.
* Regularization does this by bringing some of the model parameters towards 0 (L1 Regularization) or by bringing many of the model parameters somewhat towards 0 (L2 Regularization). This "shrinkage" effectively negates the influence of some of the parameters in complex models - and as a result, regularized models tend to have "sparser" solutions (i.e. contain more model parameters with values closer to 0).
Regarding this, I am still not sure if the mathematics behind why sparser models might result in less overfitting is clearly known.
The way I currently see things, Regularization seems to be more of a general heuristic: countless evidence shows that models overfit less when you add a "regularization penalty term" to the model's Loss Function - and thus deliberately choose model parameters corresponding to a region of the Loss Function that is situated away from the true minimum point. Mathematically, I can understand how this happens.
**But are there any mathematical justifications that suggest a sparser model based on a regularized solution is less likely to overfit data compared to a non-regularized solution - or is this still based on heuristics and anecdotal evidence? Do we have any insights as to how the Mathematics of Regularization acts to prevent Overfitting?**<issue_comment>username_1: I think different mathematical explanations exist for different situations where regularization is useful. The importance of regularization varies by problem as well. It is absolutely necessary when $p>>n$ as I'll mention below. In general it is a way to impose reasonable priors on the model though from a bayesian perspective.
I'm going to put together a quick answer that I hope is somewhat satisfactory. I don't think it is exactly what you are going after though. At a high level I would recommend skimming [Hastie (2001)](https://hastie.su.domains/ElemStatLearn/printings/ESLII_print12_toc.pdf), especially section 16.2.2 titled *The "Bet on Sparsity" Principle*. You can think of sparsity in a bunch of different ways in addition to the number of zero weights in a linear model.
>
> $L\_1$ penalty is better suited to sparse situations, where there are few basis functions with nonzero coefficients (among all possible choices).
>
>
>
I think the key here is that sparsity could exist in some basis, not necessarily your model weight basis.
Another even more targeted mathematics heavy book would include [Statistical Learning with Sparsity](https://hastie.su.domains/StatLearnSparsity_files/SLS.pdf).
**Solution identifiability**
For example, in the case where your parameter space is much larger than your number of samples ($p >> n$), you have an identifiability problem. Infinitely many solutions exist, so picking one with a small total weight of parameters is just as justified as any other, and perhaps more plausible in most situations on aesthetic grounds. Without regularization in this setting, you would have instability issues where different, equally good solutions could be chosen, perhaps based on random initial conditions.
**Domain specific knowledge** In many cases, you wouldn't expect all of your parameters to be meaningful. Enforcing sparsity will mathematically limit the solution space to one with more zeros, as you point out, or in general to a solution space with fewer underlying basis functions involved in the data generation. In many domains there is a smallish number of factors causing a large number of observed variables to change, so regularization imposes that kind of constraint on your model. Since the remainder of the variables are not real, or would otherwise make use of too many basis functions to represent your task, you are helping the model out by providing this useful piece of information. There are many extensions on this. For example, if you know that your features are spatially correlated you could add in a fused lasso penalty, etc. The rationale here mathematically is probably something along the lines of: including more noise terms in your solution results in a lower likelihood of generalizing.
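A quick sketch of the identifiability point (sizes, seed, and penalty strength are arbitrary): with $p >> n$ and a sparse ground truth, an L1-penalized fit returns a sparse solution, while plain least squares picks one of the infinitely many dense ones:

```python
import numpy as np
from sklearn.linear_model import Lasso, LinearRegression

rng = np.random.default_rng(0)
n, p, k = 50, 200, 3                        # p >> n, true model uses only k features
X = rng.normal(size=(n, p))
beta = np.zeros(p)
beta[:k] = [3.0, -2.0, 1.5]                 # sparse ground truth
y = X @ beta + rng.normal(0, 0.1, size=n)

lasso = Lasso(alpha=0.1).fit(X, y)          # L1-regularized fit
ols = LinearRegression().fit(X, y)          # unregularized, under-determined fit

nnz_lasso = int(np.sum(np.abs(lasso.coef_) > 1e-6))
nnz_ols = int(np.sum(np.abs(ols.coef_) > 1e-6))
print(nnz_lasso, nnz_ols)                   # lasso keeps few features; OLS uses ~all
```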
Upvotes: 3 [selected_answer]<issue_comment>username_2: To my knowledge, this is well understood in the setting with a true (sparse) linear model.
In high dimensional regimes and when the true model is sparse, a regularized solution is less likely to overfit the data because with high probability, we will obtain the true support of our model. This follows from the equivalence of the L1 solution to the L0 solution, when the restricted nullspace property (RNSP) holds. A sufficient condition for RNSP is the restricted isometry property (RIP). Under a subgaussian analysis, one can see RIP is satisfied with high probability, meaning that solving the lasso obtains our true support. Analysis gets more complex in a noisy setting, but intuition can be built from the noiseless case. Low l2 error of coefficients follows with high probability as well.
See [https://www.amazon.com/Statistical-Learning-Sparsity-Generalizations-Probability/dp/1498712169](https://rads.stackoverflow.com/amzn/click/com/1498712169) for more information.
Upvotes: 1 |
2022/01/25 | 1,352 | 3,997 | <issue_start>username_0: As discussed in [this question](https://ai.stackexchange.com/q/7680/2444), the policy gradient algorithms given in [Reinforcement Learning: An Introduction](http://incompleteideas.net/book/bookdraft2017nov5.pdf) use the gradient
\begin{align\*}
\gamma^t \hat A\_t \nabla\_{\theta} \log \pi(a\_t \, | \, s\_t, \theta)
\end{align\*}
where $\hat A\_t$ is the advantage estimate for step $t$. For example, $\hat A\_t = r\_t + \gamma V(s\_{t+1}) - V(s\_t)$ in the one-step actor-critic algorithm given in section 13.5.
In the answers to the linked question, it is claimed that the extra discounting is "correct", which implies that it should be included.
If I look in the literature to a seminal paper such as [Proximal Policy Optimization Algorithms](https://arxiv.org/abs/1707.06347) by OpenAI, they do not include the extra discounting factor, i.e. they use a gradient defined as
\begin{align\*}
\hat A\_t \dfrac{\nabla\_{\theta}\pi(a\_t \, | \, s\_t, \theta)}{\pi(a\_t \, | \,s\_t, \theta\_{\rm old})}
\end{align\*}
which does not include the discounting factor (of course, it's dealing with the off-policy case, but I don't see how that would make a difference in terms of the discounting). [OpenAI's implementation of PPO](https://github.com/openai/baselines/tree/master/baselines/ppo2) also does not include the extra discounting factor.
So, how am I supposed to interpret this discrepancy? I agree that the extra discounting factor should be present, from a theoretical standpoint. Then, why is it not in the OpenAI code or paper?<issue_comment>username_1: I believe you will find the answer in the paper [High-Dimensional Continuous Control Using Generalized Advantage Estimation](https://arxiv.org/abs/1506.02438), which is the basis for the advantage function used in the PPO paper that you referenced.
From the paper, the estimate of the advantage function is defined as:
\begin{align\*}
\hat{A}\_{t}^{GAE(\gamma,\lambda)} = \sum\_{l=0}^{\infty}(\gamma\lambda)^{l}\delta\_{t+l}^{V}
\end{align\*}
where $\delta\_{t}^{V}$, the TD residual of $V$, is defined as:
\begin{align\*}
\delta\_{t}^{V} = r\_{t}+\gamma V(s\_{t+1})-V(s\_{t})
\end{align\*}
where $V$ is an approximate of the value function.
If you look closely at these two equations you will see that the discount $\gamma$ is applied twice.
I never went through the code of the whole OpenAI implementation of PPO, but if I am not mistaken the implementation of the above equations can be found [here in ppo2/runner.py](https://github.com/openai/baselines/blob/ea25b9e8b234e6ee1bca43083f8f3cf974143998/baselines/ppo2/runner.py#L63-L64).
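For concreteness, the two equations above are usually computed with a single backward recursion over the trajectory; here is a hedged sketch of that computation (the trajectory numbers are made up, and this mirrors rather than copies the baselines code):

```python
import numpy as np

def gae(rewards, values, gamma=0.99, lam=0.95):
    # values has length len(rewards) + 1: it includes a bootstrap value
    # for the state reached after the last reward
    T = len(rewards)
    adv = np.zeros(T)
    running = 0.0
    for t in reversed(range(T)):
        delta = rewards[t] + gamma * values[t + 1] - values[t]  # TD residual
        running = delta + gamma * lam * running  # gamma appears here a second time
        adv[t] = running
    return adv

rewards = np.array([1.0, 0.0, 1.0])
values = np.array([0.5, 0.4, 0.3, 0.2])
print(gae(rewards, values))
```

The `gamma * values[t + 1]` inside the residual and the `gamma * lam` in the recursion are the two places the discount shows up.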
Upvotes: -1 <issue_comment>username_2: If you want to maximize the expected reward
\begin{align\*}
\mathbb{E}\bigg[\sum\_{t=1}^nr\_t \bigg]
\end{align\*}
and are using a score function based gradient estimator (as opposed to a SAC/DDPG style update), you have the unbiased gradient estimator
\begin{align\*}
\sum\_{i=1}^n \sum\_{k=i}^n r\_k\nabla\_{\theta}\log\pi(a\_i) \tag{1}
\end{align\*}
Then, you can add discounting as a variance reduction technique; the gradient estimator
\begin{align\*}
\sum\_{i=1}^n \sum\_{k=i}^n \gamma^{k-i}r\_k\nabla\_{\theta}\log\pi(a\_i) \tag{2}
\end{align\*}
will have a lower variance than Eq. (1) (see [this answer](https://stats.stackexchange.com/a/428157/257864)).
If you want to maximize the discounted expected reward
\begin{align\*}
\mathbb{E}\bigg[\sum\_{t=1}^n \gamma^{t-1}r\_t \bigg]
\end{align\*}
you get the unbiased gradient estimator
\begin{align\*}
\sum\_{i=1}^n \sum\_{k=i}^n \gamma^{k-1}r\_k\nabla\_{\theta}\log\pi(a\_i) \tag{3}
\end{align\*}
So in Sutton and Barto they are essentially presenting the formulation Eq. (3). The difference between (2) and (3) is the factor of $\gamma^{i-1}$ which is what I was confused about in the question.
Thus, in summary, the formulation (2) is a biased estimator of the expected reward, whereas (3) is an unbiased estimator of the expected discounted reward.
Upvotes: 0 |
2022/01/27 | 1,132 | 4,360 | <issue_start>username_0: Let me explain, suppose we are building a neural network that predicts if two items are similar or not. This is a classification task with hard labels (0, 1) of examples of similar and dissimilar items. Suppose we also have access to embeddings for each item.
A naive approach might be to concat the two item embeddings, add a linear layer or two and finally perform a sigmoid (as this is binary classification) for the output probability.
However, that approach would mean that potentially inputting `(x, y)` to the model could give a different score from inputting `(y, x)` into it, since concat is not symmetric.
How can we go about overcoming this? What is the common practice in this situation?
So far I have thought about:
1. Whenever I input `(x, y)` I can also input `(y, x)` and always take the average prediction of both of them. But this feels like a hacky way of forcing the network to be symmetric, it doesn't make it learn the same thing despite of the input order.
2. Replacing concat with some other symmetric tensor operation. But what operation? Addition? Element-wise multiplication? Element-wise max? What's the "default"?<issue_comment>username_1: The problem you're describing is related to (if not a subset of) [Shift Invariance](https://en.wikipedia.org/wiki/Shift-invariant_system). Shift invariance refers to any geometric translation of an input, but concatenation of a pair of tensors in 2 different ways $(x, y) \rightarrow (y, x)$ can be seen as translation with step equal to the shape of the tensors.
How to tackle lack of shift invariance? There is still no unanimous consensus on why deep neural networks are not shift invariant, even though some papers pointed out that some convolution operations might be a core issue.
* [Zhang](http://proceedings.mlr.press/v97/zhang19a/zhang19a.pdf) proposed an alternative variation of pooling that should yield anti-aliased feature maps
* [Chaman & Dokmanic](https://arxiv.org/pdf/2011.14214.pdf) focus instead on analyzing the impact of downsampling operations, suggesting a new subsampling operation to replace the conventional downsampling approaches.
Starting from a completely different perspective, other papers analyze the impact of classic euclidean geometry, utilized not only in loss design (l1 and l2 norm for example), but also as an underlying assumption of every classic deep learning model (despite non linear activation functions, every hidden layer is still a linear transformation of the form $w\*x + b$). So instead of fixing linear operations or enforcing shift-invariant feature maps, we change the geometry we're using to ensure that translations and rotations have no impact on the optimization of our objective.
* [This paper](https://arxiv.org/pdf/1611.08097.pdf) is a good start if you feel brave enough to start experimenting with non-euclidean approaches.
On a final note, I think that your first suggestion could be worth a shot if you rethink it this way:
* Present the model each time 2 pairs $(x, y)$ & $(y, x)$ and **add a custom loss component** to enforce the same prediction (could be literally the MSE between the two output logits). This loss should at least give you a hint as to whether it's possible for the model to become robust to this specific translation operation (if that loss component decreases over training time, the answer is yes).
Upvotes: 3 <issue_comment>username_2: I answered a similar question [here](https://ai.stackexchange.com/a/34049/32722). So the goal here is to train a network which can tell whether the inputs $x$ and $y$ are "similar" or not. You can first build a model $f$ which "compresses" the high-dimensional input into a smaller embedding dimension. In the case of [Xception](https://arxiv.org/abs/1610.02357) this $f$ would be a mapping from a $299 \times 299 \times 3$ RGB image to a $2048$ "feature vector".
Now the classifier model $c(x, y)$ can be built as $c(x, y) = g(f(x) - f(y))$, where $g$ can be a very simple function without any trainable parameters like $g(\overline{d}) = 1 - e^{-\sum\_i d\_i^2}$ or something more complex. Clearly here $c(x, y) = c(y, x)$ and $c(x, x) = 0$.
With a custom $f$ and $g$ you can train this model end-to-end, or if your embeddings are fixed (or you use a pre-trained network $f$) it is also possible to just train the $g$.
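A minimal numerical sketch of this construction (with a toy stand-in for the embedding network $f$; an illustration of the symmetry properties, not a full training setup):

```python
import numpy as np

def g(d):
    # g(d) = 1 - exp(-sum_i d_i^2): depends on d only through its squared
    # norm, so it is invariant to d -> -d and equals zero at d = 0.
    return 1.0 - np.exp(-np.sum(np.square(d)))

def c(f, x, y):
    # c(x, y) = g(f(x) - f(y)) is symmetric by construction.
    return g(f(x) - f(y))

# Toy stand-in for a pre-trained embedding network f (hypothetical).
f = np.tanh

x = np.array([0.5, -1.0, 2.0])
y = np.array([0.4, -1.1, 2.2])

assert np.isclose(c(f, x, y), c(f, y, x))  # input order does not matter
assert np.isclose(c(f, x, x), 0.0)         # identical inputs score 0
```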
Upvotes: 1 |
2022/01/30 | 1,142 | 4,304 | <issue_start>username_0: In Sutton & Barto's book on reinforcement learning ([section 5.4, p. 100](http://incompleteideas.net/book/RLbook2020.pdf#page=122)) we have the following:
>
> The on-policy method we present in this section uses $\epsilon$ greedy policies, meaning that most of the time they choose an action that has maximal estimated action value, but with probability $\epsilon$ they instead select an action at random. That is, all nongreedy actions are given the minimal probability of selection, $\frac{\epsilon}{|\mathcal{A}|}$, and the remaining bulk of
> the probability, $1-\epsilon+\frac{\epsilon}{|\mathcal{A}|}$, is given to the greedy action.
>
>
>
I understood the probability of a random action selection: since the total probability of random action selections is $\epsilon$ and since all actions can be selected at random, we calculate the probability of an action being selected randomly as $\frac{\epsilon}{|\mathcal{A}|}$.
However, I did not understand how the probability $1-\epsilon+\frac{\epsilon}{|\mathcal{A}|}$ for greedy action selection was derived. How is it calculated?
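For reference, the construction can be written out as a short sketch (my own illustration, assuming a finite action set): the greedy action can be reached either deliberately (probability $1-\epsilon$) or through the uniform random route (probability $\epsilon/|\mathcal{A}|$), and the two probabilities add:

```python
def epsilon_greedy_probs(q_values, epsilon):
    """Selection probabilities for an epsilon-greedy policy over q_values."""
    n = len(q_values)
    greedy = max(range(n), key=lambda a: q_values[a])
    # Route 1: with probability epsilon, pick uniformly -> epsilon/|A| each.
    probs = [epsilon / n] * n
    # Route 2: with probability 1 - epsilon, pick the greedy action.
    # The greedy action can be reached by either route, hence the sum.
    probs[greedy] += 1.0 - epsilon
    return probs

p = epsilon_greedy_probs([0.1, 0.5, 0.2, 0.0], epsilon=0.2)
assert abs(sum(p) - 1.0) < 1e-12
assert abs(p[1] - (1 - 0.2 + 0.2 / 4)) < 1e-12  # 1 - eps + eps/|A| = 0.85
```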
How to tackle lack of shift invariance? There is still not unanimous consensus on why deep neural network are not shift invariant, even though some papers pointed out that some convolution operations might be a core issue.
* [Zhang](http://proceedings.mlr.press/v97/zhang19a/zhang19a.pdf) proposed an alternative variation of pooling tat should enhance anti-aliased feature maps
* [Chaman & Dokmanic](https://arxiv.org/pdf/2011.14214.pdf) focus instead on analyzing the impact of dawn sampling operations, suggesting a new subsampling operation as well to replace in the conventional down sampling approaches.
Starting from a completely different perspective, other papers analyze the impact of classic euclidean geometry, utilized not only in loss design (l1 and l2 norm for example), but also an underlying assumption of every classic deep learning model (despite non linear activation functions, every hidden layer is still a linear transformation of the form $w\*x + b$). So instead of fixing linear operations or enforce good shift invariant feature maps, we change the geometry we're using to ensure that translation and rotations have no impact in the optimization of our objective.
* [This paper](https://arxiv.org/pdf/1611.08097.pdf) is a good start if you feel brave enough to start experimenting with not euclidean approaches.
On a final note, I think that you first suggestion could be worth a shot if you rethink it this way:
* Present the model each time 2 pairs $(x, y)$ & $(y, x)$ and **add a custom loss component** to enforce the same prediction (could be literally MSE between the two outputs logits). This loss should at least give you a hint if it's possible for the model to become robust to this specific translation operation (if that loss component decrease over training time the answer is yes).
Upvotes: 3 <issue_comment>username_2: I answered a similar question [here](https://ai.stackexchange.com/a/34049/32722). So the goal here is to train a network which can tell whether the inputs $x$ and $y$ are "similar" or not. You can first build a model $f$ which "compresses" the high-dimensional input into a smaller embedding dimension. In the case of [Xception](https://arxiv.org/abs/1610.02357) this $f$ would be a mapping from a $299 \times 299 \times 3$ RGB image to a $2048$ "feature vector".
Now the classifier model $c(x, y)$ can be built as $c(x, y) = g(f(x) - f(y))$, where $g$ can be a very simple function without any trainable parameters like $g(\overline{d}) = 1 - e^{-\sum\_i d\_i^2}$ or something more complex. Clearly here $c(x, y) = c(y, x)$ and $c(x, x) = 0$.
With a custom $f$ and $g$ you can train this model end-to-end, or if your embeddings are fixed (or you use a pre-trained network $f$) it is also possible to just train the $g$.
Upvotes: 1 |
2022/01/31 | 1,237 | 5,281 | <issue_start>username_0: Could anyone explain this problem I have with the Turing test as performed? Turing (1950) in describing the test says the computer takes the part of the man then plays the game as when played between a man and a woman. In the game, the man and the woman communicate with the hidden judge by text alone (Turing recommends using teleprinters).
If the computer takes the part of the man, then it will have an eye and a finger in order to use the teleprinter as the man would have done. But in the TT *as performed*, the machine is not robotic. It has no eyes and no fingers but rather is wired directly into the judge's terminal. The only thing the machine gets from the judge is what flows down the wire. But the problem I have is, what flows down the wire is not text. The human contestant gets the text. The judge's questions print out on the teleprinter paper roll. The man sees the shapes of the text, and understands the meanings of the shapes. But the computer is never exposed to the shapes of the questions, so how could it possibly know what they mean?
I've never seen anyone raise this problem, so I'm very confused. How could the machine possibly know the judge's questions if it is never exposed to the shapes of the text?<issue_comment>username_1: (As @nbro writes, your question is not very specific; I'm answering here how I understand it from the current version)
In an ideal world, a computer would see written text (via a camera), scan it, understand it, and type a response. I assume Turing didn't go for voice transmission, as voice includes other clues to a person's gender.
However, AI is such a complex field that it would have been impractical to implement this until fairly recently. And OCR and robotic movements (typing on a keyboard) are arguably not that relevant to human cognition, so in most Turing-like tests that are actually run, shortcuts are taken.
Update: Also, note that the original Turing test (1950) was based on a party game about distinguishing between a man and a woman (who were not visible). This *imitation game* was later generalised to a guessing game between a human and a machine.
Upvotes: 1 <issue_comment>username_2: There is no form of OCR that assigns "meaning" by processing visual input of letters and words into the computer representation of those same words (e.g. ASCII). The robot with a camera and keyboard does not solve the problem you have raised. You need to look elsewhere for answers, and the state of AI today is that no-one has strong evidence for how meaning arises within an intelligent system. There is plenty of writing on the subject though.
I think you are trying to understand how and where meaning may arise in any system (biological or machine). There is lots of thought around this subject in AI philosophy and research. A good place to start might be with [John Searle's Chinese Room argument](https://en.wikipedia.org/wiki/Chinese_room), which broadly agrees with you that a basic discussion/chatbot program does not prove intelligence, but for different reasons than the "shapes of the text", which is not really an issue at all in my opinion. Searle's argument is by no means the end of the matter, and plenty has been written in rebuttal and support of the argument.
The real issue is the [symbol grounding problem](https://en.wikipedia.org/wiki/Symbol_grounding_problem), which also applies to visuals of text, or any system of referring one entity by using another entirely different one.
Potentially addressing the grounding problem, there are various philosophies and engineering ideas proposed. These include:
* Behaviouralism. It is not important what goes on inside an intelligent system, only that its external measurable behaviours are those of an intelligent system. This matches quite closely to the idea of the Turing Test, but many people find this unsatisfying due to personal experience of self-awareness, subjective experience and consciousness. It is in some ways the "[shut up and calculate](https://en.wikipedia.org/wiki/Copenhagen_interpretation#Principles)" of AI.
* Embodiment and multi-modal experience. If an agent can experience the world directly and associate symbols with relevant experiences (the word "cat" with seeing and hearing cats), then it would be intelligent in the same way as we are.
* Missing components. Humans (and sometimes animals) possess some additional system that cannot be replicated by current computing and robotic devices, even if they were made 1000s of times more powerful. The missing component might be something quantum in our cells ([Penrose, The Emperor's New Mind](https://en.wikipedia.org/wiki/The_Emperor%27s_New_Mind)) or "the soul". This is also a common depiction of robots and AI in science fiction, and there is lots of popular support for it as a philosophy, despite weak evidence.
* Complexity and power. We can currently replicate the mental power of small insects on computing devices. When we scale up with more powerful computers, larger neural networks, and perhaps a bit of special extra structure (that we don't know yet), then we will hit a level of complexity where true intelligence will emerge. You could view recent very large language models such as GPT-3 as exploring this idea.
Upvotes: 0 |
2022/02/03 | 1,072 | 4,417 | <issue_start>username_0: I have a textual dataset that has a set of real numbers as labels: L={0.0, 0.33, 0.5, 0.75, 1.0}, and I have a model that takes the text as input and has a Sigmoid output.
If I train the model on this data, will the model keep generating labels that exactly equal one of the values in L, or might it generate, for example, 0.4?
If not, is there a solution for that?
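For intuition, here is a tiny 1-D sketch (hypothetical inputs, not my actual text model): a sigmoid head fitted to the discrete targets in L still outputs intermediate values between the training points, since the sigmoid is a continuous function of its input:

```python
import numpy as np

sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

# 1-D inputs whose targets are exactly the values in L
x = np.array([-2.0, -1.0, 0.0, 1.0, 2.0])
t = np.array([0.0, 0.33, 0.5, 0.75, 1.0])

# Fit sigmoid(w*x + b) to the targets by gradient descent on the MSE.
w = b = 0.0
for _ in range(5000):
    p = sigmoid(w * x + b)
    grad = (p - t) * p * (1 - p)
    w -= 0.5 * float(np.mean(grad * x))
    b -= 0.5 * float(np.mean(grad))

# At an input between two training points, the output falls between the
# neighbouring labels rather than snapping onto a value in L.
y_mid = sigmoid(w * 0.5 + b)
assert 0.5 < y_mid < 0.75
```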
In an ideal world, a computer would see written text (via a camera), scan it, understand it, and type a response. I assume Turing didn't go for voice transmission, as voice includes other clues to a person's gender.
However, AI is such a complex field that it would have been impractical to implement this until fairly recently. And OCR and robotic movements (typing on a keyboard) are arguably not that relevant to human cognition, so in most actually run Turing-like tests shortcuts are taken.
Update: Also, note that the original Turing test (1950) was based on a party game about distinguishing between a man and a woman (who were not visible). This *imitation game* was later generalised to a guessing game between a human and a machine.
Upvotes: 1 <issue_comment>username_2: There is no form of OCR that assigns "meaning" by processing visual input of letters and words into the computer representation of those same words (e.g. ASCII). The robot with a camera and keyboard does not solve the problem you have raised. You need to look elsewhere for answers, and the state of AI today is that no-one has strong evidence for how meaning arises within an intelligent system. There is plenty of writing on the subject though.
I think you are trying to understand how and where meaning may arise in any system (biological or machine). There is lots of thought around this subject in AI philosophy and research. A good place to start might be with [John Searle's Chinese Room argument](https://en.wikipedia.org/wiki/Chinese_room), which broadly agrees with you that a basic discussion/chatbot program does not prove intelligence, but for different reasons than the "shapes of the text", which is not really an issue at all in my opinion. Searle's argument is by no means the end of the matter, and plenty has been written in rebuttal and support of the argument.
The real issue is the [symbol grounding problem](https://en.wikipedia.org/wiki/Symbol_grounding_problem), which also applies to visuals of text, or any system of referring one entity by using another entirely different one.
Potentially addressing the grounding problem, there are various philosophies and engineering ideas proposed. These include:
* Behaviouralism. It is not important what goes on inside an intelligent system, only that its external measurable behaviours are those of an intelligent system. This matches quite closely to the idea of the Turing Test, but many people find this unsatisfying due to personal experience of self-awareness, subjective experience and consciousness. It is in some ways the "[shut up and calculate](https://en.wikipedia.org/wiki/Copenhagen_interpretation#Principles)" of AI.
* Embodiment and multi-modal experience. If an agent can experience the world directly and associate symbols with relevant experiences (the word "cat" with seeing and hearing cats), then it would be intelligent in the same way as we are.
* Missing components. Humans (and sometimes animals) possess some additional system that cannot be replicated by current computing and robotic devices, even if they were made 1000s of times more powerful. The missing component might be something quantum in our cells ([Penrose, The Emperor's New Mind](https://en.wikipedia.org/wiki/The_Emperor%27s_New_Mind)) or "the soul". This is also a common depiction of robots and AI in science fiction, and there is lots of popular support for it as a philosophy, despite weak evidence.
* Complexity and power. We can currently replicate the mental power of small insects on computing devices. When we scale up with more powerful computers, larger neural networks, and perhaps a bit of special extra structure (that we don't know yet), then we will hit a level of complexity where true intelligence will emerge. You could view recent very large language models such as GPT-3 as exploring this idea.
Upvotes: 0 |
2022/02/06 | 1,210 | 5,066 | <issue_start>username_0: In my problem, there are about 5,000 training images and there are about 50~100 objects of identical type (or class) on average, per image. And for each training image, there is partial mask information that denotes the polygon vertices of objects, but the problem is there are only 3 ~ 5 objects per image with mask/annotation information.
So in summary there is 1 class, 5,000 \* 50 ~ 5,000 \* 100 instances of the class, and 5,000 \* 3 ~ 5,000 \* 5 instances with masking information.
So not a single training data image has full masking information, and yet all the training data images have partial masking information. My job is to build an instance segmentation model.
I did some search on semi-supervised segmentation, and to my understanding it seems like the papers are tackling problems where some training images have all the objects annotated while the other training images have 0 objects with annotation. That isn't exactly my situation. How should I approach this problem? Any tips are appreciated.
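One direction I am considering (my own sketch of a standard partial-supervision trick, not taken from a specific paper): mask the pixel loss so that only annotated regions contribute, and unannotated objects are never penalized as background:

```python
import numpy as np

def masked_pixel_loss(pred, target, annotated):
    """Mean binary cross-entropy over annotated pixels only.

    `annotated` marks the pixels covered by the 3-5 objects that do have
    polygon masks (plus some certain-background area); everything else is
    excluded, so unannotated objects are never treated as background.
    """
    eps = 1e-12
    bce = -(target * np.log(pred + eps) + (1 - target) * np.log(1 - pred + eps))
    return bce[annotated].mean()

pred = np.array([[0.9, 0.2], [0.8, 0.1]])
target = np.array([[1.0, 0.0], [1.0, 0.0]])
annotated = np.array([[True, True], [False, False]])  # only top row labeled

loss = masked_pixel_loss(pred, target, annotated)
# Only the two annotated pixels contribute to the mean.
assert np.isclose(loss, -(np.log(0.9) + np.log(0.8)) / 2)
```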
2022/02/08 | 342 | 1,426 | <issue_start>username_0: * [Towards Theoretically Understanding Why SGD Generalizes Better Than ADAM in Deep Learning](https://proceedings.neurips.cc/paper/2020/file/f3f27a324736617f20abbf2ffd806f6d-Paper.pdf)
What is meant by generalization in this article?<issue_comment>username_1: When training a machine to learn from data, it is typical to withhold a set of data that the machine never sees. This withheld dataset is known as the test set. The performance of the learned model on the test set gives an idea of how well the model has learned the function that generated the data in the first place, as opposed to overfitting to the training dataset. This measure of performance is typically referred to as generalization error, and this is what is being referred to in the article.
Upvotes: 0 <issue_comment>username_2: The term ***generalization*** refers to the model's ability to adapt and react appropriately to new, unseen data that was drawn from the same distribution as the one used to build the model. In other words, generalization examines how well a model can digest new data and make correct predictions after being trained on a training set.
**How well a model is able to generalize is key to its success.**
If you train a model too well on the training data, it will be unable to generalize. In such cases, it will end up making wrong predictions when receiving new data.
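A minimal sketch of this (synthetic 1-D data, my own illustration): increasing model capacity always drives the training error down, and only the withheld test data can reveal whether the fit generalizes or has started memorizing noise:

```python
import numpy as np

rng = np.random.default_rng(0)

# Noisy samples of a simple underlying function.
x = rng.uniform(-1, 1, 40)
y = np.sin(3 * x) + rng.normal(0, 0.3, 40)

# Withhold half of the data as a test set the model never trains on.
x_tr, y_tr, x_te, y_te = x[:20], y[:20], x[20:], y[20:]

def errors(degree):
    coef = np.polyfit(x_tr, y_tr, degree)  # fit on training data only
    mse = lambda xs, ys: float(np.mean((np.polyval(coef, xs) - ys) ** 2))
    return mse(x_tr, y_tr), mse(x_te, y_te)

tr_lo, te_lo = errors(3)   # modest capacity
tr_hi, te_hi = errors(9)   # high capacity, starts fitting the noise

# Training error can only improve with extra capacity (the degree-9 basis
# contains the degree-3 one); only the withheld test error tells us
# whether that improvement generalizes.
assert tr_hi <= tr_lo + 1e-9
print(f"degree 3: train={tr_lo:.3f} test={te_lo:.3f}")
print(f"degree 9: train={tr_hi:.3f} test={te_hi:.3f}")
```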
Upvotes: 2 [selected_answer] |
2022/02/13 | 556 | 2,345 | <issue_start>username_0: I have a dataset that I want to use for training.
The output of the model is a binary value (0,1)
The dataset is not balanced, it has only 200 entries for output 1 and 4000 entries for output 0.
When I tried to use it with LightGBM, the model always predicts 0 and for this reason, it is not good.
How can I deal with an unbalanced dataset?
One way I can think of is to delete many of the 0 entries and only use around 200 entries with an output of 0.
This is not good, as the model cannot see the whole dataset.
What is the best way to deal with unbalanced datasets?<issue_comment>username_1: My favorite first alternative is to change the error/cost function to penalize errors on the less frequent label more heavily.
About other alternatives (generating synthetic cases, etc.) you can easily Google for "imbalanced data". Some easy-to-understand articles will appear, such as:
<https://machinelearningmastery.com/tactics-to-combat-imbalanced-classes-in-your-machine-learning-dataset/>
which suggests 8 groups of solutions, four concrete:
* Collect More Data
* Change Performance Metric
* Resampling
* Generate Synthetic Samples
and four more generic:
* Try Different Algorithms
* Try Penalized Models
* Try a Different Perspective
* Try Getting Creative
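As a concrete illustration of the Resampling bullet (a sketch with synthetic features, using the 4000/200 split from the question): randomly oversample the minority class with replacement until the classes are balanced:

```python
import numpy as np

rng = np.random.default_rng(0)

X = rng.normal(size=(4200, 3))                      # synthetic features
y = np.concatenate([np.zeros(4000), np.ones(200)])  # 4000 vs 200 labels

# Random oversampling: draw minority-class rows with replacement
# until both classes are equally represented.
minority = np.flatnonzero(y == 1)
extra = rng.choice(minority, size=4000 - 200, replace=True)
idx = np.concatenate([np.arange(len(y)), extra])

X_bal, y_bal = X[idx], y[idx]
assert len(y_bal) == 8000 and int(y_bal.sum()) == 4000  # now 50/50
```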
Upvotes: 2 <issue_comment>username_2: "the model always predicts 0 and for this reason, it is not good."
If accuracy is the performance metric that is of most interest for your application then this may well be the ideal behaviour if the density of positive patterns is not higher than the density of negative patterns anywhere in the feature space.
You need to determine what is the appropriate performance metric for your application *before* deciding what to do. In imbalanced learning problems it is common for false-positive and false-negative misclassification errors to have different costs, in which case the first thing to do is to use cost-sensitive learning and incorporate these costs into the training criterion.
I'd also advise using a probabilistic classifier (e.g. logistic regression, neural net, kernel logistic regression, Gaussian process classification) that gives the probability of class membership rather than a hard yes/no classification, as then the misclassification costs can be changed without having to re-fit the model.
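To illustrate (synthetic predictions, with a hypothetical false-negative cost of 20): plain accuracy rewards the degenerate always-predict-0 model, while a cost-sensitive error metric does not:

```python
import numpy as np

# 4000 negatives followed by 200 positives, as in the question.
y = np.concatenate([np.zeros(4000), np.ones(200)])

pred_always0 = np.zeros_like(y)          # the degenerate model
pred_decent = y.copy()
flip = np.arange(0, 4200, 10)            # a model that errs on 10% of cases
pred_decent[flip] = 1 - pred_decent[flip]

def accuracy(y, p):
    return float(np.mean(y == p))

def cost(y, p, c_fn=20.0, c_fp=1.0):
    """Cost-sensitive error: a missed positive (false negative) costs c_fn."""
    fn = np.sum((y == 1) & (p == 0))
    fp = np.sum((y == 0) & (p == 1))
    return float((c_fn * fn + c_fp * fp) / len(y))

# Plain accuracy rewards always predicting 0 ...
assert accuracy(y, pred_always0) > accuracy(y, pred_decent)
# ... but the cost-sensitive metric does not.
assert cost(y, pred_always0) > cost(y, pred_decent)
```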
Upvotes: 0 |
2022/02/14 | 852 | 3,384 | <issue_start>username_0: Consider the following task to be solved by a neural network: Given an $N\times N$ pixel grid with up to $M$ objects drawn on it, either squares (9 pixels) or diamonds (5 pixels):
[](https://i.stack.imgur.com/stm2o.png)square
[](https://i.stack.imgur.com/lUjYS.png)diamond
The objects may overlap. The task is to give the **minimal** possible **numbers of objects per shape** that can be "seen" and distinguished in the picture and tell how many squares, how many diamonds, and how many objects with unknown shape there are.
Here are some examples with $N = 7$ and $M=5$ with their intended numbers ($n\_\square, n\_\Diamond, n\_?$). The examples with $n\_? = 1$ are those with pixels that may either come from a square or a diamond (highlighted in black, but not bearing any information that may be used).
[](https://i.stack.imgur.com/HVIY0.png)
I wonder if this task can be solved for general $N$ and $M$ by simple multi-layer networks of standard neurons (e.g. McCulloch-Pitts cells) and how to design and train them.
I further wonder if it could be a standard exercise in an introductory course on neural networks to "hand-draw" a neural network that solves the task (by giving explicit weights). If so, I'd be happy to see a full-blown standard solution.
This exercise could foster explainability and understandability of networks, I guess.
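To pin the task down, here is a plain template-matching baseline for the overlap-free case (my own sketch, not a neural network): testing whether a 3×3 window exactly equals a binary template is precisely a thresholded weighted sum — the kind of operation a hand-designed McCulloch-Pitts neuron computes, since the correlation with the kernel $2t-1$ reaches its maximum value iff the window matches the template.

```python
import numpy as np

# 3x3 templates for the two shapes described above
# (a "square" is a full 3x3 block, a "diamond" a 5-pixel plus).
SQUARE = np.ones((3, 3), dtype=int)
DIAMOND = np.array([[0, 1, 0], [1, 1, 1], [0, 1, 0]])

def count_exact(grid, tpl):
    """Count 3x3 windows equal to the template.

    Equivalent neuron view: sum((2*tpl - 1) * window) == tpl.sum()
    holds exactly when the window matches, so each test is a weighted
    sum followed by a threshold.
    """
    h, w = grid.shape
    hits = 0
    for r in range(h - 2):
        for c in range(w - 2):
            if np.array_equal(grid[r:r + 3, c:c + 3], tpl):
                hits += 1
    return hits

g = np.zeros((7, 7), dtype=int)
g[1:4, 1:4] = SQUARE      # one square
g[3:6, 4:7] |= DIAMOND    # one diamond (no overlap in this example)

assert count_exact(g, SQUARE) == 1
assert count_exact(g, DIAMOND) == 1
```

Overlapping objects break the exact-match test, which is exactly where the $n\_?$ ambiguity in the examples comes from.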
2022/02/21 | 503 | 2,195 | <issue_start>username_0: I am training a model using A2C with stable baselines 2. When I increased the timesteps I noticed that episode rewards seem to reset (see attached plot). I don't understand where these sudden decays or resets could come from and I am looking for practical experience or pointers to theory what these resets could imply.
[](https://i.stack.imgur.com/deKCT.png)
2022/03/01 | 618 | 2,178 | <issue_start>username_0: With an RGB image of a paper sheet with text, I want to obtain an output image which is cropped and deskewed. Example of input:
[](https://i.stack.imgur.com/l64Kn.png)
I have tried non-AI tools (such as `openCV.findContours`) to find the 4 corners of the sheet, but it's not very robust in some lighting conditions, or if there are other elements on the photo.
So I see two options:
* a NN with `input=image, output=image`, that does **everything** (including the deskewing, and even also the brightness adjustment). I'll just train it with thousands of images.
* a NN with `input=image, output=coordinates_of_4_corners`. Then I'll do the cropping + deskewing with a homographic transform, and brightness adjustment with standard non-AI tools
Which approach would you use?
**More generally what kind of architecture of neural network would you use in the general case `input=image, output=image`?**
Is approach #2, for which input=image, output=coordinates, possible? Or is there another segmentation method you would use here?<issue_comment>username_1: You could try U-Net for approach 1.
This is called the image-to-image translation problem in machine learning. You could see more architectures in this paper:
<https://arxiv.org/pdf/2101.08629.pdf>
Upvotes: 2 <issue_comment>username_2: I think the second approach will be the best because it only requires that your training set is annotated with four labels for each of the four corners of the paper sheet.
This is sort of the idea of a Region Proposal Network which is used in [Faster R-CNN](https://arxiv.org/pdf/1506.01497.pdf) (section 3.1).
[Here](https://github.com/pytorch/vision/blob/5e56575e688a85a3bc9dc3c97934dd864b65ce47/torchvision/models/detection/rpn.py#L88-L367) is a reference implementation of a Region Proposal Network in PyTorch from the [torchvision](https://github.com/pytorch/vision/) library. Notice how the network outputs `boxes` (in the `forward()` method) which is a tuple `(x1, y1, x2, y2)`. From these four coordinates, you could crop the image to the desired paper sheet region.
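Turning such a box into a crop is then just array slicing with rounding and clamping; a small sketch (shapes hypothetical):

```python
import numpy as np

def crop_box(image, box):
    # Crop an H x W (x C) image to an (x1, y1, x2, y2) box,
    # rounding to integer pixels and clamping to the image bounds.
    x1, y1, x2, y2 = box
    h, w = image.shape[:2]
    x1, x2 = sorted((int(round(x1)), int(round(x2))))
    y1, y2 = sorted((int(round(y1)), int(round(y2))))
    x1, y1 = max(x1, 0), max(y1, 0)
    x2, y2 = min(x2, w), min(y2, h)
    return image[y1:y2, x1:x2]

img = np.arange(100).reshape(10, 10)
print(crop_box(img, (2.2, 3.0, 7.8, 9.4)).shape)  # (6, 6)
```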
Upvotes: 3 |
2022/03/03 | 467 | 1,633 | <issue_start>username_0: Are there some known neural networks that, given an input image, can generate a **similar image**, with the same topic?
Example: input = a photo of a cat on a green table, output = a generated photo of another cat on another green table.
Example 2: input = a portrait of a man with glasses and a beard, output = a portrait of a generated person with similar glasses / beard (see "ThisPersonDoesNotExist").
I imagine it is possible with a [GAN](https://en.wikipedia.org/wiki/Generative_adversarial_network), but more precisely which kind of architecture? |
2022/03/04 | 977 | 4,277 | <issue_start>username_0: I am new to RL and I'm currently working on implementing a DQN and DDPG agent for a 2D car parking environment. I want to train my agent so that it can successfully traverse the env and park in the designated goal in the middle.
So, my question is: **what are the best practices when training an agent for a changing environment?**
In my case, my goal is that a car can randomly spawn every episode anywhere in the dark grey-ish area and always successfully parks in the middle. My problem, in this case, is that if I for example train the agent from only one specified location, it usually won't know how to perform if it's spawned somewhere else.
I also tried making it so that the car starting location gets randomly updated every N step, but unfortunately came to no success.
It may be possible that I've not trained for long enough and with a sufficient number of steps in between the "position resets", but I still want to ask if there are any general practices in the cases like this?
<issue_comment>username_1: My guess is that you haven't trained long enough, but there are things that can be done to possibly accelerate learning.
It depends on what you want the policy to do in the final version. If you want it to be able to be spawned at a random position on the map and park in some other (random) position on the map, then that is how you should train it.
Training in steps can be useful, for example training with a fixed start and terminal point, then randomizing the parking spot, then randomizing the starting position. In general, giving the agent full randomization will take longer than setting multiple, sequential goals.
Upvotes: 1 <issue_comment>username_2: I am correct in my understanding that you only provide the agent with the state of the car, i.e. a global x and y position, its angle, velocity, and steering angle?
How does the agent know that it is coming closer to the goal if it is not provided with information about where the goal is? Without this observation of the goal, the agent is operating blindly. That explains why it is so difficult for the agent to reach the goal and impossible when you randomize the starting position.
If my assumptions are correct, the agent takes random actions which are unlikely to reach the goal, but due to the law of large numbers after enough episodes, the agent will reach the goal at random and it can learn to remember this path if given enough reward. But if you then randomize the starting position the agent cannot apply the knowledge it has learned previously because the sequence of actions to reach the goal would now be different. Essentially, there is no correlation between what goal you want the agent to achieve and your state and action space.
To circumvent this problem, I suggest you add additional state information, here are a few suggestions:
* The global x and y position of the goal
* A distance measure measuring the distance from the agent and to the goal. Either the [Euclidean distance](https://en.wikipedia.org/wiki/Euclidean_distance) or [Manhattan distance](https://en.wikipedia.org/wiki/Taxicab_geometry).
* Both of the above
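As a sketch (all names and numbers hypothetical), the augmented observation could look like:

```python
import numpy as np

def augment_state(car_xy, goal_xy, base_state):
    # Append goal coordinates plus Euclidean and Manhattan distances
    # to the agent's base observation vector.
    delta = np.asarray(goal_xy, float) - np.asarray(car_xy, float)
    euclidean = np.linalg.norm(delta)
    manhattan = np.abs(delta).sum()
    return np.concatenate([base_state, goal_xy, [euclidean, manhattan]])

state = augment_state(car_xy=[1.0, 2.0], goal_xy=[4.0, 6.0],
                      base_state=np.array([1.0, 2.0, 0.3, 5.0, 0.0]))
print(state[-2:])  # distances: [5. 7.]
```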
---
I also support the suggestion of [username_1](https://ai.stackexchange.com/users/52966/elfurd): "*Training in steps can be useful*". This is called [curriculum learning](https://ronan.collobert.com/pub/matos/2009_curriculum_icml.pdf) and the idea is to present easier training examples to the agent at the beginning of training and steadily increase the difficulty of the environment. In turn, the agent will reach the goal in the easier environments, obtain some reward, and learn. It can then apply what it has learned in the more advanced environments once it progresses through the curriculum.
In your environment, this could be as simple as decreasing the size of the gird world in early training. Or you could spawn the agent close to the goal so that the agent is more likely to reach the goal with just a few random actions, alternatively, you can also randomize the goal close to the starting position of the agent if it has to start from a specified position and then increase the distance to where the goal is sampled.
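A minimal sketch of such a spawn curriculum (all numbers hypothetical): early episodes spawn the car inside a small radius around the goal, and the radius grows with training progress:

```python
import math
import random

def sample_start(goal_xy, episode, max_radius=50.0, ramp_episodes=10_000):
    # Curriculum over spawn positions: early episodes spawn the car close
    # to the goal, later ones anywhere within max_radius of it.
    radius = max_radius * min(1.0, episode / ramp_episodes)
    angle = random.uniform(0.0, 2.0 * math.pi)
    r = random.uniform(0.0, radius)
    return (goal_xy[0] + r * math.cos(angle), goal_xy[1] + r * math.sin(angle))

print(sample_start((0.0, 0.0), episode=100))     # near the goal
print(sample_start((0.0, 0.0), episode=10_000))  # anywhere within max_radius
```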
Upvotes: 3 [selected_answer] |
2022/03/06 | 913 | 3,330 | <issue_start>username_0: Are there any examples of people performing multiple convolutions at a single depth and then performing feature max aggregation as a convex combination as a form of "dynamic convolutions"?
To be more precise: Say you have an input x, and you generate
```
import torch
import torch.nn as nn

# conv1, conv2, conv3 are three separate nn.Conv2d layers at the same depth
y_1, y_2, y_3 = conv1(x), conv2(x), conv3(x)        # each (N, C, H, W)
y = torch.stack([y_1, y_2, y_3], dim=0)             # (3, N, C, H, W)

weights = nn.Parameter(torch.rand(3))
weights_normalized = torch.softmax(weights, dim=0)  # convex combination
attended_features = (weights_normalized.view(3, 1, 1, 1, 1) * y).sum(dim=0)
```
So, essentially, you are learning a weighting of the feature maps through this averaging procedure.
Some of you may be familiar with the "Dynamic Convolutions" paper. I’m just curious if you all would consider this dynamic convolution or attention of feature maps. Have you seen it before?
If the code isn’t clear, this is just taking an optimized linear combination of the convolution algorithm feature maps.<issue_comment>username_1: I wouldn't call it nor attention nor dynamic convolution.
The reason is that everything is static. If by conv(x) you mean a standard convolution, that implies a static kernel, so nothing fancy is going on there, just a classic multichannel CNN; and adding 3 learnable parameters is basically just adding a linear layer (not a dense one) on top of those features. So at inference time one of those parameters will be higher than the others, and the features coming from the convolution associated with that parameter will dominate the others. That doesn't really sound like attention.
The closest paper to what you're suggesting I can think about is:
[U-GAT-IT: Unsupervised Generative Attentional Networks with Adaptive Layer-Instance Normalization for Image-to-Image Translation.](https://arxiv.org/pdf/1907.10830.pdf)
You can see in the image below that in their generator the authors apply individual weights to each feature map, but the crucial difference is that these weights are not just extra initialized parameters, they come from an auxiliary classifier trained precisely to generate attention masks, idea taken from [Learning Deep Features for Discriminative Localization](https://arxiv.org/pdf/1512.04150.pdf) (from which I took the second image).
[](https://i.stack.imgur.com/Lrd7b.png)
[](https://i.stack.imgur.com/Y0c7T.png)
Upvotes: 1 <issue_comment>username_2: See [Dynamic Convolution: Attention over Convolution Kernels](https://arxiv.org/abs/1912.03458) by <NAME> et al.
The convolution kernels are generated by taking a weighted average of `K=4` kernels. The weights are determined *non-linearly* via channel attention (i.e. "excitation" in SE networks) that uses global average pooling and a dense network.
[](https://i.stack.imgur.com/iQEwF.png)
The paper also discusses some "training tricks", namely by limiting the space of possible weights via a softmax with "temperature" of `T=30` to soften the max further.
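A minimal NumPy sketch of just the aggregation step (shapes hypothetical; in the paper the logits come from the attention branch, not from fixed numbers):

```python
import numpy as np

def aggregate_kernels(kernels, logits, temperature=30.0):
    # Weighted average of K kernels under a softened softmax over the
    # attention logits (T = 30 as discussed in the paper).
    z = np.asarray(logits, float) / temperature
    pi = np.exp(z - z.max())
    pi = pi / pi.sum()
    return np.tensordot(pi, kernels, axes=1), pi

K, c_out, c_in, k = 4, 8, 3, 3
kernels = np.random.randn(K, c_out, c_in, k, k)
logits = np.array([2.0, -1.0, 0.5, 0.0])  # would come from GAP + dense layers
kernel, pi = aggregate_kernels(kernels, logits)
print(kernel.shape, pi.round(3))  # one aggregated kernel, near-uniform weights
```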
See also: an [implementation in PyTorch](https://github.com/kaijieshi7/Dynamic-convolution-Pytorch/blob/4befa50c97de72cd093316edb29522e8ebd8fc5e/dynamic_conv.py#L100-L190).
Upvotes: 0 |
2022/03/11 | 985 | 3,732 | <issue_start>username_0: In pre-processing of text, we need to assign a number for each token in a text. Then only we can pass it to a model. In pre-processing of text, we need to assign a number for each token in a text. The paragraph from [this section](https://d2l.ai/chapter_recurrent-neural-networks/text-preprocessing.html) named **Text Preprocessing** recommended indexing according to the frequency of the token
>
> The string type of the token is inconvenient to be used by models,
> which take numerical inputs. Now let us build a dictionary, often
> called vocabulary as well, to map string tokens into numerical indices
> starting from 0. To do so, we first count the unique tokens in all the
> documents from the training set, namely a corpus, and then **assign a
> numerical index to each unique token according to its frequency**.
> Rarely appeared tokens are often removed to reduce the complexity. Any
> token that does not exist in the corpus or has been removed is mapped
> into a special unknown token “<unk>”. We optionally add a list of
> reserved tokens, such as “<pad>” for padding, “<bos>” to present the
> beginning for a sequence, and “<eos>” for the end of a sequence.
>
>
>
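For concreteness, a minimal sketch of the frequency-sorted vocabulary the quote describes (names hypothetical):

```python
from collections import Counter

def build_vocab(tokens, min_freq=1, reserved=("<unk>",)):
    # Index reserved tokens first, then unique tokens sorted by
    # descending frequency, dropping rare ones below min_freq.
    counts = Counter(tokens)
    vocab = list(reserved)
    for tok, freq in counts.most_common():
        if freq >= min_freq and tok not in vocab:
            vocab.append(tok)
    return {tok: idx for idx, tok in enumerate(vocab)}

corpus = "the cat sat on the mat the end".split()
token_to_idx = build_vocab(corpus, min_freq=1)
print(token_to_idx)  # "the" (freq 3) gets the first non-reserved index
```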
I want to know whether it is necessary to index in accordance with the frequency of the token, or whether any unique index serves the purpose? |
2022/03/12 | 806 | 2,894 | <issue_start>username_0: Reading this [Can a neuron have both a bias and a threshold?](https://ai.stackexchange.com/questions/11684/can-a-neuron-have-both-a-bias-and-a-threshold) has confused me, as it appears to be more common to use a threshold of 0 when using bias. But reading this <https://stackoverflow.com/questions/19984957/scikit-learn-predict-default-threshold> indicates that the threshold is 0.5.
So my question is: is sklearn using both a threshold and a bias term? |
2022/03/16 | 995 | 4,280 | <issue_start>username_0: If the Generator in a GAN is taking a matrix of size `WxH` of noise to generate a `WxH` sized output image, and the Discriminator classifies the output as fake, how is that information back-propagated through the generator?
How is the error in classification attributed to individual "pixels" of the generators generated image? Is the error divided by the number of pixels?<issue_comment>username_1: If you ask this question it means you conceive a generative adversarial network as a combination of 2 separate entities, the discriminator and generator, but this is not really the case.
It is true that for convenience we distinguish between generator and discriminator since they fulfill separate purposes, but by simply looking at a drawing of a whole GAN you'll see that they are not separated at all.
When training the generator we simply backpropagate the gradients coming from the discriminator through the fake generated sample, as it was an intermediate layer between generator and discriminator (see red area in the drawing).
Of course the discriminator can be updated on its own, and even the generator since we can compute loss and backpropagate from the fake generated sample level. This is nothing special, for example it's done also in normal CNN training when using losses computed at the feature maps level (like perceptual loss).
[](https://i.stack.imgur.com/oJ5GS.png)
Upvotes: 1 <issue_comment>username_2: I'm going to keep my answer relatively high-level and avoid details like the actual loss functions or activation functions. But please know these also have an effect on GANs.
The Discriminator (D)
---------------------
The discriminator in a GAN is a binary classifier. It is given an image and asked to predict whether that image is `real` or `fake`. The discriminator reduces the given image down using convolutional layers. Convolutional layers detect features, in the case of images these would be edges, borders, certain shapes and so on.
**The discriminator is learning which features correspond to a `real` image and which features correspond to a `fake` image.**
The Generator (G)
-----------------
The generator is almost the inverse of the discriminator: Instead of reducing features it uses deconvolutional layers to create features from a random seed. This random seed is a *vector* (not a matrix) called the *latent vector* (z).
**The generator is learning which *features* to create to return a `real` label from the discriminator.**
How GAN learns
--------------
In each training step the following happens:
1. Get gen\_images from G given z.
2. Get real\_predictions from D by passing real images to D.
3. Get fake\_predictions from D by passing gen\_images to D.
4. Compute loss on G as a function of (`real`, fake\_predictions)
5. Compute loss on D as a function of (`real`, real\_predictions) + (`fake`, fake\_predictions)
6. Use backpropagation to update the weights and biases in D and G for the losses.
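With the usual binary cross-entropy, steps 4 and 5 reduce to something like this sketch (single scalar predictions instead of batches, and hypothetical discriminator outputs):

```python
import math

def bce(prediction, label):
    # binary cross-entropy for a single probability prediction
    eps = 1e-12
    return -(label * math.log(prediction + eps)
             + (1 - label) * math.log(1 - prediction + eps))

real_pred, fake_pred = 0.8, 0.3  # hypothetical discriminator outputs

# step 5: D wants real -> 1 and fake -> 0
d_loss = bce(real_pred, 1) + bce(fake_pred, 0)
# step 4: G wants the prediction on the fake pushed towards the "real" label 1
g_loss = bce(fake_pred, 1)

print(round(d_loss, 3), round(g_loss, 3))
```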
The big point here is that the generator's loss function directly depends on the output of D. Did G manage to trick D or not? This is computed by comparing the `fake_predictions` with the `real` label. G wants those two values to be as close as possible.
This loss does not correspond to individual pixels, but because we are using convolutions/deconvolutions in our models it corresponds to feature selection (in D) and feature creation (in G) with groups of pixels.
**In short:** the individual pixels are not *directly* trained in a GAN's generator, instead patterns and features creation are.
**If you are still confused**: Think about how a child draws.
A child doesn't start drawing by examining things a millimeter at a time, but by using symbolism and feature selection. At the beginning you can really only guess what they are trying to make.
A cat might be a big mess of scribbles, but slowly the child learns to draw pointy ears, or whiskers. At some point the child can draw features that are identifiable as a cat. The child has gotten feedback from you when you ask her, "Oh, what's that supposed to be?" and when you say, "Oh, is that a cat?" and then finally, "That's a nice looking cat!".
Upvotes: 0 |
2022/03/16 | 762 | 3,094 | <issue_start>username_0: Several research papers and textbooks (e.g. [this](https://www.mdpi.com/2227-9709/8/3/53)) contain the phrase "**gradient flow**" in the context of neural networks.
I am confused about whether it has any rigorous and formal way of understanding or not. What is the flow referring to here?<issue_comment>username_1: Here is my idea of what that means: Gradient flow is an abstract term to describe properties of the gradient. The gradient is calculated by propagating the error backwards through the networks, therefore it kind of *flows* from the last to the first layer. Depending on network architecture and loss function the flow can behave differently.
One popular kind of undesirable gradient flow is the **vanishing gradient**. It refers to the gradient norm being very small, i.e. the parameter updates are very small which slows down/prevents proper training. It often occurs when training very deep neural networks. Residual connections can help, because they bypass operations that reduce gradient magnitude.
The exact opposite would be **exploding gradients** where your gradient-norm is very large which leads to unstable training and the weights inside the network can't follow a stable trajectory to the optimum. This often occurs in the context of recurrent neural networks.
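A quick numerical illustration: by the chain rule, the gradient magnitude behaves roughly like a product of per-layer factors, which shrinks or grows exponentially with depth (the factors here are made up):

```python
depth = 50

shrink = 0.5  # per-layer derivative factor < 1 -> vanishing gradient
grow = 1.5    # per-layer derivative factor > 1 -> exploding gradient

print(shrink ** depth)  # ~8.9e-16: updates effectively vanish
print(grow ** depth)    # ~6.4e+08: updates blow up
```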
[Here](https://www.analyticsvidhya.com/blog/2021/06/the-challenge-of-vanishing-exploding-gradients-in-deep-neural-networks/) is a nice article on vanishing/exploding gradients.
A general way to phrase this is that the gradient reveals properties about the loss function which you are optimizing (as it is the derivative). **A noisy or heavily oscillating loss function usually implies an undesirable gradient flow**.
Thinking about these properties plays an important role in network and loss function design.
For example, the root function $\sqrt{\cdot}$ has a very large derivative when approaching zero, so having a $\sqrt{\cdot}$ in the network or the loss function can lead to exploding gradients.
Upvotes: 1 <issue_comment>username_2: It has. Gradient flow or more generally [flow](https://en.wikipedia.org/wiki/Flow_(mathematics)#Time-dependent_ordinary_differential_equations) is a well known concept in maths. Say we have a function $f:\mathbb R^n \longrightarrow \mathbb R^n$ and a function $\theta:[0,\infty)\longrightarrow \mathbb R^n$ such that the ODE
$$
\partial\_t \theta(t) = f(\theta(t))
$$
has a unique solution for any initial choice $\theta(0)\in\mathbb R^n$. Then I think the naming comes from the following informal description. Let's imagine the case $n=2$, so $\mathbb R^2$ is a plane. Now you drop $\theta(0)$ somewhere on this plane and wait some time $t$ to see where $\theta(0)$ will *flow* to. You can also draw the trajectories for multiple starting points $\theta(0)$, which will often give you an image that looks like the flow of a liquid.
In the case of the ANN, $f$ would be the negative gradient of the cost function with respect to the parameter vector $\theta$, and gradient descent would be an approximation of the gradient flow.
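Concretely, plain gradient descent is the explicit Euler discretisation of this flow; a one-dimensional sketch with cost C(theta) = 0.5 * theta**2:

```python
def gradient(theta):
    # dC/dtheta for the cost C(theta) = 0.5 * theta ** 2
    return theta

theta, eta = 5.0, 0.1
for _ in range(200):
    theta -= eta * gradient(theta)  # one explicit Euler step along the flow

print(theta)  # the iterate has flowed (numerically) to the minimiser at 0
```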
Upvotes: 2 |
2022/03/17 | 937 | 2,620 | <issue_start>username_0: [Musical notes](https://www.google.com/search?q=musical%20notes&prmd=ibnv&source=lnms&tbm=isch&sa=X&ved=2ahUKEwjMm5vA-Mv2AhUuGqYKHSrGCyYQ_AUoAXoECAIQAQ&biw=360&bih=626&dpr=2)
[Musical notes videos](https://www.google.com/search?q=music%20notes%20from%20youtube%20video&source=hp&ei=Q6wyYryxBvDb2roPmZasIA&oq=musiical%20notes%20video%20youtube&gs_lcp=ChFtb2JpbGU<KEY>&sclient=mobile-gws-wiz-hp#vuanr=4)
[Piano](https://bigpiano.com)
Can AI, machine learning, data science, computer vision, or image processing technologies assist in interpreting musical notes?
Input dataset: musical notes.
Output: the sound played.
Is there a Python program which can assist in interpreting them, i.e. converting the "text" to speech? Here the inputted text is musical notes captured as an image by a camera.<issue_comment>username_1: AI/ML can solve the task described; one solution is as follows:
* Regular image processing algorithm (pixel row with min black pixels, adjacent rows are considered as 1) to split the sheet music (as image) into rows (**red lines**)
[](https://i.stack.imgur.com/lA5Gg.png)
* Regular image processing algorithm (pixel column with min black pixels, adjacent columns are considered as 1) to split a row into notes based on scanning vertically (**blue lines**)
[](https://i.stack.imgur.com/sPsfq.png)
* Do image classification on every cell created by red lines and blue lines to get notes, and play them
* Real-time image classification may be too slow to play note by note directly, so the programme should process the whole sheet of music first.
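A minimal sketch of the row-splitting step on a binary image (0 = white, 1 = black; the page below is synthetic):

```python
import numpy as np

def split_rows(binary, threshold=1):
    # Count black pixels per pixel row (a horizontal projection profile),
    # then merge adjacent "ink" rows into (start, end) bands separated
    # by near-empty rows.
    profile = binary.sum(axis=1)
    bands, start = [], None
    for i, count in enumerate(profile):
        if count >= threshold and start is None:
            start = i
        elif count < threshold and start is not None:
            bands.append((start, i))
            start = None
    if start is not None:
        bands.append((start, len(profile)))
    return bands

page = np.zeros((12, 20), dtype=int)
page[2:4, 3:18] = 1    # first "staff" band
page[7:10, 2:19] = 1   # second "staff" band
print(split_rows(page))  # [(2, 4), (7, 10)]
```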
Upvotes: 3 [selected_answer]<issue_comment>username_2: LMGTFY ;)
The problem is called [Optical Music Recognition](https://en.wikipedia.org/wiki/Optical_music_recognition). [Here](https://machinelearningtutorials.weebly.com/home/i-built-a-music-sheet-transcriber-heres-how) you can find a tutorial that describes OMR with deep learning and [here](https://www.mdpi.com/2076-3417/8/4/606/htm) you have a scientific paper. I think it is a very good start for your investigations.
Upvotes: 2 |
2022/03/18 | 2,674 | 11,507 | <issue_start>username_0: I have been spending a few days trying to wrap my head around how and why neural networks are used to play chess.
Although I know very little about how the game of chess works, I can understand the following idea. Theoretically, we could make a "tree" that included every possible outcome of a chess game. Through knowledge provided by chess experts, we could identify how "favorable" certain parts of this tree are compared to other parts of the tree. We could also use this tree to "rank" optimal chess moves based on how the chess board appears in the current turn (e.g. which pieces you and your opponent have left and where these pieces are situated).
The problem is, this tree would be so enormous that it would be impossible to create, store and "search" (e.g. with the MinMax algorithm):
[](https://i.stack.imgur.com/IXbIc.png)
I understand that perhaps this tree can be created using data to limit the size of the tree based on scenarios that are more likely to appear compared to all possible scenarios. For example, if a player wanted they could spend the whole game aimlessly shifting their "Rook" back and forth - theoretically, this outcome could occur but no player (in their sane mind) would ever do this. Thus, the tree could be constructed using actual data from millions of chess games. This for example could tell us : Based on historical data and given the current setup of the chess board, 21% of games were won when the immediate next move involved moving the Queen to "F5" vs only 3% of games were won when the immediate next move involved moving the Knight to "F5". I suppose at each move, the data based tree could be queried to rank the optimality of each next move by checking the proportion of "terminal nodes" that resulted in wins for each possible move given the current chess board.
However, I still see 2 problems with this approach:
* It is possible that we might run into a scenario(s) that never occurred within the historical data, rendering the tree useless in this scenario
* This tree still might be too large to efficiently store and query.
This is probably why neural networks are being used to play chess - I tried to do some readings about this topic, but I can't seem to fully understand it. In this case, **what exactly would the neural network use as a loss function? I don't see how the loss function in this case is continuous, and thus how could gradient descent be used on such a loss function?**
Could someone please recommend some sources (e.g. YouTube Videos, Blogs, etc.) that show how a neural network can be used to play chess.<issue_comment>username_1: This is a good question. your understanding in general is correct. Indeed, data can be used to construct a proper evaluation of a move/board position and recommended moves based on its history (at least alphago does).
Regarding your first point, it is possible that scenarios that never occurred in historical data could occur, but that's not a problem if your valuation procedure is strong. Typically this is modeled by some continuous function that takes your input board and gives some scalar value. Doing so implicitly assumes you learn the patterns among data pretty well so that ideally you can extrapolate sufficiently in new scenarios. If you had used a lookup table for every board state, you would be screwed in both of the respects you mentioned above.
Regarding your second point, problems related to storing the tree itself and efficiently querying has other solutions that essentially involve a clever pruning method (minimax, and its variants) and/or an efficient cache, like zobrist hashing (see chessprogramming wiki for more techniques).
Now to the core of the question, how do you evaluate and setup a loss function?
There are several ways to do this though: roughly you can think about this as falling into the following buckets
* (i) a reinforcement learning approach, which Alphago uses (<https://www.chess.com/article/view/whats-inside-alphazeros-brain>) . This is interesting because RL is quite a different paradigm of learning than supervised learning. I'd recommend reading through some of sutton/barto (<http://www.incompleteideas.net/book/the-book-2nd.html>) to see how questions of policy learning and move recommendations can be derived in general; this is quite a readable book. Then read about alphazero and alphago <https://www.science.org/doi/10.1126/science.aar6404>.
* (ii) supervised learning.
When I initially tried this out, I trained a neural net to output a value at least as good as the moves outputted by a grandmaster (see [Should I use neural networks or genetic algorithms to solve Gomoku?](https://ai.stackexchange.com/questions/4725/should-i-use-neural-networks-or-genetic-algorithms-to-solve-gomoku/4727#4727) for more details). Another way can be based on whether the player won or lost at the end, assuming that in general their moves were better when they won than when they lost (a little strong of an assumption to me personally). See [https://www.cs.tau.ac.il/~wolf/papers/deepchess.pdf](https://www.cs.tau.ac.il/%7Ewolf/papers/deepchess.pdf) - They randomly sample moves from a game white won and lost, and learns to choose between the moves with a neural network.
To be explicit, once you setup the appropriate loss or policy optimization problem, you can perform gradient descent as usual. what you are optimizing would of course differ based on if you choose the RL route or the supervised learning route.
Since this is in the first part of your question, I will additionally say that NNs help in chess because of this complex problem of how to evaluate a position based on some unique progression and state of the game, and because of the combinatorial explosion (which is somewhat rectified by a move recommender in AlphaGo; intuitively, better position evaluation may allow for more robust move selection given fewer plies). Without a NN, a more static evaluator or less complex function approximator may be too simplistic for robustly learning "good moves" in chess.
It is interesting to see that some methods do not require you to actually encode which moves are legal and which are not.
Upvotes: 4 <issue_comment>username_2: Minimax and related algorithms *are* used to play chess. That is how chess programs have worked for many years (with some additions such as standard opening playbooks). They do not need to process the whole game tree. There are a few different techniques used to reduce the effective search space. One of the most the most impactful is to truncate search after a certain depth if it has not resolved the game, and to *score* the position reached using some form of heuristic function that takes the board state as input and outputs a number. The impact from this has two parts:
* The search space is reduced by a large factor.
* The strength of the resulting automated player is limited by how effective the heuristic is at reflecting the strength of the game position. A perfect heuristic could allow a very limited search (just one ply look ahead, or up to some point within the game) to play perfectly. However, it is a practical impossibility for many games - if a game is "unsolved" that means by extension that there is no known perfect heuristic.
There are other ways to reduce search space or search it more efficiently, but this search depth plus heuristic pattern is very common in turn-based games. This is how IBM's Deep Blue worked for example, and it did not use neural networks. The heuristic function in Deep Blue was constructed by experts extending already well-studied basics of chess playing, such as assigning simple points value to each side's remaining pieces (e.g. a rook is worth more than a bishop).
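The depth-plus-heuristic pattern described above can be sketched in a few lines. This is only a minimal illustration: the toy game tree and score table below are made up, standing in for a real engine's move generator and hand-crafted evaluation function.

```python
# Depth-limited minimax: search the game tree to a fixed depth, then score
# the positions reached with a heuristic instead of playing to the end.
def minimax(state, depth, maximizing, children, heuristic):
    succ = children(state)
    if depth == 0 or not succ:
        return heuristic(state)
    values = [minimax(s, depth - 1, not maximizing, children, heuristic)
              for s in succ]
    return max(values) if maximizing else min(values)

# Hypothetical toy game tree and leaf scores (not a real chess position).
tree = {"root": ["a", "b"], "a": ["a1", "a2"], "b": ["b1", "b2"]}
scores = {"a1": 3, "a2": 5, "b1": -1, "b2": 9}

value = minimax("root", 2, True,
                children=lambda s: tree.get(s, []),
                heuristic=lambda s: scores[s])
print(value)  # 3: the maximizer picks max(min(3, 5), min(-1, 9))
```

In a real engine, `children` would generate legal moves, `heuristic` would be the hand-crafted (or learned) position score, and alpha-beta pruning would be layered on top.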
>
> **What exactly would the Neural Network use as a Loss Function?**
>
>
>
There are two main ways that a neural network can be applied in a game like chess:
### 1. To predict the chances of winning from a given position
(or equivalently score a heuristic for the position)
In reinforcement learning terms, this would be the value function for that state (game position).
This can be used alongside tree searches, the main difference being that instead of an expert *designing* the heuristic, the neural network will have learned one from observing games. Those games could be a database of games played by humans, or generated by a learning agent playing against itself.
The loss function that drives learning here would typically be a cross-entropy loss comparing the eventual win/loss with the predicted probability. This is noisy of course, because the same position may lead to win, draw or loss depending on relative strengths of the players. However, as the agent player becomes stronger, the prediction will start to match it, and more closely approximate the probabilities from optimal play.
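As a minimal, framework-free sketch of that cross-entropy loss for a single position: `p` is the network's predicted win probability and `z` the eventual outcome (1 = win, 0 = loss).

```python
import math

# Cross-entropy between the predicted win probability and the observed outcome.
def value_loss(p, z):
    return -(z * math.log(p) + (1 - z) * math.log(1 - p))

# A confident, correct prediction is cheap; a confident, wrong one is expensive.
print(round(value_loss(0.9, 1), 4))  # 0.1054
print(round(value_loss(0.9, 0), 4))  # 2.3026
```

Averaged over many noisy game outcomes, minimizing this pushes `p` toward the true win probability of the position.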
### 2. To predict the best move from a given position
This is also called a policy function.
This is a multi-class prediction problem, that would use the normal multiclass log loss function. Similar to learning the value function, it can be learned from observation assuming all action choices that lead to a win are the best choices. Again this is noisy, but should converge on an approximation of optimal play provided the input is from master-level play database, or is from a working self-play system.
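A minimal sketch of this multiclass log loss for one position; the logits are made-up network scores over three hypothetical candidate moves.

```python
import math

# Softmax over move logits, then the negative log-probability of the move
# that was actually played (the "label" for this position).
def policy_loss(logits, played_move):
    exps = [math.exp(l) for l in logits]
    total = sum(exps)
    return -math.log(exps[played_move] / total)

logits = [2.0, 0.5, 0.1]                 # network scores for three moves
print(round(policy_loss(logits, 0), 4))  # 0.3168: low, the played move was favoured
```

If the played move had a low logit, the loss would be large, pushing the network to raise that move's probability in similar positions.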
>
> I don't see how the Loss Function in this case is continuous, and thus how could Gradient Descent be used on such a Loss Function?
>
>
>
Once you move away from trying to have the neural networks process a game tree directly, the problem becomes clearer. You need to run the game tree processing using a suitable tree-search algorithm (minimax, negamax, MCTS, etc.), and have neural networks for the value and/or policy function estimations at each node, which they can learn from any relevant data.
If you want to create self-learning agents, as opposed to training them from a pre-existing database of games, then you will need to study systems that can generate suitable data from experiencing game play directly. This adds another outer layer of logic alongside the tree search algorithm that helps decide what data to collect and how to score it so it can be used as training data for the neural networks.
There is quite a lot you could study about these kinds of learning systems. I recommend for the basics that you look into reinforcement learning (RL), and the go-to book for many people there is Sutton & Barto's [Reinforcement Learning: An Introduction](http://incompleteideas.net/book/the-book.html), which lays out the theory and terminology used in RL and a variety of basic learning algorithms. It would be a diversion from building a game-playing program to study it all, and it does not delve into all elements of game-playing agents in great detail, but it is IMO a very good grounding into the *learning* parts of any such agent.
In addition, one state-of-the-art self-play learning algorithm is DeepMind's AlphaZero. It might be a bit of a steep learning curve to dive straight into how it works, but the algorithm is at its core surprisingly simple and elegant. It may be possible, given some basic knowledge of terms from RL (so you know what a policy is), to get straight into it based on some tutorials like [this one that learns Othello](https://web.stanford.edu/%7Esurag/posts/alphazero.html).
Upvotes: 4 |
2022/03/19 | 1,271 | 4,923 | <issue_start>username_0: Poor reasoning, and ignorance in general, is the source of a lot of [suffering and evil](https://philosophy.stackexchange.com/questions/64954/is-the-moral-compass-of-an-individual-inherent-or-is-this-aspect-of-their-nature/89759#89759). Covertly erroneous logic is often used in manipulation. And much of this broken thought is being used directly in the training of AI.
There has been talk of, and development in, *fact-checking*, such as for language transformers. But what about *reasoning*?
The function in mind is specifically being able to process a potentially large text body, analysing all logic and implied relations for fallacy and other misleading reasoning. Perhaps shades of colour could indicate level of error. A bonus would be output listing and explaining the mistakes, maybe like compiler errors -- "fallacy x between premise y and conclusion z".
**Are any AI systems available, or in development, for finding and analysing *fallacious inference* in natural language text?**<issue_comment>username_1: There's some impressive work going on at IBM under the name of "[Project Debater](https://eorder.sheridan.com/3_0/app/orders/11030/files/assets/common/downloads/Slonim.pdf)", which already produced some impressive results, that you can check in this [video](https://www.youtube.com/watch?v=3_yy0dnIc58).
The project is about creating a system capable of debating with humans on arbitrary topics by scraping the internet to get ground knowledge (mainly scientific papers, not random gibberish, of course). The system is pretty massive: it includes speech synthesis and many other components. What's interesting for you are the parts highlighted in red, especially those on the left that contribute to generating the counter-arguments to the human opponent (rebuttal construction).
I won't go into the details, because all three of those topics constitute entire branches of Natural Language Processing. But you can easily see how, combined, they try to accomplish a rough version of deductive reasoning:
* claim detection: identifying the main argument of a sentence/document
* evidence detection: identifying what facts are presented along with the argument
* stance detection: identifying the positive/neutral/negative stance of each piece of evidence (and possibly of opinion-based facts presented) toward the claim that has been made.
Assuming that we managed to extract all this information from a text (which, unfortunately, is not easy at all), we can then build reasoning out of it. For example:
* Public schools are worse than private ones for children education -> detected negative claim
* Research shows that there's an equal number of graduating students coming from public and private schools. -> detected evidence
* The previous evidence has a positive stance toward the initial claim -> stance detection
Putting everything together: since we have a negative claim followed by positive evidence toward the same fact, we have just found out that the document contains a contradiction.
[](https://i.stack.imgur.com/XCD3o.png)
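A toy sketch of that final combination step, assuming the claim and stance labels already come from upstream claim/evidence/stance detectors (the hard part, which is not implemented here):

```python
# Flag a contradiction when the document's claim and one of its pieces of
# evidence take opposite stances toward the same fact. The labels are
# hypothetical outputs of upstream NLP components.
def find_contradictions(claim_stance, evidence_stances):
    return [i for i, s in enumerate(evidence_stances)
            if {claim_stance, s} == {"positive", "negative"}]

claim = "negative"                  # "public schools are worse ..."
evidence = ["positive", "neutral"]  # "... equal number of graduates ..."
print(find_contradictions(claim, evidence))  # [0]: first evidence contradicts
```

Real systems must also decide whether the claim and evidence actually refer to the same fact, which is far harder than this label comparison.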
Upvotes: 2 <issue_comment>username_2: Yes. Current large language models, from GPT-4 on, are able to find errors in reasoning, often zero-shot, with little prompt engineering needed.
Giving a precise probability for something being wrong is much harder; even if the model knows it, it is hard to query. Rough estimates should be easy.
---
Some examples using GPT-4:
Prompt:
>
> Alice had blue shoes. Therefore, Bob had to eat the cake.
>
>
> Analyze the reasoning.
>
>
>
Result:
>
> The reasoning presented in the statement is illogical and does not
> follow a coherent thought process. There is no apparent connection or
> correlation between Alice having blue shoes and Bob having to eat the
> cake. The conclusion does not follow from the premise, and the
> argument lacks a proper cause-and-effect relationship.
>
>
>
Starting a new chat:
Prompt:
>
> Alice had birthday cake. Therefore, Alice had to eat the cake.
>
>
>
Result:
>
> It is not necessarily true that Alice had to eat the cake just because she had a birthday cake. While it is common for people to eat cake on their birthdays, Alice could have chosen not to eat it, shared it with others, or saved it for later.
>
>
>
Starting a new chat:
Prompt:
>
> Alice had blue shoes. Therefore, Alice had to shoe the blues.
>
>
>
Result:
>
> This statement is a play on words. "Alice had blue shoes" means that Alice owned a pair of blue-colored shoes. "Shoe the blues" is a pun on the phrase "sing the blues," which means to express one's troubles or sadness, often through music. In this case, the pun implies that because Alice had blue shoes, she had to express her sadness or troubles, possibly through wearing her blue shoes.
>
>
>
These examples are not cherry picked.
Upvotes: 1 |
2022/03/22 | 859 | 3,633 | <issue_start>username_0: I'm training an autoencoder, that does not downsample images but processes them in the same size. For example, a 256x256 input will always be processed at 256x256 resolution, only the channels increase deeper in the network. This design is due to the next stage for which the model will be used. L2 regularization on activations is added since this is essentially a sparse autoencoder.
However, I get some strange artifacts in the corners when the model converges:
[](https://i.stack.imgur.com/G4RXw.png)
[](https://i.stack.imgur.com/5TQzR.png)
The left image is the input, the image on the right is the output. The artifacts can be seen on the top-left and top-right corners of the output image.
Can someone explain what causes these artifacts, and how they can be fixed?<issue_comment>username_1: Assuming that you're using convolutional layers, those artifacts may be related to the boundary conditions used. The convolution kernels have a spatial support of say 3x3 pixels, meaning that the response at a position is a function of the corresponding inputs in a 3x3 neighborhood at that position. If the position is adjacent to a boundary, say at the upper left pixel, then the south-west, west, north-west, north, and north-east pixels are outside the image region.
If you only use so-called "valid" pixels, that is, you only get responses for those positions where the entire convolution kernel is inside the image region, then you're ignoring some information at the "invalid" pixels. On the other hand, we are missing information at those invalid pixels, so an assumption must be made about those missing pixels.
For your application, in the internal layers, where the expected features are gradient-like / oscillating and localized, you may wish to try zero-padding, because that amounts to assuming there is no new feature information outside the region.
However, at the first layer itself, the convolution is with a natural image (as in your example), so you need to predict the pixel values at the border, so there you might want to experiment with "repeat" or "reflecting"/"mirror" boundary conditions.
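The boundary conditions mentioned above are easy to compare with NumPy's `np.pad`, shown here on a small array rather than inside a network:

```python
import numpy as np

# Three ways to invent the missing pixels outside a 4x4 image.
img = np.arange(16, dtype=float).reshape(4, 4)

zero_pad   = np.pad(img, 1, mode="constant")  # assume nothing outside the region
edge_pad   = np.pad(img, 1, mode="edge")      # "repeat" the border pixel
mirror_pad = np.pad(img, 1, mode="reflect")   # "mirror"/"reflecting" boundary

print(zero_pad[0])    # [0. 0. 0. 0. 0. 0.]
print(edge_pad[0])    # [0. 0. 1. 2. 3. 3.]  top row repeated outward
print(mirror_pad[0])  # [5. 4. 5. 6. 7. 6.]  second row reflected to the top
```

Convolving the padded image in "valid" mode then gives every output pixel a full neighborhood; deep-learning frameworks expose the same choices as padding modes on their convolution layers.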
Upvotes: 3 [selected_answer]<issue_comment>username_2: That's a classic checkerboard artifact. I would guess you're using a CNN as the encoder/decoder architecture, since it is well known that convolution layers, especially in the upsampling phase, cause these kinds of issues.
The use of L2 regularization is also a potential culprit. Pixelwise losses are known to lead to noisy images, since they don't give the model any reconstruction freedom.
To understand why, you can check this [blog post](https://distill.pub/2016/deconv-checkerboard/); in this case GIFs are more understandable than a thousand words. Long story short, it is mostly related to the emphasized overlapping regions generated by certain combinations of kernel/stride sizes.
Solutions:
* use some more complex but not pixelwise regularization like [perceptual loss](https://deepai.org/machine-learning-glossary-and-terms/perceptual-loss-function)
* remove ConvTranspose2D layers if you're using any, and move to other upsampling layers like Upsample (with bilinear or cubic interpolation)
* in my experience, a nice trick that works pretty well in production when the artifacts are only on the edges or corners is to pad the input image (with the reflect setting) and then crop the generated image back to the original size. This way the model generates artifacts only in unimportant areas of the image.
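The pad-then-crop trick can be sketched as a thin wrapper around the model. Here a stand-in identity function plays the model's role, since the point is only the shape bookkeeping:

```python
import numpy as np

# Reflect-pad the input before the model, then crop the output back to the
# original size, so any border artifacts land in the discarded margin.
def run_with_padding(model, img, margin=2):
    padded = np.pad(img, margin, mode="reflect")
    out = model(padded)
    return out[margin:-margin, margin:-margin]

img = np.random.rand(8, 8)
out = run_with_padding(lambda x: x, img)  # identity "model" for illustration
print(out.shape)  # (8, 8): same size as the input
```

With a real network, `model` would be its forward pass, and `margin` should be at least as wide as the artifact band you observe.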
Upvotes: 2 |
2022/03/29 | 695 | 3,130 | <issue_start>username_0: I am training YOLO and I have a small dataset. I decided to enlarge it through augmentation (rotation, shearing, etc.) to increase its size and improve accuracy.
Now I have seen augmented datasets labeled as with and without original images.
I was wondering if there is a difference between training with and without the original images, besides there simply being more images?
2022/03/30 | 2,178 | 8,483 | <issue_start>username_0: In Deep Reinforcement Learning (DRL) I am having difficulties in understanding the difference between a *Loss function*, a *reward/penalty* and the integration of both in DRL.
* Loss function: Given an output of the model and the ground truth, it measures "how good" the output has been, and the parameters of the model are adjusted using it. For instance, MAE. But if you were working on Computer Vision quality, you could use, for instance, SSIM.
* Reward: Given an agent (a model) and an environment, once the agent performs an action, the environment gives it a reward (or a penalty) to measure "how good" the action has been. Very simple rewards are +1 or -1.
So I see both the loss function and the reward/penalty are the quantitative way of measuring the output/action and making the model learn. Am I right?
Now, as for DRL. I see the typical diagram where the agent is modelled using a Neural Network (NN).
[](https://i.stack.imgur.com/CS6bI.png)
I am trying to interpret it, but I do not understand it.
Is the policy related to the loss function somehow? Where is the loss function? How does the reward feed the NN? Is it a parameter of the loss function?
Maybe my confusion has to do with identifying NN with supervised learning, or with not getting this with Q-learning or so... Can anyone help?<issue_comment>username_1: **1. Question: The difference between loss and reward/penalty**
>
> So I see both the loss function and the reward/penalty are the quantitative way of measuring the output/action and making the model learn. Am I right?
>
>
>
You are partially right:
You could interpret the negative reward as a *loss* that you want to minimize. But the model cannot learn from the reward directly. The reason for that is that you (usually) cannot formulate the reward as a differentiable function of the model parameters. Hence, you cannot compute a gradient purely from the reward. You need a second function - the loss - that is
1. Differentiable and
2. Depends on the parameters inside your model.
Only then can you compute a gradient w.r.t. the model parameters and make the model learn. This already answers another question:
>
> Is the policy related to the loss function somehow?
>
>
>
Yes! The policy HAS to be a part of the loss function, otherwise you wouldn't be able to do gradient descent to optimize the model.
A simple way to get from reward to the loss is implemented in the **REINFORCE** algorithm. To understand its loss term, you have to know that the model does not output definitive actions, but rather a probability distribution over all possible actions. Here is the loss function that REINFORCE uses to optimize the model:
$loss = -\log\_likelihood(action) \cdot return$
As you can see, the loss is the product of the negative log likelihood of the action and the $return$. The return correlates with the reward (Return is the discounted reward which distributes the reward received at timestep $t$ backwards to also reward actions that led to the reward). Intuitively this means that for a large reward, the model wants to be very certain about which action to take. So there you have it: The reward reflects how successful you are in the environment and the loss is the optimization objective maximizing the probability to take *good* actions.
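As a minimal numeric sketch of this loss for a single step (framework-free; in a real implementation the log-likelihood comes from the policy network, so the gradient flows back through its parameters):

```python
import math

# REINFORCE loss for one step: the negative log likelihood of the chosen
# action, weighted by the (discounted) return that followed it.
def reinforce_loss(action_prob, ret):
    return -math.log(action_prob) * ret

# For the same return, an unlikely action costs far more than a likely one,
# so minimizing the loss raises the probability of rewarded actions.
print(round(reinforce_loss(0.2, 1.0), 4))  # 1.6094
print(round(reinforce_loss(0.9, 1.0), 4))  # 0.1054
```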
**2. Question: The Schematic doesn't include the loss function**
The image you posted depicts how you collect the data which you use to optimize the model. You would run this loop of taking an action and receiving a reward until you have a full batch of data. On this batch you would then compute the loss and update the model. It's quite important for reinforcement learning to gather batches and not use single steps for optimization, because otherwise the resulting gradient would be very noisy and in most cases prevent proper optimization.
The main issue why your gradient would be noisy is the **credit assignment problem**:
Let's assume the environment is a grid world and the task is to walk forward for 7 steps. You start at $S$ and you will get a reward as soon as you reach location $G$:

The reward will show you that you have done *something* right, but multiple actions were responsible for getting the reward (not just the last step forward). However, you never know exactly which actions were the *right* actions and which actions were actually bad. You might have taken a very inefficient route to the goal.
The problem that you don't know which actions contributed to getting the reward is called the *credit assignment problem*. In fact, you can only use a heuristic to assign credit for the reward. This has to be compensated for by computing the model update on batches rather than single steps. One such heuristic is the [generalized advantage estimate](https://arxiv.org/abs/1506.02438). This is a function that you apply to your reward before plugging it into the loss function.
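The simplest such heuristic, plain discounted returns, can be sketched for the grid-world episode above, where the reward arrives only at the goal:

```python
# Discounted returns, computed backwards, so a reward earned at the goal is
# spread (with exponential decay) over the actions that led to it.
def discounted_returns(rewards, gamma=0.9):
    returns, g = [], 0.0
    for r in reversed(rewards):
        g = r + gamma * g
        returns.append(g)
    return returns[::-1]

# Episode of 4 steps: zero reward until the goal is reached on the last step.
print([round(g, 3) for g in discounted_returns([0, 0, 0, 1.0])])
# [0.729, 0.81, 0.9, 1.0]: earlier steps receive exponentially smaller credit
```

Generalized advantage estimation refines this by subtracting a learned value baseline, but the backwards pass over the rewards is the same idea.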
>
> Maybe my confusion has to do with identifying NN with supervised learning
>
>
>
One major difference between supervised learning and reinforcement learning lies in the *credit assignment problem*: in supervised learning you input a sample and you know what should come out. In RL you only have a rough estimate of how good you were, but you will (usually) never know what should come out of your model, because there are multiple possible ways to reach the goal.
Hope this helps.
Upvotes: 4 [selected_answer]<issue_comment>username_2: Ultimately, in RL, the policy is what you want to find. It's the solution to the Markov Decision Process (MDP). But you don't want to find any policy, but the *optimal policy*, i.e. the one that will make the agent collect the highest amount of reward in the long run (i.e. the highest *return*), if followed.
In deep RL, the policy might be represented by a neural network, which gets a state as input and produces a probability distribution over actions, which can be denoted by $\pi(a \mid s; \theta)$, where $\theta$ is the parameter vector. If you change $\theta$, you also change the output of the policy.
The reward function tells how good the actions that the agent takes are. So, it can be defined as the function $r : \mathcal{S} \times \mathcal{A} \rightarrow \mathcal{R}$, where $\mathcal{R} \subset \mathbb{R}$ is the reward space. So, $r(s, a)$ is the reward that the agent receives for taking action $a$ in state $s$. For example, in the game of chess, if you win the game, $r$ could return $1$, while, if you lose the game, it could return $-1$. The reward function is usually pre-defined, in the sense that it's part of the problem definition. You don't have to learn it, although you can learn reward functions with *inverse RL* techniques.
So, the reward function is not the objective/loss function, but the objective function is usually defined in terms of the reward function, in the same way that the mean squared error (MSE) in supervised learning is defined in terms of the correct labels or targets.
Now, what could be the loss function in RL? It depends on how you train the RL agent. For example, in [DQN](https://www.cs.toronto.edu/%7Evmnih/docs/dqn.pdf), the loss function is
$$
L\_{i}\left(\theta\_{i}\right)=\mathbb{E}\_{s, a \sim \rho(\cdot)}\left[\left(y\_{i}-Q\left(s, a ; \theta\_{i}\right)\right)^{2}\right]
$$
where
$$
y\_{i}=\mathbb{E}\_{s^{\prime} \sim \mathcal{E}}\left[\underbrace{r}\_{\text{Reward}}+\gamma \max \_{a^{\prime}} Q\left(s^{\prime}, a^{\prime} ; \theta\_{i-1}\right) \mid s, a\right]
$$
is the target value for $Q\left(s, a ; \theta\_{i}\right)$, which is what we're trying to learn and it's represented by the neural network with parameters $\theta\_{i}$. $Q$ is known as the *value function*, which is defined as the expected return, from which we can derive the policy. So, it gets as input a state and an action, not the reward. It produces an estimate of the expected return, which is defined as the sum of rewards.
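To make that target concrete, here is a framework-free numeric sketch for a single transition, with made-up Q-values standing in for the previous network's outputs:

```python
# DQN target for one transition (s, a, r, s'):
#   y = r + gamma * max_a' Q(s', a'; theta_{i-1})
def dqn_target(reward, next_q_values, gamma=0.99, terminal=False):
    if terminal:
        return reward  # no future value after a terminal state
    return reward + gamma * max(next_q_values)

y = dqn_target(reward=1.0, next_q_values=[0.2, 0.5, -0.1])
loss = (y - 1.3) ** 2  # squared error against the current estimate Q(s, a)
print(round(y, 3), round(loss, 4))
```

The network with parameters $\theta_i$ is then updated to move $Q(s, a; \theta_i)$ toward $y$, averaged over a batch of sampled transitions.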
This answer should answer all your questions and doubts. See also [this answer](https://ai.stackexchange.com/a/14168/2444) about the relationship between supervised learning and reinforcement learning.
Upvotes: 2 |
2022/04/04 | 523 | 2,272 | <issue_start>username_0: I have a task I want to solve with neural networks. The task is predicting a certain vector of dimension K. The problem is that the inputs to the networks are sparse.
The input is a vector of size N, where N is huge (> 1M) and for most cases, the vast majority of the entries (> 99%) in the input vector are 0. Very rare examples however do have almost full inputs. It's clear that the model is hard to train, as there might be huge weight imbalance within examples, while the target vectors need not have such an imbalance.
I do have a working model, but could you point me to some papers / relevant literature about training a network whose inputs are so sparse? Perhaps there are some techniques that could be useful (maybe some preprocessing steps on the inputs, or something along those lines).
Any hint is appreciated!<issue_comment>username_1: What I did when I encountered sparse inputs was to preprocess the data to only include vectors with more than n (threshold) nonzero values.
Not really an elegant solution (since you give up a huge part of your data), but you can try tweaking the threshold for best results.
Upvotes: 0 <issue_comment>username_2: The default solution for handling sparse or one-hot encoded input data is to use embeddings ([PyTorch Embeddings](https://pytorch.org/docs/stable/generated/torch.nn.Embedding.html)).
The binary, n-dimensional input data is then transformed into a low-dimensional (e.g., 4-128 dim) vector containing real values.
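The reason embeddings help here can be seen directly: for a binary sparse input, the full matrix product reduces to summing a few rows of the embedding table. A small NumPy sketch:

```python
import numpy as np

# For a binary sparse input x, the product x @ W equals summing only the rows
# of the table W at x's nonzero positions, so the huge product is never needed.
rng = np.random.default_rng(0)
n, d = 10_000, 8
W = rng.normal(size=(n, d))   # "embedding table": one row per input dimension

x = np.zeros(n)
active = [3, 17, 9_999]       # the <1% of inputs that are nonzero
x[active] = 1.0

dense = x @ W                     # full (wasteful) product
lookup = W[active].sum(axis=0)    # embedding lookup + sum: same result
print(np.allclose(dense, lookup))  # True
```

Layers like PyTorch's `nn.Embedding` (or its `EmbeddingBag` variant) implement exactly this lookup, and can also keep the gradient updates sparse.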
Upvotes: -1 <issue_comment>username_3: If you’re writing your own training code, then you can optimize for sparse inputs at two points in the algorithm:
1. Forward-propagation from the input layer to any other layer;
2. Computing the error gradients of weights between the input layer and any other layer.
Both of these steps involve sums over all input samples, which can be turned into sums over just the nonzero samples. Of course, this requires that you give the input matrices a sparse representation, like a list of (input #, sample #, value) fields.
Unfortunately, even if the input layer is sparse, the hidden/output layers generally are not sparse if you have fully-connected layers, so I think that’s all the computational speed up that’s possible.
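Step 1 can be sketched with an (index, value) representation of the input; the `weights` mapping below is a hypothetical stand-in that holds only the rows that will ever be touched:

```python
# Forward pass through a dense layer using only the nonzero input entries,
# stored as (index, value) pairs: the sum over all N inputs collapses to a
# sum over the few nonzero ones.
def sparse_forward(sparse_x, weights, bias):
    # weights[i][j] is the weight from input i to hidden unit j
    out = list(bias)
    for i, v in sparse_x:
        for j, w in enumerate(weights[i]):
            out[j] += v * w
    return out

weights = {0: [1.0, -1.0], 5: [0.5, 2.0]}  # only the rows we will touch
print(sparse_forward([(0, 2.0), (5, 4.0)], weights, bias=[0.0, 0.0]))
# [4.0, 6.0]
```

The backward pass for the input-to-hidden weights has the same structure: the gradient of weight (i, j) is nonzero only where input i is nonzero.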
Upvotes: 0 |
2022/04/05 | 1,162 | 4,043 | <issue_start>username_0: I'm facing the problem of overfitting and I can't deal with it - I tried experimenting with optimizer, but nothing seems appropriate. My model has extremely poor performance on testing data and the loss even rises. Is there anything I missed during the model architecture planning or training?
I'm working on GTSRB.
```
n_epochs = 100
n_train = 4000
n_test = 1000
def load_split(basePath, subset_type, n_samples):
csvPath = basePath + '\\' + subset_type
#intialize the list of data and labels
data = []
labels = []
# load the contents of the CSV file, remove the first line (since it contains the CSV header)
rows = open(csvPath).read().strip().split("\n")[1:n_samples + 1]
random.shuffle(rows)
#loop over the rows of csv file
for (i, row) in enumerate(rows):
#check to see if we should show a status update
if i > 0 and i % 1000 == 0:
print("[INFO] processed {} total images".format(i))
# split the row into components and then grab the class ID and image path
(label, imagePath) = row.strip().split(",")[-2:]
# derive the full path to the image file and load it
imagePath = os.path.sep.join([basePath, imagePath])
#print(imagePath)
image = io.imread(imagePath)
#resize the image to be 32x32 pixels, ignoring aspect ratio, and perform CLAHE.
image = transform.resize(image, (32, 32))
image = exposure.equalize_adapthist(image, clip_limit = 0.1)
#update the list of data and labels, respectively
data.append(image)
labels.append(int(label))
#convert the data and labels into numpy arrays
data = numpy.array(data)
labels = numpy.array(labels)
#return a tuple of the data and labels
return (data, labels)
print("[INFO] loading training and testing data...")
(train_images, train_labels) = load_split(DATASET_PATH, 'Train.csv', n_train)
(test_images, test_labels) = load_split(DATASET_PATH, 'Test.csv', n_test)
# Normalize pixel values within 0 and 1
train_images = train_images / 255
test_images = test_images / 255
train_labels = to_categorical(train_labels, 43)
test_labels = to_categorical(test_labels, 43)
model = keras.models.Sequential()
model.add(keras.layers.Flatten(input_shape=[32, 32, 3]))
model.add(keras.layers.BatchNormalization())
model.add(keras.layers.Dense(300, activation="relu"))
model.add(keras.layers.Dropout(0.4))
model.add(keras.layers.Dense(300, activation="relu"))
model.add(keras.layers.Dropout(0.4))
model.add(keras.layers.Dense(300, activation="relu"))
model.add(keras.layers.Dense(43, activation="softmax"))
model.summary()
#Compiling the model
model.compile(optimizer='sgd', loss='categorical_crossentropy', metrics=['accuracy'])
#Fitting the model
history= model.fit(train_images,train_labels, epochs=100, batch_size=4,validation_data=(test_images,test_labels))
```
[](https://i.stack.imgur.com/HSJc0.png)<issue_comment>username_1: Have you tried increasing the batch size? - try 64, 128 or 256
Using a very small batch size (4 in your case) may not be optimal for the model to converge to a good optimum.
Upvotes: 2 <issue_comment>username_2: I don't think this is an overfitting problem, it looks more like the model is not learning anything, but it would be useful to also see the performance on the training data to be sure. By not specifying the learning rate in the optimizer, you are using the default learning rate of 1e-2 which seems quite high. Try passing the optimizer with
```
keras.optimizers.SGD(learning_rate=1e-3)
```
instead for a learning rate of 1e-3 (or maybe try something even lower like 1e-4). And as username_1 has mentioned, the batch size is pretty small. If you want to use smaller batch sizes you generally need a smaller learning rate as well. You can also try using other optimizers like Adam, but I think the learning rate is mainly responsible for the bad performance.
Upvotes: 0 |
2022/04/11 | 825 | 3,771 | <issue_start>username_0: [Siamese Neural Networks](https://www.cs.cmu.edu/%7Ersalakhu/papers/oneshot1.pdf) are a type of neural network used to compare two instances and infer if they belong to the same object. They are composed by two parallel identical neural networks, whose output is a vector of features. This vector of features is then used to infer the similarity between the two instances by measuring a distance metric.
I was wondering, why not instead use a single neural network that receives as input the two objects being compared (e.g. two images) and directly outputs the similarity score? Wouldn't it be better to let the model compare features in the intermediate layers? Why are Siamese Neural Networks used for this task, and what are the benefits of a Siamese Neural Network over a single neural network that receives two instances (e.g. two images) as input and directly outputs the distance score?<issue_comment>username_1: I can come up with multiple advantages of Siamese networks over a single neural network for similarity measuring:
**Training Phase.** If we use a single network instead of a Siamese pair, it might require double the number of parameters (weights) for learning. Hence, training the network will likely converge more slowly and the network will be more sensitive to noise.
**Testing Phase.** Note that these similarity measurements are used in applications like face recognition. Now, suppose we are going to use the model in such a system. If we have implemented the model as a Siamese network, we only need to compute the model's output for the query input once, then use the cached results for the existing images in the database, and quickly compute the similarity measures. *On the other hand*, if we have implemented the measurement with a single neural network, we would have to compute, per query, the result for every combination of the input and the images in the database. Hence, in the latter case, we cannot cache the results for the existing data in the database. *Therefore, the single-neural-network implementation will have a much more intensive query time for a massive dataset than the Siamese implementation.*
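The caching argument can be sketched with a tiny stand-in encoder; a real system would cache the Siamese tower's embedding vectors in the same way:

```python
# Embed the database once, then each query needs only one forward pass plus
# cheap vector comparisons. The `encode` function is a made-up stand-in for
# the shared tower of a Siamese network.
def encode(x):
    return (x % 7, x % 3)

database = [10, 25, 42]
cache = {x: encode(x) for x in database}  # computed once, reused per query

def nearest(query):
    q = encode(query)
    return min(database,
               key=lambda x: sum((a - b) ** 2 for a, b in zip(cache[x], q)))

print(nearest(24))  # 10: closest cached embedding to encode(24)
```

With a single joint network, every query would instead require one full forward pass per database entry, since the pair must be processed together.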
Upvotes: 3 [selected_answer]<issue_comment>username_2: In addition to @Omg's answer note that Siamese networks are typically used in situations where applying `(A,B)` to the inputs must generate the same output as applying `(B,A)` (i.e. the similarity measure of `A` to `B` is the same as the similarity of `B` to `A`).
With a network with separate weights, this is not guaranteed. One way to get close to this is to not only use samples `(A,B)` as training input but also (equally often) `(B,A)`. Effectively this doubles the number of training steps (and therefore training time) and the network output is still not guaranteed to be symmetric.
By sharing weights, the symmetry of the response of the network (`(A,B)` gives the same output as `(B,A)`) is guaranteed by design.
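A toy illustration of that design guarantee, with a trivial stand-in for the shared-weight embedding (invented for illustration):

```python
def embed(x):
    # stands in for the shared-weight tower; BOTH inputs pass through this same function
    return [2 * xi + 1 for xi in x]

def similarity(a, b):
    fa, fb = embed(a), embed(b)
    return sum((p - q) ** 2 for p, q in zip(fa, fb))

# symmetric by construction: d(f(A), f(B)) == d(f(B), f(A))
similarity([1, 2], [3, 4]) == similarity([3, 4], [1, 2])  # True
```

With two separately parameterised towers, this equality would have to be learned from data rather than being structural.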
Upvotes: 2 <issue_comment>username_3: Assuming you want to train an object detector to detect people in an image.
The detector's neural networks learn to identify features that help them detect people. These features range from low-level aspects like edges to high-level characteristics such as the presence of feet, hands, and more.
If you use the features from this network for person re-identification, you won't obtain good results because the learned features do not focus on extracting differences between individuals.
However, if you employ a Siamese neural network with a loss function like the hinge or triplet loss, which involves comparing predictions between positive and negative matches, the network learns to extract features that highlight the differences between individuals, such as clothing color and more.
Upvotes: 0 |
2022/04/13 | 765 | 3,453 | <issue_start>username_0: I'm working on a project that would benefit from using A.I. or machine learning to analyse news feeds from a variety of websites and grade each article between 0 and 10. We would manually grade hundreds of articles to train the A.I. on what we like and what we don't like using the scoring range. The A.I. is expected to learn how we grade by identifying similarities between articles. When the A.I. starts to grade similarly to humans, we would then go more hands-off and leave this task to the A.I.
Not sure where to start with A.I. What tools and approaches would be the easiest to achieve this?
2022/04/14 | 756 | 3,282 | <issue_start>username_0: I am currently trying to practice reinforcement learning for an agent on a grid. The grid is deterministic. Since the grid is deterministic, to calculate the value for each grid square from the reward and next state, we could simply apply the following Bellman equation:
$$V(s)=\max\_a(R(s,a)+\gamma V(s'))$$
and not
$$V(s)=\max\_a(R(s,a)+\gamma\sum\_{s'}P(s,a,s')V(s'))$$
which would be used for non-deterministic grids?
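For concreteness, here is the kind of toy value-iteration sketch I have in mind for the deterministic case (a hypothetical 1-D grid, not my actual environment):

```python
# 1-D grid with states 0..4; reaching state 4 yields reward 1, everything else 0
gamma = 0.9
V = [0.0] * 5

def step(s, a):                      # deterministic transition: a is -1 or +1
    return max(0, min(4, s + a))

def reward(s, a):
    return 1.0 if step(s, a) == 4 else 0.0

# value iteration with the deterministic update V(s) = max_a [R(s,a) + gamma * V(s')]
for _ in range(100):
    V = [max(reward(s, a) + gamma * V[step(s, a)] for a in (-1, 1))
         for s in range(5)]
```

Since the transition is a function rather than a distribution, the sum over $s'$ collapses to the single successor state.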
2022/04/20 | 1,202 | 4,598 | <issue_start>username_0: I don't understand the difference between a policy and rewards. Sure, a policy tells us what to do, but isn't the output of a neural network trained on rewards basically a policy (i.e. choose the maximum reward)? What is different about the policy? An extra softmax applied?<issue_comment>username_1: Typically, the answer to a control problem in reinforcement learning (RL) is "What is the policy that maximises total reward?".
In really simple scenarios, that you might study to understand RL basics, this can be so obvious that you *could* just search ahead and discover the correct action without really using RL.
However, sticking to the formal definitions in RL allows you to tackle harder problems, where it is not obvious how to decide what to do, or maybe even how to access the best rewards.
In the formalism of RL:
* Reward is a real-valued signal received after each time step. For RL theory to be useful, the distribution of reward values should be the same if the start state and action are the same (you may also make the distribution depend on the next state, but that doesn't change the rule, it just means reward and next state will be correlated).
* A policy is a function for action choice, it takes the state as input, and returns a *distribution* over all possible actions $\pi(a | s): \mathcal{S} \times \mathcal{A} \rightarrow \mathbb{R} = \mathbb{Pr}\{A\_t=a|S\_t=s\}$. It may be deterministic and directly return a chosen action $\pi(s): \mathcal{S} \rightarrow \mathcal{A}$
These are clearly very different things. Often we are interested in finding a policy that maximises an aggregate of rewards, but that doesn't make the policy simply related to rewards. Think of a chess game - which piece should you move to win? There is no immediate reward, and the consequence for a good or bad move will happen much later. The relationship between the policy in an early game state, and the reward at the end (e.g. +1 for a win, -1 for a loss) is not at all clear.
Only in the very simplest of scenarios, where taking actions directly led to already-predictable rewards, could you use your idea of $\pi(s) = \text{argmax}\_a r(s, a)$ where $r(s,a)$ was a reward function. For a start, this will not help you look ahead more than one time step. What if the best reward now was followed by a really bad reward?
RL has the toolkit you can use to decide how to offset immediate vs future gains, and also how to learn what to do when you don't already have a simple function $r(s,a)$ which tells you what is going to happen in advance.
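A minimal example of why $\pi(s) = \text{argmax}\_a r(s, a)$ fails to look ahead (a toy two-step MDP with invented numbers):

```python
# immediate rewards and deterministic transitions for a toy two-step problem
R = {("start", "a"): 2, ("start", "b"): 0, ("dead", "go"): 0, ("mid", "go"): 10}
T = {("start", "a"): "dead", ("start", "b"): "mid"}

# greedy on immediate reward picks 'a' (2 > 0) ...
greedy = max(("a", "b"), key=lambda act: R[("start", act)])

# ... but a one-step lookahead on total reward picks 'b' (0 + 10 > 2 + 0)
lookahead = max(("a", "b"),
                key=lambda act: R[("start", act)] + R[(T[("start", act)], "go")])
```

The greedy rule and the lookahead rule disagree, which is exactly the gap that value functions and RL algorithms exist to close.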
Upvotes: 0 <issue_comment>username_2: A (stochastic) **policy** is a set of conditional probability distributions, $$\pi(a \mid S=s), \forall s \in \mathcal{S}.$$ If the policy is deterministic, then it is a function $$\pi: \mathcal{S} \rightarrow \mathcal{A},$$ so $\pi(s) = a$ is the action that the policy $\pi$ returns in the state $s$ - it always produces this same action for a given state, unless it's a [non-stationary policy](https://ai.stackexchange.com/q/13088/2444). A policy is also called a *strategy* (in game theory). To be usable, a stochastic policy must be turned into a [decision rule](https://en.wikipedia.org/wiki/Decision_rule), i.e. you need to sample from it. A stochastic policy generalises a deterministic one.
The **rewards** are the outputs of the **reward function**. A reward function can be deterministic (which is often the case) or stochastic. It can be defined as $$R : \mathcal{S} \times \mathcal{A} \rightarrow \mathcal{R},$$ where $\mathcal{R} \in \mathbb{R}$ is the reward space. If it's stochastic, then
\begin{align}
R(s, a)
&= \mathbb{E}\left[ R\_t \mid S\_t = s, A\_t = a\right] \\
&=\sum\_r r p(r \mid S\_t = s, A\_t=a),
\end{align}
where $R\_t$ is the random variable that represents the reward at time step $t$ and assuming a finite MDP. Stochastic reward functions generalise deterministic ones.
So, policies are probability distributions or functions, while rewards are numbers. Hence, **there's a difference between their definitions, even though they are related**.
How are they related? In different ways. The most important one is that an optimal policy for a given MDP is the one that, if followed, maximises the expected return, which is a function of the reward (typically, a discounted sum of rewards). The definition of an optimal policy makes more sense if you also know the definition of a **value function** - I recommend you take [Sutton & Barto's book](http://incompleteideas.net/book/RLbook2020.pdf) and read the relevant sections.
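To make the two definitions concrete, here is a toy stochastic policy and a toy stochastic reward function (the numbers and state/action names are invented for illustration):

```python
import random

# stochastic policy: one conditional distribution pi(a | s) per state
pi = {"s0": {"left": 0.7, "right": 0.3}}

def sample_action(state, rng=random):
    # turn the policy into a decision rule by sampling from it
    actions, probs = zip(*pi[state].items())
    return rng.choices(actions, weights=probs)[0]

# stochastic reward function: a distribution p(r | s, a), so R(s, a) is an expectation
p_r = {("s0", "left"): {0.0: 0.5, 1.0: 0.5}}

def expected_reward(state, action):
    return sum(r * p for r, p in p_r[(state, action)].items())

expected_reward("s0", "left")  # 0.5
```

The policy is the object you sample actions from; the reward is the number (or expectation of numbers) the environment hands back.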
Upvotes: 2 [selected_answer] |
2022/04/21 | 899 | 3,736 | <issue_start>username_0: I'm facing a problem. I'm working on a mixed-data model with an NN (MLP & word embedding). My results are not very good. And I observed that the class proportions of my data are correlated with my classification results. Let me explain:
[](https://i.stack.imgur.com/PwzMR.png)
As you can see, I have more LIVB than other classes. The problem is that the predictions of my model are **only** LIVB.
And I don't understand why. Is it high variance? Is it high bias? What methods should I use to diagnose the error in this classification problem? Should I have more features? Is my model wrong? Has someone had this problem before?
Thanks for your help !<issue_comment>username_1: I see two main issues here:
* you have very little data
* you're using a generic MLP
What you observe is just overfitting. Your multi-layer perceptron is learning to predict the majority class because that's the class that leads to the lowest possible error when chosen all the time.
For sure more features will help, along with a different architecture (a CNN would be a start). But considering how few training instances you have, I wouldn't expect great results anyway.
Probably, to maximize your chances of training something that will also learn to predict the minority classes with a support of only 5, you should consider moving to fine-tuning pretrained models like [BERT](https://towardsdatascience.com/how-to-fine-tune-bert-transformer-with-spacy-3-6a90bfe57647).
Upvotes: 1 <issue_comment>username_2: In my understanding, there could be multiple problems here:
1. Try to check the labels on the training data manually -- you might be surprised to see that the training data could be mislabeled (i.e. all your training examples could be similar to LIVB). Also do the same thing for your test data.
2. You could try varying the parameters of your MLP.
3. You could try context based embedding like a sentence encoder with a CNN as pointed out by @edoardo-guerriero.
4. Also, instead of an MLP, try a simpler algorithm like multi-class logistic regression. If the logistic regression works well with your current word embedding, then you know the issue is with the MLP (neural network) architecture.
Upvotes: 0 <issue_comment>username_3: Despite how software might work, neural networks do not return labels. Neural networks return probabilities of class membership (typically fairly poor ones, which is a topic for a separate question). If you make probability predictions instead of having your software tell you the most probable category, I expect you to find that you have more diversity in those predictions than “LIVB every time”.
What’s happening is that LIVB is the most likely category going into the problem, and you need considerable evidence to shake your prior belief that LIVB is the most likely outcome. You are unable to produce enough evidence for another category to shake the mode away from giving LIVB the highest probability. Thus, this seems to be a matter of bias: your model lacks the ability to strongly discriminate between categories and tends to fall back on its prior probability that LIVB is most likely.
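A toy illustration of the difference between probabilities and hard labels (the class names besides LIVB and all the numbers are made up):

```python
# per-sample class probabilities a network might output (hypothetical values)
probs = [
    {"LIVB": 0.58, "PLNT": 0.22, "ANIM": 0.20},
    {"LIVB": 0.41, "PLNT": 0.39, "ANIM": 0.20},
]

# taking the argmax collapses both samples to the same label ...
labels = [max(p, key=p.get) for p in probs]  # ['LIVB', 'LIVB']

# ... even though the probabilities say the second prediction is far less certain
```

Inspecting the probabilities, rather than the argmax labels, will tell you whether the model is truly ignoring the other classes or just narrowly preferring LIVB.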
Annoyingly, it might be that this is just how your problem works: LIVB might always be the most likely outcome.
Finally, I agree with other comments that there are too few observations for a neural network to have much of a shot of being useful. Neural networks are a great way to get a lot of discriminative ability in order to get the model to scream, “This is not LIVB!” However, you probably lack the data needed for a large network not to overfit.
Upvotes: 1 |
2022/04/22 | 808 | 3,356 | <issue_start>username_0: I am trying to understand the concept of evaluating the machine translation evaluation scores.
I understand what the BLEU score is trying to achieve. It looks at different n-grams, as in BLEU-1, BLEU-2, BLEU-3, BLEU-4, and tries to match them against the human-written translation.
However, I can't really understand what the METEOR score does when evaluating MT quality. I am trying to understand the rationale intuitively. I have already looked into different blog posts but can't really figure it out.
How are these two evaluation metrics different and how are they relevant?<issue_comment>username_1: Both BLEU and METEOR are meant to evaluate the overall translation quality. METEOR shows a slightly better correlation with human judgment than BLEU, however, it relies on n-gram alignment between the translation hypothesis and reference that needs language-specific paraphrase tables. The quality of the table heavily influences the evaluation quality. I think BLEU was preferred because of its simplicity and language independence. Nowadays, there are many evaluation metrics that correlate much better with human judgment (such as [BLEURT](https://github.com/google-research/bleurt), [BertScore](https://github.com/Tiiiger/bert_score), or [COMET](https://github.com/Unbabel/COMET)) and they start to be preferred in the MT community (cf. papers: [Tangled up in BLEU: Reevaluating the Evaluation of Automatic Machine ranslation Evaluation Metrics](https://aclanthology.org/2020.acl-main.448/), [To Ship or Not to Ship: An Extensive Evaluation of Automatic Metrics for Machine Translation](https://aclanthology.org/2021.wmt-1.57)).
BLEU score is an average of n-gram precisions, weighted by the so-called brevity penalty that penalizes short high-precision, but low-recall hypotheses.
METEOR computes both precision and recall. Here, the precision corresponds to the proportion of words in the hypothesis that are correct. The recall is the ratio of how many words from the reference appear in the hypothesis. To do so, we need to somehow decide which words are correct. This is done by n-gram alignment between the hypothesis and the reference. If there is no exact match between the two, a table of paraphrases is used. To penalize sentences that contain the correct words but in the wrong order, there is a reordering penalty term that penalizes such sentences.
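A rough sketch of the unigram precision/recall at the core of METEOR (ignoring stemming, paraphrase tables, and the reordering penalty):

```python
def unigram_pr(hypothesis, reference):
    hyp, ref = hypothesis.split(), reference.split()
    # clipped unigram matches between hypothesis and reference
    matched = sum(min(hyp.count(w), ref.count(w)) for w in set(hyp))
    precision = matched / len(hyp)   # fraction of hypothesis words that are correct
    recall = matched / len(ref)      # fraction of reference words that were produced
    return precision, recall

unigram_pr("the cat sat", "the cat sat on the mat")  # (1.0, 0.5)
```

BLEU would score this short hypothesis via its n-gram precision and a brevity penalty, whereas METEOR combines the precision and recall above directly.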
Both metrics measure the same aspects of the translation, but slightly differently:
1. Precision
* BLEU: directly via n-gram precision
* METEOR: directly in the alignment graph
2. Recall
* BLEU: indirectly via including the brevity penalty
* METEOR: directly in the alignment graph
3. Fluency
* BLEU: directly by considering longer n-grams
* METEOR: indirectly from the properties of the alignment graph
Upvotes: 1 <issue_comment>username_2: **BLEU** is a widely-used metric for evaluating the quality of a machine translation output by measuring its correlation to reference translations. It is based on n-gram precision.
<https://machinetranslate.org/bleu>
**METEOR**, on the other hand, is a more advanced metric that also compares a machine translation output to reference translations, but it also takes into account additional information such as synonyms, word forms, and sentence structure.
<https://machinetranslate.org/meteor>
Upvotes: 0 |
2022/04/23 | 780 | 2,947 | <issue_start>username_0: Say I want to predict the price of a gemstone based on its colour.
I have two options:
* averaging over its colour on an RGB scale, or
* using its textual description.
If I was to choose the latter, how would I go about feeding this to my neural network?
Prior knowledge: usually a gemstone is defined by its colour and the "degree" of this colour, for example "fancy bright green".
Here I could obviously let every combination of colour and degree be its own value in the one-hot vector. To implement this, I could use some sort of hash function. If this makes sense, how specifically would I make a hash function that could do this?
If this solution doesn't make sense, what would you suggest?
Tough example of data:
"Natural Fancy Deep Yellowish Brown"<issue_comment>username_1: If the order of words doesn't matter in the description of the stone, you could use a [bag of words model](https://en.wikipedia.org/wiki/Bag-of-words_model). You don't need the [hashing trick](https://en.wikipedia.org/wiki/Feature_hashing) because there's likely only a small fixed set of words used to describe stones. Let's call this set of words the "vocabulary", and denote $N$ its size.
You assign each word an index beforehand, and then, for each stone description, you populate a vector $V \in \mathbb{R}^N$, with $V\_i =$ number of times the ith word in the vocabulary appears in the stone description. Then, $V$ is the input to your neural network.
For example, if $N = 10$ and "natural" is the word with index 3, "fancy" with index 0, "deep" with index 8, "yellowish" with index 2 and "brown" with index 6, the description "Natural Fancy Deep Yellowish Brown" becomes $V=[1, 0, 1, 1, 0, 0, 1, 0, 1, 0]$.
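A minimal sketch of that construction, reusing the vocabulary indices from the example above:

```python
N = 10
vocab = {"fancy": 0, "yellowish": 2, "natural": 3, "brown": 6, "deep": 8}

def bag_of_words(description):
    v = [0] * N
    for word in description.lower().split():
        if word in vocab:
            v[vocab[word]] += 1  # count occurrences of each vocabulary word
    return v

bag_of_words("Natural Fancy Deep Yellowish Brown")
# -> [1, 0, 1, 1, 0, 0, 1, 0, 1, 0]
```

The resulting fixed-length vector can be fed directly into a standard feed-forward network.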
If the description of the stone contains an arbitrary number of words and the order matters, I would do the following. With $N$ the size of your vocabulary, for each word in the description do $\text{input} = \text{hash(word)} \mod N$. For example, in Python
```
import hashlib

input = []
for word in description:
    hashed_word = hashlib.sha1(word.encode('utf-8'))
    # map the (large) hash value to an index in [0, N), as described above
    input.append(int(hashed_word.hexdigest(), 16) % N)
```
Then, I would use $\text{input}$ as the input to an RNN. This way, you can handle descriptions with arbitrary lengths.
Upvotes: 2 <issue_comment>username_2: Based on my own experience trying to predict something that varies and is not exact (e.g. water that is hot, cold, tepid, not quite so hot, etc.), perhaps fuzzy logic could be applied. So you have something that is slightly more green or slightly less green, for example, based on the RGB values.
Researching this myself via Google leads me mainly to papers on stock-exchange prediction, but you can combine aspects of fuzzy logic with a neural network here, which might make the varying colours a lot easier to handle, rather than using fuzzy logic for the entire project, just for the colours.
Upvotes: 0 |
2022/04/24 | 1,521 | 5,530 | <issue_start>username_0: I am confused on a conceptual scale how I would be able to model a multi-agent reinforcement learning problem when each agent performing an action would take different durations to complete the action. This means that a certain action is performed over multiple steps and the learning sample would have that action attached to it (with different observations and rewards, possibly).
An example of this situation would be where vehicles on a 2-lane road can perform lane changing actions, but each of these actions may take anywhere between 2 - 5 seconds (or learning steps) to complete.
So, what action would need to be passed at every step? I am using RLlib framework. Is it even possible to do this? Or do all these agents have to have the same action duration / step length for any RL algorithm to work?
I would greatly appreciate if anyone could point me in the right direction on bypassing this mental block, it is driving me crazy.<issue_comment>username_1: You could take a look into [options](https://ai.stackexchange.com/a/13255/2444), (discrete-time) semi-MDPs, and multi-agent RL.
An option is a generalisation of an action. Mathematically, it's defined as a tuple $\langle\mathcal{I}, \pi, \beta\rangle$ composed of
* an initiation set $\mathcal{I} \subseteq \mathcal{S}$,
* a policy $\pi: \mathcal{S} \times \mathcal{A} \rightarrow [0, 1]$, which gives the probability of taking a certain action in a certain state, and
* a termination condition $\beta: \mathcal{S}^+ \rightarrow [0, 1]$, which gives the probability of terminating in a certain state.
The policy is the function that you use to behave from a state in the initiation set until a termination condition is met.
A semi-MDP is a special MDP where actions can take a variable amount of time. So, a set of options induces a semi-MDP.
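As a sketch, an option can be represented directly as the tuple above, and executing it naturally takes a variable number of primitive steps (the environment, state names, and termination rule below are invented for illustration):

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Option:
    initiation_set: set                     # I: states where the option may start
    policy: Callable[[str], str]            # pi: intra-option policy, state -> action
    termination: Callable[[str], float]     # beta: state -> probability of terminating

def env_step(state, action):                # toy deterministic transitions
    return {"s0": {"right": "s1"}, "s1": {"right": "s2"}}[state][action]

def run_option(option, state):
    assert state in option.initiation_set
    while option.termination(state) < 1.0:  # the option runs for a variable duration
        state = env_step(state, option.policy(state))
    return state

go_right = Option({"s0"}, lambda s: "right", lambda s: 1.0 if s == "s2" else 0.0)
run_option(go_right, "s0")  # 's2' (took two primitive steps)
```

A lane-change manoeuvre lasting 2-5 simulation steps maps naturally onto one such option, with beta firing once the manoeuvre completes.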
The framework of options was initially introduced in a single-agent setting [here](https://www.sciencedirect.com/science/article/pii/S0004370299000521). However, I found a few papers that extend it to the multi-agent setting
* [Hierarchical Multi-Agent Reinforcement Learning](https://dl.acm.org/doi/abs/10.1145/375735.376302) (2001, AAMAS)
* [Using Multi-Agent Options to Reduce Learning Time in Reinforcement Learning](https://www.aaai.org/ocs/index.php/FLAIRS/FLAIRS15/paper/viewFile/10420/10357) (2015, AAAI)
* [Multi-agent Hierarchical Reinforcement Learning with Dynamic Termination](https://arxiv.org/pdf/1910.09508.pdf) (2019, pre-print)
I've only quickly skimmed through them, so I don't know if the approaches proposed in these papers are suitable for your case (and this also depends on whether your agents are cooperative, adversarial, etc.), and I also don't know if they have any free/available implementation on the web, but I think the information in this answer should put you in the right direction.
Upvotes: 2 <issue_comment>username_2: " ... how I would be able to model a multi-agent reinforcement learning problem when each agent performing an action would take different durations to complete the action.
... where vehicles on a 2-lane road can perform lane changing actions":
* "Reinforcement Learning Baselines (from OpenAI) applied to Autonomous Driving
This Research is aiming to address RL approaches to solve Urban driving scenarios such as (but not limited ): Roundabout, Merging, Urban/Street navigation, **Two way navigation** (pass over the opposite direction lane), self parking, etc...".
* GitHub - [Reinforcement Learning based Autonomous Driving (AD)](https://github.com/pinakigupta/BehaviorRL)
* "Simulation of Urban MObility" (SUMO) is an open source, highly portable, microscopic traffic simulation package designed to handle large road networks and different modes of transport.
+ Paper - [Using Deep Reinforcement Learning to Coordinate Multi-Modal Journey Planning with Limited Transportation Capacity](https://www.researchgate.net/publication/354696888_Using_Deep_Reinforcement_Learning_to_Coordinate_Multi-Modal_Journey_Planning_with_Limited_Transportation_Capacity) - September 2021 - Conference: SUMO User Conference 2021, At: Virtual - Berlin, Germany
+ GitHub - [Simulation of Urban MObility](https://github.com/eclipse/sumo/tree/v1_5_0)
+ GitHub - [RLlib+SUMO Utils](https://github.com/lcodeca/rllibsumoutils)
Python3 library able to connect the RLlib framework with the SUMO simulator.
* [A Methodology to Build Decision Analysis Tools Applied to Distributed Reinforcement Learning](https://hal.archives-ouvertes.fr/hal-03613558) - <NAME>, <NAME>, <NAME>, <NAME> - Submitted on 18 Mar 2022
... In this context, a significant effort is made by researchers to find an efficient trade-off between the accuracy of the results, the **computing time** and the energy consumption.".
* [Kernel-Based Reinforcement Learning: A Finite-Time Analysis](https://proceedings.mlr.press/v139/domingues21a.html) - <NAME>, <NAME>, <NAME>, <NAME>, <NAME> - Proceedings of the 38th International Conference on Machine Learning, PMLR 139:2783-2792, 2021.
**Abstract** - We consider the exploration-exploitation dilemma in finite-horizon reinforcement learning problems whose state-action space is endowed with a metric. We introduce Kernel-UCBVI, a model-based optimistic algorithm that leverages the smoothness of the MDP and a non-parametric kernel estimator of the rewards and transitions to efficiently balance exploration and exploitation.".
Upvotes: 0 |
2022/04/28 | 847 | 3,486 | <issue_start>username_0: I'm constructing a feed forward neural network that predicts whether a patient will get a stroke or not. However, my dataset is very unbalanced. Out of 5111 rows, 250 contain patients that have had a stroke (1) and 4861 that did not (0). The accuracy is (as a result of this, I suppose) very high (89% on the first epoch, and 95% on the second, then it stays at 95%). What would be the best thing to do about this?<issue_comment>username_1: 4861/5111 is about 95.1%, so it looks like your classifier is probably predicting every patient as "no stroke" (i.e. it is not really doing anything useful). The thing to do is to work out the costs of false-positive (predicting a stroke but it didn't happen) and false-negative (predicting they won't have a stroke but they did) errors. Then factor those into your training process. This can be done in two ways: Firstly by using a weighted training criterion so you can factor the costs in explicitly; or secondly by resampling the dataset to have a greater proportion of "stroke" patients.
The presence of an imbalance is **very** rarely a justification in itself for resampling the data. The real reason to do so is that tasks with an imbalance very often have unequal misclassification costs, for instance if you tell someone they are going to have a stroke when they won't, you will have scared them rather badly, and you may spend some more money on more testing, but that is probably about it. If you tell someone they are not at risk of a stroke when they are, they may go home, have a stroke and become severely disabled as a result, or even die. So the false-negative cost is likely to be much higher and that will boost recognition of the positive cases. The amount of resampling required depends **only** on the costs; the degree of imbalance is entirely irrelevant.
BTW, rather than look at accuracy, you might want to look at a related metric, which is the improvement over a classifier that always predicts the majority class. Something like:
$$\frac{\mathrm{Accuracy} - \pi}{1 - \pi}$$
where $\pi$ is the proportion of the most common class in the dataset. In this case, your classifier is going to score somewhere close to zero as it is probably just going by the majority class, and a score of zero shows clearly that it isn't doing a good job. A score of 1 would be perfect classification. A negative score shows the model is worse than just guessing. It is an affine transformation of accuracy though, so it is still measuring the same basic thing, just on a more interpretable scale.
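In code, this skill-score-style metric is a one-liner (the numbers below reuse the counts from the question):

```python
def skill_score(accuracy, majority_fraction):
    # 0 = no better than always predicting the majority class, 1 = perfect
    return (accuracy - majority_fraction) / (1 - majority_fraction)

skill_score(0.95, 4861 / 5111)  # close to zero (slightly negative here)
```

A 95% accuracy that looks impressive in isolation scores essentially zero on this scale, which exposes the majority-class behaviour immediately.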
Upvotes: 1 <issue_comment>username_2: You can use a data augmentation technique like SMOTE to oversample the minority class. It will help you have a more balanced dataset. Here is a nice guide on it:
<https://machinelearningmastery.com/smote-oversampling-for-imbalanced-classification/>
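If SMOTE feels like overkill, even plain random oversampling of the minority class is a useful baseline; here is a self-contained sketch (note this duplicates existing rows rather than interpolating synthetic samples the way SMOTE does):

```python
import random

def oversample_minority(X, y, minority_label, seed=0):
    rng = random.Random(seed)
    minority = [i for i, label in enumerate(y) if label == minority_label]
    majority = [i for i, label in enumerate(y) if label != minority_label]
    # duplicate random minority rows until the classes are balanced
    extra = rng.choices(minority, k=len(majority) - len(minority))
    idx = majority + minority + extra
    return [X[i] for i in idx], [y[i] for i in idx]
```

Apply this to the training split only, never to the test set, or the evaluation will be biased.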
Upvotes: 0 <issue_comment>username_3: You should use another classification metric for evaluating your model. I would just look at the *confusion matrix* to see how the model performs on the "interesting class" (minority class).
To overcome your imbalanced dataset, you could upsample the minority class. Bootstrapping is a great starting point. Then advance with SMOTE or something else. This article might give you some ideas: [8 Tactics to Combat Imbalanced Classes in Your Machine Learning Dataset](https://machinelearningmastery.com/tactics-to-combat-imbalanced-classes-in-your-machine-learning-dataset/)
Upvotes: 0 |
2022/05/03 | 1,078 | 4,194 | <issue_start>username_0: I am looking at [this lecture](https://youtu.be/6e65XfwmIWE?t=1175), which states (link to exact time):
>
> What the triplet loss allows us in contrast to the contrastive loss is
> that we can learn a ranking. So it's not only about similarity, being
> closer together or being further apart, but now we want to learn how
> much closer am I compared to another image.
>
>
>
The contrastive loss
$L(A, B) = y|f(A) - f(B)| + (1-y)\max(0, m-|f(A) - f(B)|)$
would push similar samples together, and dissimilar samples apart.
The triplet loss
$L(A, P, N) = \max(0, |f(A) - f(P)| - |f(A) - f(N)| + m)$
would push the positive close to the anchor, and the negative away from the anchor.
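For illustration, here is a minimal numpy sketch of both losses for a single pair/triplet, using Euclidean distance between (hypothetical) embedding vectors; the margin `m = 1.0` is arbitrary:

```python
import numpy as np

def contrastive_loss(fa, fb, y, m=1.0):
    # y = 1 for a similar pair, y = 0 for a dissimilar pair
    d = np.linalg.norm(fa - fb)
    return y * d + (1 - y) * max(0.0, m - d)

def triplet_loss(fa, fp, fn, m=1.0):
    # anchor / positive / negative embeddings
    d_pos = np.linalg.norm(fa - fp)
    d_neg = np.linalg.norm(fa - fn)
    return max(0.0, d_pos - d_neg + m)

a, p, n = np.array([0.0, 0.0]), np.array([0.1, 0.0]), np.array([2.0, 0.0])
print(contrastive_loss(a, p, 1))   # similar pair: loss = distance = 0.1
print(triplet_loss(a, p, n))       # 0.1 - 2.0 + 1.0 < 0, so the margin is satisfied: 0.0
```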
---
I fail to see why the quoted claim is or isn't true in either of these losses. To me, it looks like "same" samples are pushed together, and "different" samples are pushed apart by both.
Furthermore, with the contrastive loss, the distance in the embedding space would be, as I understand it, the ranking, which is claimed to only exist with the triplet loss.
Is there a clearer reference for this, or just a simple answer?
2022/05/05 | 809 | 3,372 | <issue_start>username_0: Suppose I use a *tansig* activation function in the output layer of an artificial neural network giving me outputs in the range $[-1,1]$ and my model is applied to a binary classification problem, should my target labels be -1 and 1 or 0 and 1? I've always used 0 and 1, but now I'm questioning this.
2022/05/05 | 692 | 2,879 | <issue_start>username_0: It's mentioned [here](https://medium.com/octavian-ai/a-simple-explanation-of-the-inception-score-372dff6a8c7a) that there is no measure of intra-class diversity with the inception score:
>
> If your generator generates only one image per classifier image class,
> repeating each image many times, it can score highly (i.e. there is no
> measure of intra-class diversity)
>
>
>
However, isn't it "easy" to look at the variance of the outputs of the classifier for a given class (e.g. if you only output 0.97 for all the images of your GAN class then there is no intra-class diversity but if you output 0.97, 0.95, 0.99, 0.92, there is diversity?). I'm struggling to understand why this is hard to do (but I might be missing something!).<issue_comment>username_1: **For reference, a recap of Inception Score:**
The inception score is computed by comparing the categorical output distributions of an inception model, given examples from real vs synthetic images. If the synthetic images produce similar class distributions as the real images, the inception score is high, otherwise it is low.
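For concreteness, the commonly used formulation of the inception score is $\exp(\mathbb{E}\_x[D\_{KL}(p(y|x) \,\|\, p(y))])$, computed from the classifier's softmax outputs on the generated images. A small numpy sketch (the toy probability matrices below are made up):

```python
import numpy as np

def inception_score(probs, eps=1e-12):
    """probs: (n_images, n_classes) softmax outputs of the classifier."""
    p_y = probs.mean(axis=0)   # marginal class distribution over all images
    # Per-image KL divergence between p(y|x) and the marginal p(y)
    kl = np.sum(probs * (np.log(probs + eps) - np.log(p_y + eps)), axis=1)
    return float(np.exp(kl.mean()))

# Confident and diverse predictions -> high score (here ≈ 4.0, the class count)
diverse = np.eye(4)
# Every image collapses onto one class -> the minimum score of ≈ 1.0
collapsed = np.tile([1.0, 0.0, 0.0, 0.0], (4, 1))
print(inception_score(diverse), inception_score(collapsed))
```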
>
> However, isn't it "easy" to look at the variance of the outputs of the classifier for a given class
>
>
>
Say you want to generate multiple horses and the model learns to generate horses with different colors but always in the same pose - then your class probabilities will vary, but I wouldn't call this very diverse horse generation. This is how I would understand what is meant by your cited statement.
The output distributions from the inception model contain class information but very little information of specific image features. Thus, the inception score cannot be sensitive to intra-class variations of the generator.
Upvotes: 3 [selected_answer]<issue_comment>username_2: Adding on top of username_1's answer:
Regarding the variance, it is unfortunately not so straightforward. The problem is that most deep learning models are not [calibrated](https://scikit-learn.org/stable/modules/calibration.html), so small intra-class variations might lead to large probability variations for the winning class. One way to account for this might be to compute the [mutual information](https://en.wikipedia.org/wiki/Mutual_information) between the generated predictions and an expected prior distribution, for example a uniform probability 1/n over the n expected variations within a class (such as horse poses, to reuse username_1's example). However, I found no reference to similar attempts, and coming up with a proper prior distribution does not sound trivial at all. I suspect the reason is that the inception score was designed for generic GANs (i.e. GANs trained to generate generic classes from CIFAR and similar datasets), so measuring variability within classes was never a goal.
Upvotes: 2 |
2022/05/11 | 998 | 3,280 | <issue_start>username_0: Let's say that I have a set of trajectories $\mathcal{D} = \{\tau\_1, \dots, \tau\_n\}$ produced by an agent acting in a (episodic) MDP with a fixed policy $\pi$. I would like to estimate the $Q$ function of $\pi$ from $\mathcal{D}$. Just to be clear, each trajectory $\tau\_j$ is a finite sequence
$$
\tau\_j = s\_0^j, a\_0^j, r\_0^j, s\_1^j, a\_1^j, r\_1^j, \dots, s\_{N\_j}^j
$$
representing an episode performed w.r.t. $\pi$.
What would be the standard approach in this case? Better use TD learning or Monte Carlo?<issue_comment>username_1: Your trajectories must contain rewards, so I'm assuming you've forgotten them in your original post, i.e., we must have $$\tau\_j = (s\_0^j, a\_0^j, r\_1^j, ..., s\_{N\_j}, a\_{N\_j}, r\_{N\_j+1})$$
Given that you have access to the full trajectories, I would use the Monte Carlo estimates. You want to use TD methods when you need to estimate $Q^\pi$ incrementally as new transitions $(s\_t, a\_t, r\_{t+1})$ arrive.
You can estimate $Q^\pi$ as follows (adapted from the [Sutton&Barto book](http://incompleteideas.net/book/RLbook2020trimmed.pdf), chapter 5.1):
Initialise $\text{Returns}(s,a)$ to an empty list for all $(s, a)$ pairs in the trajectories.
For each $j$:
Initialise the cumulative discounted reward $G$ to 0.
For $t$ in $\{N\_j, N\_j - 1, ..., 0\}$:
Set $G = \gamma G + r\_{t+1}$
If $(s\_t, a\_t)$ is **not** in $((s\_0, a\_0), ..., (s\_{t-1}, a\_{t-1}))$:
Append $G$ to $\text{Returns}(s\_t, a\_t)$.
Set $Q^\pi(s\_t, a\_t) = \text{Average}(\text{Returns}(s\_t, a\_t))$
For completeness the method is called "First visit Monte Carlo" because we only update the estimate of $Q^\pi$ using first visits to an $(s, a)$ pair.
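This pseudocode can be sketched in Python as follows (episodes are lists of `(state, action, reward)` triples, where the reward follows the state/action pair; the toy episodes at the bottom are made up):

```python
from collections import defaultdict

def first_visit_mc_q(trajectories, gamma=1.0):
    """First-visit Monte Carlo estimate of Q^pi from a fixed set of episodes."""
    returns = defaultdict(list)
    for episode in trajectories:
        # Index of the first visit of each (s, a) pair in this episode
        first = {}
        for t, (s, a, _) in enumerate(episode):
            first.setdefault((s, a), t)
        g = 0.0
        for t in reversed(range(len(episode))):
            s, a, r = episode[t]
            g = gamma * g + r               # discounted return from time t
            if first[(s, a)] == t:          # first-visit update only
                returns[(s, a)].append(g)
    return {sa: sum(gs) / len(gs) for sa, gs in returns.items()}

# Two short episodes
episodes = [[("s0", "a", 0.0), ("s1", "a", 1.0)],
            [("s0", "a", 0.0), ("s1", "a", 3.0)]]
q = first_visit_mc_q(episodes)
print(q[("s0", "a")], q[("s1", "a")])   # 2.0 2.0 with gamma = 1
```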
Upvotes: 2 <issue_comment>username_2: >
> What would be the standard approach in this case? Better use TD learning or Monte Carlo?
>
>
>
Both should be fine, but they *might* lead to different estimates, if both these things apply:
* The amount of data is relatively small compared to all possibilities from the given environment and policy.
* Either the policy or the environment are stochastic.
The difference is that for each state/action pair estimated:
* Monte Carlo will estimate based on overall average returns, ignoring individual state transitions and policy choices.
* Temporal Difference will estimate based on observed state transitions and policy choices.
There is a good example of what this might mean numerically in [Sutton & Barto](http://incompleteideas.net/book/RLbook2020trimmed.pdf) chapter 6, example 6.4. In that case it shows an advantage to TD learning when some states might be sparsely represented in the data whilst others have more instances. Monte Carlo learning will only learn the value of those rarer states from the trajectories where they occur, whilst TD learning will be able to use estimates of *other* trajectories, provided two or more trajectories overlap later on.
This doesn't necessarily make TD learning better. If a trajectory that overlaps with others also happens to include an unusual policy choice, state transition or reward, this may spread sample bias into multiple estimates, whilst Monte Carlo would be affected less by such an outlier.
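To make the contrast concrete, here is a toy batch TD(0) evaluation on made-up episodes: the pairs `("s0", "a")` and `("s2", "a")` each occur in a single episode, and Monte Carlo would value them at 1 and 3 respectively, while TD pools the shared `("s1", "a")` estimate and values both near 2 (the same effect as the book's example 6.4):

```python
from collections import defaultdict

def td0_q(trajectories, gamma=1.0, alpha=0.1, sweeps=200):
    """Batch TD(0) evaluation of Q^pi, sweeping repeatedly over fixed episodes.

    Each episode is a list of (state, action, reward) triples, where the
    reward follows the state/action pair; episodes end in a terminal state.
    """
    q = defaultdict(float)
    for _ in range(sweeps):
        for episode in trajectories:
            for t, (s, a, r) in enumerate(episode):
                if t + 1 < len(episode):
                    s2, a2, _ = episode[t + 1]
                    target = r + gamma * q[(s2, a2)]  # bootstrap on the next pair
                else:
                    target = r                        # last transition of the episode
                q[(s, a)] += alpha * (target - q[(s, a)])
    return dict(q)

# Two overlapping episodes that share ("s1", "a")
episodes = [[("s0", "a", 0.0), ("s1", "a", 1.0)],
            [("s2", "a", 0.0), ("s1", "a", 3.0)]]
q = td0_q(episodes)
```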
Upvotes: 2 [selected_answer] |
2022/05/13 | 649 | 2,630 | <issue_start>username_0: I have seen many places that features and inputs have been used interchangeably when talking about machine learning, especially deep neural networks. I want to know if they are indeed the same thing or there is a difference between the two.<issue_comment>username_1: An **input** usually refers to an *example* (sometimes also known as *sample*, *observation* or *data point*) $x$ from a dataset that you pass to the model. For example, in supervised learning, you have a labelled dataset $D = \{(x\_i, y\_i)\}\_{i=1}^N$, where $x\_i$ is the $i$th input and $y\_i$ the corresponding label (aka target or output).
This is similar to the terminology used for [functions](https://www.mathsisfun.com/sets/function.html). For example, if you have the function $f: \mathcal{X} \rightarrow \mathcal{Y}$, then $x \in \mathcal{X}$ is the input and $f(x) = y \in \mathcal{Y}$ is the output of the function for that input $x$. In fact, models (like neural networks or linear regression models) are functions.
Examples
* In image classification, an image can be an input
* In machine translation, an input can be a sentence or a word (depending on the model)
* In reinforcement learning, an input can be a state
A **feature** is an *attribute* associated with an input or sample. For example, a feature of an image could be a pixel. The feature of a state could be the Euclidean distance to the goal state. An input can be composed of multiple features.
It's possible that people also refer to features as inputs (in fact, if you pass e.g. an image to a model, you're also passing the pixels, the features, which are thus also inputs to the model). There are also other terms used to refer to these. For example, in statistics, people may refer to features as independent variables (or regressors), and maybe a sample refers to a dataset rather than a single observation. So, you should always take into account the context when reading these terms.
For more info, you could read [this](https://en.wikipedia.org/wiki/Dependent_and_independent_variables) and [this](https://en.wikipedia.org/wiki/Unit_of_observation) Wikipedia articles.
Upvotes: 4 [selected_answer]<issue_comment>username_2: An **input** is the whole piece of data fed to the model. For example, in image classification, every image is an input to the model, while **features** are the specific properties of that input on which the model bases its decision. E.g. in a dog-and-cat dataset, all the images of dogs and cats are inputs to the model, and properties such as the **ears, nose and other facial differences** are the features of that input data.
Upvotes: 0 |
2022/05/22 | 886 | 3,518 | <issue_start>username_0: The term 'multilayer perceptron' has been used in the literature in various ways.
I am presenting some of them below:
1. As a feed-forward neural network [[1]](https://www.sciencedirect.com/topics/veterinary-science-and-veterinary-medicine/feedforward-neural-network).
2. As a fully connected feed-forward neural network [[2]](https://boostedml.com/2020/04/feedforward-neural-networks-and-multilayer-perceptrons.html).
3. As a fully connected feed-forward neural network in which each hidden layer has the same number of neurons.
4. As a fully connected feed-forward neural network in which each hidden layer has the same number of neurons and same activation function.
Afaik, the first definition is generally used, but it seems that there are many alternative definitions.
In this context, I want to know all the possible definitions that are floating in the literature for the term 'multilayer perceptron'.
I am asking this question because there can be several interpretations if we consider the words 'multilayer perceptron' alone, as the name suggests the only property required is multiple layers.
2022/05/23 | 1,013 | 4,522 | <issue_start>username_0: **I plan to use my predictions as ground truth to continue training my model. These predictions are of course reviewed during this process. Is there an argument against that (reinforcement of slight mistakes/overfitting etc.)?**
---
Here is my specific use case:
I am using detectron's faster R-CNN implementation to train a (pretrained) model to find defects of a machine part in images.
The goal is to find bounding boxes around these defects and to label them.
A colleague labeled some of the images (1500 images, making up 20% of the whole set) and I used those to train my model. Then I had the model predict defects on all 7500 images. My colleague asked me if he can review the predictions (and adjust/add if necessary) so he doesn't have to label the remaining images from scratch, and then I would like to continue the training with all the images.<issue_comment>username_1: The answer is: It depends.
What you describe is a strategy often used to save time and costs for labelling data. It is important that the data you have already labelled (the 20%) is representative of the rest of data (the 80%). At the very least, you must have all classes in those 20%.
It is also important that you select a good detection model to have reliable predictions. Faster RCNN should be a good choice.
However, whether 20% labelled data is enough is difficult to tell. It depends on your data.
Your strategy itself is common. I'd just try whether 20% labelled data is enough. You can also fine-tune your faster RCNN model in between, say after 40% of the data is labelled to improve predictions further.
Upvotes: 2 <issue_comment>username_2: Using the (unchecked) predictions of the model as training data is an approach known as "pseudo-labeling". It can help in certain situations, depending on the underlying structure of your dataset, but you have to be a bit careful about how you use it (e.g. only using high-confidence predictions as your pseudo-labels) and you always want to keep your pseudo-labels separate from your true labels, so you can potentially update them as your model changes.
---
But it sounds like you're not using the raw predictions as labels, but rather using the predictions of the models as a pointer to (currently unlabeled) examples which you then will manually label.
"Training on errors" is a recognized mode of augmenting your training dataset, especially for "on-line" style systems where you're getting a continuous stream of new examples. The concept is to identify those examples which are predicted either inaccurately or with low confidence, identify the accurate labels for these instances, and then include them with the rest of the training set to help improve the predictions for similar sorts of examples in the future.
In contrast to pseudo-labeling, you're looking to correct the *low* confidence examples or the incorrectly predicted. Adding in high-confidence examples doesn't gain you much, as your current training set is already sufficient to correctly predict these. And with an on-line model where you're continually getting new examples, adding the well-predicted examples to your training set does potentially cause issues with subclass imbalance issues, as "normal" examples are expected to swamp out the rare outliers.
---
But it sounds like you have a fixed-size training set. In that case, the standard recommendation for the best course of action of dealing with unlabeled data applies: "pay someone to label it for you". What you're looking for is *accurate* labeling. How you get that is left somewhat nebulous, so long as the labeling is accurate. Using model results as a starting point is perfectly valid, assuming that whoever is doing the checking/correction is willing to actually do all the corrections (to the same quality level as a "from scratch" prediction) and won't wave through the model predictions as "ehh, good enough".
In addition to label accuracy, another issue may be selection bias (that is, the model may have certain subsets of examples which it performs worse/better on, and picking which examples to include in labeling on that basis may bias future training). But if you have a pre-determined, fixed-size training set this is not really an issue if you label all of them (or a model-independent random subset). The selection bias comes not from the initial model predictions/selection, but instead the (model-independent) selection of the examples to be labeled.
Upvotes: 4 [selected_answer] |
2022/05/24 | 1,156 | 4,862 | <issue_start>username_0: Classical gradient descent algorithms sometimes overshoot and escape minima as they depend on the gradient only. You can see such a problem during the update from point 6.
[](https://i.stack.imgur.com/XJu8M.jpg)
In classical GD algorithm, the update equation is
$$\theta\_{t+1} = \theta\_{t} - \eta \, \nabla\_{\theta} \ell$$
In the momentum based GD algorithm, the update equations are
$$v\_0 = 0$$
$$v\_{t+1} = \alpha v\_t + \eta \, \nabla\_{\theta} \ell$$
$$\theta\_{t+1} = \theta\_t - v\_{t+1}$$
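As a toy sketch of the two update rules on a 1-D quadratic (step sizes chosen arbitrarily; in the momentum variant it is the velocity, which accumulates past gradients, that moves the parameters):

```python
def gd(grad, theta0, eta=0.1, steps=50):
    """Plain gradient descent."""
    theta = theta0
    for _ in range(steps):
        theta -= eta * grad(theta)
    return theta

def momentum_gd(grad, theta0, eta=0.1, alpha=0.9, steps=50):
    """Momentum: the velocity accumulates past gradients and updates theta."""
    theta, v = theta0, 0.0
    for _ in range(steps):
        v = alpha * v + eta * grad(theta)
        theta -= v
    return theta

# f(theta) = theta**2 with gradient 2 * theta; the minimum is at 0.
print(gd(lambda t: 2 * t, 5.0))
print(momentum_gd(lambda t: 2 * t, 5.0))   # overshoots 0 and oscillates before settling
```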
I am writing all the equations concisely by removing the obvious variables used such as inputs to loss functions. In the lecture I'm listening to, the narrator says that **momentum-based GD helps during the update at point 6 and the update will not lead to point 7 as shown in the figure and goes towards minima.**
But for me, it seems that even momentum-based GD will go to point 7 and the update at point 7 will be benefited from the momentum-based GD as it does not lead to point 8 and goes towards minima.
Am I correct? If not, at which point does the momentum-based GD actually help?
2022/05/29 | 1,021 | 4,636 | <issue_start>username_0: I am playing around with a DRL agent in a stock-trading environment.
I have normalized all the external input data (the features that my agent will use). However, what about characteristics that don't come from the environment?
For example, I have included things like "current account balance" and "current unrealized gain" in my observation space (as I believe it's useful). However, I don't know how I could normalize these values, given that they are dependent on what actions the agent took, which changes every time etc.
Any feedback or advice is appreciated.
Will it be detrimental if I don't normalize these values (as long as they're reasonably within the orders of magnitude of my other normalized variables)?
I guess a simple example would be like if a robot was being trained to pick up balls, and one of the observations was "current number of balls picked up", how would you normalize that value, given that it's just a count that could technically go to infinity?
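One common workaround for unbounded observations like these is online running normalization (e.g. Welford's algorithm, the same idea behind Gym-style observation-normalization wrappers). A minimal sketch, with made-up account-balance values:

```python
class RunningNormalizer:
    """Online mean/std normalizer for unbounded observations (Welford's algorithm)."""

    def __init__(self):
        self.n, self.mean, self.m2 = 0, 0.0, 0.0

    def update(self, x):
        self.n += 1
        delta = x - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (x - self.mean)   # running sum of squared deviations

    def normalize(self, x):
        std = (self.m2 / self.n) ** 0.5 if self.n > 1 else 1.0
        return (x - self.mean) / (std + 1e-8)

norm = RunningNormalizer()
for balance in [1000.0, 1020.0, 990.0, 1050.0]:
    norm.update(balance)
print(round(norm.mean, 1))   # running mean of the observed balances: 1015.0
```

The statistics adapt as the agent acts, so a count or balance that grows without bound still maps to values of roughly unit scale.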
What you describe is a strategy often used to save time and costs for labelling data. It is important that the data you have already labelled (the 20%) is representative of the rest of data (the 80%). At the very least, you must have all classes in those 20%.
It is also important that you select a good detection model to have reliable predictions. Faster RCNN should be a good choice.
However, whether 20% labelled data is enough is difficult to tell. It depends on your data.
Your strategy itself is common. I'd just try whether 20% labelled data is enough. You can also fine-tune your faster RCNN model in between, say after 40% of the data is labelled to improve predictions further.
Upvotes: 2 <issue_comment>username_2: Using the (unchecked) predictions of the model as training data is an approach known as "pseudo-labeling". It can help in certain situations, depending on the underlying structure of your dataset, but you have to be a bit careful about how you use it (e.g. only using high-confidence predictions as your pseudo-labels) and you always want to keep your pseudo-labels separate from your true labels, so you can potentially update them as your model changes.
---
But it sounds like you're not using the raw predictions as labels, but rather using the predictions of the models as a pointer to (currently unlabeled) examples which you then will manually label.
"Training on errors" is recognized mode of augmenting your training dataset, especially for "on-line" style systems where you're getting a continuous stream of new examples. The concept is to identify those examples which are predicted either inaccurately or with low confidence, identify the accurate labels for these instances, and then include them with the rest of the training set to help improve the predictions for similar sorts of examples in the future.
In contrast to pseudo-labeling, you're looking to correct the *low* confidence examples or the incorrectly predicted. Adding in high-confidence examples doesn't gain you much, as your current training set is already sufficient to correctly predict these. And with an on-line model where you're continually getting new examples, adding the well-predicted examples to your training set does potentially cause issues with subclass imbalance issues, as "normal" examples are expected to swamp out the rare outliers.
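A minimal sketch of this low-confidence selection step, assuming the model exposes class probabilities (the function name and toy numbers are invented for illustration):

```python
import numpy as np

def least_confident(probs, k=2):
    """Return indices of the k samples the model is least sure about.

    probs: (n_samples, n_classes) predicted class probabilities.
    A low top-class probability marks a good candidate for manual labeling.
    """
    confidence = probs.max(axis=1)
    return np.argsort(confidence)[:k]  # ascending: least confident first

probs = np.array([[0.95, 0.05],
                  [0.51, 0.49],
                  [0.10, 0.90],
                  [0.60, 0.40]])
print(least_confident(probs, k=2))  # the two most ambiguous examples
```

The selected examples would then be labeled by hand and folded back into the training set, rather than being trusted as-is.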
---
But it sounds like you have a fixed-size training set. In that case, the standard recommendation for the best course of action of dealing with unlabeled data applies: "pay someone to label it for you". What you're looking for is *accurate* labeling. How you get that is left somewhat nebulous, so long as the labeling is accurate. Using model results as a starting point is perfectly valid, assuming that whoever is doing the checking/correction is willing to actually do all the corrections (to the same quality level as a "from scratch" prediction) and won't wave through the model predictions as "ehh, good enough".
In addition to label accuracy, another issue may be selection bias (that is, the model may have certain subsets of examples which it performs worse/better on, and picking which examples to include in labeling on that basis may bias future training). But if you have a pre-determined, fixed-size training set this is not really an issue if you label all of them (or a model-independent random subset). The selection bias comes not from the initial model predictions/selection, but instead the (model-independent) selection of the examples to be labeled.
Upvotes: 4 [selected_answer] |
2022/05/31 | 793 | 2,942 | <issue_start>username_0: We are making a classification model that takes a clip of a movie as an input and predicts who the director is. Roughly speaking, it will be a model that understands film directors' unique style.
We are going to extract 5 features from a movie: a visual-feature vector from ResNet pretrained on ImageNet, an audio-feature vector from an audio model, the shot type of a frame (one-hot encoded), emotion detection, and a color scheme of a frame. In the end, we are going to concatenate all these feature vectors and give the result as an input to our classification model.
We found a tool that can extract the **color scheme** (or palette) of an image, as shown below. It has **information about both the colors and their proportions**. However, **I can't think of a smart way to convert this information into a 1-d vector. Any ideas?**
Of course I know the ResNet will pick up information about colors, but the importance of color will be diluted in the ResNet features. I think color is a very important feature in defining a director's style, and thus I want to use a color feature separately.
[](https://i.stack.imgur.com/2JuPZ.jpg)
[](https://i.stack.imgur.com/kDSTb.jpg)<issue_comment>username_1: My 2 suggestions would be to:
1. Sum each color's RGB values weighted by its prevalence. For example, [80, 80, 80] (grey) is used 7% of the time, so `color_features += [80, 80, 80] * 0.07` (element-wise).
2. You determine a preset number of color bins (maximally distinct colors used as prototypes for the bins). Bin the colors based on distance, and add the prevalences of all colors in each bin together into a float value.
Example of option 2:
```
# colors is a list of (color, prevalence) tuples.
colors = [("DDDDDD", 0.30), ("EEEEEE", 0.40), ("111111", 0.10), ("222222", 0.20)]
prototypes = ["FFFFFF", "000000"] # White and black
# colors[0] and colors[1] are close to white
# colors[2] and colors[3] are close to black
# For each prototype we add the prevalences together that go with that prototype
features = [0.70, 0.30]
```
The first option is probably too reductionist, while the second may be too generalized or too complex depending on the number of bins being too low or too high, respectively.
Both options are based on intuition from experience, rather than any literature or empirical evidence.
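For what it's worth, here is a small self-contained Python sketch of option 2, mirroring the pseudocode above (the palette entries, prototypes and distance choice are illustrative assumptions, not from any library):

```python
def bin_palette(palette, prototypes):
    """Aggregate a color palette into fixed-length prototype prevalences.

    palette:    list of ((r, g, b), prevalence) pairs
    prototypes: list of (r, g, b) reference colors acting as bins
    Returns one float per prototype: the total prevalence of palette
    colors closest (squared Euclidean distance) to that prototype.
    """
    features = [0.0] * len(prototypes)
    for color, prevalence in palette:
        dists = [sum((u - v) ** 2 for u, v in zip(color, p)) for p in prototypes]
        features[dists.index(min(dists))] += prevalence
    return features

# two near-white and two near-black palette entries (DDDDDD, EEEEEE, ...)
palette = [((221, 221, 221), 0.30), ((238, 238, 238), 0.40),
           ((17, 17, 17), 0.10), ((34, 34, 34), 0.20)]
prototypes = [(255, 255, 255), (0, 0, 0)]  # white and black bins
print(bin_palette(palette, prototypes))    # roughly [0.7, 0.3]
```

The output length is always `len(prototypes)`, which gives the fixed-size 1-d vector the question asks for.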
Upvotes: 2 <issue_comment>username_2: I think the tool you found is useful for a human and to get nice visuals but I also think it's totally useless for feature extraction.
If you want to pass explicit information about colors, simply concatenate 3 normalized histograms, one per color channel. You're guaranteed to always have a fixed-size color feature vector (n\_bins \* 3), and you can hardly pass more explicit information about colors: the histograms contain even more information than a compressed color palette.
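A minimal NumPy sketch of this idea (the bin count and the random "frame" are arbitrary choices for illustration):

```python
import numpy as np

def color_histogram_features(image, n_bins=8):
    """Concatenate one normalized histogram per color channel.

    image: (H, W, 3) uint8 array. The output length is always
    n_bins * 3, independent of the image size.
    """
    feats = []
    for c in range(3):
        hist, _ = np.histogram(image[..., c], bins=n_bins, range=(0, 256))
        feats.append(hist / hist.sum())  # each channel's bins sum to 1
    return np.concatenate(feats)

# random "frame" just to demonstrate the fixed output size
rng = np.random.default_rng(0)
frame = rng.integers(0, 256, size=(48, 64, 3), dtype=np.uint8)
v = color_histogram_features(frame, n_bins=8)
print(v.shape)  # (24,)
```

Because the histograms are normalized, frames of different resolutions produce directly comparable feature vectors.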
Upvotes: 2 |
2022/06/04 | 1,278 | 5,437 | <issue_start>username_0: Youtube was recently suggesting to me videos of people training NEAT neural networks for video games. I've noticed that often the training process was quite slow (for example in [this](https://www.youtube.com/watch?v=a8Bo2DHrrow) Trackmania example).
Is there a way (algorithmic approach or an idea) to easily simulate video games, without actually rendering the pixels on screen and making the training much quicker? In addition to that, if you also know of a tool that does that, please, share it with me.
---
**Update:**
Following the Trackmania example, I found out the youtuber uses a standard tool called "[TMInterface](https://donadigo.com/tminterface/)", and the webpage of the tool states it is a "TAS tool" - [Tool-assisted speedrun](https://en.wikipedia.org/wiki/Tool-assisted_speedrun#Method).
I'll investigate how such tools work and what's the idea behind them (and if indeed they do what I think they do). Will update!<issue_comment>username_1: It depends on the game environment and on the model being trained.
* If you are training an agent that uses vision to decide action, then typically you need a copy of the rendered screen:
+ If that comes from a modern game using GPU acceleration to generate output, then you will likely need to leave that as is, and also transfer the rendered output back to the main memory, so it will likely be slower than running the game normally. There may be game settings that help improve speed.
+ If the output is from a game emulator for older games (such as Atari), then it probably won't use a GPU, and you will not need to spend time rendering to the screen. The emulator may be able to run in a mode that doesn't render, whilst still giving in-memory access to the screen buffer. It may also be able to run faster than real time. It will depend on emulator configuration settings that are available to you.
* If the game is written "natively" in a simulator or semi-native environment like Pygame or Box2D, you should be able to disable rendering and save the time. This is how a lot of OpenAI Gym's environments are set up, and these do run a lot faster when not rendering - a simple flag you can set when starting the environment and/or starting an episode. You may even notice with these environments that putting the view window for the environment behind another window will speed things up considerably (but for automation it is more reliable to disable rendering).
When the environments have been set up via a shared library, like Open AI's Gym, then the internal logic for deciding rendering (or not) will be different for each environment, but there may be a standardised config/method argument to determine whether the game engine is rendered to screen. That could apply to nearly all the environments available in the library. You should check the documentation if for example you are trying out NEAT on some Atari games that have been ready packaged for use with computer agents.
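To illustrate the render-toggle pattern without depending on any particular library, here is a toy environment sketch (the class, flag name and timings are all invented; real libraries such as OpenAI Gym expose the same idea through their own configuration, e.g. a render mode setting or simply never calling the render method):

```python
import time

class ToyEnv:
    """Minimal environment with an optional (slow) rendering step."""

    def __init__(self, render=False):
        self.render = render
        self.t = 0

    def step(self, action):
        self.t += 1                    # cheap "physics" update
        if self.render:
            time.sleep(0.001)          # stand-in for drawing a frame
        return self.t, self.t >= 500   # observation, done flag

def run_episode(env):
    done = False
    while not done:
        _, done = env.step(0)
    return env.t

start = time.perf_counter()
steps = run_episode(ToyEnv(render=False))  # rendering disabled
elapsed = time.perf_counter() - start
print(steps, f"{elapsed:.4f}s headless")
```

With `render=True` the same 500-step episode would spend roughly half a second just on the fake drawing calls, which is the cost the headless mode avoids.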
Upvotes: 3 <issue_comment>username_2: Bypassing graphics
------------------
As mentioned in username_1's answer, manipulating the engine to bypass graphics rendering to speed up AI simulation can be a valid approach. I have done that myself.
But there are some caveats to consider/work around:
* The graphics subsystem can be used for core gameplay programming as well. For example, the picking problem (Which object in the scene is the user selecting/clicking on?) [can be solved by rendering and evaluating framebuffers](http://www.opengl-tutorial.org/miscellaneous/clicking-on-objects/picking-with-an-opengl-hack/).
* If the game has an FPS limiter, then you need to raise that FPS limit. Otherwise it quickly becomes the new bottleneck.
* If the game also has real-time timers (e.g. wait five seconds to trigger some event), then your ultra-fast gameplay simulation can cause unintended effects/bugs. This is especially true if you break the aforementioned FPS limit.
* Sometimes games can experience bugs and crashes when running at high framerates. ([Take this PCGamingWiki entry on
*Peter Jackson's King Kong Gamer's Edition* as an example.](https://www.pcgamingwiki.com/wiki/Peter_Jackson%27s_King_Kong_Gamer%27s_Edition#Game_breaking_bugs_at_high_framerates)) With high framerates, gameplay simulation may operate with very small timesteps and glitches in physics and collision detection become more likely. Such effects will only intensify with ultra-high framerates from the accelerated AI simulation.
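One common mitigation for the timer caveat above is to drive gameplay timers from a simulated clock with a fixed timestep rather than from wall-clock time. A toy sketch (all numbers are illustrative):

```python
def simulate(total_ms, dt_ms=16, event_at_ms=5000):
    """Advance game logic on a simulated clock (integer milliseconds).

    The "event in 5 seconds" timer is driven by simulated time, so the
    loop can run as fast as the CPU allows without breaking the timer.
    """
    t, fired_at = 0, None
    while t < total_ms:
        t += dt_ms                     # fixed simulated timestep
        if fired_at is None and t >= event_at_ms:
            fired_at = t               # fires at the right *game* time
    return t, fired_at

t, fired_at = simulate(total_ms=10_000)
print(t, fired_at)
```

Because nothing references the real clock, the AI can run thousands of simulated seconds per real second while timers and physics stay consistent.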
### DLL injection
It is not uncommon for game mods to [hijack or replace DLLs](https://www.upguard.com/blog/dll-hijacking), including those of the graphics API. If you cannot manipulate the engine itself, then perhaps you can inject an alternative graphics API implementation that consists of stubs and mockups.
Bypass whatever is irrelevant to gameplay
-----------------------------------------
Depending on the game and the goals of your project, you might take even more drastic steps to accelerate the simulation:
* Bypass the audio layer as well. Besides graphics, audio is another major output channel that is often irrelevant to AI simulation.
* Skip through cutscenes and dialogue, like a speedrunner would do.
* If walking is not essential to gameplay, let the avatar teleport instead of simulating all the walking.
* Skip animations in general. Always try to fast-forward to the "gameplay outcome" of an action.
Upvotes: 2 |
2022/06/05 | 1,377 | 5,621 | <issue_start>username_0: I have not been able to find a good explanation of this, other than statements that the algorithm is guaranteed to converge with arbitrary choices for initial values in each state. Is this something to do with the Bellman optimality constraint itself?
It's hard to see that this is true intuitively since my intuition states that there ought to be ways in which an arbitrary choice of values could cause the value function to converge incorrectly compared to the ground truth. For instance, what if a state that in practice had low reward is assigned a very high initial reward value in value iteration? Would the algorithm not construct a value function that highly values trajectories passing through that state?
[](https://i.stack.imgur.com/yHlp9.png)<issue_comment>username_1: >
> Is this something to do with the Bellman optimality constraint itself?
>
>
>
That is part of it, and important for episodic problems without discounting. The Bellman equations link between time steps, providing a *direction* in which data about returns is used to bootstrap value estimates. In episodic problems there are terminal states with 0 value by definition. In the first pass through state space, any state that can transition to a terminal state will be exposed to that fixed knowledge, and could be altered to take that transition. This data is backed up through time steps, at least once per value iteration sweep, and maybe more frequently depending on the order of updating states. You can visualise this effect like a "flood fill" starting from terminal states and filling in the table with true values.
In addition, when there is a discount factor (optional for episodic environments, but required for continuing ones), this progressively *reduces* any bias in existing estimates, because the term $r + \gamma v(s')$ repeatedly - over many updates - multiplies any error in the bootstrap from the next state by $\gamma$, the discount factor.
>
> For instance, what if a state that in practice had low reward is assigned a very high initial reward value in value iteration? Would the algorithm not construct a value function that highly values trajectories passing through that state?
>
>
>
In limited cases you can construct a combination of environment and starting value function adversarially, so that it does not converge properly. This would require that there is no discounting (or very high $\gamma$ and also high $\theta$ cutoff to stop iterations early), and a **set** of self-consistent high start value estimates that form a loop preventing the iterations from making changes. Any single "rogue" high value estimate should quickly get reduced by the update mechanism (only exception would be a high value state that can loop to itself deterministically and without any discounting applied).
In this question, the OP constructs just such an adversarial start condition: [Is the initialisation of $V(s)$ and $\pi(s)$ really arbitrary in policy iteration?](https://ai.stackexchange.com/questions/34570/is-the-initialisation-of-vs-and-pis-really-arbitrary-in-policy-iteratio)
So in some senses, the initialisation is not *strictly* arbitrary, in that you can set up non-working examples. However, the value iteration algorithm is far more robust than your initial concerns imply, thanks to the effects described above. In addition, if you are implementing a solver then you get to choose initialisation, and can make conservative choices (e.g. don't initialise any value higher than max possible reward - there is no known benefit to doing so for dynamic programming algorithms). These choices are not strictly required, the space of working initialisations is much larger, but you are also not required to attempt to code truly arbitrary value table initialisation.
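As a concrete illustration of this robustness, here is a small value iteration run on a toy deterministic chain MDP, started from adversarially high value estimates (the MDP, rewards, and discount are invented for illustration; despite the inflated start, it converges to the true optimal values):

```python
import numpy as np

# Toy deterministic chain MDP: states 0..3, state 3 terminal, reward -1
# per step, actions 0 (left) and 1 (right).
gamma, theta = 0.9, 1e-8
V = np.array([100.0, 100.0, 100.0, 0.0])  # adversarially high start values

def step(s, a):
    s2 = min(s + 1, 3) if a == 1 else max(s - 1, 0)
    return -1.0, s2  # reward, next state

while True:
    delta = 0.0
    for s in range(3):  # state 3 is terminal; V[3] stays 0
        v = V[s]
        V[s] = max(r + gamma * V[s2] for r, s2 in (step(s, a) for a in (0, 1)))
        delta = max(delta, abs(v - V[s]))
    if delta < theta:
        break

print(np.round(V, 3))  # the inflated initial values are washed out
```

Each sweep multiplies the remaining error by at most $\gamma$, which is exactly the bias-reduction mechanism described above.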
Upvotes: 3 <issue_comment>username_2: If the *value function* of a state $v(s)$ is relatively high, then you are absolutely correct in saying that a *greedy policy* may choose to visit $s$, since the high $v(s)$ makes it very promising. The key idea here is that the update rule of value iteration will gradually change the value function and likewise will gradually change the policy.
Suppose that the *optimal value function* $v\_\*(s)$ of a specific state $s$ is low, yet the value function $v(s)$ is initialized much higher. Then, **the update rule in the pseudocode you provided will eventually decrease** $v(s)$ to $v\_\*(s)$. To see this fact intuitively, note that $v(s)$ can be decomposed as the expected *return* (sum of rewards) from that state:
$$v(s) = \mathbb{E}[R\_k + R\_{k+1} + R\_{k+2} + \ldots + R\_T|S\_k = s].$$
If the expected return from $s$ is smaller than $v(s)$, then $v(s)$ will decrease with the aforementioned update. The actual update rule approximates the expected return from $s$ using the next reward $R\_k$ and the value function of the next state $v(s')$ via bootstrapping. Note that $v(s')$ can be decomposed as follows:
$$v(s') = \mathbb{E}[R\_{k+1} + R\_{k+2} + \ldots + R\_T|S\_{k+1}=s'].$$
Therefore, $v(s')$ quantifies the part of the return aside from the next reward $R\_k$, and the update rule indeed quantifies the expected return from $s$. If $v(s')$ is initialized too high or low, then the update rule will simultaneously be adjusting it to the optimal $v\_\*(s')$ while adjusting $v(s)$ closer to $v\_\*(s)$.
Once the value functions change as described above, then the greedy policy may also change and favor other states aside from $s$ with higher value functions.
Upvotes: 3 [selected_answer] |
2022/06/06 | 2,816 | 9,963 | <issue_start>username_0: Recently, I had the following question about supervised classification models (e.g. random forest) for longitudinal data.
Suppose I have the following data about students passing a fitness test - the students (each student has an "id") who enroll in a school take a fitness test each year and record their height and weight (at the start of each school year, before the fitness test). They can either pass (1) or fail (0) the fitness test each year. The school is interested in knowing which students are likely to fail the fitness test, so they can focus more attention on these students. Naturally, some students might have taken the fitness test more times than other students.
I simulated some data (using the R programming language) to show how the historical data might look like:
```
score <- c("1","0")
score <- as.numeric(sample(score, 1000, replace=TRUE, prob=c(0.3, 0.7)))
id_sample <- 1:140
id <- sample(id_sample, replace = TRUE, 1000)
height <- abs(rnorm(1000, 150,5))
weight <- abs(rnorm(1000, 75,5))
data = data.frame(id, height, weight, score)
data <- data[order(data$id),]
```
I then added two variables to this data - one to show how many times the fitness test was taken, the another to show the (cumulative) average number of times the test was passed:
```
library(dplyr)
data = data.frame(data %>% group_by(id) %>% mutate(counter = row_number(id)))
data$csum <- ave(data$score, data$id, FUN=cumsum)
data$average <- data$csum/data$counter
```
Now, suppose some of the students are about to take this test again and we would like to predict what their score will be - some of these students are existing students, but some of these students are new and have never taken the test before (i.e. they have no historical data):
```
id_sample <- 1:140
id <- sample(id_sample, replace = FALSE, 23)
height <- abs(rnorm(23, 150,5))
weight <- abs(rnorm(23, 75,5))
new_data = data.frame(id, height, weight)
new_data <- new_data[order(new_data$id),]
id_sample <- 141:200
id <- sample(id_sample, replace = FALSE, 5)
height <- abs(rnorm(5, 150,5))
weight <- abs(rnorm(5, 75,5))
#simulating data for students who never took the test before
n_data = data.frame(id, height, weight)
n_data <- n_data[order(n_data$id),]
test_data = rbind(new_data, n_data)
```
Now, to this test data, (where applicable) I added "longitudinal variables" that take into account the number of times the students took the test and their most recent average cumulative score:
```
#counter
max = data.frame(data %>%
group_by(id) %>%
filter(counter == max(counter)))
colnames(max)[5] <- "max_counter"
max$max_counter = max$max_counter + 1
test_with_counter = merge(x = test_data, y = max, by = "id", all.x = TRUE)
test = test_with_counter[, c(1,2,3,7,9)]
test$max_counter[is.na(test$max_counter)] <- 1
test$average[is.na(test$average)] <- 0
#formatting
colnames(test)[2] <- "height"
colnames(test)[3] <- "weight"
colnames(test)[4] <- "counter"
data$csum = NULL
data$score = as.factor(data$score)
```
At this point, there is nothing stopping me from training a supervised classification model (e.g. random forest) to predict the "score" variable for the test data:
```
#skip cross validation for brevity of question
library(randomForest)
rf <- randomForest(score~., data=data)
pred = predict(rf, newdata = test)
print(rf)
Call:
randomForest(formula = score ~ ., data = data)
Type of random forest: classification
Number of trees: 500
No. of variables tried at each split: 2
OOB estimate of error rate: 23.4%
Confusion matrix:
0 1 class.error
0 636 79 0.1104895
```
**My Question:** Does the approach that I have proposed for supervised classification of longitudinal data sound reasonable (e.g. better than "nothing") - or are there any major statistical flaws on this approach (e.g. structural multicollinearity, variance inflation, etc.) ? Or is it better to use some supervised classification model/software implementation that has been specifically designed for longitudinal data (e.g. <https://cran.r-project.org/web/packages/LongituRF/LongituRF.pdf>)? Thanks!
Note:
* This is a rough sketch of the situation I am dealing with - I am also planning to include variables such as "number of days that elapsed since last fitness test".
* The sample data in this Stack Overflow question is randomly simulated and obviously won't show any longitudinal trends.
* I have heard that models such as Random Forest have the ability to recover/model around complex interactions and correlations within the data that otherwise need to be explicitly specified in standard supervised models (<https://ishwaran.org/papers/IKBL.AOAS.pdf>).<issue_comment>username_1: >
> I added "longitudinal variables" that take into account the number of times the students took the test and their most recent average cumulative score:
>
> My Question:
>
> a. Does the approach that I have proposed for supervised classification of longitudinal data sound reasonable (better than nothing)
>
> b. are there any major statistical flaws on this approach (e.g. structural multicollinearity, variance inflation, etc.)
>
>
>
You actually have a pretty good grasp of what is going on here.
With regards to your problem there are 2 interactions I see you should keep in mind:
**Predictability and Explainability**
you have some set of n factors [x1,x2,...,xn] and one binary label you are trying to predict Pass/Fail, with your positive label being those likely to fail.
So you are trying to figure out **P(Fail | {x1,x2,...xn})**
You want to be able to catch as many students as possible that are likely to fail so that you can help them, but with a model that is too complex, you lose the ability to explain *why* they needed the help. This can prevent you from helping future students by being proactive and addressing root causes.
To address your first question:
>
> *Does the (LV) approach ... sound reasonable (better than nothing)*
>
>
>
This approach makes a reasonable (common sense) assumption that when it comes to fitness, past performance acts as a good [Bayesian prior](https://medium.com/analytics-vidhya/introduction-to-bayesian-statistics-for-data-science-and-analytics-part-1-93e38d67fab5).
To address your second question:
>
> are there any major statistical flaws on this approach (e.g. structural multicollinearity, variance inflation, etc.)
>
>
>
Decision trees are [not affected by collinearity](https://arxiv.org/pdf/2111.02513.pdf) so they are a great model to address multicollinearity and variance inflation.
Decision trees also offer the added benefit of [**explainability**](https://aigents.co/data-science-blog/coding-tutorial/machine-learning-explainability-a-hands-on-introduction), which when it comes to dealing with students helps to mitigate ethical-social issues that might crop up. (why did you help *that* kid and not *my* kid)
Random forests allow you to increase the predictability of the model, but lose some explainability. I would recommend starting with a single decision tree first. However, with random forest you can plot feature importance. [R-doc: feature importance](https://www.r-bloggers.com/2021/07/feature-importance-in-random-forest/)
Be aware that with decision trees it is very easy to [overfit](http://saedsayad.com/decision_tree_overfitting.htm), one of the main ways this happens is by making a tree that is too deep and allowing too few samples at each split [R-doc: decision\_tree](https://www.rdocumentation.org/packages/parsnip/versions/0.1.6/topics/decision_tree).
You can *start* by trying to keep the depth small (3-4), and the min samples not too small (10+) and seeing how far that takes you.
You can increase the depth from there. Remember that you only have 200 samples (which is small for this kind of problem). So if the minimum number of samples to split is 10 that gives you only 20 nodes before they become a leaf.
To reach 20 nodes, you only need a depth of 5.
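A minimal sketch of these settings, assuming scikit-learn is available (the synthetic dataset and parameter values are purely illustrative, not a recommendation for your data; the question's own code is in R):

```python
# Hypothetical illustration with scikit-learn on a synthetic dataset.
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

# 200 samples to mirror the situation described in the question
X, y = make_classification(n_samples=200, n_features=5, random_state=0)

# shallow tree with a generous minimum split size, less prone to overfitting
tree = DecisionTreeClassifier(max_depth=3, min_samples_split=10, random_state=0)
scores = cross_val_score(tree, X, y, cv=5)  # 5-fold cross-validation
print(round(float(scores.mean()), 3), round(float(scores.std()), 3))
```

The cross-validated mean and spread give a quick read on whether the shallow tree is enough before you increase the depth.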
Hope this helps :)
Upvotes: 2 <issue_comment>username_2: I think there are some things you can do to get it work better.
Suggestions:
* add a column to the input giving number of previous tries at the test. If there was a score to go with it, include the average and standard deviation of those, if you can.
* start with 5-fold CV to get a sense of how well it generalizes, and to get a sense of the spread of your losses/fit-performance
* look at the h2o.ai random forest tool because you can get several
good things out of it that aren't available in all the 'old stuff'.
The "Flow" interface through the browser gives you good plots and
fit-analysis. (They have some very nice speed-ups for random forests in there.)
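The first suggestion (a "number of previous tries" column plus summary statistics of past scores) can be sketched in Python/pandas as follows (the question's code is in R; the column names and toy records here are invented, and the shift avoids leaking the current label into its own feature):

```python
import pandas as pd

# toy longitudinal records: one row per (student, attempt), in time order
df = pd.DataFrame({
    "id":    [1, 1, 1, 2, 2, 3],
    "score": [0, 1, 1, 0, 0, 1],
})

# number of previous attempts before the current row
df["prev_tries"] = df.groupby("id").cumcount()
# running mean of *previous* scores; shift() avoids leaking the current label
df["prev_avg"] = (df.groupby("id")["score"]
                    .transform(lambda s: s.shift().expanding().mean())
                    .fillna(0.0))
print(df)
```

A standard deviation feature can be added the same way by swapping `.mean()` for `.std()`.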
I have been "that kid" who took physicals and struggled, and took it again.
Having an estimated probability of membership can be more useful than a pass-fail because a 51% +/- 1% chance of passing and a 99% +/- 1% chance of passing can be very different creatures. If you have tried several times and failed, you might be more likely to fail again.
The h2o.ai "Flow" interface makes "pretty" tables and graphs with very little effort, and that can be nice for write-ups to give to a boss, or instructor.
I would not start with a single tree. You can learn all the wrong things. It is a Faustian bargain to pay the price of being much more incorrect to buy the ability to explain those incorrect actions clearly. There are tools for explaining RF, and in particular, any single outcome of a forest is a weighted average of single branches, so you can get the bounds on the axes that drove the decision and say "because it is in this window, the answer is that".
References:
* <https://www.usu.edu/math/adele/forests/ENAR.htm#slide0227.htm>
* <https://julienbeaulieu.github.io/2019/10/16/model-interpretation-with-random-forests-and-going-beyond-simple-predictions/>
Upvotes: 1 |
2022/06/06 | 1,386 | 4,539 | <issue_start>username_0: I've read that the discriminator $D$ evaluates an image via $D(x)$, where $x$ is either a real image or a fake one created by the generator from a noise input $z$, i.e. $D(G(z))$.
What does the function of the discriminator return? Is it either 0 (marked as fake) or 1 (discriminator thinks the image is real)? I have read that this function returns the whole $\mathbb{R}$, but I don't understand what the output then means.<issue_comment>username_1: Formally, for an input $x$, $D(x)$ gives you the probability of $x$ being real. In this sense $D:\mathcal{X}\rightarrow [0,1]$, where $\mathcal{X}$ is the input space.
That said, the output of the discriminator is a probability (hence within 0 and 1), and you get the prediction (fake or real) by considering the most probable outcome. Informally, $x$ is classified as fake for $D(x) < 0.5$ and as real otherwise.
**@edit:** on the issue of Least Squares GAN.
In the first 2 paragraphs of my answer, I considered the case of the original GAN proposed by Goodfellow et al. [1]. Nonetheless other types of GANs exist, that do not employ a sigmoid activation at the output layer of the discriminator. That is the case of the Least Squares GAN of Mao et al. [2], upon which the authors of CycleGAN based themselves on [3].
The authors of [2] raise the issue of using a sigmoid activation in section 3.2:
>
> when updating the generator, this loss function (cross-entropy on sigmoid activations) will cause the problem of vanishing gradients for the samples that are on the correct side of the decision boundary, but are still far from the real data.
>
>
>
The LS-GAN proposes the following workaround: $D(x) \in \mathbb{R}$, thus no sigmoid activation at the end. The loss is then substituted by the least squares loss. As follows, there is no clear encoding for fake and real labels anymore. For that reason the authors of [2] introduce constants $a$ and $b$, such that if $D(x) \approx a \implies x$ is fake, and $D(x) \approx b \implies x$ is real. For the generator, there is another constant $c$ such that $D(G(z)) \approx c \implies G(z)$ is fake. In [3] the authors picked $a = 0$, $b = 1$ and $c = 1$.
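A minimal NumPy sketch of the resulting least-squares losses with those constants (the example discriminator scores are made up; the 1/2 factors follow the formulation in [2]):

```python
import numpy as np

# Least-squares GAN losses with a = 0 (fake), b = 1 (real), c = 1,
# the constants picked in [3]. D outputs raw, unbounded real values.
a, b, c = 0.0, 1.0, 1.0

def d_loss(d_real, d_fake):
    """Discriminator pushes D(x) toward b and D(G(z)) toward a."""
    return 0.5 * np.mean((d_real - b) ** 2) + 0.5 * np.mean((d_fake - a) ** 2)

def g_loss(d_fake):
    """Generator pushes D(G(z)) toward c instead."""
    return 0.5 * np.mean((d_fake - c) ** 2)

d_real = np.array([0.9, 1.2])   # unbounded scores for real images
d_fake = np.array([-0.1, 0.3])  # unbounded scores for generated images
print(d_loss(d_real, d_fake), g_loss(d_fake))
```

Note how the generator loss penalizes fake samples that are far from $c$ even when they are on the "correct" side, which is exactly the vanishing-gradient workaround quoted above.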
References
----------
[1] <NAME>., <NAME>., <NAME>., <NAME>., <NAME>., <NAME>., ... & <NAME>. (2014). [Generative adversarial nets](https://arxiv.org/pdf/1406.2661.pdf). Advances in neural information processing systems, 27.
[2] <NAME>., <NAME>., <NAME>., <NAME>., <NAME>., & <NAME>. (2017). [Least squares generative adversarial networks](https://openaccess.thecvf.com/content_ICCV_2017/papers/Mao_Least_Squares_Generative_ICCV_2017_paper.pdf). In Proceedings of the IEEE international conference on computer vision (pp. 2794-2802).
[3] <NAME>., <NAME>., <NAME>., & <NAME>. (2017). [Unpaired image-to-image translation using cycle-consistent adversarial networks](https://arxiv.org/pdf/1703.10593.pdf). In Proceedings of the IEEE international conference on computer vision (pp. 2223-2232).
Upvotes: 2 <issue_comment>username_2: Let me try to explain this way, comment if you think it's incorrect.
Assume a simple linear function, $y=f(x)=ax+b$, where $a \in \mathbb{R}^\*$ and $b\in \mathbb{R}$. Each value of $y$ is unique, which means we can only get the same $y$ from the same $x$.
In GANs, the discriminator plays the same role as $f$, but with a more complex function and a high-dimensional input (for example, a $32\times32$ image has $32\times32=1024$ dimensions). Here $y$ is no longer unique, but it keeps a related property: the closer the inputs $x$ (same domain), the closer the $y$ values.
Post-processing the output of the discriminator adapts it to the loss function: [the original GANs of Ian Goodfellow](https://arxiv.org/abs/1406.2661) limit the output with a sigmoid function to wrap it in a probability range so it can be fed to a logarithm as cross-entropy, [WGAN](https://arxiv.org/abs/1701.07875) measures the Wasserstein distance with the raw real-valued discriminator output, and [Hinge loss](https://arxiv.org/pdf/1705.02894.pdf) clamps the output at 0.
If the notion of a domain feels unclear, take a look at the famous problem ["dogs vs cats"](https://www.kaggle.com/c/dogs-vs-cats): it's a binary classification task in which you build a deep learning model to distinguish the images. There are many types of dogs, but they still share common characteristics that place them in the "dog" domain. In GANs, the discriminator's task is the same: separate the real and fake domains, just like dog and cat.
Upvotes: 0 |
2022/06/07 | 1,476 | 6,679 | <issue_start>username_0: I have read many blog articles making all kinds of broad analogies to explain the exploration/exploitation trade-off. However, I still can't fully grasp it. On an extremely abstract level, I understand why you would want to "try new things to gain information", but then I don't understand why you would want to "exploit" in training. It seems as though it would be better to keep trying as many things as possible to gain the most information.
What is the value of exploitation during training? Intuitively, I would think you would only want to explore during "training" and only exploit in "testing".<issue_comment>username_1: In supervised ML there is no exploration and exploitation.
In reinforcement learning, the agent has many choices at each step.
So the agent can exploit, taking the move with the highest known reward, or explore, trying a different move in the hope of a better long-term benefit.
Upvotes: 0 <issue_comment>username_2: Exploitation is important during training to help the network encounter and learn to handle situations that don't occur until the network has successfully navigated other situations.
For example, consider the Atari game Breakout (a common RL benchmark). In this game, the player must move a paddle on the bottom of the screen to bounce a falling ball. The ball accelerates as the game continues, and the network can only get training data with a fast moving ball after successfully exploiting its knowledge of how to play when the ball is moving slowly.
A purely random strategy is possible (and often used for very early training) but only generates data useful for learning the beginning of the game.
Upvotes: 3 <issue_comment>username_3: An algorithm that chooses to always explore during training is unlikely to find an optimal policy because **it will be employing a more random search as opposed to a directed search**. During training, the neural network aims to determine the relation between states or state-action pairs and the reward signal through past experience. If the agent is always exploring during training, it will never use the experience gained from past episodes to influence its training policy and therefore will search more randomly.
Exploitation during training allows the neural net to use its past experience to guide its future actions and avoid random search. There can be many states in which there is an obvious optimal action. After some training, the neural network may be able to quickly learn these states and corresponding optimal actions. By primarily exploiting at those states, the agent will not be wasting training time by exploring suboptimal actions at those states, allowing the agent to focus its exploration on other more uncertain, unexplored, or complex parts of the state space.
For a practical example, consider the original Super Mario Bros game on NES. Let the reward be the number of pixels traveled to the right before Mario loses a life. If Mario is exploring the whole time, it is unlikely that he makes it very far to the right or over many obstacles, let alone to the flagpole. Since it is more rewarding in general to go to the right, Mario's exploitation action at most states is to run to the right. In this manner, Mario will usually run to the right until he reaches an obstacle (e.g. a pipe, pit, staircase, enemy). At that point, Mario may need to explore to overcome the obstacle, but Mario needed to exploit to be able to reach that obstacle in the first place.
Upvotes: 4 [selected_answer]<issue_comment>username_4: Imagine trying to navigate a maze from the outside. Let's say you lose if you get to a dead end, and win if you get to the middle. After some experience by random trials, we know where some dead ends are.
In the future, we should exploit our knowledge and *not* turn directly into a dead end, as this is simply inefficient. If we do this, we shall find the middle quicker :)
---
This leads to interesting questions, such as 'How does the algorithm know if a dead end is always a dead end?'
You might see that we would want to tune our randomness rate near a dead end: we might sometimes want to try going down that path, just to be sure, but most of the time let's not bother and go somewhere else. This gives some intuition behind POLICY methods in reinforcement learning (also see multi-armed bandits if interested) :)
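As a concrete sketch of the trade-off, here is a minimal epsilon-greedy agent on a toy k-armed bandit (all values here are made up for illustration): with probability epsilon it explores a random arm, otherwise it exploits the arm with the best estimated mean reward so far.

```python
import random

def run_bandit(true_means, epsilon, steps, seed=0):
    """Epsilon-greedy on a k-armed bandit; returns total collected reward."""
    rng = random.Random(seed)
    k = len(true_means)
    counts = [0] * k          # pulls per arm
    estimates = [0.0] * k     # running mean reward per arm
    total = 0.0
    for _ in range(steps):
        if rng.random() < epsilon:
            arm = rng.randrange(k)                       # explore
        else:
            arm = max(range(k), key=lambda a: estimates[a])  # exploit
        reward = true_means[arm] + rng.gauss(0.0, 0.1)   # noisy payoff
        counts[arm] += 1
        estimates[arm] += (reward - estimates[arm]) / counts[arm]
        total += reward
    return total

# Mostly exploiting (with a little exploration) collects far more reward
# than exploring at random the whole time.
mixed = run_bandit([0.2, 0.5, 0.9], epsilon=0.1, steps=2000)
pure_explore = run_bandit([0.2, 0.5, 0.9], epsilon=1.0, steps=2000)
```

With epsilon=1.0 the agent earns roughly the average of all arms; with epsilon=0.1 it quickly locks onto the best arm while still occasionally double-checking the others.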
Upvotes: 2 <issue_comment>username_5: There is an additional factor to consider about exploration/exploitation trade-off, that sometimes applies in addition to the reason in the [accepted answer](https://ai.stackexchange.com/a/35832/1847) and most other answers here.
Sometimes an agent is required to both act and train itself in a "real" system, or at least one where the rewards are more than just collected training data from a simulation, but also represent actual profits and losses realised by the agent.
This is a common feature of adaptive content display in advertising, which typically uses the simpler k-armed bandit or contextual bandit model - still related to RL, and importantly still affected by the exploration/exploitation trade-off. It is very hard for a machine to model how humans will respond to an advert, so the only reliable measurements are made in production. Each click-through is then real money to someone, so it is important to adapt quickly to incoming data - but due to the variance in results it is also important to still keep testing the non-optimal choice and improve on any early estimates.
In such a scenario, you have to accept *some* non-optimal returns as the cost of finding the best ones through trial and error. However, it is important to balance this with gaining as much reward as possible whilst training.
So it can be a more important consideration, to obtain best cumulative reward during an ongoing training process, than even finding the optimal policy. That means taking care to balance exploitation and exploration, often strongly favouring exploitation after a relatively short high exploration phase.
Upvotes: 2 <issue_comment>username_6: If you explore too much, you waste your time (among other resources). You will probably exhaust your resources before you learn anything meaningful.
Let's say your goal is to learn as much about Star Wars as possible within a library. If you fully explore, you just pick books at random.
Exploitation might look something like "*pick most of your books within the Sci-Fi section*", or "*choose books with lightsabers on the cover*" or "*books with 'Star Wars' in the title*" because that's where you have found relevant information in the past.
Upvotes: 2 |
2022/06/08 | 1,606 | 6,226 | <issue_start>username_0: Is it possible, in a transformer or other deep architecture, to include the number of layers as a parameter of the model so it could be learned?
In fact, I have a keras layer that I use to change the final layer without rebuilding the model, so I can just change a parameter between epochs (The original use was to try to train deep networks starting from shallower ones, increasing the number of layers after each epoch).
```
class LayerSelect(tf.keras.layers.Layer):
    def __init__(self, nlevels, **kwargs):
        super(LayerSelect, self).__init__(**kwargs)
        self.nlevels = nlevels
        self.range = tf.range(self.nlevels, dtype=tf.float32)

    def build(self, input_shape):
        self.kernel = self.add_weight(
            shape=(1,),
            initializer=tf.keras.initializers.Constant(min(self.nlevels, 14.0) / 1.9),
            trainable=True, dtype=tf.float32,
            constraint=lambda x: tf.clip_by_value(x, 1.0, self.nlevels))

    def call(self, inputs):
        selector = tf.math.maximum([0.0], 1.0 - 1.0 * (self.range - self.kernel) ** 2)
        final = tf.reduce_sum(inputs * selector, axis=-1)
        return final
```
The layer expects an stack of hidden layers to choose from:
```
allEncoders=tf.stack([encoder[level] for level in range(layers)],axis=-1)
finalEncoderRaw=adhoc.LayerSelect(layers)(allEncoders)
```
So that by calling `set_weights` during training I can choose any layer as output, or a combination of two, since the layer variable is a float; a wider selector, say `1.0 - 0.25 *(self.range-self.kernel)**2`, blends neighbouring layers.
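The behaviour of this kind of selector can be checked framework-free. A plain-Python stand-in with scalar "layer outputs" (the values are hypothetical, just to show the selection):

```python
def selector_weights(n_layers, kernel, width=1.0):
    """Weights 1 - width*(i - kernel)^2, clipped at zero — the same shape
    as the selector tensor above."""
    return [max(0.0, 1.0 - width * (i - kernel) ** 2) for i in range(n_layers)]

def soft_select(layer_outputs, kernel, width=1.0):
    """Weighted sum of per-layer outputs (scalars here for simplicity)."""
    w = selector_weights(len(layer_outputs), kernel, width)
    return sum(o * wi for o, wi in zip(layer_outputs, w))

# An integer kernel picks exactly one layer...
one = soft_select([10.0, 20.0, 30.0], kernel=1.0)    # 20.0
# ...while a fractional kernel blends the two neighbours.
blend = soft_select([10.0, 20.0, 30.0], kernel=1.5)  # 0.75*20 + 0.75*30 = 37.5
```

Note that at a fractional kernel the weights do not sum to 1, which is one reason the blended output may need normalization downstream.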
And, as you might expect, if I set the weight to be trainable, the optimiser moves the variable. But it either keeps wandering randomly by some small percentage or moves backwards towards smaller values. So is it possible that this approach is a dead end?
If not a way to patch this method, is there another successful method to train the number of layers without using meta-parameter (hyperparameter grids) farms?<issue_comment>username_1: I like the idea, but I fear this approach may be a dead end. I see a few problems:
Layers in front of (closer to the output than) the currently selected layer(s) don't affect the output, so they won't change and they can't learn to be good predictors for the true output.
Layers behind the currently selected layer won't be trained to predict the final output (they'll be trained to provide outputs are useful inputs for the selected layer) so switching to those layers is unlikely to improve the output.
And if another layer did provide a better approximation of the final output, the network is unlikely to switch to it unless it's next to the currently selected layer (i.e. the selection is likely to get stuck in a local minimum). For example, if layer 3 is selected, and 1 is a better prediction than 3, but 3 is better than 2, the network would stick with 3.
>
> is there another successful method to train the number of layers without using meta-parameter farms?
>
>
>
I haven't heard of one, and that seems like something that would be widely shared if it worked well.
I have seen genetic algorithms for selecting neural network architecture, but as far as I know they don't perform better than grid search for choosing the number of layers.
Upvotes: 3 [selected_answer]<issue_comment>username_2: It is always possible to use a Dense layer to allow the network to build its own menu of layers:
```
allEncoders = tf.stack([encoder[level] for level in range(1 + nlayers)], axis=-1)
layerSelect = tf.keras.layers.Dense(
    1, activation=None, use_bias=False,
    kernel_initializer=tf.keras.initializers.Constant(0.5),
    kernel_constraint=tf.keras.constraints.MinMaxNorm(axis=1))
almostFinalEncoder = tf.keras.layers.Reshape([-1, layerUnits])(layerSelect(allEncoders))
finalEncoder = tf.keras.layers.LayerNormalization()(almostFinalEncoder)
```
but one must be careful with the initializer, and the vector `layerSelect.get_weights()[0]` needs to be monitored.
Generically this layer will not converge to a Kronecker delta, and it must keep some extra weights open just for the sake of overfitting.
Still, the evolution seems to carry some information. See here the contour plot for a 64-layer, 256-unit transformer that reached overfitting at epoch 18 but keeps learning slowly. It seems to prefer to increase the weights of layers 1 to 24, and this shifts as learning progresses, decreasing the use of the first layers.
[](https://i.stack.imgur.com/JgJh8.png)
[](https://i.stack.imgur.com/mUcht.png)
(Sorry the colour scheme is confusing. Basically we have a descending trend in the first layers, ascending in the middle layer, then descending again in the tail)
Generically, most plots show a trend to zero out the first two layers (or never raising them at all if the starting vector is `[0.1,0.0,...]`) and the growth of some peaks that can be related to the structure of the network. One such peak can be due to finite size; of course, we cannot try an infinite number of layers, and I cannot imagine how to compensate for the long tail.
But it does not converge to a delta, or a set of deltas. And after only a few epochs it is not guaranteed that the best peak is the best layer. You have three nearby definitions that do not seem to coincide:
1. The layer with greatest weight in the Dense fusion of layers.
2. The layer with best result when evaluated with a Kronecker weight (w[l]=1, w[!l]=0) in the LayerSelect class.
3. The number of layers that give the best training in the usual, unfusioned, training.
Upvotes: 1 <issue_comment>username_3: This work has been done before; take a look at [this paper](https://arxiv.org/pdf/1611.01578.pdf). The authors search not only for the number of layers but also for the whole model architecture.
Using reinforcement learning, the system runs in a loop: it generates a model with an LSTM controller, then obtains the reward from the accuracy on the test set. It's a famous paper, but not widely used because of its high complexity and huge computation cost.
Upvotes: 1 |
2022/06/09 | 1,249 | 4,826 | <issue_start>username_0: I have been trying to train a Mask RCNN model to identify individual poker chips in a stack. No matter what property I change, the end results look like the following image. I was guessing the issue is that the objects are too close to each other for the proper detection. Is there any alternative model or property of mask RCCN or my training model I could possibly try to change?
[](https://i.stack.imgur.com/tt7g3.jpg)
[](https://i.stack.imgur.com/DZ7WJ.jpg)
2022/06/09 | 1,732 | 5,501 | <issue_start>username_0: I'm trying to get my toy network to learn a sine wave.
I output (via tanh) a number between -1 and 1, and I want the network to minimise the following loss, where `self(x)` are the predictions.
```
loss = -torch.mean(self(x)*y)
```
This should be equivalent to trading a stock with a sinusoidal price.
The issue I'm having is that the network doesn't learn anything. It *does* work if I change the loss function to be `torch.mean((self(x)-y)**2)` (MSE), but this isn't what I want. I'm trying to focus the network on 'making a profit', not making a prediction.
I think the issue may be related to the convexity of the loss function, but I'm not sure, and I'm not certain how to proceed. I've experimented with differing learning rates, but alas nothing works.
What should I be thinking about?
Actual code:
```
%load_ext tensorboard
import matplotlib.pyplot as plt; plt.rcParams["figure.figsize"] = (30, 8)
import torch; from torch.utils.data import Dataset, DataLoader
import torch.nn.functional as F; import pytorch_lightning as pl
from torch import nn, tensor

def piecewise(x): return 2 * (x > 0) - 1

class TsDs(torch.utils.data.Dataset):
    def __init__(self, s, l=5): super().__init__(); self.l, self.s = l, s
    def __len__(self): return self.s.shape[0] - 1 - self.l
    def __getitem__(self, i): return self.s[i:i + self.l], torch.log(self.s[i + self.l + 1] / self.s[i + self.l])
    def plt(self): plt.plot(self.s)

class TsDm(pl.LightningDataModule):
    def __init__(self, length=5000, batch_size=1000): super().__init__(); self.batch_size = batch_size; self.s = torch.sin(torch.arange(length) * 0.2) + 5 + 0 * torch.rand(length)
    def train_dataloader(self): return DataLoader(TsDs(self.s[:3999]), batch_size=self.batch_size, shuffle=True)
    def val_dataloader(self): return DataLoader(TsDs(self.s[4000:]), batch_size=self.batch_size)

dm = TsDm()

class MyModel(pl.LightningModule):
    def __init__(self, learning_rate=0.01):
        super().__init__(); self.learning_rate = learning_rate
        self.conv1 = nn.Conv1d(1, 5, 2)
        self.lin1 = nn.Linear(20, 3); self.lin2 = nn.Linear(3, 1)
        # self.network = nn.Sequential(nn.Conv1d(1,5,2),nn.ReLU(),nn.Linear(20,3),nn.ReLU(),nn.Linear(3,1), nn.Tanh())
        # self.network = nn.Sequential(nn.Linear(5,5),nn.ReLU(),nn.Linear(5,3),nn.ReLU(),nn.Linear(3,1), nn.Tanh())
    def forward(self, x):
        out = x.unsqueeze(1)
        out = self.conv1(out)
        out = out.reshape(-1, 20)
        out = nn.ReLU()(out)
        out = self.lin1(out)
        out = nn.ReLU()(out)
        out = self.lin2(out)
        return nn.Tanh()(out)
    def step(self, batch, batch_idx, stage):
        x, y = batch
        loss = -torch.mean(self(x) * y)
        # loss = torch.mean((self(x) - y) ** 2)
        print(loss)
        self.log("loss", loss, prog_bar=True)
        return loss
    def training_step(self, batch, batch_idx): return self.step(batch, batch_idx, "train")
    def validation_step(self, batch, batch_idx): return self.step(batch, batch_idx, "val")
    def configure_optimizers(self): return torch.optim.SGD(self.parameters(), lr=self.learning_rate)

# logger = pl.loggers.TensorBoardLogger(save_dir="/content/")
mm = MyModel(0.1); trainer = pl.Trainer(max_epochs=10)
# trainer.tune(mm, dm)
trainer.fit(mm, datamodule=dm)
```<issue_comment>username_1: The loss function you have defined is the negative mean of the predictions multiplied by the targets, where both values are on the closed interval [-1,1]. The alternative you've listed is the MSE. Let's look at what these two loss functions result in with some concrete values:
| self(x) | y | Your Loss | MSE |
| --- | --- | --- | --- |
| 1 | -1 | 1 | 4 |
| 0.5 | -1 | 0.5 | 2.25 |
| 0 | -1 | 0 | 1 |
| -0.5 | -1 | -0.5 | 0.25 |
| -1 | -1 | -1 | 0 |
A good loss function is such that when we are making good predictions the loss is close to 0, and when we are making bad predictions the loss increases above 0. That's why we often call the lost function a *cost function*. In other words, bad predictions are costly for the model, good predictions cost nothing ($L = 0$). As you can see, your loss function lacks this property, and instead it attains 0 whenever your prediction or the target is 0. Hence why your model cannot learn.
Upvotes: 1 <issue_comment>username_2: It doesn't matter that your loss is not convex. As a matter of fact, the loss function of a neural network is in general neither convex nor concave ([reference](https://stats.stackexchange.com/questions/106334/cost-function-of-neural-network-is-non-convex)).
As username_1 points out, the issue is that the loss function you've defined has nothing to do with the problem you're trying to solve.
For example, a stock price of zero is going to give you a loss of zero and hence no gradients at all: the neural network is allowed to output arbitrary values whenever the stock price is zero.
You want to "make a profit". I'm not sure why the MSE is not good in this case: if your neural network outputs the correct price of the stock for the next time period, you can use this information to make the trade which will maximize your profit.
Or do you want to predict the price further in the future? In that case you could use an MSE of the form
```
torch.mean((self(x_t)-y_{t+n})**2)
```
where `x_t` is the input at time period `t` and `y_{t+n}` the price of the stock at time period `t+n`, `n` a number you choose.
Upvotes: 3 [selected_answer] |
2022/06/10 | 1,075 | 4,305 | <issue_start>username_0: To generate synthetic dataset using a trained VAE, there is confusion between two approaches:
1. Use the learned latent space: `z = mu + (eps * exp(0.5 * log_var))` (the reparameterization trick) to generate (theoretically) infinite amounts of data. Here, we learn the `mu` and `log_var` vectors from the data, and `eps` is sampled from a multivariate standard Gaussian distribution.
2. Use a multivariate standard Gaussian distribution directly.
I am leaning more towards point 1, since we learn the `mu` and `log_var` vectors from our dataset, whereas point 2 uses the uninformative prior, which contains no particular information about the dataset.
One of the goals of a VAE is to learn this "unknown" latent-space distribution by constraining it to approximate a multivariate standard Gaussian distribution, while still allowing it sufficient flexibility to deviate from it.
What are your thoughts? I have implemented some VAE, Conditional VAE codes both in TensorFlow 2 and PyTorch which you can refer to [here](https://github.com/arjun-majumdar/Autoencoders_Experiments).<issue_comment>username_1: I think method 1 will provide the best output.
Approximating the empirical distribution of $z$ should provide decoder inputs in the subset of latent space that the decoder was trained on.
Sampling from $N(0,I)$ could undersample or omit some regions of the true distribution, oversample others, and even provide inputs to the decoder that it isn't trained for (and neural networks aren't usually good at extrapolation).
Upvotes: 0 <issue_comment>username_2: Few more clarifications. While the correct thing to do is draw from the prior, we have no guarantees that the aggregated posterior will cover the prior. Think of the aggregated posterior as the distribution of the latent variables for your dataset (see [here](https://jmtomczak.github.io/blog/7/7_priors.html) for a nice explanation and visualization). Our hope is that this will be like the prior but often in practice we get a mismatch between the prior and the aggregate posterior. In this case sampling from the prior might fail because part of it is not covered by the aggregate posterior. This can be solved in various ways, like learning the prior or computing the aggregated posterior after training.
---
Maybe there's a misconception: we are not learning a single `mu` and `log_var` but a mapping (the encoder) from an image to `mu` and `log_var`. This is quite different, because `mu` and `log_var` are not two fixed vectors for the dataset but are computed separately for each image.
In similar fashion, the decoder is a learned mapping from the prior distribution $N(0,I)$ back to the image space.
Essentially the encoder takes the image as input and spits out the parameters of another gaussian (the posterior). This means that during training the input of the decoder is conditioned upon the image. Let's take MNIST for example. We hope that after the training the encoder has learned to spit out similar `mu` and `log_var` for similar digits and that the decoder has learned to decode noise from a posterior to a specific digit.
For example with a 1-dimensional latent what we hope for is something like this:
Input digit 0 --> Encoder gives mu 0.1 log\_var 0.3
Input digit 0 --> Encoder gives mu 0.2 log\_var 0.2
Input digit 1 --> Encoder gives mu 1.4 log\_var 0.2
Input digit 1 --> Encoder gives mu 1.5 log\_var 0.1
...
Input digit 9 --> Encoder gives mu -4.5 log\_var 0.3
[This blogpost](https://becominghuman.ai/using-variational-autoencoder-vae-to-generate-new-images-14328877e88d) has a nice visualization with 2d latents.
If we didn't have the encoder, we would always draw noise from the same `N(0,I)` gaussian. This could also work but then we'd need a different training technique like in GANs.
During test time we many times want to draw a sample from the whole data distribution and for that reason we should use the prior $N(0,I)$. If you for some reason want to condition the output to look like a specific sample then you can use the posterior. For example if you only want digits of 1 then you can pass an image of 1 through the encoder and then use the `mu`, `log_var` to draw samples.
So the question is: do you want a sample from the whole distribution? Then use the prior.
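The two sampling options can be sketched in a few lines. Here `mu` and `log_var` are made-up stand-ins for real encoder outputs, and in a real VAE a decoder would map `z` back to an image:

```python
import math
import random

rng = random.Random(0)

# Hypothetical encoder outputs for ONE input image (2-d latent).
mu = [1.4, -0.3]
log_var = [-1.0, -0.5]

# Posterior sample via the reparameterization trick: z = mu + eps * sigma,
# where sigma = exp(0.5 * log_var). Conditions generation on that image.
eps = [rng.gauss(0.0, 1.0) for _ in mu]
z_posterior = [m + e * math.exp(0.5 * lv) for m, e, lv in zip(mu, eps, log_var)]

# Prior sample: plain N(0, I) noise, independent of any particular input.
# Draws from the whole learned distribution.
z_prior = [rng.gauss(0.0, 1.0) for _ in mu]
```

Feeding `z_posterior` to the decoder gives variations of that specific input; feeding `z_prior` gives a sample from the whole data distribution (assuming the aggregated posterior covers the prior, as discussed above).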
Upvotes: 2 |
2022/06/11 | 1,284 | 4,931 | <issue_start>username_0: I am working with simulated sequential data and the goal is to forecast that data. Long-short-term-memory (LSTM) is one of the most advanced models to forecast time series according to this [post](https://ai.stackexchange.com/questions/27312/advantages-of-cnn-vs-lstm-for-sequence-data-like-text-or-log-files). I can imagine that it is a good model because of the memory-cells they use which are useful when learning of the past.
This [paper](https://journalofbigdata.springeropen.com/articles/10.1186/s40537-022-00599-y) discussed the use of CNN in time-series analysis. It says:
>
> CNN is suitable for forecasting time-series because it offers dilated
> convolutions, in which filters can be used to compute dilations
> between cells. The size of the space between each cell allows the
> neural network to understand better the relationships between the
> different observations in the time-series [14].
>
>
>
It even outperformed LSTM:
>
> A specific architecture of CNN, WaveNet, outperformed LSTM and the
> other methods in forecasting financial time-series [16].
>
>
>
I see more and more posts about the usage of CNN in combination with LSTM, but I can't find any information about the advantages and disadvantages of using these in combination.
This post ([Advantages of CNN vs. LSTM for sequence data like text or log-files](https://ai.stackexchange.com/questions/27312/advantages-of-cnn-vs-lstm-for-sequence-data-like-text-or-log-files)), it is asked about the advantages of CNN vs. LSTM. But I would like to know the advantages and disadvantages of adding CNN to LSTM for forecasting univariate sequential data? Or should you use one of the two algorithms?<issue_comment>username_1: I think method 1 will provide the best output.
Approximating the empirical distribution of $z$ should provide decoder inputs in the subset of latent space that the decoder was trained on.
Sampling from $N(0,I)$ could undersample or omit some regions of the true distribution, oversample others, and even provide inputs to the decoder that it isn't trained for (and neural networks aren't usually good at extrapolation).
Upvotes: 0 <issue_comment>username_2: Few more clarifications. While the correct thing to do is draw from the prior, we have no guarantees that the aggregated posterior will cover the prior. Think of the aggregated posterior as the distribution of the latent variables for your dataset (see [here](https://jmtomczak.github.io/blog/7/7_priors.html) for a nice explanation and visualization). Our hope is that this will be like the prior but often in practice we get a mismatch between the prior and the aggregate posterior. In this case sampling from the prior might fail because part of it is not covered by the aggregate posterior. This can be solved in various ways, like learning the prior or computing the aggregated posterior after training.
---
Maybe there's a misconception, we are not learning a `mu` and `log_var` but a mapping (encoder) from an image to `mu` and `log_var`. This is quite different because the `mu` and `log_var` are not two fixed vectors for the dataset but are computed separately for each image.
In similar fashion, the decoder is a learned mapping from the prior distribution $N(0,I)$ back to the image space.
Essentially the encoder takes the image as input and spits out the parameters of another gaussian (the posterior). This means that during training the input of the decoder is conditioned upon the image. Let's take MNIST for example. We hope that after the training the encoder has learned to spit out similar `mu` and `log_var` for similar digits and that the decoder has learned to decode noise from a posterior to a specific digit.
For example with a 1-dimensional latent what we hope for is something like this:
Input digit 0 --> Encoder gives mu 0.1 log\_var 0.3
Input digit 0 --> Encoder gives mu 0.2 log\_var 0.2
Input digit 1 --> Encoder gives mu 1.4 log\_var 0.2
Input digit 1 --> Encoder gives mu 1.5 log\_var 0.1
...
Input digit 9 --> Encoder gives mu -4.5 log\_var 0.3
[This blogpost](https://becominghuman.ai/using-variational-autoencoder-vae-to-generate-new-images-14328877e88d) has a nice visualization with 2d latents.
If we didn't have the encoder, we would always draw noise from the same `N(0,I)` gaussian. This could also work but then we'd need a different training technique like in GANs.
During test time we many times want to draw a sample from the whole data distribution and for that reason we should use the prior $N(0,I)$. If you for some reason want to condition the output to look like a specific sample then you can use the posterior. For example if you only want digits of 1 then you can pass an image of 1 through the encoder and then use the `mu`, `log_var` to draw samples.
So the questions is, do you want a sample from the whole distribution? Then use the prior.
Upvotes: 2 |
2022/06/13 | 960 | 3,755 | <issue_start>username_0: I was reading a paper and this paragraph said that:
>
> The ground truth score is calculated based on the intersection-over-union (IoU) of the perturbed image and the ground truth one. Since we
> would like to distinguish among IoU values close to 1.0, we use $IOU^3$
> as the ground truth score.
>
>
>
I couldn't find any references to this, nor mentioned in the paper. Is it just simply the calculated IoU to the power of 3? Or is it a special kind of IoU calculation method? What does this mean?<issue_comment>username_1: I think method 1 will provide the best output.
Approximating the empirical distribution of $z$ should provide decoder inputs in the subset of latent space that the decoder was trained on.
Sampling from $N(0,I)$ could undersample or omit some regions of the true distribution, oversample others, and even provide inputs to the decoder that it isn't trained for (and neural networks aren't usually good at extrapolation).
Upvotes: 0 <issue_comment>username_2: Few more clarifications. While the correct thing to do is draw from the prior, we have no guarantees that the aggregated posterior will cover the prior. Think of the aggregated posterior as the distribution of the latent variables for your dataset (see [here](https://jmtomczak.github.io/blog/7/7_priors.html) for a nice explanation and visualization). Our hope is that this will be like the prior but often in practice we get a mismatch between the prior and the aggregate posterior. In this case sampling from the prior might fail because part of it is not covered by the aggregate posterior. This can be solved in various ways, like learning the prior or computing the aggregated posterior after training.
---
Maybe there's a misconception, we are not learning a `mu` and `log_var` but a mapping (encoder) from an image to `mu` and `log_var`. This is quite different because the `mu` and `log_var` are not two fixed vectors for the dataset but are computed separately for each image.
In similar fashion, the decoder is a learned mapping from the prior distribution $N(0,I)$ back to the image space.
Essentially the encoder takes the image as input and spits out the parameters of another gaussian (the posterior). This means that during training the input of the decoder is conditioned upon the image. Let's take MNIST for example. We hope that after the training the encoder has learned to spit out similar `mu` and `log_var` for similar digits and that the decoder has learned to decode noise from a posterior to a specific digit.
For example with a 1-dimensional latent what we hope for is something like this:
Input digit 0 --> Encoder gives mu 0.1 log\_var 0.3
Input digit 0 --> Encoder gives mu 0.2 log\_var 0.2
Input digit 1 --> Encoder gives mu 1.4 log\_var 0.2
Input digit 1 --> Encoder gives mu 1.5 log\_var 0.1
...
Input digit 9 --> Encoder gives mu -4.5 log\_var 0.3
[This blogpost](https://becominghuman.ai/using-variational-autoencoder-vae-to-generate-new-images-14328877e88d) has a nice visualization with 2d latents.
If we didn't have the encoder, we would always draw noise from the same `N(0,I)` gaussian. This could also work but then we'd need a different training technique like in GANs.
At test time we often want to draw a sample from the whole data distribution, and for that reason we should use the prior $N(0,I)$. If you for some reason want to condition the output to look like a specific sample, then you can use the posterior. For example, if you only want digits of 1, you can pass an image of 1 through the encoder and then use the `mu`, `log_var` to draw samples.
So the question is: do you want a sample from the whole distribution? Then use the prior.
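As a minimal numpy sketch of the two options (the `mu`/`log_var` values below are made up; in practice they come from the encoder for a specific input):

```python
import numpy as np

rng = np.random.default_rng(0)
latent_dim = 2

# Sampling from the whole data distribution: draw z from the prior N(0, I).
z_prior = rng.standard_normal(latent_dim)

# Conditioning on a specific input: draw z from the posterior the encoder
# predicts for that input (mu and log_var are made-up stand-ins here).
mu = np.array([1.4, -0.3])
log_var = np.array([-2.0, -1.5])
z_posterior = mu + np.exp(0.5 * log_var) * rng.standard_normal(latent_dim)

# Either z_prior or z_posterior would then be passed to the decoder.
```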
Upvotes: 2 |
2022/06/13 | 1,360 | 5,567 | <issue_start>username_0: Basically what I want to do is to create a single vector representation of a list of skills belonging to employees at a company (one list per employee). The embedding will be a representation of an employee's "profile". The motivation behind this is (among other reasons) that I want to be able to identify clusters among the employees.
Assume I already have a trained FastText model (or Word2vec) that can generate good representations of the individual words in the list.
My current solution is simply to add all the word embeddings in an employee's list together (without any form of normalization). But I'm very unsure about whether this is the best approach to generating a good representation of an employee's profile.
The dimensions of the vectors are 300 and there are usually around 10 to 30 skills in a single list.
Any help would be greatly appreciated!
Example:
Let's say we have an IT consulting firm where each employee has their own set of skills. Some consultants are more experienced or versatile, thus having more skills listed in their profiles, e.g. we have:
```
alex_skills = ['microsoft azure', 'machine learning', 'data science', 'python', 'sklearn', 'xgboost', 'nginx', 'flask', 'SHAP', 'git', 'word2vec', 'statistics', 'deep learning', 'linux','docker compose', 'pandas']
carla_skills = ['devops', 'machine learning', 'deep learning', 'continuous integration', 'kubernetes', 'python','git', 'speech recognition', 'github', 'bitbucket', 'scikit-learn', 'natural language processing', 'pandas']
adam_skills = ['automation', 'robotic process automation', 'banking and finance', 'process mapping', 'IAM', 'väsentlighetsanalys', 'business intelligence', 'auditor', 'requirements handling', 'risk management', 'coordinator', 'project manager', 'data visualization']
```
As you can see Alex and Carla are more similar and should possibly be in the same cluster, while Adam might not be.
So I want to make a vector representation of the entire list of skills. I will then use these vector representations in some clustering algorithm (e.g. HDBSCAN) and, with some distance metric (e.g. cosine distance), capture the relation between Alex and Carla.
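For reference, here is a minimal sketch of what I'm doing now (random vectors stand in for the real FastText lookups; `fake_wv` is a made-up stand-in for `model.wv`):

```python
import numpy as np

rng = np.random.default_rng(0)
dim = 300
fake_wv = {s: rng.normal(size=dim) for s in
           ['machine learning', 'python', 'git', 'devops', 'pandas']}

def profile_vector(skills):
    """Sum the skill embeddings (no normalization, as described above)."""
    return np.sum([fake_wv[s] for s in skills if s in fake_wv], axis=0)

alex = profile_vector(['machine learning', 'python', 'git'])
carla = profile_vector(['devops', 'git', 'pandas', 'python'])

# Cosine distance between two profiles, usable in e.g. HDBSCAN:
cos_sim = alex @ carla / (np.linalg.norm(alex) * np.linalg.norm(carla))
cos_dist = 1.0 - cos_sim
```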
I suspect the fact that the lists have different lengths might cause problems, therefore maybe divide by the length of the list after adding? |
2022/06/15 | 957 | 3,792 | <issue_start>username_0: I'm trying to "solve" the OpenAI gym environment "Humanoid-v3" using PPO. I got it to work to some degree (The NN is learning a policy and perfecting it. Average reward of about 5.5k). However, the learned policies do not yet resemble the human stride (like in the [PPO blog post](https://openai.com/blog/openai-baselines-ppo/)), which brought up a question.
Should the algorithm always converge toward the global optimum (given good hyperparameters)? Or is a good convergence somewhat luck-based and you may need multiple training processes? |
2022/06/20 | 316 | 1,153 | <issue_start>username_0: The image below shows my model's average reward.
How can I tell whether it is undertrained or has not converged? How many training steps does it usually take to train an RL model?
And I'm using [PPO](https://stable-baselines3.readthedocs.io/en/master/modules/ppo.html) to train.
[](https://i.stack.imgur.com/OQFOD.png)<issue_comment>username_1: This is not possible to know in advance precisely, only approximately, but it also strongly depends on the environment, hyperparameters and algorithm. For hard environments, e.g. the ones learning from pixels such as the Atari games, you can easily expect 1-10M steps.
You can get a feeling by looking at many solved environments, for instance here: <https://github.com/Rafael1s/Deep-Reinforcement-Learning-Algorithms>
Upvotes: 1 <issue_comment>username_2: It depends on the the problem you're applying PPO to. To get an idea, you can have a look at the [CleanRL](https://docs.cleanrl.dev/) benchmarks, there are a few of them where they use PPO: <https://wandb.ai/openrlbenchmark/openrlbenchmark/reportlist>.
Upvotes: 0 |
2022/06/20 | 1,087 | 4,688 | <issue_start>username_0: Let's consider the following example from BERT
[](https://i.stack.imgur.com/YLGSz.png)
I cannot understand why "the input embeddings are the *sum* of the token embeddings, the segmentation embeddings, and the position embeddings". The thing is, these embeddings carry different types of information, so intuitively adding them together doesn't really make sense. I mean, you cannot add 2 meters to 3 kilograms, but you can make a tuple (2 meters, 3 kilograms), so I think it's more natural to concatenate these embedding together. By adding them together, we are assuming the information about token, segmentation, and position can be simultaneously represented in the same embedding space, but that sounds like a bold claim.
Other transformers, like ViTMAE, seem to follow the trend of [adding position embeddings to other "semantic" embeddings](https://github.com/huggingface/transformers/blob/v4.20.0/src/transformers/models/vit_mae/modeling_vit_mae.py#L795). What's the rationale behind the practice?<issue_comment>username_1: First of all, I think it is very hard to properly reason about these things, but there are a few points that might justify using sum instead of concatenation.
For example, concatenation would have the drawback of increasing the dimensionality. So for subsequent residual connections to work, you would either have to use the increased dimensionality throughout the model, or add yet another layer to transform it back to the original dimensionality.
>
> The thing is, these embeddings carry different types of information, so intuitively adding them together doesn't really make sense. I mean, you cannot add 2 meters to 3 kilograms, but you can make a tuple
>
>
>
I would say that because the token embedding is learned, you cannot really compare it to a fixed unit like a kilogram. Instead the embedding space of the token can be optimized to work with the positional encoding under summation.
>
> By adding them together, we are assuming the information about token, segmentation, and position can be simultaneously represented in the same embedding space, but that sounds like a bold claim.
>
>
>
The same applies here, the problem is not to embed them into the same space, but rather that subsequent layers can separate the position information from the token information.
And I think this is possible for two reasons. Firstly, if you look at the visual representation of the positional embedding, the highest distortion by summation would happen in the first dimensions:
 (Image taken from [here](https://www.inovex.de/de/blog/positional-encoding-everything-you-need-to-know/))
Therefore the token embedding could learn to encode high-frequency information only in the last dimensions to be less affected by the positional embedding.
I think another interesting statement in the Transformer paper is that the positional encoding behaves linearly w.r.t. relative position:
>
> We chose this function because we hypothesized it would allow the model to easily learn to attend by relative positions, since for any fixed offset $k$, $PE\_{pos+k}$ can be represented as a linear function of $PE\_{pos}$.
> [[Source: Transformer Paper]](https://arxiv.org/abs/1706.03762)
>
>
>
So this property shouldn't add additional non-linearity to the token embedding, but instead acts more like a linear transformation, since any change in position changes the embedding linearly. In my intuition this should also enable easy separation of positional vs token information.
*This is my intuition so far, I am happy to hear your thoughts and additions*
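To make the summation concrete, here is a small numpy sketch of the fixed sinusoidal encoding from the Transformer paper being added to token embeddings (note that BERT itself uses *learned* position embeddings; the random vectors below are made-up stand-ins for learned token embeddings):

```python
import numpy as np

def sinusoidal_pe(max_len, d_model):
    """Sinusoidal positional encoding from the Transformer paper."""
    pos = np.arange(max_len)[:, None]          # (max_len, 1)
    i = np.arange(d_model // 2)[None, :]       # (1, d_model/2)
    angles = pos / (10000 ** (2 * i / d_model))
    pe = np.zeros((max_len, d_model))
    pe[:, 0::2] = np.sin(angles)               # even dims: sine
    pe[:, 1::2] = np.cos(angles)               # odd dims: cosine
    return pe

d_model, seq_len = 64, 10
pe = sinusoidal_pe(seq_len, d_model)

rng = np.random.default_rng(0)
token_emb = rng.normal(size=(seq_len, d_model))  # stand-in for learned embeddings
x = token_emb + pe                               # the summation discussed above
# Note how the last columns of pe vary very slowly with position, so they
# distort the token embedding far less than the first columns.
```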
Upvotes: 2 <issue_comment>username_2: The confusion here is that we believe positional embedding is a more complicated version of adding positional information to the word embedding; however, it actually is not. Adding new dimensions to each embedding would increase the dimensionality of the problem. On the other hand, please note that the added positional embedding is static, as shown in this image for a 2D positional embedding:
[](https://i.stack.imgur.com/zbcvp.png)
The added positional embeddings are the same for all the inputs, and the transformer can separate the positional information from the actual word embedding through the training process. Therefore, the positional embedding doesn't mess with the word embedding information, and adding them is a more efficient way of injecting positional information than concatenating them.
Upvotes: 1 |
2022/06/23 | 274 | 1,192 | <issue_start>username_0: What is the proper technical name of the classification problem where each data sample can be classified according to two different criteria and each of them can have two or more classes?
For example age/gender estimation problem where age is one criterion, gender is second.
Age can be divided into 4 age groups and gender into 2. And model should generated estimate of both age and gender for every sample.<issue_comment>username_1: I think what you are looking for is multi-label classification. Multi-label classification can take care of both of your constraints -
>
> each data sample can be classified according to two different criteria
> and each of them can have two or more classes
>
>
>
If you just had the latter constraint, I'd have suggested multi-class classification, but multi-label classifiers are a subset of multi-class classifiers.
See [What is the difference between Multiclass and Multilabel Problem](https://stats.stackexchange.com/a/133205).
Upvotes: 1 <issue_comment>username_2: This is called multi-task learning, as you have two independent classification tasks. Note that this is not the same as multi-label learning.
Upvotes: 0 |
2022/06/24 | 865 | 2,808 | <issue_start>username_0: I'm reading this interesting [blog post](https://lilianweng.github.io/posts/2021-07-11-diffusion-models/) explaining *diffusion probabilistic models* and trying to understand the following.
In order to compute the *reverse process*, we need to consider the posterior distribution $q(\textbf{x}\_{t-1} | \textbf{x}\_t)$ which is said to be **intractable**\*
>
> because **it needs to use the entire dataset** and therefore we need to learn a model $p\_\theta$ to approximate these conditional probabilities in order to run the reverse diffusion process.
>
>
>
If we use *Bayes theorem* we have
$$q(\textbf{x}\_{t-1} | \textbf{x}\_t) = \frac{q(\textbf{x}\_t |\textbf{x}\_{t-1})q(\textbf{x}\_{t-1})}{q(\textbf{x}\_t)}$$
I understand that indeed we don't have any prior knowledge of $q(\textbf{x}\_{t-1})$ or $q(\textbf{x}\_t)$ since this would mean already having the distribution we are trying to estimate. Is this correct?
The above posterior becomes **tractable** when conditioned on $\textbf{x}\_0$ and we obtain
$$q(\textbf{x}\_{t-1} | \textbf{x}\_t , \textbf{x}\_0) = \mathcal{N}(\tilde{\bf{\mu}}(\textbf{x}\_t , \textbf{x}\_0) \, , \, \tilde{\beta}\_t \textbf{I})$$
So, apparently, we obtain a **posterior** that can be calculated in **closed form** when we condition on the original data $\textbf{x}\_0$. At this point, I don't understand the role of the model $p\_\theta$ : why do we need to tune the parameters of a model if we can already obtain our posterior?<issue_comment>username_1: I am also learning diffusion models and would like to give some information.
>
> At this point, I don't understand the role of the model $p\_\theta$
>
>
>
To clarify a bit: $p\_\theta$ is just another notation for the U-net; its role is to receive ($x\_t$, $t$) (sometimes it also receives a class label $y$) and to predict $x\_0$ OR $x\_{t-1}$, depending on the paper. So at the end of the day, to synthesize new data, given a noisy (usually Gaussian) image, the U-net can iteratively predict $x\_0$ better and better - check out Algorithm 2 in the DDPM paper (2020).
Your question about the posterior might be answered in more detail here: [Diffusion Models | Paper Explanation | Math Explained - YouTube](https://www.youtube.com/watch?v=HoKDTa5jHvg)
Check video time around **18:00** that explains a bit more information regarding $x\_0$ guided process in the optimization of lower boundary.
Upvotes: 2 <issue_comment>username_2: You do not yet have $\mathbf{x}\_0$ during sampling (as opposed to training). That's why you need to approximate $q(\mathbf{x}\_{t−1}|\mathbf{x}\_t, \mathbf{x}\_0)$ with $p\_{\theta}(\mathbf{x}\_{t−1}|\mathbf{x}\_t)$ via variational inference (by minimizing a KL divergence). After training on good data, this should produce an approximation of $\mathbf{x}\_0$.
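As a rough numpy sketch of the closed-form posterior from the question, using the standard DDPM notation ($\alpha\_t = 1-\beta\_t$, $\bar{\alpha}\_t = \prod\_{s\le t}\alpha\_s$; the linear $\beta$ schedule and $T$ below are illustrative choices, not from the question):

```python
import numpy as np

T = 1000
betas = np.linspace(1e-4, 0.02, T)
alphas = 1.0 - betas
bar_alphas = np.cumprod(alphas)

def posterior_params(x_t, x_0, t):
    """Mean and variance of q(x_{t-1} | x_t, x_0) for a step t >= 1."""
    beta_t = betas[t]
    bar_a_t, bar_a_prev = bar_alphas[t], bar_alphas[t - 1]
    tilde_beta = (1.0 - bar_a_prev) / (1.0 - bar_a_t) * beta_t
    tilde_mu = (np.sqrt(bar_a_prev) * beta_t / (1.0 - bar_a_t)) * x_0 \
             + (np.sqrt(alphas[t]) * (1.0 - bar_a_prev) / (1.0 - bar_a_t)) * x_t
    return tilde_mu, tilde_beta

x_0 = np.zeros(3)       # known only during training
x_t = np.ones(3)        # the noisy sample at step t
mu, var = posterior_params(x_t, x_0, t=500)
```

During sampling $x\_0$ is unknown, which is exactly where the learned $p\_\theta$ has to stand in.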
Upvotes: 0 |
2022/06/26 | 601 | 2,451 | <issue_start>username_0: I have heard a lot of hype around LSTM for all kinds of time-series based applications, including NLP. Despite this, I haven't seen many (if any) applications where LSTM performs ***uniquely*** well compared to other types of deep learning, including more vanilla RNNs.
Are there any examples where LSTM does significantly better on a particular task, compared to other modern algorithms and architectures?<issue_comment>username_1: LSTMs were the state-of-the-art (SOTA) in many cases (e.g. machine translation) until transformers came along - now I don't really know the SOTA or where LSTMs still perform better than e.g. transformers. LSTMs were introduced to solve the **vanishing** and **exploding gradient problems**. Even [the LSTM paper](http://www.bioinf.jku.at/publications/older/2604.pdf) tells you that
>
> In comparisons with RTRL, BPTT, Recurrent Cascade-Correlation,
> Elman nets, and Neural Sequence Chunking, LSTM leads to many **more successful runs, and learns much faster. LSTM also solves complex, artificial long time lag tasks that have never been solved by previous recurrent network algorithms**.
>
>
>
For a specific case where LSTM achieved SOTA (if I remember correctly), you can check [the **neural machine translation** paper](https://arxiv.org/pdf/1409.0473.pdf). Google used LSTMs for some time in Google Translate. See [this paper](https://research.google/pubs/pub45610/) for more details.
Upvotes: 3 [selected_answer]<issue_comment>username_2: RNNs/LSTMs work better on smaller datasets compared to transformers, assuming neither is pretrained.
RNNs perform worse than transformers on most tasks with enough data, due to RNNs having a higher model bias.
Recall that in many ML classes, you are taught that with enough samples, models with higher bias (and lower complexity) will perform worse than models with lower bias (and higher complexity).
RNNs take in a sequence of elements $x\_1,x\_2,...,x\_n$ and assume a Markov property. This Markov property is used to simplify/reduce the complexity of representations for the input. That, and the fact that the RNN only uses one representation for the sequence of tokens, are examples of simplifying/introducing a bias into the model architecture.
Transformers create a representation for each token, and do not assume that Markov property.
This means they have lower bias than RNNs and can scale better with more data.
Upvotes: 1 |
2022/06/26 | 339 | 1,411 | <issue_start>username_0: While reading the book, Hands-On Machine Learning with Scikit-Learn, Keras, and TensorFlow: Concepts, Tools, and Techniques to Build Intelligent Systems, I read that VAEs using a sampling technique to obtain values from the coding layer. However, the output of a neural network activation is just a single value. In that case, what values does the VAE sample?<issue_comment>username_1: The VAE's encoder is usually implemented to produce the mean and variance. These are vectors, which can also be 1-dimensional (equivalently, scalars). If that's the case, then the latent vector is also 1-dimensional (if I understand your question correctly). See [this implementation](https://github.com/pytorch/examples/blob/main/vae/main.py#L44), where the size of the mean and variance vector is $20$, so the size of $z$ is also $20$, but you can change this to $1$.
(Actually, the (co)variance may not be a vector but a matrix, but I assume a diagonal covariance, like in the linked implementation and the VAE paper).
Upvotes: 2 [selected_answer]<issue_comment>username_2: The idea of a VAE is that the code/feature layers produce a probability distribution of features (Gaussian to be exact), by predicting its mean and variance.
This is the point that requires sampling, since probability distributions are abstract concepts, and sampling is the only way to get numbers from them.
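As a sketch, this sampling step is usually implemented with the reparameterization trick (the `mu`/`log_var` values below are random stand-ins for what the encoder would output for one input):

```python
import numpy as np

rng = np.random.default_rng(0)
latent_dim = 20

mu = rng.normal(size=latent_dim)       # predicted mean of the Gaussian
log_var = rng.normal(size=latent_dim)  # predicted log-variance

eps = rng.standard_normal(latent_dim)  # noise from N(0, I)
z = mu + np.exp(0.5 * log_var) * eps   # a sample from N(mu, diag(exp(log_var)))
# z is the vector that gets passed to the decoder.
```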
Upvotes: 2 |
2022/06/28 | 704 | 2,865 | <issue_start>username_0: Let's consider a deep convolutional network. It seems that there is some consensus on the following notions:
**1. Shallow layers tend to recognise more low-level features such as edges and curves.**
**2. Deeper layers tend to recognise more high-level features (whatever this means).**
While I usually come across various online articles and blogs that state this, no one ever cites literature that supports the claim. I am not asking why this phenomenon happens; I only want to know whether it has actually been tested and documented. Also, I am barely able to find any peer-reviewed literature that provides evidence of this on sites such as Google Scholar or ResearchGate.
Could anyone point me to the right direction?<issue_comment>username_1: You won’t find literature on this point because it’s true by definition. Low-level features are simple statistics of the raw input. High-level features are statistics of lower-level features. In a convolutional (or any feedforward) network the shallowest layers compute statistics directly on the input, so they create the lowest-level features. Deeper layers operate on the features of shallower layers, so they create higher-level features.
As an example, edges might be computed at the lowest/first convolutional layer, then corners at the second layer by looking for two perpendicular edges. The highest level may detect, say, whole faces, which is a complex, aka high-order, statistic.
If you want a visual example of what ‘low-level’ and ‘high-level’ features look like in a convolutional network, check out Google’s [Deep Dream Generator](https://deepdreamgenerator.com/), which emphasizes what the different layers ‘see’.
Upvotes: 0 <issue_comment>username_2: It is assumed that NNs build up a hierarchical representation, whereby each layer combines features from the lower-level layers. The layers could be understood as representing a cascade of stacked features:
edges -> texture -> patterns -> parts -> objects
So from lower-level patterns to the more abstract higher-level concept like representation. This [Distill article](https://distill.pub/2017/feature-visualization/) as far as I can tell is one of the most cited sources (740 citations) and provides an in-depth explanation of the features and how to visualize them. The journal is peer-reviewed.
The post also points to some older references such as: [this](https://www.researchgate.net/publication/265022827_Visualizing_Higher-Layer_Features_of_a_Deep_Network), [this](https://arxiv.org/pdf/1312.6034.pdf) or [this](https://www.auduno.com/2015/07/29/visualizing-googlenet-classes/). The [website of <NAME>](https://colah.github.io/) one of the authors of the Distill article is also a great source for finding visualizations for different deep learning architectures.
Upvotes: 1 |
2022/06/29 | 808 | 3,101 | <issue_start>username_0: I am wondering how a plain autoencoder could be a generative model. I know that some of its variants are generative, but how can a plain autoencoder be? I know that VAEs, which are a variant of the autoencoder, are generative, as they explicitly model a distribution over the latent variables and the whole data. But I cannot see how a plain autoencoder generates a probability distribution and becomes a generative model.
Also, this YouTube video says a plain autoencoder is not a generative model: [here](https://www.youtube.com/watch?v=c27SHdQr4lw&t=380s&ab_channel=PaulHand). See the last line in the picture below.
[](https://i.stack.imgur.com/28Eq4.png)<issue_comment>username_1: An autoencoder is not considered a generative model, because it only reconstructs the given input. You could use the decoder *like* a generative model by putting in different vectors. However, the standard autoencoder mostly learns a sparse latent space. This means that you will have distinct clusters in the latent space (see the left image below). The decoder has never learned to reconstruct vectors in between the clusters, so it will produce very abstract things - mostly garbage.
Instead a variational autoencoder (VAE) is considered a generative model. It's basically an autoencoder with a modified bottleneck. This VAE learns a dense latent space (see image on the right), this means you can sample any vector from the latent space, pass it to the model and it will give you a nice result with somewhat interpolated object properties from the dataset.
[This article](https://news.sophos.com/en-us/2018/06/15/using-variational-autoencoders-to-learn-variations-in-data/) provides a nice overview of the two models.
[](https://i.stack.imgur.com/tIvHU.png)
*Figure taken from [here](https://news.sophos.com/en-us/2018/06/15/using-variational-autoencoders-to-learn-variations-in-data/)*
Upvotes: 5 [selected_answer]<issue_comment>username_2: Just as completing, in general, the autoencoders are an *unsupervised learning technique* in which we use neural networks for the task of representation learning. Specifically, we'll design a neural network architecture to impose a network bottleneck, forcing a *compressed knowledge representation* of the original input. In this case, we dont generate any new data and just compress them.
In the case of VQ-VAE or VAE, we generate new data because the architecture encodes all data in a latent space and samples from it (continuous in VAE and discrete in VQ-VAE). Therefore, although the decoder will generate similar outputs (if regularized well), the output is not just a compressed version of the input. For better understanding: in a VAE, the mean and variance of the encoded data are used for generation.
Also, you can read these good references: <https://www.jeremyjordan.me/autoencoders/> , <https://www.v7labs.com/blog/autoencoders-guide> , <https://www.analyticsvidhya.com/blog/2021/06/complete-guide-on-how-to-use-autoencoders-in-python/>
Upvotes: 0 |
2022/07/02 | 1,053 | 3,228 | <issue_start>username_0: In reinforcement learning, we define the optimal policy $\pi^\*$ as the policy that maximizes the value of the state:
$$
\pi\_v^\*=\underset{\pi}{\operatorname{argmax}} {V\_{\pi}(s)}
$$
In Q-learning, we try to find a policy that maximize the state-action value function Q:
$$
\pi\_q^\*=\underset{\pi}{\operatorname{argmax}} {Q\_{\pi}(s,a)}
$$
However, does maximizing the value function and maximizing the state-action value function generate the same optimal policy? In the general case of continuous, stochastic action, $V\_{\pi}(s)$ is connected to $Q\_{\pi}(s,a)$ by:
$$
V\_{\pi}(s)=E\_{a\sim\pi}[Q\_{\pi}(s,a)]
$$
So
$$
\pi\_v^\*=\underset{\pi}{\operatorname{argmax}} {V\_{\pi}(s)}=\underset{\pi}{\operatorname{argmax}} E\_{a\sim\pi}[Q\_{\pi}(s,a)]
$$
And if $\pi\_v^\*=\pi\_q^\*$, mathematically I'm not sure why the expectation $E\_{a\sim\pi}$ before $Q\_{\pi}(s,a)$ can be simply dropped.<issue_comment>username_1: I am not sure your equations are correct, or that I understand your question very well, but I will write the main equations for the $V$ and $Q$ functions from the Sutton book, which may help you:
$ v^{\*}(s) = \max\_{\pi} v\_{\pi}(s) $
$ q^{\*}(s,a) = \max\_{\pi} q\_{\pi}(s,a) $
$ v^{\*}(s)=\max\_{a}\mathbb{E}[R\_{t+1}+\gamma v^{\*}(S\_{t+1})|S\_{t}=s, A\_{t}=a] $
$ q^{\*}(s,a)=\mathbb{E}[R\_{t+1}+\gamma \max\_{a^{'}} q^{\*}(S\_{t+1},a^{'})|S\_{t}=s, A\_{t}=a] $
$ v^{\*}(s)=\max\_{a} q^{\*}(s,a) $
As you can see, the optimal value function is equal to the maximum of the Q function over all feasible actions. We reach the optimal policy if we calculate the $\operatorname{argmax}$ of these functions over the possible actions. Also, the policy found via the value function is not always the same as the one found via the Q function, because their inputs are different. The Q function considers both states and actions, which not only reduces the processing time but also chooses the shortest optimal path. For example, you can test a simple graphical navigation task in the GitHub repository below:
<https://github.com/pouyan-asg/global-path-planning>
In this example, you will see that the path the agent chooses to reach the destination is not the same in both cases, and the runtime in value-function mode is higher (because the agent does not consider the actions). I hope my answer is clear.
Upvotes: 0 <issue_comment>username_2: Your question can be answered by observing the expression of the Bellman Optimality Equation:
$$
v(s)
= {\max\_\pi} \sum\_{a} {\pi(a|s)}\left(\sum\_{r}p(r|s,a)r + \gamma \sum\_{s'}p(s'|s,a)v(s')\right),\nonumber\\
\doteq {\max\_\pi} \sum\_{a} {\pi(a|s)} q(s,a),\quad \text{for all } s\in\mathcal{S}.
$$
If $v\_{\pi^\*}(s)$ is the solution to this equation, then $q(s,a)$ in this case equals $q\_{\pi^\*}(s,a)$, which is the action value under $\pi^\*$. That is, the optimal state value of $s$ equals the maximum optimal action value at $s$.
You can check the details of the above equation in Equation (3.1) in the book: [Mathematical foundation of reinforcement learning](https://github.com/MathFoundationRL/Book-Mathmatical-Foundation-of-Reinforcement-Learning).
Moreover, the Bellman optimality equation can be expressed in either state values or action values. See the expression in action values in equation (7.16) in the book.
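A short way to see why the expectation in the question can be dropped (a sketch, for a fixed action-value function $q$): an expectation over actions is a convex combination of the action values, so for any policy $\pi$
$$
\sum\_{a}\pi(a|s)\,q(s,a) \le \max\_{a} q(s,a),
$$
with equality exactly when $\pi$ puts all of its probability on a maximizing action. Hence the maximization over policies reduces to a maximization over actions, $\max\_{\pi} E\_{a\sim\pi}[q(s,a)] = \max\_{a} q(s,a)$, which is why the expectation disappears.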
Upvotes: 1 |
2022/07/02 | 975 | 3,278 | <issue_start>username_0: I would like to approximate the following relation by a neural network
$y = \mathcal{f}(x\_1(t),x\_2(t))$
Here, I have only one output variable, which is a function of 2 other variables that vary in time. Now, I want to be able to predict $y$, given any shape of the 2 independent variables in time. For this reason, I have a training set corresponding to different input and output signals. However, I don't know how to make the neural network understand the concept of time, which is very important since I expect the solution at time $t\_k$ to be influenced by the previous instants in time. For this reason, I added the time derivative as an input variable:
$y = \mathcal{f}\left(x\_1(t),x\_2(t), \dfrac{\partial x\_1 (t)}{\partial t}\right)$
This solution seems to work quite well for the fully connected neural network that I'm using. However, I would like to know if there are other ways to treat such problems where the time history is important.
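The feature construction described in the question can be sketched as follows (a toy illustration with made-up signals; in practice $x\_1$ and $x\_2$ come from the training set):

```python
import numpy as np

# Toy input signals x1(t) and x2(t) sampled on a uniform time grid.
t = np.linspace(0.0, 1.0, 100)
x1 = np.sin(2 * np.pi * t)
x2 = np.cos(2 * np.pi * t)

# Finite-difference estimate of dx1/dt, appended as an extra input feature.
dx1_dt = np.gradient(x1, t)
features = np.stack([x1, x2, dx1_dt], axis=1)  # shape (100, 3)
```

Alternatives that treat the time history directly include recurrent networks (LSTM/GRU) and 1-D convolutions over a window of past samples.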
2022/07/03 | 850 | 3,765 | <issue_start>username_0: I want to design a multi-arm bandit system for a multi-step, multi-location system. Locations are dynamic, so I can not design the system based on them. In each location, the alternative actions that can be taken would be different. When you take correct actions, taken in correct locations, then some rewards would be earned. Some other alternative rewards can be incorporated in the system for the activities taken before reaching the correct state.
I know this may not be very clear. What I want to ask is "Is there a way to form a reward function so that it would take into account the order of the actions or the correctness of the order of the actions?".
Previously, I have implemented some other multi-arm bandit problems, but they were more straightforward. I need some ideas to help me to implement this new type of problem from some experts.<issue_comment>username_1: You are describing an environment which requires a full Markov Decision Process (MDP) to model it and reinforcement learning (RL) algorithms to solve it. You will not be able to adapt k-armed bandit algorithms, without effectively re-inventing MDPs.
The two key details that make this full RL, and not a bandit problem, are:
* Decisions are sequential, with options and outcomes that depend on previous decisions.
* Action choices make changes to variables (which in MDP would be part of the state description) that impact outcomes of future actions.
If you allow the agent to access the state including effects of previous actions encoded in a way that it has enough data to correctly predict rewards, then you have a normal MDP and most RL methods should be applicable.
If you do not allow the agent to use a convenient history of past actions (and/or their effects) as part of the state, then you will have constructed a partially observable MDP (POMDP) and may need a more advanced approach to solve it. For instance, using an RNN (most likely an LSTM or a GRU architecture) to process state sequence and predict action values could learn about the hidden sequence.
In terms of implementing a simulation of your environment, you will need to model it as a stateful system, and will have to include a concept of forward step in the sequence which modifies the state variables (regardless of whether these variables are made available to the agent in any observations). This would include the location information, and any other factors that change the allowed actions or outcome. As well as a step function, you will probably want a state reset function that puts the system into a starting state, or one of a range of possible starting states.
If your environment is episodic (a sequence can end), then you will need a way to flag that so that the learning agent can react to the end of an episode and request a new starting state.
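The step/reset interface described above could look like this in code (a minimal illustration only; the locations, action sets, and reward rule are all hypothetical):

```python
import random

class SequentialTaskEnv:
    """Minimal stateful, episodic environment sketch (hypothetical rules)."""

    def __init__(self, num_steps=5):
        self.num_steps = num_steps
        self.reset()

    def reset(self):
        # Put the system into one of the possible starting states.
        self.t = 0
        self.location = random.choice(["A", "B", "C"])
        return (self.t, self.location)

    def allowed_actions(self):
        # The set of alternative actions depends on the current location.
        return {"A": [0, 1], "B": [0, 1, 2], "C": [1, 2]}[self.location]

    def step(self, action):
        # Advance the sequence, modify the state, and emit a reward.
        assert action in self.allowed_actions()
        reward = 1.0 if action == self.allowed_actions()[0] else 0.0
        self.t += 1
        self.location = random.choice(["A", "B", "C"])
        done = self.t >= self.num_steps  # episode termination flag
        return (self.t, self.location), reward, done

env = SequentialTaskEnv()
state, done, total = env.reset(), False, 0.0
while not done:
    action = random.choice(env.allowed_actions())
    state, reward, done = env.step(action)
    total += reward
```

An agent then only interacts with `reset()`, `allowed_actions()`, and `step()`, regardless of how complex the hidden state dynamics are.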
Upvotes: 2 [selected_answer]<issue_comment>username_2: username_1 has a solid answer. In general you could have a bandit algorithm in which the reward is $r \sim f(. | a)$, or an MDP where the reward is $r \sim f(. |s,a,s')$. Here $a$ is the current action, $s$ is the current state, and $s'$ is the next state. How you encode $f$, whether for a bandit or an MDP, is up to you, whether deterministic or based on assuming a parametric family. Of course, an MDP is not the most general setup either, but describing a setting with a highly action/state-dependent history is not simple to do for mathematical analysis (or coding).
Hence if you want something more complex than what an MDP assumes, that's up to you to code/describe, or to initialize something more complex like an RNN as username_1 says and use that as your reward. Without nice data to train it though, an RNN reward function may not give very sensible rewards.
Upvotes: 0 |
2022/07/03 | 813 | 3,468 | <issue_start>username_0: Consider a model `A` that achieved a test accuracy of 99% on dataset-A, which has a size of 200 images, and a model `B` that achieved only 50% on dataset-B, which has a size of 50,000 images. Also consider both datasets split into train, validation and test sets in the ratio 0.8, 0.1, 0.1.
But on the test data of dataset-B, model A fails to attain the same accuracy; in fact, it gives lower accuracy than model B.
So, is the accuracy always the best measure to evaluate the performance of the DL model? Or any other better performance metrics available?
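One caveat worth illustrating here (a toy sketch, not from the question): on imbalanced data, plain accuracy can look good while the model is useless on the minority class, which is one reason metrics such as precision, recall, and F1 are usually reported alongside it.

```python
# 95 negatives and 5 positives; the classifier always predicts the majority class.
y_true = [0] * 95 + [1] * 5
y_pred = [0] * 100

accuracy = sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)
true_positives = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
recall = true_positives / sum(y_true)
print(accuracy, recall)  # 0.95 0.0
```

High accuracy, zero recall: the model never finds a single positive example.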
2022/07/03 | 1,444 | 4,108 | <issue_start>username_0: I am writing a Neural Network from scratch. Below is what I have right now, based off of the math that I *think* I understand.
```
##### Imports #####
from matplotlib import pyplot as plt
import numpy as np
###### Activation Function #####
def sigmoid(input, derivative = False):
if derivative:
return sigmoid(input) * (1 - sigmoid(input))
return 1 / (1 + np.exp(-input))
##### Feed Forward Neural Network Class #####
class FFNN:
def __init__(self, learning_rate, num_epochs):
# Network
self.w1 = np.random.randn(30, 5)
self.w2 = np.random.randn(5, 3)
# Hyperparameters
self.learning_rate = learning_rate
self.num_epochs = num_epochs
# Forward Propagation
def forward(self, input):
self.z1 = np.dot(input, self.w1)
self.a1 = sigmoid(self.z1)
self.z2 = np.dot(self.a1, self.w2)
self.a2 = sigmoid(self.z2)
# Backward Propagation
def backward(self, input, error):
error2 = error * sigmoid(self.z2, derivative = True)
d2 = np.dot(self.a1.T, error2)
error1 = np.dot(self.w2, error2.T).T * sigmoid(self.z1, derivative = True)
d1 = np.dot(input.T, error1)
self.w1 -= d1 * self.learning_rate
self.w2 -= d2 * self.learning_rate
# Train
def train(self, inputs, labels):
for _ in range(self.num_epochs):
for input, label in zip(inputs, labels):
self.forward(input)
self.backward(input, self.a2 - label)
# Test
def test(self, inputs):
for input in inputs:
self.forward(input)
print('Image is a', 'ABC'[np.argmax(self.a2)])
plt.imshow(input.reshape(5, 6))
plt.show()
# Initialize Neural Network
feed_forward_neural_network = FFNN(learning_rate = 0.1, num_epochs = 100)
##### Training #####
a = [0, 0, 1, 1, 0, 0, 0, 1, 0, 0, 1, 0, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 1, 1, 0, 0, 0, 0, 1]
b = [0, 1, 1, 1, 1, 0, 0, 1, 0, 0, 1, 0, 0, 1, 1, 1, 1, 0, 0, 1, 0, 0, 1, 0, 0, 1, 1, 1, 1, 0]
c = [0, 1, 1, 1, 1, 0, 0, 1, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 1, 1, 1, 1, 0]
y = np.array([[1, 0, 0], [0, 1, 0], [0, 0, 1]])
x = [np.array(a).reshape(1, 30), np.array(b).reshape(1, 30), np.array(c).reshape(1, 30)]
feed_forward_neural_network.train(x, y)
##### Testing #####
feed_forward_neural_network.test(x)
```
However, after looking at someone else's code, they have the same thing except the backward function does this instead:
```
# Backward Propagation
def backward(self, input, error):
error2 = error
d2 = np.dot(self.a1.T, error2)
error1 = np.dot(self.w2, error2.T).T * sigmoid(self.z1, derivative = True)
d1 = np.dot(input.T, error1)
self.w1 -= d1 * self.learning_rate
self.w2 -= d2 * self.learning_rate
```
Notice the missing sigmoid(self.z2, derivative = True) multiplication by the layer 2 error.
Both of these functions converge just fine, but obviously one of them is wrong. Which one, and why?<issue_comment>username_1: The latter one seems to be the correct implementation, assuming your loss function is binary cross-entropy.
The partial derivative for cross-entropy w.r.t `z2` is `self.a2 - label` in your example. You can check mathematics in more detail [here](http://neuralnetworksanddeeplearning.com/chap3.html#introducing_the_cross-entropy_cost_function).
So, there is no need to multiply it again with derivative of sigmoid.
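This is easy to verify numerically (a sketch assuming a single sigmoid output with binary cross-entropy; the point is that $\frac{dL}{dz} = a - y$, with no extra $\sigma'(z)$ factor):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def bce(z, y):
    # Binary cross-entropy of a single sigmoid output.
    a = sigmoid(z)
    return -(y * np.log(a) + (1 - y) * np.log(1 - a))

z, y = 0.7, 1.0
analytic = sigmoid(z) - y  # claimed gradient dL/dz = a - y
eps = 1e-6
numeric = (bce(z + eps, y) - bce(z - eps, y)) / (2 * eps)
print(abs(analytic - numeric) < 1e-8)  # True
```

The central finite difference matches the analytic expression, confirming the extra `sigmoid(self.z2, derivative=True)` factor is not needed under this loss.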
Upvotes: 0 <issue_comment>username_2: Your $d\_2$ is the gradient used to update $w\_2$, which is of course $\frac{dL}{dw\_2}$. To compute this gradient, using your notation:
$$ \frac{dL}{dw\_2} = \frac{dL}{da\_2}\frac{da\_2}{dz\_2}\frac{dz\_2}{dw\_2} = err \cdot \sigma'(z\_2)\cdot a\_1$$
So your version seems to be correct.
One possibility is that the forward is also different, and there is no sigmoid after the second layer in your colleague's network (which is often the case for the last layer). In which case their version would *also* be correct.
Upvotes: 2 [selected_answer] |
2022/07/04 | 1,039 | 3,963 | <issue_start>username_0: My understanding is that GPT uses the **same embedding matrix** for both inputs and output: Let $V$ be the vocab size, $D$ the number of embedding dimensions, and $E$ be a $V \times D$ embedding matrix:
* On input, if $x$ is a one-hot $V$-dimensional vector, GPT uses the selected row $E^\top x$.
* On output, if $\hat y$ is a $D$-dimensional prediction vector, GPT uses softmax($E{\hat y}$) as its predictions.
Q1. Is the above correct?
-------------------------
I cannot find this stated clearly in the paper, but it is stated explicitly [here](https://www.lesswrong.com/posts/AcKRB8wDpdaN6v6ru/interpreting-gpt-the-logit-lens). It's also clearly implied by the parameter count listed [here](https://jalammar.github.io/illustrated-gpt2/), and argued for as best practice [here](https://paperswithcode.com/method/weight-tying). Yet, for example, Karpathy's mini-GPT implementation seems to use two different matrices:
```
self.tok_emb = nn.Embedding(config.vocab_size, config.n_embd) # <--- This would be E
self.pos_emb = nn.Parameter(torch.zeros(1, config.block_size, config.n_embd))
self.drop = nn.Dropout(config.embd_pdrop)
# transformer
self.blocks = nn.Sequential(*[Block(config) for _ in range(config.n_layer)])
# decoder head
self.ln_f = nn.LayerNorm(config.n_embd)
self.head = nn.Linear(config.n_embd, config.vocab_size, bias=False) # <--- This has the same dimensions as Etranspose but is clearly a different matrix
```
Q2. If it is correct, how can it work?
-------------------------------------------
This seems to be tasking $E$ with two very different, even opposing, functions:
* Map vocab to their *meaning* on the input side; higher magnitude indicates "more meaning"
* Map meaning to the *most likely* vocab on the output side; higher magnitude indicates greater likelihood
When outputting, we want the softmax to be highest when the word is most likely; the magnitude of the output matrix should be roughly proportional to how likely the word is to appear.
Yet, when inputting, magnitude has *nothing to do with likelihood*. Magnitude on the input side captures some element of meaning: perhaps how extreme or intense the meaning is, perhaps another aspect (not necessarily easily interpreted).<issue_comment>username_1: Yes, GPT uses the same embedding matrix. See [here](https://www.cs.ubc.ca/%7Eamuham01/LING530/papers/radford2018improving.pdf).
[](https://i.stack.imgur.com/K4q6K.png)
Regarding your second question - on the input side, a given token selects one row from the token embedding matrix; it is not clear that higher magnitude signifies anything. On the output side, the magnitude of the output vector $h\_n$ also doesn't signify anything. However, once you take the dot product of the output vector with each of the previous "rows" of the token embedding matrix, this gives you a dot-product similarity of the computed output with the pre-stored embeddings. A large value for a given token element means the model is more likely to predict that token.
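The two uses of the shared matrix can be sketched in a few lines (illustrative only; small random numbers stand in for a trained model, and `h` is a stand-in for the network's output vector):

```python
import numpy as np

rng = np.random.default_rng(0)
V, D = 5, 3                  # vocab size and embedding dimension
E = rng.normal(size=(V, D))  # the single shared embedding matrix

token = 2
x = E[token]    # input side: row lookup selects the token's embedding
h = x + 0.1     # stand-in for the transformer's output vector
logits = E @ h  # output side: dot-product similarity with every row of E
probs = np.exp(logits) / np.exp(logits).sum()
```

In the mini-GPT snippet from the question, tying would amount to sharing the weights of `tok_emb` and `head` rather than learning two separate matrices.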
Upvotes: 0 <issue_comment>username_2: A GPT produces output based on its own previous output, so it must be able to understand its output.
The learning input is provided as a stream of tokens, and these tokens are defined before learning starts. So it has to use the same set of tokens to understand its own output. The set of possible output tokens is fixed, it learns only to assign probabilities of the next token.
Looking at it from a per-token perspective: when it gets a token as input, it learns that this token has a non-zero probability of following the previous input. If the same thing happens again, it learns that the probability is higher than previously assumed. The probability is expressed as a number of occurrences in the input, and as a number per token in the output, but both are about the relative probability of tokens following the previous text.
Upvotes: 3 [selected_answer] |
2022/07/05 | 1,027 | 4,590 | <issue_start>username_0: For the usage of ML technologies, having an appropriate dataset is arguably the first and fundamental step one has to tackle, by either acquiring a dataset from external sources or creating one's own.
While datasets from external sources are of course marketed as being 'good' or 'high-quality' (and in most cases it is not explained how the authors came to this conclusion), creating a dataset yourself doesn't come with these labels.
This brings me to my questions: *How can one (objectively) quantify the quality of a dataset for a given problem?*
This, of course, includes some points which are more or less 'accepted' within the community, e.g. the dataset has to contain enough datapoints for every modeled state (which leads to the question of what is 'enough' ...), or that the datapoints for each modeled state should be roughly equally split (e.g. a dataset for cat/dog image discrimination would not work well with one dog image and 10k cat images) and so on.
I recognize that this is a rather open ended and maybe even philosophical question, but I believe that, given the importance of data for ML (and other disciplines), I am in need of an objective way to evaluate my datasets in relation to the task at hand and determine their quality. Also, my goal here is clearly ML oriented, but since this topic is not only valid in the ML context (and ML is at its core more or less complex statistics), I don't want to restrict this only to ML, but to datasets overall.<issue_comment>username_1: This is not a simple answer, and I think it really depends on the goal; *quality* is not clearly defined and can vary. However, there are some points that are commonly seen as positive.
The quantity
============
As you said, having enough data is important to train models, and it is even better if the dataset is balanced for classification. This means that it is possible to build more complex models, and then *learn more* from the dataset.
*Learn more* does not mean to learn from more data, but from more **diverse** data. If you take the example of dog/cat classification, a larger dataset will probably have more angles (from above, behind or with an obscured view), and more cat/dog races which means that the algorithm can learn all these differences that needs a larger amount of data.
The main point of quantity is not how much data, but how much of the input space is covered.
A clean dataset
===============
The first step of every project is often cleaning the data and preprocessing it, which can be avoided if the dataset is of better "quality". This means that all instances are correctly labeled, there is no missing value, no duplicates, the data types are the same...
Here the data is clean and can be used nearly as is, without the need for further processing work.
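For illustration, the simplest of these checks (missing values and exact duplicates) take only a few lines (toy records; the field names are made up):

```python
# Toy dataset with one missing value and one exact duplicate.
rows = [
    {"id": 1, "label": "cat", "size": 32},
    {"id": 2, "label": "dog", "size": None},  # missing value
    {"id": 1, "label": "cat", "size": 32},    # duplicate of the first row
]

# Count rows containing at least one missing field.
missing = sum(any(v is None for v in r.values()) for r in rows)

# Count exact duplicate rows by hashing a canonical form of each record.
seen, duplicates = set(), 0
for r in rows:
    key = tuple(sorted(r.items()))
    if key in seen:
        duplicates += 1
    seen.add(key)
print(missing, duplicates)  # 1 1
```

Catching these issues before training is exactly the preprocessing work a clean dataset saves you.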
The ambiguous points
====================
If a dataset presents the attributes above, I'd say its quality is already great, but there are some additional points that can be considered. Some attributes of the dataset can vary (for example image size/resolution, background noise for audio, typos in a language corpus...).
The quality of instances with such differences can be said to be low, if, for example, only the tail of the dog can be seen. However, the issue is that **real world data is not perfect**.
In some cases you can pre-process these differences or perturbations and correct them, but sometimes you can't. Some people will say it's part of the challenge to deal with these; some will say it is useless.
There it depends on your goal and subjective point of view to decide whether this inconsistency is needed or harmful.
To add a more personal thought, I'd say that when building a production model it is important to keep these as they reflect the real use and the perturbations brought by real data.
Upvotes: 1 <issue_comment>username_2: For the dataset to be useful for the community, it should be publicly available in a stable domain and establish what the test set is (if you can have a hidden one and make a webpage to evaluate the submissions, that's even better). It's also useful to have baseline performance numbers.
The dataset doesn't have to be balanced. High-quality datasets are regarded as such if the label noise is small (note that annotators do fail), the data are not corrupted and care has been taken regarding potentially harmful biases. Of course, having the dataset checked and reviewed by experts in the field is desirable. Creating a good dataset involves lots of work.
Upvotes: 0 |
2022/07/06 | 343 | 1,452 | <issue_start>username_0: Even though, if exploration doesn't happen, it's deterministic.<issue_comment>username_1: Yes - you can think of an epsilon-greedy policy as a mixture of a policy that chooses an action at random (the stochastic part) and a possibly deterministic policy used otherwise. The value of epsilon gives the weight of the random component, and $1-\epsilon$ that of the other component.
Upvotes: 2 <issue_comment>username_2: When a policy is stochastic, it means actions are taken based on probabilities. For example, with a deterministic policy (in the case of navigation with four actions), if an agent takes the upward action, it will go up, etc. However, with a stochastic policy, if an agent takes upward, it may go up with 80% probability and go right with 20% probability. Thus, in epsilon-greedy based policies, we choose actions randomly in exploration mode, but it does not mean the policy is stochastic.
I tried to explain this briefly and clearly.
Upvotes: 0 <issue_comment>username_3: I would argue it is just stochastic because it chooses the current best action with probability $1-\epsilon+\epsilon/|A|$ and then selects randomly among the rest of the actions with the remaining probability $\epsilon/|A|$, where $A$ is the action space. The current best action is updated over time with running averages and may be the same one in the long run if it is truly a stationary bandit environment, but it will still explore.
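A small simulation of this selection rule (a sketch; the Q-values are arbitrary):

```python
import numpy as np

def epsilon_greedy(q_values, eps, rng):
    # With probability eps explore uniformly over ALL actions, otherwise
    # exploit -- so the greedy action has total probability 1 - eps + eps/|A|.
    if rng.random() < eps:
        return int(rng.integers(len(q_values)))
    return int(np.argmax(q_values))

rng = np.random.default_rng(0)
q = np.array([0.1, 0.9, 0.3])
picks = [epsilon_greedy(q, 0.1, rng) for _ in range(10000)]
greedy_freq = picks.count(1) / len(picks)  # roughly 1 - 0.1 + 0.1/3, i.e. about 0.93
```

With `eps = 0` the rule is fully deterministic; any `eps > 0` makes the sampled action a random variable.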
Upvotes: 2 |
2022/07/07 | 557 | 2,444 | <issue_start>username_0: I have voice recordings which are labelled not with a single label but with multiple labels. Each voice recording corresponds to one of the class labels within a set. In other words, each training instance is given a **set** of (or distribution over) candidate class labels and **only one** of the candidate labels is the correct one.
I wish to train a model that classifies which class label corresponds to each voice recording. Each one of my voice recordings is accompanied by a set of 10 potential labels (labels are always different), but it is unknown which label it is exactly (aside from a small sample where there is only one correct label).
This is due to the nature of where my data comes from: someone records a short voice message and then types the same message into a chat; however, there will be a slight delay between the two, and in the meantime other chat messages arrive. Only one of the next 10 chat messages after the voice message is the correct one that corresponds to that voice message.
How would I define a loss function in this case?
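Not an authoritative answer, but one loss that fits this setting is the partial-label ("candidate set") objective $L = -\log \sum\_{c \in S} p\_c$, which maximizes the total probability the model assigns to the candidate set. A sketch (all names are made up):

```python
import numpy as np

def candidate_set_loss(logits, candidates):
    # Softmax (with a stability shift), then the negative log of the total
    # probability mass assigned to the candidate labels.
    z = logits - logits.max()
    probs = np.exp(z) / np.exp(z).sum()
    return -np.log(probs[candidates].sum())

logits = np.array([2.0, 0.5, -1.0, 0.0])
loss_if_candidates_likely = candidate_set_loss(logits, [0, 1])
loss_if_candidates_unlikely = candidate_set_loss(logits, [2, 3])
```

The small sample where the single correct label is known can still be trained with ordinary cross-entropy (a candidate set of size one).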
2022/07/07 | 495 | 2,078 | <issue_start>username_0: I would like to buy a book about AI and neural networks written at an accessible level for a 17-year-old, mathematically very gifted student interested in these topics. The book should contain some sections about perceptrons and optical character recognition. I am aware of <https://www.deeplearningbook.org/> but it does not fully satisfy me, mostly because it goes too slowly and is too long. For instance, in order to grasp the back-propagation algorithm one needs to read 60 pages! Eqs. (6.49-6.52) are particularly shocking; I thought every student should know the chain rule of differentiation! We do not write such trivial things in, e.g., theoretical physics.
Now, the Internet is full of all possible blogs and tutorials, but it is impossible to filter out a nice and concise exposition for people with a mathematical background. I notice some popular extremes, such as i) a prolonged discussion of a single neuron, ii) the XOR example, iii) very technical tutorials which require a lot of Python, a lot of packages, web servers, etc.
I am seeking some nice text to create a neural network completely from scratch, with some impressive performance for e.g. written digit recognition. No reliance on external packages, no object-oriented features, as they may deter young students. But the functional programming paradigm is welcome.
Therefore, I was thinking about more refined and concise books, preferably with good typography and illustrations, hard cover, suitable as a gift for a mathematically-inclined student. What would be your recommendation?<issue_comment>username_1: My vote would go to [Artificial Intelligence: A Modern Approach](https://rads.stackoverflow.com/amzn/click/com/0136042597). It is not concise, but that's a feature; just select the chapters you are interested in.
Upvotes: 1 <issue_comment>username_2: After a lot of searching the following seems to be a good choice, but sometimes with repetitions of material. The math is very accessible.
*Neural Networks and Deep Learning: A Textbook* by <NAME>
Upvotes: 1 [selected_answer] |
2022/07/12 | 302 | 1,129 | <issue_start>username_0: I wonder if it is possible to add manual inference to the output of a model?
For example, I have a model called 'net', and the output value of 'net' is a vector called v = [v1, ... vn]. v is a binary vector. For some reason, I need to manually adjust this output, which means I need to manually flip some 0s to 1s and vice versa.
My question is: is it possible to do so? The reasons I have this question are:
1. I am new to torch
2. If I manually change `variable.data`, even if it is doable, I have not made any corresponding adjustments to the gradient.
Please enlighten me under the context of PyTorch
2022/07/13 | 2,769 | 9,552 | <issue_start>username_0: AIs are getting better and better at creating images and art. Some of the stuff is almost impossible to detect with the naked eye. But what about programs and algorithms? Instead of creating an image, can anything detect that this image was created by an AI?
Take this one for example:
This picture of a woman's face was generated by AI
[](https://i.stack.imgur.com/S0WyG.jpg)<issue_comment>username_1: Images such as this one are produced using a [generative adversarial network](https://en.wikipedia.org/wiki/Generative_adversarial_network), which is built from two models:
* one to generate images given a random vector as input
* another trying to detect the generated image from two images, with one of them being real
Then the weights of the first model are updated if the second one detected which image is artificial, and the second model is updated if its prediction is wrong.
Of course you might build a model that can sometimes detect AI-generated images, but it is probably not possible to differentiate them all the time. Then, if you build such a model that is better than any other model at detecting generated images, it is possible to create another model trained to fool it.
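The adversarial game described above can be sketched on a 1-D toy problem (illustrative only: a single shift parameter plays the generator, a logistic regression on scalars plays the discriminator, and the "real" data are draws from a normal distribution with mean 3):

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

g = 0.0          # "generator": shifts N(0, 1) noise by g; real data ~ N(3, 1)
w, b = 1.0, 0.0  # "discriminator": logistic regression on scalars
lr = 0.05
for _ in range(2000):
    real = rng.normal(3.0, 1.0, size=32)
    fake = rng.normal(0.0, 1.0, size=32) + g
    # Discriminator step: push p(real) up and p(fake) down (BCE gradient).
    for x, y in ((real, 1.0), (fake, 0.0)):
        p = sigmoid(w * x + b)
        w += lr * np.mean((y - p) * x)
        b += lr * np.mean(y - p)
    # Generator step: move the fakes toward being classified as real.
    p = sigmoid(w * fake + b)
    g += lr * np.mean((1.0 - p) * w)
# g drifts toward the real mean; once the two distributions overlap, the
# discriminator can no longer separate them, which is the point of the game.
```

At that equilibrium the best possible detector is no better than chance, which is why a perfect universal detector is unlikely.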
Upvotes: 1 <issue_comment>username_2: I am not an expert, but it feels like these GANs are not paying attention to the clothes and the background and make them "fluid".
Like, what is this hat the woman in your example is wearing? Why is the right side of the background looks like it is a mix of liquid paint?
Or here:
[](https://i.stack.imgur.com/LBZK6.jpg)
What is she wearing? Did she kill a rat to make these clothes? And similar fluid background.
Upvotes: 1 <issue_comment>username_3: I have not worked practically with GANs and just know their theory, but I do not agree 100% with this comment that AI chooses stupid things for clothes or backgrounds. I remember it could be detected when a video was generated with deep learning methods from Obama.
Upvotes: 0 <issue_comment>username_4: There's a paper that claims to detect AI generated images with a 95% accuracy.
<https://www.researchgate.net/publication/326053461_Detection_of_GAN-Generated_Fake_Images_over_Social_Networks>
A search with the right keywords can reveal more such research.
Upvotes: 0 <issue_comment>username_5: **TL;DR: Yes, but it's becoming more and more difficult, even for humans, as generative models get better and better. It's a quite hot research topic.**
*Disclaimer: I am not affiliated with any of the authors, I'm just studying this research topic.*
Humans usually look for some visual artifacts (as all the other answers point out), for example
* Colour or texture artifacts (colour blobs, unrealistic texture)
* Asymmetries or inconsistencies in the image (this is easy to spot in faces or hair, for example)
* Anomalies in color, lighting, image parts
[](https://i.stack.imgur.com/2EcVj.png)
But as models become more and more advanced, these artifacts are becoming harder to spot, if not completely gone. As of 2023, images coming from diffusion models like Stable Diffusion or the Midjourney API have a photorealistic quality, and are often already indistinguishable from real images (see some examples [here](https://www.midjourney.com/showcase/recent/)).
For these reasons, we want to find automatic and more robust detection approaches to prevent malicious uses of these AI generation models.
Detection approach
------------------
A simple approach is to train a detector on AI-generated images, which classifies an image as real or AI-generated either by looking at the whole image or at single patches [Chai et al., 2020].
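As a rough illustration of the patch-based variant (my own sketch, not code from Chai et al.; the "classifier" is a stub standing in for a trained CNN), the image is cut into tiles, each tile gets a fake-probability, and the per-tile scores are aggregated:

```python
import numpy as np

def to_patches(img, patch=32):
    """Cut an (H, W) image into non-overlapping patch x patch tiles."""
    h, w = img.shape
    img = img[: h - h % patch, : w - w % patch]        # drop ragged borders
    tiles = img.reshape(h // patch, patch, -1, patch).swapaxes(1, 2)
    return tiles.reshape(-1, patch, patch)             # (n_patches, patch, patch)

def is_fake(img, classifier, threshold=0.5, patch=32):
    """Score every tile, then aggregate by simple averaging."""
    scores = np.array([classifier(p) for p in to_patches(img, patch)])
    return scores.mean() > threshold, scores

# Stub standing in for a trained per-patch CNN: flags overly smooth patches,
# a crude proxy for one kind of generator artifact.
def smoothness_stub(p):
    return float(p.std() < 0.1)

rng = np.random.default_rng(0)
noisy = rng.standard_normal((128, 128))   # "real": textured
flat = np.zeros((128, 128))               # "fake": suspiciously smooth
print(is_fake(noisy, smoothness_stub)[0], is_fake(flat, smoothness_stub)[0])
```

A real detector would replace the stub with a learned model and might aggregate scores more carefully (e.g. max pooling over patches), but the split-score-aggregate structure is the same.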
A better-performing approach is to exploit invisible artifacts created by convolutional upsampling, which is commonly used in GANs and in *some* diffusion models (Stable Diffusion, for example) to create high-resolution images. While invisible in the image domain, this trace can be easily extracted and identified in the frequency domain [Marra et al., 2019, Yu et al., 2019].
[](https://i.stack.imgur.com/KAw9O.png)
The detector needs to be robust to common image modifications (contrast/luminosity changes, colour jittering, JPEG compression, etc.) and also to adversarial attacks on the detector.
Watermarking/Fingerprinting
---------------------------
To proactively improve the detection performance, the developers of generative models could include a *watermark* in their images, to mark the images produced by their models as AI-generated. This watermark is typically invisible, and can be generated in several ways:
* Traditional approaches are based on frequency decompositions of the image, constructed through DCT, DWT, Fourier-Mellin, or complex wavelet transformations [Cox et al., 1996, O'Ruanaidh et al., 1996, O'Ruanaidh and Pun, 1997]. These frequency transformations all share the beneficial property that simple image manipulations, such as translations, rotations, and resizing, are easily understandable, and watermarks can be constructed with robustness to these transformations in mind.
* Model-based approaches use a different learned model to embed a watermark in the image. Hayes and Danezis [2017] and Zhu et al. [2018] propose strategies to learn watermarking end-to-end, where both the watermark encoder and the watermark decoder are learned models, optimized via adversarial objectives to maximize transmission and robustness [Zhang et al., 2019]. Zeng et al. [2023] present a related approach, in which a neural-network watermark encoder and its associated detector are jointly learned using an image dataset. Notably, these approaches still work like a traditional watermark in that the encoder imprints a post-hoc signal onto a given image; however, the type of imprint is now learned.
* More recent approaches use another model or the generative model itself to either embed the watermark after the generation process or during the generation process. Some approaches consist in embedding a watermark in training data [Yu et al., 2022], in some components of the model (e.g the convolutional decoder) [Fernandez et al., 2023], or by slightly modifying the distribution from where sampling is performed [Wen et al., 2023].
As detectors, watermarks need to be robust to common image transformations (such as cropping, contrast/luminance editing, jpeg compression, etc.) and adversarial attacks that actively try to remove the watermark.
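As a toy illustration of the watermarking idea, here is a heavily simplified, non-blind, spatial-domain variant of the spread-spectrum scheme of my own making (real schemes, like those cited above, work in a frequency domain and are far more robust): embed a pseudo-random ±1 pattern derived from a secret key, then detect it by correlating the residual against the keyed pattern:

```python
import numpy as np

def make_watermark(shape, key):
    """Pseudo-random +/-1 pattern derived from a secret key."""
    rng = np.random.default_rng(key)
    return rng.choice([-1.0, 1.0], size=shape)

def embed(img, key, alpha=0.05):
    """Additive spread-spectrum embedding with strength alpha."""
    return img + alpha * make_watermark(img.shape, key)

def detect(img, original, key, alpha=0.05):
    """Non-blind detection: correlate the residual with the keyed pattern.

    Returns ~1.0 if the watermark is present, ~0.0 otherwise.
    """
    residual = img - original
    return float(np.mean(residual * make_watermark(img.shape, key)) / alpha)

rng = np.random.default_rng(0)
img = rng.random((64, 64))
marked = embed(img, key=42)
print(f"marked: {detect(marked, img, key=42):.2f}, "
      f"clean: {detect(img, img, key=42):.2f}, "
      f"wrong key: {detect(marked, img, key=7):.2f}")
```

Because the pattern is zero-mean and key-dependent, the correlation stays near zero for unmarked images or wrong keys, and the watermark survives moderate additive noise, which is a small-scale version of the robustness requirements discussed above.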
References
----------
* Gragnaniello, Diego et al. “Are GAN Generated Images Easy to Detect? A Critical Analysis of the State-Of-The-Art.” 2021 IEEE International Conference on Multimedia and Expo (ICME) (2021): 1-6.
* <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, and <NAME>. A comprehensive survey on robust image watermarking. Neurocomputing, 488:226–247, June 2022. ISSN 0925-2312. doi: 10.1016/j.neucom.2022.02.083. URL <https://www.sciencedirect.com/science/article/pii/S0925231222002533>.
* [Cox et al., 1996] <NAME>., <NAME>., <NAME>., and <NAME>. (1996). Secure spread spectrum watermarking for images, audio and video. In Proceedings of 3rd IEEE International Conference on Image Processing, volume 3, pages 243–246 vol.3.
* [O'Ruanaidh et al., 1996] <NAME>., <NAME>., and <NAME>. (1996). Watermarking digital images for copyright protection. IEE Proceedings - Vision, Image and Signal Processing, 143:250–256.
* [O'Ruanaidh and Pun, 1997] <NAME>. and <NAME>. (1997). Rotation, scale and translation invariant digital image watermarking. In Proceedings of International Conference on Image Processing, volume 1, pages 536–539. IEEE.
* [Zhu et al., 2018] <NAME>., <NAME>., <NAME>., and <NAME>. (2018). Hidden: Hiding data with deep networks. In Proceedings of the European conference on computer vision (ECCV), pages 657–672. 3
* [Marra et al., 2019] <NAME>., <NAME>., <NAME>., and <NAME>. (2019). Do gans leave artificial fingerprints? In 2019 IEEE conference on multimedia information processing and retrieval
(MIPR), pages 506–511. IEEE.
* [Yu et al., 2019] <NAME>., <NAME>., and <NAME>. (2019). Attributing fake images to gans: Learning and analyzing gan fingerprints. In Proceedings of the IEEE/CVF international conference on
computer vision, pages 7556–7566.
* [Chai et al., 2020] <NAME>., <NAME>., <NAME>., and <NAME>. (2020). What makes fake images detectable? understanding properties that generalize
* [Yu et al., 2022] <NAME>., <NAME>., <NAME>., and <NAME>. (2022). Artificial fingerprinting for generative models: Rooting deepfake attribution in training data.
* [Corvi et al., 2023] <NAME>., <NAME>., <NAME>., <NAME>., <NAME>., and <NAME>.
(2023). On the detection of synthetic images generated by diffusion models. In ICASSP 2023-2023 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 1–5. IEEE.
* [Wen et al., 2023] <NAME>., <NAME>., <NAME>., and <NAME>. (2023). Tree-ring watermarks: Fingerprints for diffusion images that are invisible and robust.
* [Fernandez et al., 2023] <NAME>., <NAME>., <NAME>., <NAME>., and <NAME>. (2023). The stable signature: Rooting watermarks in latent diffusion models.
Upvotes: 0 |
2022/07/13 | 2,847 | 9,914 | <issue_start>username_0: I have become more familiar with libraries such as tensorflow for a while now, and have become interested in utilizing neural networks for solving specific problems. The big question I have is, what are some principles that you have to take into account for designing your neural networks architecture?
Some other questions I have are:
* Do I want my network to slowly reduce the dimensionality of the data the deeper it goes (so that it picks out important features)? What happens when the output is a lot bigger than the input (say it is one-hot encoded, so the number of classes is in the thousands, while your text is only of length 30)?
* If so, then what do I do when I have to process a single class? Do I just add layers which expand off that one input (isn't that wasting resources?)
What resources do you recommend I should look into?
2022/07/13 | 1,026 | 4,284 | <issue_start>username_0: It is my understanding that when training a Deep NN in Tensorflow/PyTorch/... we only keep the current state of the network in memory, except perhaps when we manually decide to save the current weights to the HDD/SSD.
Now, naively speaking it may seem reasonable to not only remember the current state (i.e. the current values of the trained weights), i.e. "where we are", but also the "best" weights "so far" by some metric, such as the validation error. Immediately, this approach doubles our memory requirements, especially if we want to keep everything in the GPU memory.
Is this done in practice? If not what are arguments against it?
---
Now there are certainly details that make this question more tricky. If you saved only each epoch, it's probably fine to just save your weights to the HDD without significant loss in speed. If you remember the best weights for each *step* of SGD, then you would have to compute the validation error over and over, which is costly. Or you could use an estimate of the training error based on its gradient, which, however, could lead to overfitting if you are not careful. You could also mix and match, or compute the validation error only on a subset of the data, or only remember the best every $k$ steps, etc. You might also want to go back and restart from a "better" set of weights in the hopes of getting a better trajectory.
I also don't know whether, in practice, at the end of training people typically, deliberately, go back to a previous "checkpoint" they might have saved somewhere.
> Is this done in practice?
>
>
>
Yes, this is done normally when using (lack of) improvements to validation metrics as a stop criterion, and many libraries support it as standard. Depending on the library, you may find you need to add a little code to keep a copy of the best-so-far weights, but some will do it automatically by default, or based on setting params on the `train` or `fit` function.
For example, [Keras' EarlyStopping class](https://keras.io/api/callbacks/early_stopping/) has a `restore_best_weights` parameter. Using this class in your main fit function and setting the param to `True` will do what you want automatically, with no other code required.
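For intuition, the bookkeeping such a callback performs amounts to something like the following framework-agnostic sketch (my own illustration, not Keras source; `copy.deepcopy` stands in for cloning the weight tensors, and all names are made up):

```python
import copy

def fit_with_restore_best(weights, train_step, val_loss, epochs):
    """Train, track the best-so-far weights by validation loss, restore at the end."""
    best_loss, best_weights = float("inf"), copy.deepcopy(weights)
    for _ in range(epochs):
        weights = train_step(weights)
        loss = val_loss(weights)
        if loss < best_loss:                 # new best: snapshot the weights
            best_loss = loss
            best_weights = copy.deepcopy(weights)
    return best_weights, best_loss           # restore the best, not the last

# Toy 1-parameter "model" whose validation loss dips at w = 3 and then worsens,
# mimicking the onset of overfitting.
steps = iter([1.0, 2.0, 3.0, 4.0, 5.0])
best_w, best_l = fit_with_restore_best(
    0.0,
    train_step=lambda w: next(steps),
    val_loss=lambda w: (w - 3.0) ** 2,
    epochs=5,
)
print(best_w, best_l)  # the epoch-3 weights are returned, not the final ones
```

The memory cost is exactly one extra copy of the weights, which is the doubling the question worries about; keeping that copy on the CPU instead of the GPU is a common compromise.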
>
> If not what are arguments against it?
>
>
>
Over-fitting to the validation set is a possible concern, as running the validation checks 100s of times to decide the "best" model may lead to some maximisation bias, and make decisions between other hyperparameters than the number of epochs less reliable.
Upvotes: 4 [selected_answer]<issue_comment>username_2: ### Common practice
Model checkpoints are often saved to the HDD to keep the GPU memory free.
At every epoch, a selection metric (e.g. validation loss) is evaluated after the training stage completes, and we save the:
* Top-k best models (by model selection metric)
* Latest model at end of epoch
Example training loop:
```
best_val_loss = np.inf                     # requires numpy; float("inf") also works
for epoch in range(num_epochs):
    out_train = train_epoch(model)         # one training pass over the data
    out_val = validate_epoch(model)        # evaluate on the validation set
    save(model, "last.ckpt")               # always keep the latest checkpoint
    if out_val["loss"] < best_val_loss:    # new best by validation loss
        save(model, "best.ckpt")
        best_val_loss = out_val["loss"]
```
It is possible that better models (by the selection metric) exist at intermediate points during an epoch. However, since the gain is likely quite minimal, it is much more practical to use the strategy mentioned above.
---
### Is there something better?
You propose some interesting strategies. It is possible that there might be a better process than the typical "train/validation/test over shuffled epoch with batches", but it needs to be shown that such a process leads to noticeable improvements. The convenience and comprehensibility of using a common approach should not be underestimated.
>
> You might also want to go back and restart from a "better" set of weights in the hopes of getting a better trajectory.
>
>
>
Restarting from a better model is almost like training using the validation set, though admittedly not as severe as using validation gradients to optimize the model directly. Typical problems to consider when using such an approach are overfitting and difficulty in escaping local minima.
Upvotes: 2 |
2022/07/15 | 1,053 | 3,802 | <issue_start>username_0: I have the following situation:
| Stock | Time\_Stamps | Feature\_1 | Feature\_2 | Feature\_n | Price |
| --- | --- | --- | --- | --- | --- |
| Stock\_1 | 2019 | 0.5 | 1.0 | 1.0 | 100 |
| Stock\_1 | 2020 | 0.7 | 1.3 | 0.9 | 90 |
| Stock\_2 | 2019 | 0.3 | 0.9 | 1.1 | 110 |
| Stock\_2 | 2020 | 0.2 | 0.8 | 1.1 | 120 |
| Stock\_n | year\_n | value\_n | value\_n | value\_n | price\_n |
So this is how my data table is structured. My original df has 100+ features, 70000k observations, and 2000+ stocks, so this is only a simplification.
I want to train a LSTM on this data table and look for features correlation with the price.
Common idea, nothing new, so pls save your time giving me "this will not work" bla bla.
I am generally interested in how you would approach this problem. We have multiple inputs (features) for our time series forecast, with 8 time stamps (8 years) per stock. However, in my understanding, I'd have to train my model for every stock separately, which is inconvenient.
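To make the shapes concrete, here is roughly the tensor layout I have in mind (toy numbers and made-up values, numpy): all stocks stacked into one `(batch, time, features)` array, so a single LSTM could be trained across them instead of one model per stock.

```python
import numpy as np

# Toy version of the table above: n_stocks x n_years x n_features, plus prices.
n_stocks, n_years, n_features = 3, 8, 4
rng = np.random.default_rng(0)
features = rng.random((n_stocks, n_years, n_features))
prices = rng.random((n_stocks, n_years))

# One supervised sample per stock: the first 7 years of features predict the
# price in year 8, giving a single (batch, time, features) input tensor.
X = features[:, :-1, :]    # shape (n_stocks, n_years - 1, n_features)
y = prices[:, -1]          # shape (n_stocks,)
print(X.shape, y.shape)
```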
How would you pre-process my data, so that I can train a decent model?
2022/07/19 | 859 | 3,496 | <issue_start>username_0: I'm new to ML and trying to write a solution to a food-delivery duration prediction problem (so-called lead time). I used algorithms such as random forest and gradient boosting, which gave OK results but not amazing ones. I have daily data covering 3 years, which I split into train/test with an 80/20% split.
To try and improve results, I repeated the split, but this time I did it randomly, i.e. on shuffled data. My model does not use any relations between data points; it only predicts the lead time from each sample's own features (e.g. type of food, who the carrier is, expected delivery duration, etc.).
When I do this, the results improve dramatically, which makes me suspect I'm doing something illegitimate.
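For reference, the two splits I compared look roughly like this (hypothetical variable names; sketch only):

```python
import numpy as np

n = 1000                                  # daily observations over ~3 years
idx = np.arange(n)
cut = int(0.8 * n)

# Chronological 80/20 split: train on the past, test on the future.
train_time, test_time = idx[:cut], idx[cut:]

# Shuffled 80/20 split: past and future rows end up mixed into both sets.
shuffled = np.random.default_rng(0).permutation(idx)
train_rand, test_rand = shuffled[:cut], shuffled[cut:]

print("time split trains on future rows:", bool(train_time.max() > test_time.min()))
print("shuffled split trains on future rows:", bool(train_rand.max() > test_rand.min()))
```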
I wanted to know if I can actually shuffle the data? Can I do that?
2022/07/21 | 772 | 2,538 | <issue_start>username_0: Please can someone describe how to properly obtain the **ImageNet dataset** (to be precise the ImageNet 2012 Classification Dataset).
**What I attempted so far**
The [ImageNet webpage](https://image-net.org/download.php) refers the user to download the ImageNet dataset from Kaggle. However, the Kaggle webpage it refers belongs to the Image *Localization* (not classification) challenge.
I have also requested a download from the ImageNet webpage, which has been pending for almost a year.
Upvotes: -1 <issue_comment>username_2: These are the detailed steps on how I obtained ImageNet and ran a PyTorch example training on it:
```
1. Go to https://www.image-net.org/download.php
2. Request to download ImageNet
3. Wait about 5 days for approval, write to them if the waiting period is over.
4. [I think you can skip this step] Download the Development Kit from the ILSVRC2017 page
5. Download the images from the ILSVRC2012 page
a. Training images (Task 1 & 2) 138 GB
b. Validation images (all tasks) 6.3 GB
c. Test images (all tasks) 13 GB
6. [I think you can skip this step if you use the script from step 8!] Unpack the tar files
a. mkdir val
b. tar -C val/ -xvf ILSVRC2012_img_val*.tar
c. mkdir test
d. tar -C test/ -xvf ILSVRC2012_img_test_v10102019.tar
 e. mkdir train
 f. tar -C train/ -xvf ILSVRC2012_img_train.tar
7. Confirm the number of images in each folder
a. ls val/ | wc -l # should give 50,000
 b. ls test/ | wc -l # should give 100,000
8. Run the script extract_ILSVRC.sh from the PyTorch GitHub [https://github.com/pytorch/examples/blob/main/imagenet/extract_ILSVRC.sh]
# imagenet/train/
# ├── n01440764
# │ ├── n01440764_10026.JPEG
# │ ├── n01440764_10027.JPEG
# │ ├── ......
# ├── ......
# imagenet/val/
# ├── n01440764
# │ ├── ILSVRC2012_val_00000293.JPEG
# │ ├── ILSVRC2012_val_00002138.JPEG
# │ ├── ......
# ├── ......
9. Run a PyTorch example training on your ImageNet dataset [e.g. from the PyTorch examples GitHub repository https://github.com/pytorch/examples/blob/main/imagenet/main.py]
```
Upvotes: 0 |
2022/07/22 | 514 | 1,460 | <issue_start>username_0: Is there any place where people share their agent's settings for solving OpenAI Gym Environments?
For example, I'd like to know what are good parameters for a DDPG agent to learn the task in Reacher-v2. I believe that a lot of people tried to solve it and maybe they shared their solution for achieving better performance.<issue_comment>username_1: This is kind of spread throughout the internet. There are tons of [gists](https://gist.github.com/gkhayes/3d154e0505e31d6367be22ed3da2e955) and [repos](https://github.com/sourcecode369/deep-reinforcement-learning), there are contests and topics on kaggle. I do not think there is one centralized repository of this sort of thing other than individuals who have curated some list(which is the repo I linked), although perhaps you could make one!
Upvotes: 0 <issue_comment>username_2: Specific hyperparameters for *Reacher-v2*, from Table 10 in [Universal Successor Features for Transfer Reinforcement Learning](https://arxiv.org/abs/2001.04025):
| Hyperparameter | DDPG | DDPG + USFs | HER | HER + USFs |
| --- | --- | --- | --- | --- |
| Actor Learning Rate | 1e-4 | 1e-4 | 1e-3 | 1e-4 |
| Critic Learning Rate | 1e-3 | 1e-3 | 1e-4 | 1e-3 |
| Loss Weight λ | N/A | 1e-4 | N/A | 0.01 |
| Batch Size | 64 | 64 | 64 | 64 |
| Discount Factor γ | 0.99 | 0.99 | 0.99 | 0.99 |
| HER Future Steps | N/A | N/A | 50 | 50 |
| HER Buffer Sampling Probability | N/A | N/A | 0.5 | 0.5 |
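If it helps, the plain-DDPG column of the table above could be captured in a simple configuration dictionary before being handed to whichever agent implementation you use. The key names below are made up for illustration and do not come from any particular library:

```python
# Hypothetical config mirroring the plain-DDPG column of the table above.
ddpg_config = {
    "actor_lr": 1e-4,   # Actor Learning Rate
    "critic_lr": 1e-3,  # Critic Learning Rate
    "batch_size": 64,   # Batch Size
    "gamma": 0.99,      # Discount Factor
}

def describe(cfg):
    """Render the config as a single readable string."""
    return ", ".join(f"{k}={v}" for k, v in sorted(cfg.items()))

print(describe(ddpg_config))
```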
Upvotes: 1 |
2022/07/22 | 882 | 3,921 | <issue_start>username_0: I don't understand if the purpose of RL agents is simply to optimize a model with a reward instead of using labeled data (i.e. in a supervised fashion), or whether they also have the purpose of continuing to train and explore in order to adapt to possible environment changes.<issue_comment>username_1: I think it is better to answer this question based on the reference book *"Reinforcement Learning: An Introduction by <NAME> and <NAME>"*
At first, come to see a general description of RL:
>
> Reinforcement learning is learning what to do—how to map situations to
> actions—so as to maximize a numerical reward signal. The learner is
> not told which actions to take, but instead must discover which
> actions yield the most reward by trying them. In the most interesting
> and challenging cases, actions may affect not only the immediate
> reward but also the next situation and, through that, all subsequent
> rewards. These two characteristics—trial-and-error search and delayed
> reward—are the two most important distinguishing features of
> reinforcement learning.
>
>
>
Also, there is a simple comparison with supervised learning:
>
> Reinforcement learning is different from supervised learning, the kind
> of learning studied in most current research in the field of machine
> learning. Supervised learning is learning from a training set of
> labeled examples provided by a knowledgable external supervisor. Each
> example is a description of a situation together with a
> specification—the label—of the correct action the system should take
> to that situation, which is often to identify a category to which the
> situation belongs. The object of this kind of learning is for the
> system to extrapolate, or generalize, its responses so that it acts
> correctly in situations not present in the training set. This is an
> important kind of learning, but alone it is not adequate for learning
> from interaction. In interactive problems it is often impractical to
> obtain examples of desired behavior that are both correct and
> representative of all the situations in which the agent has to act. In
> uncharted territory—where one would expect learning to be most
> beneficial—an agent must be able to learn from its own experience.
>
>
>
We can say that the RL approach gives the agent a straightforward way to adapt on its own. Thus, the agent only needs training through interaction to master a skill.
Upvotes: 0 <issue_comment>username_2: This depends on the setting. Ongoing learning that never ends is a feature of settings where one or both of the following is true:
* There is little existing available data or experience when considering the complexity of the problem, and it is practical to deploy a partially-trained model in the real environment (e.g. it is better than nothing, even partially trained) and continue to train it.
* The properties of the environment change over time, so that existing models degrade in performance unless updated with new data or experience.
There is no major difference between reinforcement learning (RL) and supervised learning in this regard. RL lends itself well to changing environments though, because it must already be designed to adjust for changes to a policy as it progressively improves.
>
> I don't understand if the purpose of RL agents is simply to optimize a model with a reward
>
>
>
It is still a common scenario to have separate training and test/deployment phases when developing an RL agent.
If you want to have an agent that can play a specific board game or video game, or to navigate a robot in a building, it is often possible to spend enough training time that data availability is not an issue, and to generalise about all the variations of the environment that need to be handled. In which case it can be preferable to have an agent with a known, measured ability that won't change.
Upvotes: 2 |
2022/07/27 | 569 | 2,065 | <issue_start>username_0: So I've been doing ML for ~2 years in industry, I'm a BSc in applied math, finished several courses on ML/DL on coursera, read some specific topics in ML/DL books. Seem to be in the know, more or less.
But the catch is: I've never *really* validated my knowledge, as in taken an exam or certification. Some time has passed since I read theory and I sort of think that I'm losing it (especially since all I did was time-series forecasting).
What are the possible ways to validate that I actually understand stuff? Do I just text random ML experts "hey could I tell you stuff and you say whether that's bs or not"?..<issue_comment>username_1: I do not have a definite answer for your question, but one way to validate your ML knowledge is to attempt certain problems that are asked in ML interviews for jobs. It might be a good starting point although it will not cover all the relevant concepts. Two such resources are
* The [questions](https://huyenchip.com/ml-interviews-book/contents/part-ii.-questions.html) section in Chip Huyen's ML Interviews book
* A [Deep Learning Interviews](https://arxiv.org/abs/2201.00650) book that contains many questions testing your ML and DL knowledge.
If you are looking to solve more theory questions, you can work through the exercises of Bishop's [Pattern Recognition and Machine Learning](https://www.microsoft.com/en-us/research/uploads/prod/2006/01/Bishop-Pattern-Recognition-and-Machine-Learning-2006.pdf) book, although I am not sure if the solutions are available online.
Upvotes: 1 <issue_comment>username_2: 1. Find a high quality and well recognized course, you can pay extra for a Certificate:
* <https://ocw.mit.edu/search/?t=Artificial+Intelligence>
* <https://www.edx.org/course/the-analytics-edge>
2. Try answering some of these questions, network with others:
* [https://mathoverflow.net/](https://mathoverflow.net/search?q=views%3A5000..+answers%3A0+closed%3A0)
3. Write a few papers, publish, and try to obtain an [endorsement](https://arxiv.org/help/endorsement).
Upvotes: 0 |
2022/07/29 | 319 | 1,501 | <issue_start>username_0: As I searched about these two terms, I found they are somewhat like each other: both try to create a vector from raw data, as I understood. But what is the difference between these two terms?<issue_comment>username_1: Vector representation is a generic term used to talk about any type of feature encoding; embedding vectors are instead a special case of vector representation.
When talking about vector representation the only underlying assumption is that every variable was encoded into numerical values, without any restriction regarding the numbers or the vector itself.
Embedding vectors instead are specifically continuous vectors of fixed dimension obtained through matrix factorization techniques or deep learning models. They were originally proposed to encode text in the [Word2Vec](https://arxiv.org/pdf/1301.3781.pdf) paper, and since then they have acquired more and more popularity due to the high generalization potential of the method in AI branches other than natural language processing.
Upvotes: 3 [selected_answer]<issue_comment>username_2: To the best of my knowledge:
A latent representation, or code, $\mathbb{z}$ is a learned continuous vector representation of some input $\mathbb{x}$. E.g., we might have a code of an image.
An embedding is a type of code which specifically represents a discrete variable $\mathbb{x}\_\text{discrete}$. One popular example is a word embedding, where words are encoded discretely using one-hot encoding.
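To make the distinction concrete, here is a tiny pure-Python sketch with made-up numbers: the one-hot vector is a generic (sparse) vector representation of a discrete token, while the embedding is a dense, fixed-size vector obtained by a table lookup, whose values would normally come from training (e.g. Word2Vec):

```python
vocab = ["cat", "dog", "tree"]

def one_hot(word):
    # Generic vector representation: sparse, dimension = vocabulary size.
    return [1.0 if w == word else 0.0 for w in vocab]

# Embedding table: one dense, fixed-size (here 2-d) vector per word.
# The values are made up; in practice they are learned during training.
embedding_table = {
    "cat": [0.9, -0.1],
    "dog": [0.8, -0.2],
    "tree": [-0.5, 0.7],
}

def embed(word):
    return embedding_table[word]

print(one_hot("dog"))  # [0.0, 1.0, 0.0]
print(embed("dog"))    # [0.8, -0.2]
```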
Upvotes: 0 |
2022/07/29 | 567 | 2,317 | <issue_start>username_0: I'm trying to solve a medical imaging regression problem using a CNN. Each of the samples in my data set consists of one, two, or three of the following file types:
* flair.nii.gz
* mprage.nii.gz
* swi.nii.gz
Each of the files is a three- or four-dimensional matrix of voxel values between 0 and 255.
I don't want to throw information away since I have a limited number of data samples. Is there a technique for working around those cases which have a 'missing' file (or two)? For example, could I construct a matrix consisting of, say, all zeroes of the correct dimension and size and use this as a replacement for the file? Would this work or would it lead to problems with the CNN? Of course, the samples that I want to make inferences on might also have missing files, so, maybe, a matrix with all zeroes would be a type of information.
Anyway, what does one do in a case like this?<issue_comment>username_1: I am not sure if I understood your question. Basically, you have data with different dimensions. For example, you have image data HxWxc (3 dimensions) and you have video data HxWxcxt (4 dimensions). What you are saying is that you would pad zeros onto the image to make a video. That does not really make sense to me.
Upvotes: -1 <issue_comment>username_2: The general version of your problem is described a bit in [this stackexchange post](https://stats.stackexchange.com/questions/341740/handling-missing-data-for-a-neural-network).
However, it's hard to know how to address the issue unless the missing file dropout is better understood. For example, if the missing files are truly iid, then zeroing out the tensor might be ok. If one of your files drops out 2 or 3x more than another, zeroing out will skew towards the other features.
You also specified that the amount of data you have is small, which might make dropout less effective with a net, and you might have better luck with hand crafted features with a decision tree.
Ultimately, you just have to try things. I would start by making sure you can overfit on the small set of examples without missing data and then try to overfit again with the missing data to see how, or if, your metrics degrade. This will give you better insight for what to try when you train with a proper train / test / validation split.
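As a sketch of the zero-filling idea from the question, with an extra "is present" flag per modality so the network can tell a genuinely dark scan from a missing file, something along these lines might work. The shapes and modality names are toy stand-ins for the real NIfTI volumes:

```python
import numpy as np

MODALITIES = ["flair", "mprage", "swi"]
SHAPE = (2, 2)  # toy stand-in for the real voxel grid

def build_input(sample):
    """Stack all modalities; zero-fill missing ones and return presence flags."""
    channels, flags = [], []
    for name in MODALITIES:
        vol = sample.get(name)
        if vol is None:
            channels.append(np.zeros(SHAPE))  # placeholder for the missing file
            flags.append(0.0)
        else:
            channels.append(np.asarray(vol, dtype=float))
            flags.append(1.0)
    return np.stack(channels), np.array(flags)

sample = {"flair": [[10, 20], [30, 40]]}  # mprage and swi are missing
x, present = build_input(sample)
print(x.shape)    # (3, 2, 2)
print(present)    # [1. 0. 0.]
```

The presence flags could be fed to the network as an extra input, which addresses the worry that an all-zero volume is itself "a type of information".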
Upvotes: 0 |
2022/08/10 | 1,290 | 4,584 | <issue_start>username_0: I am learning the transformers architecture from these two sources:
<https://arxiv.org/pdf/1706.03762.pdf>
<https://jalammar.github.io/illustrated-transformer/>
I just wanted to ask about the final step in the decoder. Let's fix testing time. As I understand, the decoder starts with an input of dimension $(N\_{words},d\_{emb})$, where $N\_{words}$ is the number of words already predicted and $d\_{emb}$ is the embedding dimension.
Now if we "follow" the following decoder steps, at each step (after e.g. the attention layers) we should have a vector of dimension $(N\_{words},d\_{model})$ where $d\_{model}$ is the model dimension. In other words, up to the final linear layer we have $N\_{words}$ vectors which are $d\_{model}$-dimensional.
Are all these $N\_{words}$ vectors fed into the last linear layer (before the softmax) or, as I suspect, only the last of these vectors is used ? In the latter case the last linear layer would be a matrix of dimension $d\_{model}\times N\_{vocab}$, where $N\_{vocab}$ is the vocabulary dimension.
Is this correct ? Are there any issues in what I wrote ? Unluckily from the online sources I was not able to clarify this point...
PS: I conjectured that the last linear layer is using just the last vector, because then I would understand what happens at training time: one would in that case use all the output vectors from the decoder, instead of just the last one, to have a parallelized prediction.
Based on the comments to the original version of this answer, OP indicated that the use case was translation between two languages.
### Answer:
At sampling time, the last linear layer of the decoder is going to output a sequence whose length is incremented by one each time you apply the encoder-decoder transformer to the input sentence.
Let's take a practical example, with $w$ denoting the words in the original sentence and $w^{i}$ those in the target language after iteration $i$ of applying the transformer model.
If you have already sampled a sequence $(w^1\_1, w^2\_2, ..., w^{l}\_l)$, the inputs to the encoder and the decoder the next time you apply the model (to get the next token $w^{l+1}\_{l+1}$) are going to be respectively $(w\_1, w\_2, ..., w\_T)$ (the original sentence) and $(w^1\_1, w^2\_2, ..., w^l\_l)$. The output of the decoder (after sampling) will be $(w^{l+1}\_1, w^{l+1}\_2, ..., w^{l+1}\_l, w^{l+1}\_{l+1})$, but we're only going to keep $w^{l+1}\_{l+1}$.
Then, applying the model again to get $w^{l+2}\_{l+2}$, the new inputs to the encoder and the decoder are going to be $(w\_1, w\_2, ..., w\_T)$ (again the original sentence) and $(w^1\_1, w^2\_2, ..., w^l\_l, w^{l+1}\_{l+1})$. The output of the decoder will be $(w^{l+2}\_1, w^{l+2}\_2, ..., w^{l+2}\_l, w^{l+2}\_{l+1}, w^{l+2}\_{l+2})$, and we'll keep only $w^{l+2}\_{l+2}$.
This continues until the sentence is fully translated.
At each step, we give the whole original sentence to the encoder so that the model can build its translation for the next word by looking at the whole sentence. And we give the sentence translated so far to the decoder so that the translation of the next word can attend to what has already been translated.
To confirm, you can have a look at what they do in the [TensorFlow transformer tutorial](https://www.tensorflow.org/text/tutorials/transformer#run_inference).
Upvotes: 0 <issue_comment>username_2: I agree with this:
>
> PS: I conjectured that the last linear layer is using just the last
> vector, because then I would understand what happens at training time,
> one would just use in that case all the output vectors from the
> decoder, instead of just the last one, to have a parallelized
> prediction.
>
>
>
Indeed when you look at the code of TensorFlow [shared previously](https://ai.stackexchange.com/a/36695/47183) you see this line
```
# select the last token from the seq_len dimension
predictions = predictions[:, -1:, :] # (batch_size, 1, vocab_size)
```
where they take the last token of the sequence as a prediction for the next token. And this is consistent with the fact that the input sequence to the decoder is shifted to the right by adding a token at the beginning (see transformers [original paper](https://arxiv.org/abs/1706.03762) section 3.1-Decoder).
So to correct the previous answer, the input and output sequence of the decoder have the same **length** and the last vector is used for the next token prediction which is then added at the end of the input sequence.
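The sampling loop this implies can be sketched in a few lines of numpy. The "decoder" below is a random stand-in for the real network, and the token ids are made up; the point is only that, at each step, the single last row of the $(N\_{words}, N\_{vocab})$ logits is kept and appended to the sequence:

```python
import numpy as np

N_VOCAB = 5
START, END = 0, 4

def fake_decoder_logits(tokens):
    """Random stand-in for the decoder: one logit row per input token."""
    rng = np.random.default_rng(len(tokens))
    return rng.normal(size=(len(tokens), N_VOCAB))  # (N_words, N_vocab)

tokens = [START]  # the decoder input starts with a start token
for _ in range(10):  # cap the generated length
    logits = fake_decoder_logits(tokens)
    next_token = int(np.argmax(logits[-1]))  # only the LAST row is used
    tokens.append(next_token)
    if next_token == END:  # stop on the end-of-sequence token
        break

print(tokens)
```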
Upvotes: 3 [selected_answer] |
2022/08/11 | 1,039 | 3,788 | <issue_start>username_0: Does anyone know if there is a theorem or counterexample establishing whether or not for any given binary classification task in some finite (possibly large) dimensional vector space of attributes, that there exists a polynomial classifier that can form a hyperplane sorting all the positive from negatively labelled data points?
To clarify, I know that if a dataset is linearly separable, then we can find such a linear classifier. But my question is more general and asks if without knowing beforehand whether a dataset is separable at all, can we know ahead of time if there exists a polynomial classifier for any n-dimensional vector space of data points? |
2022/08/15 | 615 | 2,757 | <issue_start>username_0: I am a beginner to this field and I am trying to see the big picture, so I have tried exploring YouTube and Google Images in this regard. According to my understanding, machine learning is a subset of artificial intelligence. But what about computer vision and data science: are they also subsets of artificial intelligence? Which is the main set, artificial intelligence or data science?
If someone could share a Venn diagram, that would be very helpful in understanding the proper placement/arrangement of the above-mentioned fields.<issue_comment>username_1: This Venn diagram might help to visualize the relation between the different fields:
[](https://i.stack.imgur.com/Ms8ml.png)
The image is from the free [deep learning book](https://www.deeplearningbook.org/contents/intro.html) by <NAME>, <NAME> and <NAME>. As you said machine learning is indeed a subset of Artificial intelligence. Artificial intelligence is [since its origins](https://en.wikipedia.org/wiki/History_of_artificial_intelligence) a very broad framework with a strong overlap with computer science, neuroscience, game theory, reinforcement learning, linguistics, etc. Computer vision can in general be considered a part of artificial intelligence. However, computer vision comes in various flavors and forms and some people in the field might not be fond of putting their work under the 'AI' umbrella. Data science conceptually overlaps with many of the methods used within AI but it is more focused on statistics, data analytics, modeling, and data mining.
In industry being a data scientist often means 'generating insights from data' (eg via AB testing) while working as something with AI in the title involves some kind of deep learning (eg for natural language understanding).
Upvotes: 2 <issue_comment>username_2: While I cannot provide a Venn Diagram like the other answers, i can explain the relationship between them.
Computer vision is heavily tied in with AI, because computers have no intuitive sense of what a sign or word is. AI can help by using a training model with samples to tell it what a word in English or Russian is etc. Without AI computer vision is fiction. You cannot teach a computer to understand a sign without telling it what a sign usually looks like via sample data.
Data science can be somewhat connected to AI and Machine vision, but its usually just crunching numbers. For example, what is the most optimal way to recommend these videos to the users? Data science is wide so AI can be used for some jobs while statistics is used for another. Computers are used to solve most of the heavy work especially with algorithms.
Upvotes: 0 |
2022/08/20 | 649 | 2,292 | <issue_start>username_0: Is it possible to use artificial intelligence, for example methods like reinforcement learning, LSTM, etc., in predicting the price of stocks or currencies like Bitcoin? And has such work been implemented with a positive result so far? If the answer is yes, what is the best method?<issue_comment>username_1: No! I don't think so; price action and/or the fluctuating value of any stock, option, commodity or currency is totally random. However, there are indicators and software out there that do make fairly good assumptions or predictions, but none are perfect, I believe.
Upvotes: 0 <issue_comment>username_2: [I searched “machine learning finance” on Google Scholar and got a bunch of hits.](https://scholar.google.com/scholar?hl=en&as_sdt=0%2C47&q=machine+learning+finance&btnG=)
A few jump out as being explicitly about predicting prices.
Culkin, Robert, and <NAME>. "Machine learning in finance: the case of deep learning for option pricing." Journal of Investment Management 15.4 (2017): 92-100.
<NAME>, Jan, et al. "Machine learning for quantitative finance: Fast derivative pricing, hedging and fitting." Quantitative Finance 18.10 (2018): 1635-1643.
[Specifically for Bitcoin:](https://scholar.google.com/scholar?hl=en&as_sdt=0%2C47&q=machine+learning+bitcoin&oq=machine+learning+bit)
McNally, Sean, <NAME>, and <NAME>. "Predicting the price of bitcoin using machine learning." 2018 26th euromicro international conference on parallel, distributed and network-based processing (PDP). IEEE, 2018.
Chen, Zheshi, <NAME>, and <NAME>. "Bitcoin price prediction using machine learning: An approach to sample dimension engineering." Journal of Computational and Applied Mathematics 365 (2020): 112395.
Further, it’s easy to find postings about machine learning jobs at financial firms.
Consequently, I say that finance is at least interested in applying modern machine learning methods in the pursuit of accurate predictions of asset prices.
Finally, no post on financial machine learning resources would be complete without mentioning [*Advances in Financial Machine Learning*](https://rads.stackoverflow.com/amzn/click/com/1119482089), written by a guy from the hedge fund AQR Capital Management.
Upvotes: 2 |
2022/08/26 | 624 | 2,869 | <issue_start>username_0: I have been trying out various tutorials on object detection machine learning. All the tutorials so far have been to use a pre-trained model for practical reasons when detecting objects that the pre-trained model learnt (e.g cats & dogs). However, will this pre-trained model work if I input a few hundred images of a particular car engine part and predict this class, which the pre-trained model did not train on? Is it recommended to make a model from scratch in this case?
I am further confused by this in TensorFlow documentation (Images -> Transfer learning and fine-tuning), the summary states:
>
> Using a pre-trained model for feature extraction: When working with a small dataset, it is a common practice to take advantage of features learned by a model trained on a larger dataset in the **same domain**
>
>
>
By that meaning, if I need to predict a particular car engine part then this statement seems to suggest I create a model from scratch?
TLDR: Will a pre-trained model be able to work on an image dataset that it has never learned before, or is it better to build a model from scratch?<issue_comment>username_1: Well, I think you forget about the "fine-tuning" stage here. What they mean in these tutorials is that you take a model that was pretrained on a large dataset, and you usually freeze all layers except the last one (or a few last ones) and train these last layer(s) on your smaller, specific dataset. This is called "transfer learning".
So in theory the model should learn more general features during that first training and use them without changes in the second training (fine-tuning), when it learns features more specific to your problem. Only then can you use it for detecting objects from your dataset. If the model has never seen a labeled part from your dataset during its training, it will never figure out what it is by itself; that's why you need this fine-tuning process. Fortunately, it requires far less data and is therefore much faster too.
Usually you fine-tune only the last layer. But if you feel that your problem is noticeably different from the images the model was trained on, then you can try tuning the last two or three layers.
It's not always easy to find a model pretrained on a dataset from every possible domain, so usually you just start with a model pretrained on a dataset containing images of very general classes of objects, such as ImageNet or COCO.
If you build a model from scratch it will probably require a lot of data. It's easy to overfit deep learning models with just a few hundred samples.
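As a bare-bones numeric illustration of "freeze everything except the last layer" (no deep-learning framework, made-up data): the backbone weights below are never updated, only the linear head is.

```python
import numpy as np

rng = np.random.default_rng(0)

# "Pretrained" backbone: a fixed random projection that we freeze.
W_backbone = rng.normal(size=(4, 3))
W_backbone_before = W_backbone.copy()   # kept only to verify nothing changes
W_head = np.zeros(3)                    # the only trainable parameters

def forward(x):
    features = np.tanh(x @ W_backbone)  # frozen feature extractor
    return features @ W_head            # trainable linear head

# One gradient step on the head only (squared-error loss, lr = 0.1).
x, y = rng.normal(size=4), 1.0
features = np.tanh(x @ W_backbone)
pred = features @ W_head
grad_head = 2.0 * (pred - y) * features
W_head -= 0.1 * grad_head               # the backbone is never touched

print(forward(x))
```

In a real framework this corresponds to setting the backbone layers as non-trainable (e.g. disabling their gradients) and optimizing only the new head.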
Upvotes: 3 [selected_answer]<issue_comment>username_2: Besides the answer from @username_1, you can also fine tune all the layers of the pretrained model.
Yes, it is incredibly commonplace to do transfer learning using pretrained models from different domains.
Upvotes: 0 |
2022/08/29 | 413 | 1,944 | <issue_start>username_0: Object detection uses mAP as the metrics. But if we are only interested in classification once the object in a bounding box is extracted, what metrics should we use? Thanks! |