2022/08/31
<issue_start>username_0: I know from the expressiveness of neural networks that a network can be seen as a chain of function compositions, i.e. $g(f(.. z(x)..))$, and also that, if we go deep, we can approximate complex functions $f: \mathbb{R} \rightarrow [0,1]$ with a smaller number of units. But why, if we go deeper, do we get computing power?<issue_comment>username_1: It is not that we "get more computing power"; rather, deep networks are more expressive than shallow ones, which follows from the composition argument you started to state. It might be helpful to think of an example - here's a nice one I've written about [here](https://github.com/Hadar933/Intro-to-Deep-Learning/blob/main/IDL_notes.pdf) > > We define the $t$-saw-tooth function as a piece-wise affine function > with $t$ pieces. Also, we define the hat function as > $hat(x)=relu(2relu(x)-4relu(x-0.5))$. > > > Note that hat is a 4-saw-tooth function by definition. We can use hat > to concatenate saw-tooth functions - let $t(x) = > hat(x)+hat(x-1)$, which comprises two hats and is a 6-saw-tooth function. > The composition $t(t(x))$ is a 10-saw-tooth, $t(t(t(x)))$ is an > 18-saw-tooth, and in general, a composition of $T$ functions ($T$ > $t(x)$'s) is a $(2+2^{T+1})$-saw-tooth; namely, the composition > comprises $2^T$ hats. How can we represent this composition using a > shallow network? Remember that every neuron in the (single) hidden > layer "represents" a single relu, therefore two neurons can represent > a single hat. As there are exponentially many hats, a shallow network > will have to use exponentially many neurons ($2^{T+1}$ to be exact). > On the other hand, a deep neural network would only need $\Theta(T)$ > (if, for example, we use $T$ layers and $2$ nodes per layer). 
This is > the case because the deep network's $i$-th layer receives as input the > values of the $(i-1)$-th layer, and thus it represents the > composition of $t(x)$ simply as a consequence of its architectural design. > > > Upvotes: 2 <issue_comment>username_2: This is a simple, intuitive answer, but generally deeper networks have more "space" to learn more complex features. They start from very simple "shapes" (using CNNs as an example) and gradually build towards more complex ones. Having more layers means there can be many intermediary stages that help in constituting the final features (near the end of the network). This ensures that the final features can be complex. In that case more details are considered in the middle layers and the last layers can take advantage of these details. Shallow networks, by contrast, don't have that space for developing complicated features at the end. They must make use of only a few layers, so the jump in feature complexity between layers is large, making it harder for complex "shapes" to be found at the end. A lot of useful information may be lost in the process. Upvotes: 2 <issue_comment>username_3: Let's try to get the intuition with an example. If you're already familiar with CNNs, we can use the example of [Feature Maps](https://machinelearningmastery.com/how-to-visualize-filters-and-feature-maps-in-convolutional-neural-networks). Feature Maps help us visualize what CNNs learn at each layer. The observation is that the ones in the first layer capture the fine details of the input image, the ones in the second use them to get finer details (that's why the feature map is less visualizable), and the same goes on as we go deeper. The feature maps at the second-to-last layer may be hardly visualizable but have more information than the previous layers to predict the output. In case you're not familiar with CNNs, we can use the example of a multi-layered artificial neural network. The idea is the same as with the CNN. 
The first layer captures the simple details of the input and creates some features, the second layer uses those features to get better details, and the same goes on till the second-to-last layer. We feed the final features to an output function which gives us the final output. Upvotes: 1 <issue_comment>username_4: I agree that in practice having deeper networks makes it simpler to construct more expressive functions, for the reasons that other people have mentioned. However, while it is a practical convenience, it is not a theoretical necessity. The Wikipedia page on the [Universal approximation theorem](https://en.wikipedia.org/wiki/Universal_approximation_theorem) reminds us that > > <NAME>, <NAME>, and <NAME> showed in 1989 that multilayer feed-forward networks with as few as one hidden layer are universal approximators. > > > That is to say, *in principle* even a network with a single hidden layer can approximate any function. Upvotes: 1
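The saw-tooth construction quoted in the first answer can be checked numerically. A minimal sketch, using only the definitions from the quote (`hat`, `t`, and repeated composition) and counting linear pieces by detecting slope changes on a grid:

```python
# Sketch of the quoted saw-tooth argument: hat(x) = relu(2*relu(x) - 4*relu(x - 0.5))
# and t(x) = hat(x) + hat(x - 1). Composing t with itself doubles the number of
# "hats" each time, which is why a shallow network needs exponentially many ReLUs.

def relu(z):
    return max(0.0, z)

def hat(x):
    # Piecewise-linear "tent": 0 for x <= 0, rises to 1 at 0.5, falls back to 0 at 1.
    return relu(2 * relu(x) - 4 * relu(x - 0.5))

def t(x):
    # Two hats side by side: a 6-saw-tooth on the real line.
    return hat(x) + hat(x - 1)

def compose(f, times):
    def g(x):
        for _ in range(times):
            x = f(x)
        return x
    return g

def count_pieces(f, lo, hi, n=24001):
    """Count linear pieces of f on [lo, hi] by counting slope changes on a fine grid."""
    step = (hi - lo) / (n - 1)
    xs = [lo + i * step for i in range(n)]
    ys = [f(x) for x in xs]
    slopes = [(ys[i + 1] - ys[i]) / step for i in range(n - 1)]
    pieces = 1
    for a, b in zip(slopes, slopes[1:]):
        if abs(a - b) > 1e-3:  # real slope jumps here are >= 2, so this is safe
            pieces += 1
    return pieces
```

Counting pieces for `hat`, `t`, `t(t(x))` and `t(t(t(x)))` gives 4, 6, 10 and 18, matching the quoted $(2+2^{T+1})$ formula for $T = 1, 2, 3$.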
2022/09/01
<issue_start>username_0: For example, using a neural network to predict a coin toss. Can a trained neural network predict it with more than 50% accuracy?<issue_comment>username_1: You need to ask yourself what the limiting factor is in the accuracy for whatever you are trying to predict. * If the limiting factor is in the quality of the algorithm being used to calculate the prediction, then perhaps you could find a better algorithm that would improve the accuracy. * If the limiting factor is in the very nature of the problem itself, such as a coin flip, then there is no method of calculation that could improve the accuracy. Upvotes: 2 <issue_comment>username_2: **YES** If you obtain information about the force and angle of the thrower's thumb striking the coin at release, that would give insight into how many times the coin would be expected to rotate. Combine this with which face is up when the coin is released, and you should be able to do better than 50/50. I don’t have a firm source (perhaps there is something on [Skeptics](https://skeptics.stackexchange.com)), but it seems that people have trained themselves to flip coins so they reliably land on one of the sides, so there are some features that dictate how the coin rotates. Really, this is kind of the point of regression. You think some process has a 50/50 chance of the two outcomes, but once you know a bit more (features), you can sharpen that estimate. Formalizing this mathematically involves the conditional vs marginal distribution discussed in the answer by username_4. Upvotes: 3 <issue_comment>username_3: **No.** If there are no patterns, relations or correlations in your data, AI can do nothing to improve what is essentially just guessing. My last 5 tosses were Heads, Tails, Tails, Heads, Tails. Can you predict the next toss outcome? How would you explain your guess? If you give AI this same data, it cannot do better than just guessing. 
The question changes if you have data that is related to the outcome of the coin toss, such as the direction and force with which the coin was tossed before it lands. In this case, it isn't "an event with a statistical probability of 50%" anymore. If you measured everything perfectly, you could have 99.9% accuracy on what the outcome of the coin toss would be. AI can only produce accurate results if a super smart human could theoretically also produce accurate results. Upvotes: 4 <issue_comment>username_4: This is a question of marginal vs. conditional distribution =========================================================== The [marginal distribution](https://en.wikipedia.org/wiki/Marginal_distribution) of the coin may be a Bernoulli random variable with 50% probability for either outcome. **However**, the [conditional distribution](https://en.wikipedia.org/wiki/Conditional_probability_distribution) of the outcome given information about other factors (e.g. the angle, throw height, ... see other answers) may look entirely different. Provided these features determine the outcome in some way, a neural network can absolutely predict the outcome with more than 50% accuracy. ### A neural network could not exceed 50% accuracy, if * The information determining the throw outcome is not available * The function is of a nature that cannot be learned by the neural network * The coin toss is **truly random** A coin toss is often used as a casual example of a "truly random" event, so in this sense the answer to your question is "No". In reality, however, it is very hard to find any truly random events (at least outside quantum mechanics), which is why random number generation is a big challenge and neural networks can predict a lot of things. 
Upvotes: 4 <issue_comment>username_5: Although the question is a little vague, I'll treat it as a statement about the mapping of inputs and outputs in the underlying random process - no matter what conditions/inputs/features we observe, there is not a consistent mapping from input to output. A statistical probability of 50% suggests that in two cases with *identical inputs*, we may find different outputs. A traditional deterministic neural network cannot do this, as it is really just a mathematical function, which by definition maps every possible input to *exactly one* output - it is not possible to use the same inputs and get different outputs. Because of this, a deterministic neural network can't achieve more than 50% accuracy in the long run in this case. No matter what set of features is input, there are in reality two possible outcomes, but the neural network can only return one outcome for any particular input. On average, the neural network will return the correct output only half the time - it can't achieve more than 50% accuracy. Upvotes: 0 <issue_comment>username_6: If a neural network were only able to be as reliable as random guessing, it wouldn't be much use! Let's suppose there's an election on, and the result is finely balanced between the yellow party and the purple party. At a top level, 50% of people will vote for each colour. If you know nothing else about the people, "Who will the next person in the polling station vote for?" is intrinsically an even guess. It would still be possible to use a neural network (or a decision tree, or a human!) to predict at much better than 50% if you have additional input. For example, if you look at them and can read their apparent wealth, race, gender presentation, or whether they come accompanied or alone, it may then be possible to identify membership of a sub-population which is more likely to vote yellow. The fundamental limit on performance isn't the baseline 50:50 probability. 
Instead, it is the component caused by some inputs (which may, but don't have to, be truly random) that are simply not available to the network. Suppose for example that there's a 5% chance that someone has been bribed to flip their vote and the network can't know that. In this case it won't get to more than 95% reliable predictions, but it can still do much better than 50% with demographic data. Upvotes: 0
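username_4's marginal-vs-conditional point can be illustrated with a toy simulation. Everything here is hypothetical (the "spin parity" feature stands in for the angle/force measurements mentioned in the answers): marginally the toss is 50/50, but a predictor that conditions on a noisy feature of the throw does much better:

```python
# Toy illustration of marginal vs. conditional distribution for a "coin toss".
# The outcome is determined by hidden physics (spin count); a noisy observable
# feature of the throw makes the conditional distribution far from 50/50.
import random

random.seed(0)

def toss():
    spins = random.randint(10, 29)        # hidden physics of the throw (20 values, half even)
    heads = (spins % 2 == 0)              # outcome determined by spin parity
    # Observable feature: true parity, measured correctly 90% of the time.
    noisy_parity = spins % 2 if random.random() < 0.9 else 1 - spins % 2
    return noisy_parity, heads            # (observable feature, outcome)

data = [toss() for _ in range(10000)]

# Predictor ignoring the feature: always guess heads -> ~50% accuracy (marginal).
marginal_acc = sum(h for _, h in data) / len(data)

# Predictor conditioning on the feature: guess heads iff observed parity is even.
conditional_acc = sum((p == 0) == h for p, h in data) / len(data)
```

With the feature hidden the accuracy hovers around 50%; conditioning on the noisy parity pushes it to roughly the feature's 90% reliability.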
2022/09/08
<issue_start>username_0: Are there any techniques to combine a feature set (other than the text itself) with pretrained language models? Let's say I have a random NLP task that tries to predict a binary class label based on e.g. Twitter data. One could easily utilize a pretrained language model such as BERT/GPT-3 etc. and fine-tune it on the text of the tweets. However, the tweets come with a lot of useful metadata such as likes/retweets etc., and I may want to add additional syntactic features such as POS tags, dependency relations or any other generated feature. Is it possible to use additional features I extracted for the fine-tuning step of the pretrained language model? Or is the only way of doing so to use an ensemble classifier - basically write a classifier for each of the extracted features and combine all of their predictions with the fine-tuned LM's predictions?
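One common way to realize what the question describes is to concatenate the LM's pooled text embedding with the extra feature vector before a jointly trained classification head, rather than building an ensemble. The sketch below is a toy stand-in with hypothetical numbers: in a real setup `pooled` would be the LM's pooled (e.g. [CLS]) vector and the head a trainable layer in your framework of choice:

```python
# Hypothetical sketch: concatenate a pooled text embedding with metadata features
# (likes, retweets, POS counts, ...) and feed the result to one classification head
# that is trained jointly during fine-tuning. All numbers here are toy values.
import math

def classify(pooled_embedding, metadata_features, weights, bias):
    # Concatenate the text representation with the handcrafted/metadata features.
    x = pooled_embedding + metadata_features          # list concatenation
    logit = sum(w * v for w, v in zip(weights, x)) + bias
    return 1 / (1 + math.exp(-logit))                 # sigmoid -> P(class = 1)

pooled = [0.2, -0.5, 0.1]        # stand-in for a 768-dim pooled BERT embedding
meta = [3.0, 1.0]                # e.g. log(likes), log(retweets)
w = [0.4, -0.1, 0.3, 0.05, 0.2]  # head weights: 3 text dims + 2 metadata dims
p = classify(pooled, meta, w, bias=-0.3)
```

The point of the sketch is only the wiring: one head sees both representations, so gradients from the label flow into the LM and the metadata weights together.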
2022/09/19
<issue_start>username_0: For a short presentation about AI I am looking for examples where AI failed and thereby shows its limits. I remember there was one example where an image classifier was given an image of pink animals (I think sheep) in a tree and classified it as "Birds on a tree". I think this example showed what AI might do if the given example is not represented in the training data. But I cannot find that example anymore (and I need a source). Does anyone know of documented examples that I could show which illustrate the problem in a similar way?<issue_comment>username_1: A couple of examples could be: * *Image classifiers learning different properties than the actual target*: many books reference the case of a perceptron trained on detecting tanks which learned to actually predict good or bad weather in the background, ignoring the tanks completely. First cited in [What Artificial Experts Can and Cannot Do](https://www.gwern.net/docs/www/www.jefftk.com/d21a41fba82b2fbb43d273b1847722f2b18eb387.pdf). (Notably this is most likely an [urban legend](https://www.gwern.net/Tanks#origin), but still a very realistic situation that anybody working in computer vision will face sooner or later.) * *Amazon recruitment algorithm biased towards men*: this is totally real and it has been analyzed in several papers; I'll just link the first one I found, [Encoded Bias in Recruitment Algorithms](https://www.researchgate.net/publication/331967358_Encoded_Bias_in_Recruitment_Algorithms). Again, another case which reminds us that machine learning and AI in general are data driven. An algorithm will learn from and always be limited by what's in the data, including stereotypes and prejudices in the case of natural language processing. Upvotes: 2 <issue_comment>username_2: A quite significant issue is where some AI systems have mislabeled black people as being gorillas (I suspect a major cause is the training data being insufficiently diverse, balanced and representative). 
Two examples are: * In 2015, this occurred with Google's Photo App (e.g., see [Google apologizes for algorithm mistakenly calling black people 'gorillas'](https://www.cnet.com/tech/services-and-software/google-apologizes-for-algorithm-mistakenly-calling-black-people-gorillas/)). Note that, for at least 3 years, Google apparently "fixed" their system just by removing gorillas as being an option, with this explained in [Google ‘fixed’ its racist algorithm by removing gorillas from its image-labeling tech](https://www.theverge.com/2018/1/12/16882408/google-racist-gorillas-photo-recognition-algorithm-ai). * Facebook's AI recommendation system had a similar problem (e.g., see [Facebook apology as AI labels black men 'primates'](https://www.bbc.com/news/technology-58462511)). Upvotes: 2 [selected_answer]
2022/09/20
<issue_start>username_0: I'm following along with PyTorch's example implementations ([found here](https://github.com/pytorch/examples/tree/main/reinforcement_learning)) of reinforcement learning algorithms that happen to be largely REINFORCE (vanilla policy gradient) based, and I notice they don't use batches. This leads me to ask: are batch updates of the network actually useful in this context? Adding on, in my particular environment there's no real meaningful cutoff for episodes, as it's really set up for a sort of continuous play. As such, any n-length trajectory + rewards I collect is just as valid as another. For that reason, it would seem that a longer episode/trajectory would serve the same purpose batches tend to in network updating. Is it expected then that batches are not particularly worthwhile in the REINFORCE context, or is this just a coincidence of the implementation I'm using? And is that answer amended if there are no meaningful episode cutoffs?<issue_comment>username_1: Yes: in the repo you link, reinforce.py only performs a gradient update once per episode. It sounds like what you're asking about is the difference between that REINFORCE style and the more popular (and also more efficient) PPO-type style. In the latter, we have something like \begin{align\*} & \text{ for each iteration }: \\ & \qquad \text{ for t in range(size\_training\_set)}: \\ & \qquad \qquad \text{sample } a\_t; \text{ get reward } r\_t \text{ and next state } s\_{t+1}; \text{ save transition to memory} \\ & \qquad \text{ for m epochs}: \\ & \qquad \qquad \text{ calculate advantages} \\ & \qquad \qquad \text{ for k mini-batches}: \\ & \qquad \qquad \qquad \text{make mini batch from training\_set}\text{ and do policy gradient update } \end{align\*} There are some other details such as importance sampling, so I would recommend trying another repo's code first. 
The advantage of the PPO way is that we spend more time training on mini-batches and less time sampling the environment (which is slower), we can use each transition in multiple mini-batches, and we can generate more varied data to train on. Grouping together transitions from different times in different episodes might also help remove harmful correlations. Also, theoretically the batch size shouldn't really matter, but in practice it is important, and we can't even tune it in the REINFORCE setup. Upvotes: 0 <issue_comment>username_1: In REINFORCE, if you generated several episodes and calculated the gradient over all transitions across all episodes, this would reduce the variance of the gradient compared to regular REINFORCE, where we sample one episode at a time. You might know that when estimating the sample mean of a population, the variance decreases like $1/n$ where $n$ is the sample size. That's true here, for exactly the same reason: if you generated $n$ episodes per REINFORCE gradient, the variance will be $1/n$ of what it is in normal REINFORCE. If we choose some $n$ and also multiply the learning rate by $n$, we would expect both versions of REINFORCE to perform about the same in terms of average reward vs wall time and average reward vs number of episodes. But the one with higher $n$ does fewer gradient updates. In practice, you might be able to tune $n$ as a hyperparameter, but really you need to be using a better algorithm than REINFORCE if you care about performance at all. Upvotes: 2 [selected_answer]
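The $1/n$ variance claim in the accepted answer is just the statistics of sample means, which can be checked with a toy estimator (plain Gaussian noise standing in for per-episode gradient estimates, no RL involved):

```python
# Toy check of the 1/n variance claim: averaging n i.i.d. per-episode estimates
# (here Gaussian noise with variance sigma^2 = 4) shrinks the estimator's
# variance by a factor of n. This is statistics, not an RL implementation.
import random

random.seed(1)

def episode_estimate():
    # Stand-in for one REINFORCE episode's noisy gradient estimate.
    return random.gauss(mu=1.0, sigma=2.0)

def estimator_variance(n, trials=20000):
    """Empirical variance of the mean of n per-episode estimates."""
    means = [sum(episode_estimate() for _ in range(n)) / n for _ in range(trials)]
    m = sum(means) / trials
    return sum((x - m) ** 2 for x in means) / trials

v1 = estimator_variance(1)    # roughly sigma^2 = 4
v8 = estimator_variance(8)    # roughly sigma^2 / 8 = 0.5
```

The empirical ratio `v1 / v8` lands near 8, matching the answer's claim that $n$ episodes per gradient cut the variance to $1/n$.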
2022/09/21
<issue_start>username_0: Since transformers contain a neural network, are they a strict generalisation of standard feedforward neural networks? In what ways can transformers be interpreted as a generalisation and abstraction of these?<issue_comment>username_1: Neural network is a generic term used in the literature as a sort of umbrella for all types of architectures, an architecture being a set of distinct forward operations and hyperparameters (such as number of layers/nodes, kernel size). Feedforward neural networks, multi-layer perceptrons, convolutional neural networks, recurrent neural networks, autoencoders, transformers (and many more) are all types of neural networks (deep neural networks to be precise; the 'deep' is usually assumed). Even edge cases like generative adversarial networks (which are more of a training approach than a strict architecture) are usually referred to as neural networks, which admittedly can be confusing. So "*since transformers contain a neural network*" is not really a correct way of putting it. And in case you meant "*since transformers contain a feed forward neural network*", it would still be incomplete, because transformers use operations such as self-attention which are not used in plain feed-forward neural networks, so they are still a very distinct type of architecture. Upvotes: 1 <issue_comment>username_2: Transformers can be used for a variety of tasks that standard feedforward neural networks $\color{gray}{\textbf{cannot}}$, such as *Natural Language Processing* and *Time Series Forecasting*, and they are $\color{gray}{\textbf{more efficient}}$ than standard feedforward neural networks, as they can share parameters across positions in the input sequence. $\color{maroon}{\textbf{No}}$, Transformers are not a **strict** generalization of standard feedforward neural networks. There is no strict definition of what a "standard feedforward neural network" is, but transformers *can be seen as a generalization of a feed-forward neural network in several ways*. 
> > Transformers can be interpreted as a generalization of *recurrent neural networks (RNNs)*. In contrast to RNNs, which propagate hidden states through time, Transformers propagate hidden states through a self-attention mechanism. This allows them to model long-range dependencies without the need for RNNs, which are difficult to train. > > > > > Transformers can also be interpreted as a generalization of *convolutional neural networks (CNNs)*. In contrast to CNNs, which extract features from local patches of an input, Transformers extract features from the entire input sequence. This allows them to model global dependencies without the need for CNNs, which are limited by their local receptive fields. > > > Upvotes: 0
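The "propagate hidden states through a self-attention mechanism" point above can be made concrete in a few lines. This is a minimal single-head self-attention sketch without learned projections (queries = keys = values = inputs, an intentional simplification), showing how every position mixes information from every other position:

```python
# Minimal single-head self-attention sketch: each position attends to all
# positions (global receptive field), unlike a plain feed-forward layer that
# processes each position independently. No learned Q/K/V projections here.
import math

def softmax(scores):
    exps = [math.exp(s - max(scores)) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def self_attention(x):
    """x: list of position vectors; returns a same-shape list of attended vectors."""
    d = len(x[0])
    out = []
    for q in x:                                   # each position queries...
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d) for k in x]
        weights = softmax(scores)                 # ...a distribution over all positions
        out.append([sum(w * v[j] for w, v in zip(weights, x)) for j in range(d)])
    return out

seq = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]        # a toy 3-position sequence
attended = self_attention(seq)
```

Each output vector is a convex combination of all input vectors, which is the "global dependency" property the answer contrasts with the local receptive fields of CNNs.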
2022/09/22
<issue_start>username_0: In the deep learning course I took at the university, the professor touched upon the subject of the **Restricted Boltzmann Machine**. What I understand from this subject is that this system works completely like **Artificial Neural Networks**. During the lecture, I asked the professor the difference between these two systems, but I still did not fully understand. In general, there is an input layer and a hidden layer in both, and the weights are updated with forward-backward propagation. Can someone explain the exact difference between them?<issue_comment>username_1: You can find in [this paper](https://link.springer.com/chapter/10.1007/978-3-642-33275-3_2) that an RBM is a specific type of artificial neural network. Hence, the term Artificial Neural Network is more general than RBM. > > It is an important property that single as well as stacked RBMs can be reinterpreted as deterministic feed-forward neural networks. Then they are used as functions from the domain of the observations to the expectations of the latent variables in the top layer. Such a function maps the observations to learnt features, which can, for example, serve as input to a supervised learning system. Further, the neural network corresponding to a trained RBM or DBN can be augmented by an output layer, where units in the newly added output layer represent labels corresponding to observations. Then the model corresponds to a standard neural network for classification or regression that can be further trained by standard supervised learning algorithms. It has been argued that this initialization (or unsupervised pretraining) of the feed-forward neural network weights based on a generative model helps to overcome problems observed when training multi-layer neural networks. > > > Of course, we have other types of neural networks, like Recurrent Neural Networks, for which expressing them as an RBM is not straightforward. 
Upvotes: 1 <issue_comment>username_2: A Boltzmann Machine is a probabilistic graphical model which follows the Boltzmann distribution: $$p(v,h) = \frac{e^{-E(v,h)}}{\sum\_{v,h} e^{-E(v,h)}}$$ where $E(v,h)$ is known as the energy function. An RBM is a Boltzmann machine with the restriction that there are no connections between any two visible nodes or any two hidden nodes, which gives it a structure similar to a 2-layer **Artificial Neural Network**. The difference is that an RBM, being an unsupervised model, is trained by minimizing the energy function, while an artificial neural network can have many hidden layers along with an output layer and is trained by optimizing the loss between the values of the output layer and the values of the target variable. A special case of RBM which has binary visible nodes and binary hidden nodes, also known as a **Bernoulli RBM**, has an [API](https://scikit-learn.org/stable/modules/generated/sklearn.neural_network.BernoulliRBM.html) available in scikit-learn. They have also documented the learning algorithm [here](https://scikit-learn.org/stable/modules/neural_networks_unsupervised.html). In [this](https://scikit-learn.org/stable/auto_examples/neural_networks/plot_rbm_logistic_classification.html) example, they show how a Bernoulli RBM can be used to perform effective non-linear feature extraction, which can be fed to a LogisticRegression classifier for digit classification. Upvotes: 2
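The energy function and Boltzmann distribution in this answer can be made concrete for a tiny Bernoulli RBM by enumerating all binary configurations. The weights below are arbitrary toy values, and the standard bipartite energy $E(v,h) = -a^\top v - b^\top h - v^\top W h$ is assumed:

```python
# Tiny Bernoulli RBM (2 visible, 2 hidden units) with toy parameters.
# Energy E(v,h) = -a.v - b.h - v.W.h; p(v,h) = exp(-E(v,h)) / Z,
# with Z summed over all 2^4 binary configurations.
import itertools, math

a = [0.1, -0.2]                  # visible biases
b = [0.3, 0.0]                   # hidden biases
W = [[0.5, -0.4],                # W[i][j] connects visible i to hidden j; note the
     [0.2, 0.7]]                 # restriction: no visible-visible or hidden-hidden terms

def energy(v, h):
    return -(sum(ai * vi for ai, vi in zip(a, v))
             + sum(bj * hj for bj, hj in zip(b, h))
             + sum(v[i] * W[i][j] * h[j] for i in range(2) for j in range(2)))

configs = list(itertools.product([0, 1], repeat=4))
Z = sum(math.exp(-energy(c[:2], c[2:])) for c in configs)   # partition function

def p(v, h):
    return math.exp(-energy(v, h)) / Z
```

A nice consequence of the bipartite restriction is that the hidden units are conditionally independent given $v$, with $P(h_j = 1 \mid v) = \sigma(b_j + \sum_i v_i W_{ij})$, which can be verified directly from `p`.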
2022/09/22
<issue_start>username_0: So I have this function let call her $F:[0,1]^n \rightarrow \mathbb{R}$ and say $10 \le n \le 100$. I want to find some $x\_0 \in [0,1]^n$ such that $F(x\_0)$ is as small as possible. I don't think there is any hope of getting the global minimum. I just want a reasonably good $x\_0$. AFAIK the standard approach is to run an (accelerated) gradient descent a bunch of times and take the best result. But in my case values of $F$ are computed algorithmically and I don't have a way to compute gradients for $F$. So I want to do something like this. (A) We create a neural network which takes an $n$-dimensional vector as input and returns a real number as result. We want the NN to "predict" values of $F$ but at this point it is untrained. (B) We take bunch of random points in $[0,1]^n$. We compute values of $F$ at those points. And we train NN using this data. (C1) Now the neural net provides us with a reasonably smooth function $F\_1:[0,1]^n \rightarrow \mathbb{R}$ approximating $F$. We run a gradient decent a bunch of times on $F\_1$. We take the final points of those decent and compute $F$ on them to see if we caught any small values. Then we take whole paths of those gradient decent, compute $F$ on them and use this as data to retrain our neural net. (C2) The retrained neural net provides us with a new function $F\_2$ and we repeat the previous step (C3) ... Does this approach have a name? Is it used somewhere? Should I indeed use neural nets or there are better ways of constructing smooth approximations for my needs?<issue_comment>username_1: You can find in [this paper](https://link.springer.com/chapter/10.1007/978-3-642-33275-3_2) that RBM is a specific type of artificial neural networks. Hence, the term Artificial Neural Network is more general than RBF. > > It is an important property that single as well as stacked RBMs can be reinterpreted as deterministic feed-forward neural networks. 
Then they are used as functions from the domain of the observations to the expectations of the latent variables in the top layer. Such a function maps the observations to learnt features, which can, for example, serve as input to a supervised learning system. Further, the neural network corresponding to a trained RBM or DBN can be augmented by an output layer, where units in the newly added output layer represent labels corresponding to observations. Then the model corresponds to a standard neural network for classification or regression that can be further trained by standard supervised learning algorithms. It has been argued that this initialization (or unsupervised pretraining) of the feed-forward neural network weights based on a generative model helps to overcome problems observed when training multi-layer neural networks. > > > Of course, we have other types of neural networks, like Recurrent Neural Networks, for which expressing them as an RBM is not straightforward. Upvotes: 1
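The loop the question describes (fit a model to sampled values of $F$, descend on the model, evaluate $F$ at the candidates, retrain) is often called surrogate-based or model-based optimization. A minimal NumPy sketch under stated assumptions: a toy $F$ stands in for the algorithmically computed objective, and a simple quadratic surrogate stands in for the neural network; all sizes and step sizes are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5  # dimension (the question allows 10..100; kept small for speed)

def F(x):
    # Toy black-box objective standing in for the algorithmically computed F.
    return float(np.sum((x - 0.3) ** 2) + 0.1 * np.sin(10 * x[0]))

# (B) Evaluate F at a batch of random points.
X = rng.random((50, n))
y = np.array([F(x) for x in X])

def phi(x):
    # Features of a simple quadratic surrogate (stands in for the NN).
    return np.concatenate(([1.0], x, x ** 2))

for _ in range(3):  # (C1), (C2), (C3): fit surrogate, descend, re-evaluate
    Phi = np.array([phi(x) for x in X])
    # Ridge-regularised least-squares fit of the surrogate weights.
    w = np.linalg.solve(Phi.T @ Phi + 1e-3 * np.eye(2 * n + 1), Phi.T @ y)

    def grad(x):
        # Gradient of the surrogate w.phi(x) with respect to x.
        return w[1:n + 1] + 2 * w[n + 1:] * x

    # Gradient descent on the (smooth) surrogate from several random restarts.
    candidates = []
    for _ in range(5):
        x = rng.random(n)
        for _ in range(100):
            x = np.clip(x - 0.05 * grad(x), 0.0, 1.0)
        candidates.append(x)

    # Score candidates with the true F and add them to the training data.
    X = np.vstack([X, np.array(candidates)])
    y = np.concatenate([y, [F(c) for c in candidates]])

best = X[np.argmin(y)]
```

With a probabilistic surrogate (e.g. a Gaussian process) and an acquisition function in place of plain descent, this becomes Bayesian optimization.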
2022/09/25
606
2,524
<issue_start>username_0: Consider the graph below for an understanding of how IDS works. Now my question is: why does IDS start at the root every iteration, why not start at the previously searched depth in the context of minimax? What is the intuition behind it? [![enter image description here](https://i.stack.imgur.com/mg1WW.png)](https://i.stack.imgur.com/mg1WW.png)<issue_comment>username_1: I could be mistaken since I don't know the source of the image you have provided, but that image appears to be showing how the tree is built, not how it is searched. Even so, when a balanced tree of the sort you have illustrated is searched, it will start with the root node, though in a search the maximum number of nodes traversed will be minimized and all operations (like min or max) will be performed in $O(h)$ where $h$ is the height of the tree. Upvotes: 0 <issue_comment>username_2: Normally in minimax (or any form of depth-first search really), we do not store nodes in memory for the parts we have already searched. The tree is only implicit; it's not stored anywhere explicitly. We typically implement these algorithms in a recursive manner. As soon as we've finished searching a certain subtree, none of the data for that subtree is retained in memory. If you wanted to be able to continue the search from where you left off, you'd have to change this and actually store everything you've searched explicitly in memory. This can very quickly cause us to run out of memory and crash. Intuitively, at first it certainly makes sense what you suggest in terms of computation time, i.e. it would avoid re-doing work we've already done (if it were practical in terms of memory usage). However, if you analyse exactly how much you would save, it turns out not to be much at all.
Due to the exponential growth of the size of the search tree as depth increases, it is usually the case that the computation effort for the next level ($d + 1$) is **much bigger** than the computation effort already done for **all previous depth levels** ($1, 2, 3, \dots, d$) **put together**. So, while in theory we're wasting some time re-doing work we've already done, in practice it actually rarely hurts. In the specific context of minimax with alpha-beta pruning, we get an additional benefit when re-doing the work. We get to make use of the estimated scores from our previous iteration to re-order the branches at the root, and with alpha-beta pruning this can actually make our search *more* efficient! Upvotes: 1
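The "previous levels are cheap" claim above can be checked numerically: for a uniform branching factor $b$, re-searching depths $0..d$ costs roughly a $1/(b-1)$ fraction of searching level $d+1$ alone (a geometric series). A short sketch, with the branching factor chosen arbitrarily for illustration:

```python
# Nodes expanded at each depth of a uniform tree with branching factor b.
b = 10          # illustrative branching factor
d = 6

nodes_at = [b ** k for k in range(d + 2)]   # depth 0 .. d+1
redone = sum(nodes_at[: d + 1])             # all work re-done for depths 0..d
new_level = nodes_at[d + 1]                 # work for the new level d+1

# Geometric series: redone = (b^(d+1) - 1) / (b - 1),
# so the re-search overhead is about 1/(b-1) of the new level's cost.
overhead = redone / new_level
```

For $b = 10$ the total re-done work is only about 11% of the cost of the new level, which is why iterative deepening restarts from the root without a meaningful time penalty.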
2022/09/28
957
3,490
<issue_start>username_0: I have created some nice patterns using the [MidJourney tool](https://midjourney.gitbook.io/docs/). I'd like to find a way to extend these patterns, and I was thinking about an AI tool that takes one of these patterns and extends it in all directions surrounding the original pattern. Just to give you an idea, this is one of those patterns: [![Enter image description here](https://i.stack.imgur.com/LvC7f.jpg)](https://i.stack.imgur.com/LvC7f.jpg)<issue_comment>username_1: The task you would like to accomplish is referred to as "outpainting". See the example below. [![Enter image description here](https://i.stack.imgur.com/oBKn3.gif)](https://i.stack.imgur.com/oBKn3.gif) Very recently, [OpenAI](https://openai.com/blog/dall-e-introducing-outpainting/) released an outpainting feature that extends the possible operations to perform with their diffusion model [DALL-E](https://en.wikipedia.org/wiki/DALL-E). It is also worth mentioning the Stability AI [Stable Diffusion](https://github.com/lkwq007/stablediffusion-infinity) model infinity extension (from which I took the example GIF image above). The nice thing about Stable Diffusion is that, unlike DALL-E, it has been publicly released. Upvotes: 3 <issue_comment>username_2: As Edoardo says in [their excellent answer](https://ai.stackexchange.com/a/37223/61427), the task at hand can be approached as an outpainting problem, and there are some great tools available to do this. To throw an alternative into the ring, I'd point to an example in the field of texture synthesis - **[Self-Organising Textures](https://distill.pub/selforg/2021/textures/) built with [Neural Cellular Automata](https://distill.pub/2020/growing-ca/).** The theory revolves around teaching a very small neural network to generate an image using learned, local update rules. When given a loss function that [compares the style of two images](https://arxiv.org/abs/1508.06576), the model can generate textures that seamlessly extend the original.
Within the Self-Organising Texture article, there's [a Google Colab](https://colab.research.google.com/github/google-research/self-organising-systems/blob/master/notebooks/texture_nca_pytorch.ipynb) which allows you to import a target image and train the model to reproduce it. I used your image as the target, and it was able to quickly (<20 minutes) make a model that captured the overall pattern of your image: [![Version of the provided texture recreated and extended by a learned Neural Cellular Automata (NCA) model. The original texture resembles sun-bleached tree roots, thickly intertwined and laced together with cobwebs tightly strung between many of them. This reproduction lacks some high-quality details such as small twigs in the cobwebs, but overall faithfully reproduces the arcs and twists of the original branches.](https://i.stack.imgur.com/3tlfI.jpg)](https://i.stack.imgur.com/3tlfI.jpg) There are options for [refining the resulting texture with different loss functions](https://www.youtube.com/watch?v=ZFYZFlY7lgI), and even exerting [a degree of artistic control using relative noise levels](https://www.youtube.com/watch?v=i59K8UT9UK4) in the generation process. One of the creators of the models, <NAME>, has an [excellent YouTube channel](https://www.youtube.com/c/zzznah) where he walks through some of these techniques, and I'd highly recommend checking it out if you want to pursue using this method. Have fun! Upvotes: 5 [selected_answer]
2022/09/28
575
2,504
<issue_start>username_0: So I have an AI project about `motion detection` with image subtraction. Regardless of the objects used, if there is a change between two frames above a threshold value, it will be categorized as motion. In my project I only use the `OpenCV` library in Python. My program takes two inputs. The first frame, or background frame, is assumed/labeled as the no-motion frame, for reference. The second frame is any frame that is captured currently. So, just using image processing like ``` resizing -> grayscaling -> blurring -> subtracting (absdiff) -> thresholding ``` my program/project is basically just comparing two images to see if there are changes in their pixels. Obviously my project is related to computer vision, but is it related to machine learning too? Specifically `supervised learning`, because I labelled what the no-motion image looks like for the machine. On the other hand, I don't see any of the statistical methods that machine learning usually uses. My only mathematical operation was subtraction.<issue_comment>username_1: No - not all computer vision is machine learning. With machine learning, the computer designs its own algorithm (often by gradient descent) based on a "blank slate" version of the algorithm. Since you have just told the computer an algorithm, it's not machine learning. Upvotes: 3 [selected_answer]<issue_comment>username_2: Actually grayscaling and blurring are convolutional operations, and thresholding can be seen as an "activation function" (think of a sigmoid with a high gain). And resizing can be implemented by an [average pooling](https://keras.io/api/layers/pooling_layers/average_pooling2d/) layer. But since you have hard-coded these parameters (the blur radius and threshold), there is no ML involved. Then again, it could be a fun exercise to apply gradient descent to those layers. To run the training, you'd need to supplement the network with training data.
In this case it would be a "binary" image where you have defined for each pixel whether it belongs to the background or foreground. Since there are so few parameters to tune, I expect that you wouldn't need that many training examples. > > if there is a change between two frames above a threshold value, then it will be categorized as motion. > > > Ah, now that I read your question more carefully, your training data could be just a yes/no label for the whole picture. You aren't looking for object segmentation. Upvotes: 1
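The pipeline from the question can be reproduced with no learned parameters at all. A NumPy sketch of the same grayscale → blur → absdiff → threshold logic (array operations stand in for the `cv2` calls so the example is self-contained; the two threshold values are arbitrary, hand-tuned constants, which is exactly why this is not machine learning):

```python
import numpy as np

def to_gray(frame):
    # Standard luma weights for RGB -> grayscale conversion.
    return frame @ np.array([0.299, 0.587, 0.114])

def box_blur(img, k=3):
    # Crude blur: average over a k x k neighbourhood via shifted sums.
    out = np.zeros_like(img, dtype=float)
    pad = np.pad(img, k // 2, mode="edge")
    for dy in range(k):
        for dx in range(k):
            out += pad[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def motion_detected(background, frame, pixel_thresh=25, area_thresh=0.01):
    a = box_blur(to_gray(background))
    b = box_blur(to_gray(frame))
    diff = np.abs(a - b)                # absdiff
    mask = diff > pixel_thresh          # thresholding
    return mask.mean() > area_thresh    # enough changed pixels => motion

# Synthetic frames: a static black background vs. a bright square appearing.
bg = np.zeros((64, 64, 3))
moved = bg.copy()
moved[10:30, 10:30] = 255
```

Calling `motion_detected(bg, moved)` reports motion, while `motion_detected(bg, bg.copy())` does not; every decision boundary here was chosen by hand rather than learned from data.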
2022/10/04
1,204
4,637
<issue_start>username_0: I am training an autoencoder and a variational autoencoder using satellite and streetview images. I have tested my program on standard datasets such as MNIST and CelebA. It seems that the latent space dimension needed for those applications is fairly small. For example, MNIST is 28x28x1 and CelebA is 64x64x3, and for both a latent space bottleneck of 50 would be sufficient to observe a reasonably reconstructed image. But for the autoencoder I am constructing, I needed a dimension of ~20000 in order to see features. The variational autoencoder is not working, and I only see a few blobs of fuzzy color. For those who have experience with training autoencoders on your own images, what could be the problem? Are certain features easier to compress than others? Or does it look like a sample size problem? (I have 20000 images in training.) Is there any rule of thumb for the factor of compression? Thanks! Here is the best example of what I have got with my VAE. I am using a ResNeXt architecture and the image dimension is 64x64x3; the latent space dimension is very large (18432). [![original image (64x64x3)](https://i.stack.imgur.com/fuc1E.png)](https://i.stack.imgur.com/fuc1E.png) [![VAE reconstructed image (with latent space of 18432)](https://i.stack.imgur.com/EIeMW.png)](https://i.stack.imgur.com/EIeMW.png)<issue_comment>username_1: From my experience (on MNIST digits), even when using a latent space of only $10$ nodes, the decoded reconstruction was pretty much OK. Perhaps the architecture itself lacks the capability of encoding/decoding properly. Check out [this](https://medium.com/dataseries/convolutional-autoencoder-in-pytorch-on-mnist-dataset-d65145c132ac) summary and see if you can improve your results using a similar approach. Upvotes: 0 <issue_comment>username_2: You are asking about several things here, and while related, solving one will not necessarily "solve" your problem. Let's look at them separately: 1.
Optimal dimension of the latent space. 2. Blurry reconstructions. 3. Optimal sample size. **Optimal dimension of the latent space.** I'm unaware of a one-size-fits-all way to find the optimal dimensionality of $z$, but an easy way is to try different values and look at the log-likelihood on the test set, $\log(p)$ - pick the lowest dimensionality that maximises it. This is a solution in tune with the Deep Learning spirit :) A second solution, maybe a little more grounded, is to decompose your training data with SVD and look at the spectrum of singular values. The number of non-trivial (= above some small threshold) values will give you a rough idea of the number of latent dimensions you are going to need. Finally, you could allow for a lot of z-dimensions but augment your loss function in such a way that the encoder will be forced to use only what it needs. This is sometimes called sparsity-promoting, L1 or Lasso-type regularisation, and is also something that can help with overfitting. Take a look at `arXiv:1812.07238`. **Blurry reconstructions** This is a notorious problem with VAEs, and while there are a lot of theories on why this happens, my take is that the reason is two-fold. First, the loss function. With typical Cross-Entropy or MSE losses, we have this blunt bottom of the function where the minimum resides, allowing for a lot of similar "good solutions". See `arXiv:1511.05440` and especially `https://openreview.net/forum?id=rkglvsC9Ym` for an easy fix that seems to improve the quality/sharpness of the reconstructions. Second, the blurriness comes from the variational formulation itself. Handwavily explained, we are trying to model some very, very complex data (images in your case) with a "simple" isotropic Gaussian. Not surprisingly, the result will be the best the model can do given this constraint.
Recall that the loss function in VAEs is called the ELBO - Evidence Lower Bound - which basically tells us that we are trying to model a lower bound as best as we can, and not the "actual data" distribution. Typically, introduction of a KL-multiplier, $\beta$, which relaxes the influence of the Gaussian prior, will give you better reconstructions (see $\beta$-VAEs). Finally, if you are feeling especially adventurous, take a look at discrete VAEs (VQ-VAEs), which seem to have reconstructions on par with GANs. Sampling from them is not trivial, however. **Optimal sample size** As for optimal sample size, just choose an architecture that will not overfit. Decrease the number of neurons/layers, check your $\log(p)$ on the test set, introduce Dropout, all the usual stuff. Upvotes: 4 [selected_answer]
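The SVD suggestion from the first point takes only a few lines to try. A sketch that generates data with a known intrinsic dimensionality and counts the singular values above a small relative threshold (the sizes, noise level, and threshold are all illustrative judgement calls):

```python
import numpy as np

rng = np.random.default_rng(0)

# 1000 samples of 64-dim data that really live on a 5-dim linear subspace,
# plus a little observation noise.
latent = rng.normal(size=(1000, 5))
mixing = rng.normal(size=(5, 64))
X = latent @ mixing + 0.01 * rng.normal(size=(1000, 64))

# Centre the data, then look at the spectrum of singular values.
Xc = X - X.mean(axis=0)
s = np.linalg.svd(Xc, compute_uv=False)

# Count the non-trivial singular values relative to the largest one.
n_latent = int(np.sum(s > 0.01 * s[0]))
```

On this synthetic example the count recovers the true subspace dimension (5); on real images the spectrum decays more gradually, but a sharp knee in it is still a useful hint for sizing the bottleneck.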
2022/10/07
875
4,007
<issue_start>username_0: I was wondering, what is the performance benefit of feeding more data to a machine learning model like a neural network? I know one of the benefits is that it increases generalization - testing accuracy - but I was wondering: does it affect the training accuracy of the model?<issue_comment>username_1: Using more training samples decreases the chance of over-fitting. However, I think it may not always result in a decrease in the training error; maybe the opposite (look at the loss function definition). For instance, if you have only very few samples, a very deep network is capable of memorizing them all, which makes training error = 0. Upvotes: 1 <issue_comment>username_2: The question uses specific terms in a vague way, so let me set some very basic ground definitions first. It might sound trivial, but please bear with me, because it's easy to give reasonable answers that in reality make no sense. ***data***: any unprocessed fact, value, text, sound, or picture that is not being interpreted and analyzed. It can be real, i.e. gathered through real observations/experiments, or synthetic, i.e. artificially constructed based on some hand-crafted distribution. ***training/test data***: different subsets of data, the difference being that test data are not used to train or tune the model parameters and hyperparameters. **Important to note is that we don't always have knowledge about the real distribution of the data**, meaning that the distribution of our training data many times does not match the distribution of the test data. ***accuracy***: a specific metric used to evaluate only a specific subset of machine learning tasks, among which (mostly) classification. It is also a pretty unreliable metric in some circumstances, like multi-class classification or unbalanced datasets. ***generalization***: the ability of a model to perform well on unseen data.
In principle it has nothing to do with the metrics used to evaluate a model, even though metric scores are the only tool we have to assess it. You ask if using more data can increase training accuracy, and you already pointed out that using more data is meant to increase generalization, which you consider equal to testing accuracy. You're right when you say that adding more data serves the purpose of increasing generalization, but as I wrote in the definition of generalization, in principle we can't always expect a linear relation between a model's generalization and its metric scores, and in fact adding data might as well decrease model generalization in some situations. As a basic example, let's consider an imbalanced dataset with 90% of instances belonging to class A and 10% belonging to class B. A model will easily learn to overfit the data by predicting only class A, still reaching 90% training accuracy. In the test phase we might even observe a similar score if the distribution matches the 90/10 ratio of the training data. To prevent overfitting and increase generalization, we add instances of class B to make the dataset balanced, i.e. 50% instances of class A and 50% of class B. Suddenly the model works perfectly and reaches 100% training accuracy. We see, though, that in the test phase the accuracy drops to 40%. How come? If the test set indeed had the same 90/10 distribution among classes A and B, training the model on a 50/50 distribution taught the model to overpredict class B. So the model is now predicting class B, which is why we added more data, but it's now predicting class B too many times, leading again to poor generalization. Of course this is a toy scenario, but be aware that classic data augmentation and synthetic data always carry the risk of introducing biases into the training distribution.
Also, note that using a different metric like the F-score alongside accuracy would let you catch immediately whether the model is generalizing more, or whether it's only an artifact produced by the accuracy metric (like the initial 90% score when totally overfitting). Upvotes: 2
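The 90/10 toy scenario above is easy to reproduce: a degenerate "always predict A" model scores 90% accuracy while its F1 for the minority class is zero. A pure-Python sketch (no ML library needed):

```python
# 90 instances of class "A", 10 of class "B".
y_true = ["A"] * 90 + ["B"] * 10
y_pred = ["A"] * 100  # a model that has overfitted to the majority class

# Accuracy looks great despite the model being useless for class B.
accuracy = sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

# F1 for the minority class B: the model never predicts B,
# so true positives and false positives are both zero.
tp = sum(t == "B" and p == "B" for t, p in zip(y_true, y_pred))
fp = sum(t != "B" and p == "B" for t, p in zip(y_true, y_pred))
fn = sum(t == "B" and p != "B" for t, p in zip(y_true, y_pred))
f1_b = 2 * tp / (2 * tp + fp + fn) if (2 * tp + fp + fn) else 0.0
```

Here `accuracy` comes out at 0.9 while `f1_b` is 0.0, which is exactly the kind of discrepancy that flags an accuracy score as an artifact of class imbalance.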
2022/10/07
891
4,044
<issue_start>username_0: I am going to start learning the bandit problem and algorithms, especially how to bound the regret. I found the book ``Bandit Algorithms'' but it is not easy to follow. It is based on advanced stochastic processes and measure theory in some cases. I am wondering if there are any lecture notes or courses to start with.
2022/10/07
958
3,107
<issue_start>username_0: For neural machine translation, there's this model "Seq2Seq with attention", also known as the "[Bahdanau](https://arxiv.org/pdf/1409.0473.pdf) architecture" (a good image can be found on [this page](https://lena-voita.github.io/nlp_course/seq2seq_and_attention.html)), where instead of Seq2Seq's encoder LSTM passing a single hidden vector $\vec h[T]$ to the decoder LSTM, the encoder makes all of its hidden vectors $\vec h[1] \dots \vec h[T]$ available and the decoder computes weights $\alpha\_i[t]$ with each iteration -- by comparing the decoder's previous hidden state $\vec s[t-1]$ to each encoder hidden state $\vec h[i]$ -- to decide which of those hidden vectors are the most valuable. These are then added together to get a single "context vector" $\vec c[t] = \alpha\_1[t]\,\vec h[1] + \alpha\_2[t]\,\vec h[2]+\dots +\alpha\_T[t]\,\vec h[T]$, which supposedly functions as Seq2Seq's single hidden vector. But the latter can't be the case. Seq2Seq originally passed that vector to the decoder as initialisation for its hidden state. Evidently, you can only initialise it once. So then, *how is $\vec c[t]$ used by the decoder*? None of the sources I have read (see e.g. the original paper linked above, or [this article](https://towardsdatascience.com/sequence-2-sequence-model-with-attention-mechanism-9e9ca2a613a), or [this paper](https://www.researchgate.net/publication/353217198_Sequence_to_Point_Learning_Based_on_an_Attention_Neural_Network_for_Nonintrusive_Load_Decomposition), or [this otherwise excellent reader](https://lena-voita.github.io/nlp_course/seq2seq_and_attention.html)) disclose what happens. At most, they hide this mechanism behind "a function $f$" which is never explained. I must be overlooking something super obvious here, apparently.<issue_comment>username_1: If our similarity function is defined as $$e^{t,t'}= f(y^t,h^{t'})$$ for a $(t, t')$ pair, this would give you an attention weight/score for each hidden state $h^{t'}$. 
In this way you end up with $T$ *weighted hidden states*, which you combine using a sum. This weighted sum is the "context vector" $$ c\_t = \sum\_{i=1}^{T}a\_{t,i}h\_i$$ To answer your question, for each position you end up with a context vector $c\_t$ which fully replaces the hidden state $h^{t}$ at the exact same position of your computation graph. $$ h\_t \rightarrow c\_t,\ as\ \ LSTM\_{simple}\ \rightarrow\ LSTM\_{attentive} $$ Hope it helps! **References**: - Neural Machine Translation by Jointly Learning to Align and Translate, 2014 Upvotes: 0 <issue_comment>username_2: > > Evidently you can only initialize it ($\vec{c\_t}$) once > > > As I see it, $\vec{c\_t}$ depends on $\vec{h}[1] \ldots \vec{h}[T]$ **AND** $\vec{s\_{t-1}}$ (because the $\alpha\_i[t]$ depend on $\vec{s\_{t-1}}$, and so are different for every calculation of a new $\vec{s\_t}$). And so $\vec{c\_t}$ is different for every $\vec{s\_t}$. It can be used by the decoder by e.g. concatenating it with $\vec{s\_{t-1}}$, or with $\vec{y\_{t-1}}$, or by adding instead of concatenating, or ... Upvotes: 1
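The weighted-sum mechanics from both answers fit in a few lines of NumPy. In this sketch a plain dot product stands in for Bahdanau's learned additive alignment function $f$, and all dimensions are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
T, d = 6, 8                     # source length, hidden size (illustrative)
h = rng.normal(size=(T, d))     # encoder hidden states h[1..T]
s_prev = rng.normal(size=d)     # decoder's previous hidden state s_{t-1}

# Scores e_{t,i} = f(s_{t-1}, h_i); a dot product is used here in place
# of the learned additive alignment model from the Bahdanau paper.
e = h @ s_prev

# Softmax over the scores -> attention weights alpha_{t,i}.
alpha = np.exp(e - e.max())
alpha /= alpha.sum()

# Context vector: c_t = sum_i alpha_{t,i} * h_i.
c_t = alpha @ h

# One common way the decoder consumes c_t: concatenate it with another
# vector (e.g. s_{t-1} or the previous output embedding) at step t.
decoder_input = np.concatenate([c_t, s_prev])
```

Because `alpha` is recomputed from `s_prev` at every decoding step, a fresh `c_t` is produced each step, which is the point of the second answer.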
2022/10/08
1,085
4,077
<issue_start>username_0: GPT-3 has a prompt limit of about ~2048 "tokens", where a token corresponds to about 4 characters of text. If my understanding is correct, a deep neural network is no longer learning after it is trained and is used to produce an output, and, as such, this limitation comes from the number of input neurons. My question is: **what is stopping us from using the same algorithm we use for training, when using the network?** That would allow it to adjust its weights and, in a way, provide a form of long-term memory which could let it handle prompts with arbitrarily long limits. Is my line of thinking wrong?<issue_comment>username_1: In theory, there is nothing stopping you from updating the weights of a neural network whenever you like. You run an example through the network, calculate the difference between the network's output and the answer you expected, and run back propagation, exactly the same as you do when you initially train the network. Of course, usually networks are trained with large batches of data instead of single examples at a time, so if you wanted to do a weight update you should save up a bunch of data and pass through a batch (ideally the same batch size that you used during training, though there's nothing stopping you from passing in different batch sizes). Keep in mind this is all theoretical. In practice, adjusting the weights of a deployed network will probably be very difficult because the model's weights have been exported in a format optimized for inference. And it's better to have distinct releases with sets of weights that do not change, rather than continuously updating the same model. Either way, changing the weights continuously would not affect the "memory" of the network in any way. The length of sequences that sequence-to-sequence models like transformers or RNNs can accept is an entirely separate parameter.
Upvotes: 3 <issue_comment>username_2: With $175$ billion parameters, `GPT-3` is remarkably large and powerful, but it has several limitations and risks associated with its usage. [**The biggest issue is that GPT-3 can't continuously learn once trained**](https://www.techtarget.com/searchenterpriseai/definition/GPT-3#:%7E:text=While%20GPT-3%20is%20remarkably%20large%20and%20powerful%2C%20it,ongoing%20long-term%20memory%20that%20learns%20from%20each%20interaction.). It has been **pre-trained** (as the name ~ Generative Pre-trained Transformer suggests), which means that **it doesn't have an ongoing long-term memory that learns from each interaction**. In addition, `GPT-3` suffers from the same problem as all `Neural Networks`: the lack of ability to explain and interpret *why certain inputs result in specific outputs*. Another reason could be that the model has reached a point of diminishing returns, meaning that any additional training is unlikely to result in significant improvements. > > "A significant concern when building AI models like these is **[diminishing returns](https://medium.com/analytics-vidhya/a-simple-explanation-of-gpt-3-571aca61208c)** - that is, you cannot simply scale the model up forever. At some point, some factor(s) of the model will **plateau**, whether it's the **information generated**, the **dataset size**, the **training regime**, etc". > > > However, at the level of `GPT-2`, there was no indication that this plateau had been reached. Thus, the "bigger and better" tactic continued, bringing us `GPT-3`". So, it may also be possible that the model has simply reached a plateau in its learning and is unable to make any further progress.
--- **References:** * [OpenAI GPT-3: Everything You Need to Know](https://www.springboard.com/blog/data-science/machine-learning-gpt-3-open-ai/) * [an overview of GPT-3: AI of the future](https://medium.com/analytics-vidhya/a-simple-explanation-of-gpt-3-571aca61208c) * [GPT-3](https://www.techtarget.com/searchenterpriseai/definition/GPT-3#:%7E:text=While%20GPT-3%20is%20remarkably%20large%20and%20powerful%2C%20it,ongoing%20long-term%20memory%20that%20learns%20from%20each%20interaction.) Upvotes: 1
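The "nothing stops you in theory" point from the first answer can be made concrete on a toy model: one linear model whose weights keep receiving single-example gradient steps after "deployment". This is a NumPy sketch of online SGD, not anything resembling how GPT-3 is actually served (production LLM weights are exported frozen and inference-optimized); all values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0, 0.5])  # unknown target the stream follows

w = np.zeros(3)   # "deployed" model weights, still mutable here
lr = 0.1

def online_update(w, x, y):
    # One SGD step on the squared error for a single example --
    # the same rule used at training time, applied at serving time.
    err = x @ w - y
    return w - lr * err * x

# Stream of examples arriving after deployment; the model keeps learning.
for _ in range(500):
    x = rng.normal(size=3)
    w = online_update(w, x, x @ true_w)

final_error = float(np.max(np.abs(w - true_w)))
```

After enough streamed examples the weights converge to the target, illustrating that continual weight updates are mechanically possible; the obstacles the answers describe (frozen inference formats, release management, and the separate context-length limit) are practical, not mathematical.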
2022/10/08
464
1,662
<issue_start>username_0: I am trying to buy an HP laptop with an Intel Iris graphics card to run the Carla self-driving simulator. Can Carla run on Iris, or do I need to buy a laptop with an Nvidia GPU? Thank you.<issue_comment>username_1: While [Carla was developed by Intel](https://www.intel.com/content/www/us/en/artificial-intelligence/researches/carla-open-urban-driving-simulator.html) their [GitHub recommends](https://github.com/carla-simulator/carla) a powerful [Nvidia GPU](https://carla.readthedocs.io/en/latest/start_quickstart/). Comparing a mid-level [NVidia GPU vs an Intel IRIS](https://laptopmedia.com/comparisons/nvidia-geforce-mx250-vs-intel-iris-plus-g7-the-nvidia-gpu-offers-better-performance-at-lower-cost/), the NVidia comes out ahead. You *could* use an Intel IRIS, but the performance wouldn't be very satisfactory. An [Intel Arc A730M-powered laptop](https://www.tomshardware.com/news/intel-arc-a730m-powered-laptop-surfaces-with-dollar1200-price-tag) would be a minimum laptop configuration, with an Intel GPU, for running Carla; still not anywhere near as good as a top-of-the-line GPU (which is necessary for AI), but I understand that cost is also a consideration for many people. Upvotes: 2 <issue_comment>username_2: Deep learning tasks generally call for an NVIDIA GPU, and it is generally recommended to use one. The Carla website states that an Nvidia GPU is recommended. Currently, Intel and other GPUs have much more limited support in deep learning frameworks, which are largely built around NVIDIA hardware (CUDA and tensor cores). Hence, an Nvidia GPU is recommended.
2022/10/10
599
2,086
<issue_start>username_0: Total Dataset :- 100 (on case level) Training :- 76 cases (18000 slices) Validation :- 19 cases (4000 slices) Test :- 5 cases (2000 slices) I have a dataset that consists of approx. Eighteen thousand images, out of which approx. Fifteen thousand images are of the normal patient and around 3000 images of patients having some diseases. Now, for these 18000 images, I also have their segmentation mask. So, 15000 segmentations masks are empty, and 3000 have patches. Should I also feed my model (deep learning, i.e., unet with resnet34 backbone) empty masks along with patches (non empty mask)?
2022/10/10
446
1,609
<issue_start>username_0: I'm trying to get an accurate answer about the difference between A2C and Q-Learning. And when can we use each of them?
2022/10/11
631
2,128
<issue_start>username_0: I have the following time-series data with two value columns. (t: time, v1: time-series values 1, v2: time-series values 2)

```
 t | v1 | v2
---+----+----
 1 |  1 |  0
 2 |  2 |  2
 3 |  3 |  4
 4 |  3 |  6
 5 |  3 |  6
 6 |  4 |  6
 7 |  5 |  8
(7 rows)
```

I am trying to discover (or approximate) the correlation between $v1$ and $v2$, and use that approximation for next-step predictions. Please note, the most obvious correlation is $v2(t)=2 \cdot v1(t-1)$. My question is, what are the algorithms to employ for such approximations, and are there any open-source implementations of those algorithms for SQL/python/javascript?
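The relation described in the question can be recovered with a simple lag scan: compute the Pearson correlation between $v1$ delayed by each candidate lag and $v2$, pick the best lag, then fit the scale factor by least squares. A minimal Python/numpy sketch (the function name and the lag range are illustrative choices, not from any particular library):

```python
import numpy as np

# Series from the question (t = 1..7)
v1 = np.array([1, 2, 3, 3, 3, 4, 5], dtype=float)
v2 = np.array([0, 2, 4, 6, 6, 6, 8], dtype=float)

def lagged_corr(x, y, lag):
    """Pearson correlation between x delayed by `lag` steps and y."""
    if lag > 0:
        x, y = x[:-lag], y[lag:]
    return np.corrcoef(x, y)[0, 1]

# Scan candidate lags and keep the best one
scores = {lag: lagged_corr(v1, v2, lag) for lag in range(0, 4)}
best_lag = max(scores, key=scores.get)

# With the lag fixed, fit the scale factor by least squares
x = v1[:-best_lag] if best_lag > 0 else v1
y = v2[best_lag:]
scale = np.linalg.lstsq(x[:, None], y, rcond=None)[0][0]
```

On this exact example the scan selects lag 1 with scale 2.0, matching $v2(t)=2 \cdot v1(t-1)$; on noisy real data you would compare the correlation scores across lags rather than expect a perfect fit.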
2022/10/12
462
1,681
<issue_start>username_0: I am currently reading the ESRGAN paper and I noticed that they have used Relativistic GAN for training discriminator. So, is it because Relativistic GAN leads to better results than WGAN-GP?
2022/10/12
479
2,294
<issue_start>username_0: I am currently working my way into Genetic Algorithms (GA). I think I have understood the basic principles. I wonder if the time a GA takes to go through the iterations to determine the fittest individual is called learning time ?<issue_comment>username_1: It really depends on what the GA is being used for. The prototypical use case is function optimization. Suppose you have a Traveling Salesperson Problem to solve. You have N cities, and you need to find the shortest route that visits each city once. You can attack that problem with a GA, and it will run for some period of time trying to find successively better and better solutions until whatever stopping criteria is reached. At that point, you have your answer. There's no remaining computation that needs to be done that would equate to something like "running time" versus "training time". As well, it's slightly odd to describe this is as "training" since there's no generalization available. You haven't trained a model that can solve any other TSP instance using what was learned in solving the first one. You can run the same code on a new problem and evolve a solution for that, but that's closer to what we'd consider a whole new training pass than just executing a previously trained model. In short, optimization just isn't really like ML problems where it makes sense to have training and running times. You just have the computation time needed for the search algorithm to find a solution. However, many ML models require some sort of optimization as part of learning. Neural nets require fitting a set of weights to minimize a loss function. Support Vector Machines involve finding the optimal solution to a quadratic programming problem. We often have special-purpose techniques to solve those problems, like backpropagation for NNs, but you could also use a GA to solve the optimization problem, and then the GA time is equal to the training time for whatever that model was. 
Upvotes: 3 [selected_answer]<issue_comment>username_2: I believe genetic algorithms DO NOT learn, because they're search and optimization algorithms. They keep filtering for better solutions in each iteration, but they can easily "forget" what they had "found" earlier if mutation or crossover happens. Upvotes: 0
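To make the "search, not training" point above concrete, here is a toy elitist GA (all parameters arbitrary) minimizing $x^2$ over 6-bit strings. When the loop ends you have a good bit string, but nothing reusable: running it on a different function starts the whole search again.

```python
import random

def fitness(bits):
    """Decode 6 bits to an integer in [-32, 31]; maximize -x^2 (i.e. minimize x^2)."""
    x = int("".join(map(str, bits)), 2) - 32
    return -x * x

def evolve(pop_size=20, n_bits=6, gens=50, seed=1):
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]          # elitism: keep the best half
        children = []
        while len(children) < pop_size - len(parents):
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, n_bits)
            child = a[:cut] + b[cut:]           # single-point crossover
            if rng.random() < 0.1:              # occasional bit-flip mutation
                i = rng.randrange(n_bits)
                child[i] ^= 1
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

best = evolve()
```

The total time this loop takes is exactly the "computation time of the search" discussed in the accepted answer; there is no separate train/inference split.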
2022/10/17
475
2,237
<issue_start>username_0: In Word2Vec, the embeddings don't depend on the context. But in Transformers, the embeddings depend on the context. So how are the words' embeddings set at inference time?
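The mechanism behind the question's premise can be sketched in a few lines of numpy (random, untrained weights — purely illustrative): the per-token lookup vectors are fixed at inference time, but each attention layer mixes them, so the same token id ends up with a different vector in each context.

```python
import numpy as np

rng = np.random.default_rng(0)
vocab, d = 10, 4
E = rng.normal(size=(vocab, d))      # static lookup table, fixed after training

def self_attention(X, Wq, Wk, Wv):
    """One self-attention layer: every output row mixes all input rows."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(X.shape[1])
    w = np.exp(scores - scores.max(axis=-1, keepdims=True))
    w /= w.sum(axis=-1, keepdims=True)   # row-wise softmax
    return w @ V

Wq, Wk, Wv = (rng.normal(size=(d, d)) for _ in range(3))

# The same token id (3) embedded in two different contexts
ctx_a = E[[3, 1, 2]]
ctx_b = E[[3, 5, 7]]
out_a = self_attention(ctx_a, Wq, Wk, Wv)[0]   # contextual vector for token 3
out_b = self_attention(ctx_b, Wq, Wk, Wv)[0]
```

The lookup rows going in are identical, but the vectors coming out of the attention layer differ — that recomputation at inference time is what makes Transformer embeddings contextual, unlike Word2Vec's fixed table.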
2022/10/17
2,429
7,140
<issue_start>username_0: Assume in a convolutional layer's forward pass we have a $10\times10\times3$ image and five $3\times3\times3$ kernels, then $(10\times10\times3) \*( 3\times3\times3\times5)$ has the output of dimensions $8\times8\times5$. Therefore the gradients fed backwards to this convolutional layer also have the dimensions $8\times8\times5$. When calculating the derivative of loss w.r.t. kernels, the formula is the convolution $input \* \frac{dL}{dZ}$. But if the gradients have dimensions $8\times8\times5$, how is it possible to convolve it with $10\times10\times3$? The gradients have $5$ channels while the input only has $3$. Since during the forward pass the kernel window does element-wise multiplication and brings the channels down to $1$, do the gradients propagate back to each of the $3$ channels equally? Should the $8\times8\times5$ gradients be reshaped into $8\times8\times1\times5$ and broadcasted into $8\times8\times3\times5$ before convolving with the layer input?<issue_comment>username_1: Yes, you are right that you just zero-pad to get the right dimensions. The operation in color space is just a scalar product, so that you could get the backwards operation in that dimension also just per the formulas of the matrix-vector product. Note that the convolution operations forward and backwards are different. --- The forward pass of the convolution layer has two elementary steps: first the convolution operation and then the cutting out of the fully convolved center sequence. \begin{align} z&=c\*\_{rev}x\\ y&=P\_Nz \end{align} Let's just consider the one-dimensional case, higher dimensions proceed similarly. Then $P\_N$ cuts out the finite sequence with support $N$, $y\_n=z\_n$ if $n\in N$, else $y\_n=0$. Similarly for the projections to the support $K$ of the coefficient sequence $c$ and $M$ of the input sequence $x$. 
The convention for CNN is to not use the convolution as used for polynomial multiplication, where $[a\*b]\_n=\sum\_ka\_kb\_{n-k}$, but to have it look more like a scalar product, $$ [c\*\_{rev}x]\_n = \sum\_{m\in M} c\_{m-n}x\_m=\sum\_{k\in K}c\_kx\_{k+n}=[rev(c)\*x]\_n, $$ $rev$ for "reverse", where $[rev(c)]\_k=c\_{-k}$. Note that this version of the convolution product is not commutative. These all are linear operations, so in principle the gradient back propagation has the same structure as the usual matrix multiplication. The structural principle of gradient computation is that gradients $\bar u^T=\bar L\frac{\partial L}{\partial u}$ act as linear functionals on tangent vectors $\dot u$, and that the scalar value $\langle \bar u,\dot u\rangle$ of it is independent on where in the graph the pairing of gradient and tangent is carried out, $$ ⟨\bar y,\dot y⟩=⟨\bar z,\dot z⟩=⟨\bar x,\dot x⟩+⟨\bar c,\dot c⟩. $$ The tangent propagation is, using the product rule, \begin{align} \dot z&=\dot c\*\_{rev}x+c\*\_{rev}\dot x\\ \dot y&=P\dot z \end{align} Evaluating the defining relations gives \begin{align} ⟨\bar z,\dot z⟩&=⟨\bar y,\dot y⟩=⟨\bar y ,P\dot z⟩=⟨P\_N^T\bar y,\dot z⟩ \\ ⟨\bar x,\dot x⟩+⟨\bar c,\dot c⟩&=⟨\bar z,\dot z⟩=⟨\bar z,(\dot c\*\_{rev}x)⟩+⟨\bar z,(c\*\_{rev}\dot x)⟩ \end{align} The first operation $\bar z=P^T\bar y$ is just zero-padding of the sequence as $(0\_{k\_1},\, y^T,\, 0\_{k\_2})^T$. If one thinks of all sequences continued indefinitely by zero, then $P\_N^T$ does nothing, as the zeros for the padding are already present. 
To analyze the combination of scalar product and convolution, let's go into the details \begin{align} ⟨\bar z,(\dot c\*\_{rev}x)⟩ &=\sum\_{n\in N}\bar z\_n\sum\_{k\in K}\dot c\_kx\_{n+k} \\ &=\sum\_{k\in K}\dot c\_k\sum\_{n\in N}\bar z\_nx\_{k+n} \\ &=⟨(\bar z\*\_{rev}x),\dot c⟩ \\ \implies \bar c &= P\_K(\bar z\*\_{rev}x) \end{align} as $\dot c$ only has support $K$, so the gradient to it needs to have the same support. For the second term one gets similarly \begin{align} ⟨\bar z,(c\*\_{rev}\dot x)⟩ &=\sum\_{n\in N}\bar z\_n\sum\_{m\in M}c\_{m-n}\dot x\_m \\ &=\sum\_{m\in M}\dot x\_m\sum\_{n\in N}\bar z\_nc\_{m-n} \\ &=⟨(\bar z\*c),\dot x⟩ \\ \implies \bar x &= P\_M(\bar z\*c) \end{align} Upvotes: 0 <issue_comment>username_2: I figured it out a while ago and double-checked my results with TensorFlow, so I'm fairly confident in the implementation. Here is what I did using Eigen Tensor and the im2col method: If an `[N,10,10,C]` image is convolved with an `[F,3,3,C]` kernel (both in NHWC format, `F` = # kernels), using stride & dilation 1 and valid padding, the output is an `[N,8,8,F]` image. Therefore the gradients coming back to this layer are also `[N,8,8,F]`, which are element-wise multiplied with the activation gradients to get dL/dZ, also `[N,8,8,F]`. The kernel gradients formula is $input \* \frac{dL}{dZ}$, so this is an `[N,10,10,C]` image convolved with an `[N,8,8,F]` gradient. **Convert the gradients tensor into an im2col tensor** 1. Shuffle the gradient $\frac{dL}{dZ}$'s dimensions `[N,8,8,F]` into `[F,N,8,8]` 2. Reshape `[F,N,8,8]` into `[F,N,8,8,1]` and broadcast into `[F,N,8,8,C]` to match the input image's channels, then divide it by the number of times broadcasted, C 3. Reshape `[F,N,8,8,C]` into a 2D tensor as `[F,N*8*8*C]` Step 2 addresses my question on convolving 2 tensors with mismatching channel dimensions. **Then extract patches from the input tensor and convert it to an im2col tensor** 1.
Extract image patches from the input image, using the same amount of padding during forward pass (following the convention that if uneven padding, the extra goes to the bottom & left) and the same stride and dilation (but the latter 2 are swapped), with the gradients from the previous step 2 playing the role of kernel, resulting in the 5D tensor `[N,P,8,8,C]`, P = # patches = 9, the # times the kernels (gradients) slid across the input image 2. Shuffle the 5D tensor's dimensions from `[N,P,8,8,C]` to `[P,N,8,8,C]` 3. Reshape the image patches `[P,N,8,8,C]` into the 2D tensor `[P,N*8*8*C]` **Now that we have the gradients as `[F,N*8*8*C]` and `[P,N*8*8*C]`, we can multiply the two** * Do a contraction (matrix multiplication) along the first dimensions of both, the resulting tensor dimensions are `[F,N*8*8*C] x [P,N*8*8*C] = [F,P]`. * The tensor is reshaped from `[F,P]` to `[F,3,3]`, then to `[F,3,3,1]` * Broadcast C times on the last dimension, divide by batch size N, then we get the kernel gradients `[F,3,3,C]` which match the kernel used during the forward pass, and can be fed to your optimizer of choice during the weights update With Eigen Tensor, if you wrap all of this as a function with the return type auto to keep everything as an operation, you can lazily evaluate it into a 4D tensor using Eigen::ThreadPoolDevice with 2+ threads for improved speed (2-3 times faster than a single thread on my machine). Edit: Here's my implementation on GitHub ([1](https://github.com/rkuang9/FLARE/blob/5ca289c4ce1a221ef31ad5186149951518d70e24/flare/layers/conv2d.ipp#L305) [2](https://github.com/rkuang9/FLARE/blob/5ca289c4ce1a221ef31ad5186149951518d70e24/flare/layers/conv2d.ipp#L368)) Upvotes: 1
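The single-channel core of the kernel-gradient formula $input \* \frac{dL}{dZ}$ can be checked numerically in a few lines. A numpy sketch with deliberately naive loops — this is not the Eigen/im2col implementation above, just a correctness check of the formula under the loss $L = \sum Z \cdot \frac{dL}{dZ}$:

```python
import numpy as np

def conv2d_valid(x, k):
    """Naive 'valid' cross-correlation (the CNN convention for convolution)."""
    H, W = x.shape
    kh, kw = k.shape
    out = np.zeros((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * k)
    return out

rng = np.random.default_rng(0)
x = rng.normal(size=(10, 10))    # one input channel
k = rng.normal(size=(3, 3))      # one 3x3 kernel

z = conv2d_valid(x, k)           # forward pass: 8x8 output
dz = rng.normal(size=z.shape)    # upstream gradient dL/dZ

# Kernel gradient: the input 'valid'-convolved with the upstream gradient
dk = conv2d_valid(x, dz)         # 3x3, same shape as the kernel

# Finite-difference check of one kernel entry
eps = 1e-6
k2 = k.copy()
k2[1, 2] += eps
num = (np.sum(conv2d_valid(x, k2) * dz) - np.sum(z * dz)) / eps
```

In the multi-channel case, each (input channel, output channel) pair reduces to exactly this 2-D computation; the im2col bookkeeping described above is one way of batching all those pairs together.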
2022/10/17
330
1,582
<issue_start>username_0: I am confused. On some websites, face identification is just face detection, while face verification is finding the person's identity. They are both considered components of face recognition. But on other websites face verification is 1-to-1 matching of faces, while face identification is 1-to-k face matching.<issue_comment>username_1: There are two things you can do: 1. Find a face (not a specific face) in a frame (I think this is 1-k) 2. ~~Recognition~~ Verify a specific face in a frame (I think this is 1-1) Upvotes: 1 <issue_comment>username_2: **Face identification** is the process of **determining** whether a face in a given image belongs to a specific person. **Face verification** is the process of **verifying** that a given face corresponds to a specific person. **Face recognition** is the process of **identifying** a person from a given image. One key difference between face identification and face recognition is that face identification is typically used to determine whether someone is who they claim to be, while face recognition is used to identify people regardless of who they are. Another difference is that face identification is often used as a security measure, while face recognition is more often used for things like tagging photos. Another difference between face identification and face recognition is that face identification systems often require a person to be looking directly at the camera, while face recognition systems can often work with pictures of people taken from different angles. Upvotes: 2
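The 1-to-1 vs 1-to-N distinction from the question is easy to see in code. A toy sketch over made-up 128-d face embeddings (the names, threshold, and distance metric are all illustrative, not from any real system):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy gallery of enrolled identities -> 128-d face embeddings (made up)
gallery = {name: rng.normal(size=128) for name in ["alice", "bob", "carol"]}

def verify(probe, claimed_name, threshold=1.0):
    """1-to-1 (verification): is the probe close to the claimed identity's template?"""
    return np.linalg.norm(probe - gallery[claimed_name]) < threshold

def identify(probe):
    """1-to-N (identification): which enrolled identity is the probe closest to?"""
    return min(gallery, key=lambda name: np.linalg.norm(probe - gallery[name]))

# A probe embedding of "bob", with a little noise
probe = gallery["bob"] + 0.01 * rng.normal(size=128)
```

Verification answers "is this person who they claim to be?" with a single distance test; identification searches the whole gallery, which is why it is the more expensive 1-to-N operation.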
2022/10/18
781
2,617
<issue_start>username_0: I watched [this lecture](https://vtrs.hep.com.cn/#/detail?id=b5b598ab-1036-4d30-9a6b-761b1f383d70) by [professor <NAME>](http://www.tjudb.cn/dbgroup/index.php/Xin_Wang), and a picture in the beginning interested me: [![enter image description here](https://i.stack.imgur.com/Jn543.png)](https://i.stack.imgur.com/Jn543.png) The confusing thing is that this lecture was delivered on Sep. 19, 2022, but it seems from the diagram above that over the last ten years knowledge graphs have made no progress. The diagram is still what I was shown when I was a student in 2015 or 2016. Since I cannot get a hold of this professor, I googled [knowledge graph progress pictures](https://www.google.com/search?q=progress+of+knowledge+graph&tbm=isch) but found that not a single one mentioned any big events after 2012. What happened over the last decade to knowledge graphs? Any big events?<issue_comment>username_1: This 2021 [survey of Knowledge Graphs](https://arxiv.org/abs/2002.00388) depicts a similar chart (appendix A), with the same 2012 final milestone. That said, a major advancement is the use of deep learning to encode the knowledge graph, to extract relations and autonomously complete graphs. So we may soon expect a breakthrough in neural symbolic approaches applied to challenges such as Question Answering, Recommender Systems and Language Representation Learning. Upvotes: 1 <issue_comment>username_2: Knowledge harvesting is the key progress. > > This formerly elusive vision has become practically viable today, made possible by > major advances in knowledge harvesting. This comprises methods for turning noisy Internet content into crisp knowledge structures on entities and relations.
Knowledge harvesting methods have enabled the automatic construction of knowledge bases (KB): collections of machine-readable facts about the real world. Today, publicly available KBs provide millions of entities (such as people, organizations, locations and creative works like books, music etc.) and billions of statements about them (such as who founded which company when and where, or which singer performed which song). Proprietary KBs deployed at major companies comprise knowledge at an even larger scale, with one or two orders of magnitude more entities. > > > References: 1. [Machine knowledge: Creation and curation of comprehensive knowledge bases](https://arxiv.org/abs/2009.11564) Upvotes: 0
2022/10/20
588
2,453
<issue_start>username_0: In the [website](https://machinelearningmastery.com/use-word-embedding-layers-deep-learning-keras/) the following explanation is provided about the Embedding layer: > > The Embedding layer is initialized with random weights and will learn > an embedding for all of the words in the training dataset. > > > It is a flexible layer that can be used in a variety of ways, such as: > > > It can be used alone to learn a word embedding that can be saved and > used in another model later. It can be used as part of a deep learning > model where the embedding is learned along with the model itself. It > can be used to load a pre-trained word embedding model, a type of > transfer learning. > > > Aren't embeddings model-specific? I mean, to learn a representation of something, we need the model the representation was learned with. So how can embeddings learned in one model be used in another?
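One way to see why the learned weights can move between models: the Embedding layer is nothing but a vocab × dim lookup matrix plus an agreed token-to-index mapping, with no dependence on the rest of the network that trained it. A minimal numpy sketch (shapes and names are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
vocab, d = 1000, 50

# An "embedding layer" is just a lookup table: one d-dim row per vocab id.
E = rng.normal(size=(vocab, d))    # stands in for weights learned in model A

def embed(token_ids, table):
    return table[token_ids]        # pure row lookup, no other model state

# Model B can reuse the very same table, as long as the vocab mapping matches:
ids = np.array([5, 42, 7])
x = embed(ids, E)
```

The caveat is exactly the one the question hints at: both models must share the same tokenizer/vocabulary mapping, otherwise row $i$ means a different word in each model.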
2022/10/21
1,019
3,799
<issue_start>username_0: I have done <NAME>'s ML and DL courses, and some projects, and implemented some important ML algorithms from scratch. Now I am reading the deep learning book. I want to start from the beginning (in terms of reading research papers), i.e., deep feedforward networks, regularization techniques, then maybe conv nets and others, etc., and I would like some tips on how to tackle the difficulty in understanding them. Thank You.<issue_comment>username_1: The good news is that there are many freely available educational resources online and at your local library. Here are some that I used to get me started: 1. Kaggle: [Intro to Deep Learning](https://www.kaggle.com/learn/intro-to-deep-learning) 2. Kaggle: [Computer Vision](https://www.kaggle.com/learn/computer-vision) 3. [Pattern Recognition & Machine Learning](https://www.microsoft.com/en-us/research/uploads/prod/2006/01/Bishop-Pattern-Recognition-and-Machine-Learning-2006.pdf) (Chapter 5) 4. [Machine Learning](http://www.cs.cmu.edu/%7Etom/files/MachineLearningTomMitchell.pdf) (Chapter 4) Good luck and happy learning! P.S. I know these are not research papers, but I would encourage you to start with these anyway. Upvotes: 0 <issue_comment>username_2: Not sure what to recommend, since you say "from the beginning" in the text but "intermediate" in the title... Anyway, for the "then maybe conv nets" part, there is a tutorial from 2021 that relates convolutional networks with the matched filter, a well-grounded technique in signal processing. I find this a great idea; depending on your background, it may be interesting for you as well. <https://arxiv.org/abs/2108.11663> Is this the kind of paper you are looking for? Upvotes: 0 <issue_comment>username_3: At your stage, I don't think jumping straight into reading research papers would be efficient.
Generally, reading textbooks/review-articles, or simply watch a couple introductory youtube courses would do a better job at getting you up to speed with the background knowledge. Of course, you can always find a project that interests you and try to incorporate some elements of ML into it, which allows you to naturally learn ML at the same time. Some standard introductory textbooks/courses are: * The [deep learning textbook](https://www.deeplearningbook.org/), more theoretical driven * [<NAME>'s courses](https://www.youtube.com/playlist?list=PLLssT5z_DsK-h9vYZkQkYNWcItqhlRJLN) on youtube, more application driven which should cover the topics you mentioned. If you want to focus on a specific topic (e.g. ConvNets, transformers, recurrent networks, etc.), it's generally helpful to find a recent review article on this topic and read through it. This is just to understand the current state of the field, and you can then read specific papers that interests you with this contextual knowledge in mind. Note these fields are moving so fast that certain seminal papers are no longer hugely relevant (e.g. many network architectures and training methods proposed in the classic [AlexNet paper](https://proceedings.neurips.cc/paper/2012/file/c399862d3b9d6b76c8436e924a68c45b-Paper.pdf) are outdated.) Upvotes: 1 <issue_comment>username_4: There is no "beginning" with research papers. Papers are published as they are ready, in no particular order with respect to complexity or topic. I think you just have to jump in. Pick papers that match your interests. Look things up as you read to understand. You might need to brush up on Math. Here is a [list of resources](https://analyticsindiamag.com/8-open-access-resources-for-ai-ml-research-papers/) where you can find research papers to start. I am sure you can find others as you learn what interests you. Upvotes: 1
2022/10/22
871
3,814
<issue_start>username_0: I was taught that, usually, a dataset has to be divided into three parts: 1. Training set - for learning purposes 2. Validation set - for picking the model which minimizes the loss on this set 3. Test set - for testing the performance of the model picked, using metrics such as accuracy score How is [MNIST](http://yann.lecun.com/exdb/mnist/) only providing the training and the test sets? What about the validation?<issue_comment>username_1: The **test set** should never be seen during training and should be evaluated only once, at the end. The **validation set** is used to help you select hyperparameters; it would be cheating to tune your model on the test set because you would be giving your model information about the test set. This would give your model an unfair advantage and skew the results: if you essentially use the test set as training data, the model overfits to your test set and will not generalize well to new, unseen data. For this reason, the validation set *must* be a portion of the training data which is selected out and evaluated on during training so that you can do this. It's not necessary if you're not doing model selection. Upvotes: 3 [selected_answer]<issue_comment>username_2: There are a few reasons why MNIST only provides training and test sets, and not a validation set: * Since the MNIST dataset is intended to be a simple and straightforward benchmark for machine learning models, it is important to have a standard test set that can be used to compare different models. * Secondly, the MNIST dataset is well-known and well-studied. * Thirdly, splitting it into three sets (training, validation, and test) would **reduce** the size of each set too much, and there is less need to have one. The validation set is used to assess the performance of the model on unseen data. In this case, the validation set is not needed because the performance of the model can be assessed on the test set.
Another common method is to use [cross-validation](https://www.geeksforgeeks.org/cross-validation-machine-learning/), in which the data is split into multiple sets and each set is used to train and test the model. This allows for a more accurate assessment of the model's performance. Upvotes: 0 <issue_comment>username_3: **How is MNIST only providing the training and test sets? What about the validation?** In order to perform cross-study comparison of model performance, it makes a lot of sense to have a single test set for benchmarking. This way, different investigators can compare their respective models in an "apples to apples" manner. The test set is tested only once, as already mentioned, and is in a practical sense the final arbiter of model performance. As an aside, MNIST not only provides an important source of data for testing individual hypotheses, but it also serves to provide "standards". After all, NIST stands for National Institute of Standards and Technology. **The reason why a validation set is not parsed out (like the way the test set is).** Different investigators may want to perform different types of validation (hold-out, n-fold cross-validation, leave-one-out cross-validation). Thus, MNIST does not limit investigators from doing this by separating out a validation set. MNIST leaves it up to the investigator to parse their "training set" into, yes, a training set and a validation set the way they prefer. Unlike the test set, there is really no need to standardize the validation. In contrast, the test set does need to serve as a standard for benchmarking models. **NOTE:** To be clear, this does not mean that you should not perform a validation. You absolutely must. The only thing is that you must "carve out" the validation set in a manner of your choosing. Upvotes: 0
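In practice, then, you carve the validation split out of MNIST's 60k training images yourself. A minimal numpy sketch (the 20% fraction is an arbitrary choice, and zero arrays stand in for the real images):

```python
import numpy as np

def train_val_split(x, y, val_frac=0.2, seed=0):
    """Carve a validation set out of the training data at random."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(x))
    n_val = int(len(x) * val_frac)
    val, train = idx[:n_val], idx[n_val:]
    return x[train], y[train], x[val], y[val]

# Stand-ins for MNIST's 60k training images; the 10k test set stays untouched.
x = np.zeros((60_000, 28, 28), dtype=np.uint8)
y = np.zeros(60_000, dtype=np.int64)
x_tr, y_tr, x_val, y_val = train_val_split(x, y)
```

Shuffling before the split matters if the data has any ordering; libraries such as scikit-learn offer the same operation as `train_test_split`.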
2022/10/24
1,285
4,954
<issue_start>username_0: In PPO with clipped surrogate objective (see the paper [here](https://arxiv.org/pdf/1707.06347.pdf)), we have the following objective: [![PPO objective](https://i.stack.imgur.com/nDPGt.png)](https://i.stack.imgur.com/nDPGt.png) The shape of the function is shown in the image below, and depends on whether the advantage is positive or negative. [![objective plot over r](https://i.stack.imgur.com/HtBV0.png)](https://i.stack.imgur.com/HtBV0.png) The min() operator makes $L^{CLIP}(\theta)$ a lower bound to the original surrogate objective. But why do we want this lower bound? In other words, why clip only at $1+\epsilon$ when $A > 0$ ? Isn't it important to keep the new policy in the neighborhood of the old policy, so that even $r\_t(\theta) < 1-\epsilon$ should be undesired?<issue_comment>username_1: Yes, the idea of PPO is to keep the updates small so that the new policy is not too far from the old policy. If you look at the left figure, this is the case, as the absolute magnitude of $L^{CLIP}$ is capped. The only region where this absolute magnitude is uncapped is the right-hand portion of the right figure. In this region, $r > 1$. Since $r$ = new prob / old prob, it means the previous update has increased the probability of an action that resulted in a worse-than-expected outcome (hence the negative advantage). Therefore, we want to undo that update, and leaving the ratio uncapped achieves that goal better. Upvotes: 1 <issue_comment>username_2: A positive advantage increases the probability of taking that action, hence $A\_t > 0$ means that the gradient update makes $r\_t(\theta)$ larger. We don't want to take too big of a step, hence we only let $r\_t(\theta)$ increase to $1 + \epsilon$ before we start ignoring that advantage.
If $A\_t > 0$ but $r\_t(\theta) < 1 - \epsilon$ it must mean that there are many other gradient samples in the training batch that are pushing down $r\_t(\theta)$, because if we only had $A\_t$ it would increase $r\_t(\theta)$. In this case you can see $A\_t$ is actually pushing in the opposite direction of the gradient update. If $A\_t > 0$ and $r\_t(\theta) > 1 + \epsilon$ then $A\_t$ is going in the same direction as the gradient update. Upvotes: 1 <issue_comment>username_3: There are six different situations: [![enter image description here](https://i.stack.imgur.com/BVJfI.png)](https://i.stack.imgur.com/BVJfI.png) Case 1 and 2: the ratio is within the range -------------------------------------------- In situations 1 and 2, **the clipping does not apply since the ratio is within the range $[1 - \epsilon, 1 + \epsilon]$** In situation 1, we have a positive advantage: the **action is better than the average** of all the actions in that state. Therefore, we should encourage our current policy to increase the probability of taking that action in that state. Since the ratio is within the range, **we can increase our policy’s probability of taking that action at that state**. In situation 2, we have a negative advantage: the action is worse than the average of all actions at that state. Therefore, we should discourage our current policy from taking that action in that state. Since the ratio is within the range, **we can decrease the probability** that our policy takes that action at that state. Case 3 and 4: the ratio is below the range ------------------------------------------ If the probability ratio is lower than $1 - \epsilon$, the probability of taking that action at that state is much lower than with the old policy. If, like in situation 3, the advantage estimate is positive ($A>0$), then **you want to increase the probability of taking that action at that state**.
But if, like in situation 4, the advantage estimate is negative, **we don’t want to decrease further** the probability of taking that action at that state. Therefore, the gradient is 0 (since we’re on a flat line), so we don’t update our weights. Case 5 and 6: the ratio is above the range ------------------------------------------ If the probability ratio is higher than $1 + \epsilon$, the probability of taking that action at that state in the current policy is **much higher than in the former policy**. If, like in situation 5, the advantage is positive, **we don’t want to get too greedy**. We already have a higher probability of taking that action at that state than the former policy. Therefore, the gradient is 0 (since we’re on a flat line), so we don’t update our weights. If, like in situation 6, the advantage is negative, we want to decrease the probability of taking that action at that state. So if we recap, **we only update the policy with the unclipped objective part**. When the minimum is the clipped objective part, we don’t update our policy weights since the gradient will equal 0. Source: [DEEP RL Course - Visualize the Clipped Surrogate Objective Function](https://huggingface.co/deep-rl-course/unit8/visualize) Upvotes: 0
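A tiny NumPy sketch of the clipped objective from the paper may make the six cases concrete. This computes the per-sample surrogate $\min(r A, \mathrm{clip}(r, 1-\epsilon, 1+\epsilon) A)$, ignoring the expectation and the gradient step:

```python
import numpy as np

def l_clip(ratio, advantage, eps=0.2):
    """Per-sample PPO clipped surrogate: min(r*A, clip(r, 1-eps, 1+eps)*A)."""
    unclipped = ratio * advantage
    clipped = np.clip(ratio, 1.0 - eps, 1.0 + eps) * advantage
    return np.minimum(unclipped, clipped)

# A > 0 and r > 1 + eps: the objective is capped at (1 + eps) * A,
# so the gradient w.r.t. the policy is zero (flat line).
print(l_clip(1.5, 1.0))   # 1.2
# A < 0 and r < 1 - eps: capped at (1 - eps) * A, gradient zero.
print(l_clip(0.5, -1.0))  # -0.8
# A < 0 and r > 1 + eps: the unclipped (more negative) term wins,
# so the gradient is nonzero and pushes the ratio back down.
print(l_clip(1.5, -1.0))  # -1.5
```

The last case is the "undo a bad update" region discussed in the first answer: the minimum deliberately leaves it uncapped.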
2022/10/25
954
3,444
<issue_start>username_0: I’m working on a classification problem (500 classes). My NN has 3 fully connected layers, followed by an LSTM layer. I use `nn.CrossEntropyLoss()` as my loss function. This is my network’s configuration

```
Model(
  (fc): Sequential(
    (0): Linear(in_features=800, out_features=1024, bias=True)
    (1): ReLU()
    (2): Linear(in_features=1024, out_features=512, bias=True)
    (3): ReLU()
    (4): Linear(in_features=512, out_features=256, bias=True)
    (5): ReLU()
  )
  (lstm): LSTM(256, 1024, bidirectional=True)
  (hidden2tag): Sequential(
    (0): Linear(in_features=2048, out_features=1024, bias=True)
    (1): ReLU()
    (2): Linear(in_features=1024, out_features=500, bias=True)
  )
)
```

This is what my loss looks like. It increases after reaching a particular value no matter what setting I use. I’ve used k-fold cross-validation but the loss/accuracy across all folds stays the same, so I’m assuming that there’s no issue in the distribution of the train/val splits. From left: iteration-wise loss (train loss recorded after every batch), train loss (recorded after every training epoch), validation loss (recorded after every validation epoch) [![Loss wrt. iteration](https://i.stack.imgur.com/DM3gZ.png)](https://i.stack.imgur.com/DM3gZ.png) I’ve tried using step-wise LR scheduling (tried OneCycleLR and Multiplicative LR as well) but the loss still doesn’t improve [![lr change](https://i.stack.imgur.com/MLnNF.png)](https://i.stack.imgur.com/MLnNF.png) I’ve tried using dropout but that gives higher validation loss values instead. What can be done here in order to further decrease the loss?<issue_comment>username_1: The training loss is continually decreasing, but the validation loss shows a local minimum at epoch ~100, which indicates that overfitting is starting to happen.
This means that your model has sufficient capacity to learn (i.e., it is not underfitting) and that increasing the model's capacity (i.e., increasing the number of layers or the number of neurons per layer) is unlikely to help. Instead, you should work towards reducing overfitting. The best way to do this is by collecting more real data. You can also consider data augmentation or simulating synthetic data. Other approaches such as dropout and L1 and/or L2 regularization may also be helpful. This article ["An Overview of Regularization Techniques in Deep Learning (with Python code)"](https://www.analyticsvidhya.com/blog/2018/04/fundamentals-deep-learning-regularization-techniques/) is a nice overview of these methods. Even though you have used dropout, you can tune the dropout rate and include dropout at one or more layers in your model. Finally, ensemble methods have shown success in improving accuracy in many (but not all) settings, compared to single models. Upvotes: 3 [selected_answer]<issue_comment>username_2: I fully agree with @username_1's comment but want to expand on the capacity of the model a bit. The easiest approach to reduce overfitting is to reduce the capacity of the model, i.e. the size of its layers. Therefore, in your first sequential block, instead of three linear layers with [1024, 512, 256] nodes, it would probably be better to have just two with [512, 256] nodes. The same applies to the second sequential block. Just using one linear layer probably reduces the capacity of the model to overfit to the data. I hope those changes work out for you. Upvotes: 1
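As a hedged illustration of the dropout suggestion, here is what (inverted) dropout does, sketched in plain NumPy rather than the asker's PyTorch model, with a made-up rate of 0.5: during training a random mask zeroes a fraction of activations and rescales the rest, and at evaluation time the layer is a no-op.

```python
import numpy as np

def dropout(x, rate=0.5, training=True, rng=None):
    """Inverted dropout: zero a fraction of activations and rescale the rest."""
    if not training or rate == 0.0:
        return x  # at evaluation time dropout is a no-op
    rng = rng or np.random.default_rng()
    keep = (rng.random(x.shape) >= rate)
    return x * keep / (1.0 - rate)

x = np.ones((4, 8))
train_out = dropout(x, rate=0.5, training=True, rng=np.random.default_rng(0))
eval_out = dropout(x, rate=0.5, training=False)
print(eval_out is x)  # True: unchanged at eval time
```

In the asker's PyTorch model this corresponds to inserting `nn.Dropout(p)` layers between the `Linear`/`ReLU` blocks and tuning `p`; `model.train()`/`model.eval()` toggle the training flag automatically.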
2022/10/25
945
4,146
<issue_start>username_0: From what I understand, Transformer Encoders and Decoders use a fixed number of tokens as input, e.g., 512 tokens. In NLP for instance, different text sentences have a different number of tokens, and the way to deal with that is to truncate the longer sentences and pad the shorter ones. As an additional input, a padding mask must be supplied to the Transformer so that its attention is only focused on the relevant tokens. **My question is: Is there something in the architecture that forces the transformer to have a fixed number of tokens as input? (and not adapt dynamically to the actual input length like RNNs, for instance?)** For comparison, I think of fully-convolutional networks or RNNs with variable input lengths. They are agnostic to the actual input dimension because they perform pointwise operations on the different patches. When applying an RNN model to an n-token sentence, you compute the same block n times, and when computing it on a k-token sentence you apply it k times. So this architecture does not require padding or truncating (at least not in theory; I do not refer here to implementation considerations). In transformers: embedding the tokens, computing attention, and the feed-forward step can be performed on different lengths of sequences since the weights are applied per token, right? So why do we still truncate and pad to a fixed size? Or perhaps it is feasible but not implemented in practice for other reasons? I must be missing something... **I'll ask it differently to make my question more clear:** Say I have an already-trained transformer model, trained on 512 fixed-sized inputs (truncated and padded). At inference time, if I would like to process a single, shorter sentence, do I have to pad it or not? Thanks<issue_comment>username_1: **Edits to reflect edits in question:** If you train your transformer on `length = n` then yes, you need to pad inputs to `length = n`.
This is not a requirement in the mathematical architecture, it's a requirement in the implementation. --- There seem to be two separate ideas in your question: 1. Why do transformers have a fixed input length? 2. Why do transformers have input length limitations? I am not sure which one you are asking, so I will answer both. --- **1) Saying transformers have a fixed input length is misleading.** Transformers accept variable length inputs just like RNNs. You can think of padding/truncating as an extra embedding step if you want. We don't need padding in RNNs because they process inputs (sentences) one element (token/word) at a time. Transformers process inputs all at once. If you are passing in several sentences, you have to do something to standardize sentence length, hence padding and truncating. --- **2) Transformers often limit input length to 512 or 1024 because of performance issues.** If you are wondering why we don't let our transformers accept inputs as long as possible, the answer is that there are computational and performance limitations: the self-attention computation has quadratic complexity in the input length. This is where the `max_length` parameter of a transformer comes in. If your input has length 1,000 but `max_length` is 512, the transformer will throw an error because it can only handle inputs up to 512 tokens. Upvotes: 2 <issue_comment>username_2: To add something to pip.pip's answer (thumbs up because it is totally on point), consider that transformers can't be fully convolutional, since, as the name suggests, a fully convolutional model performs only convolutions, while transformers include dense layers, which expect a fixed input dimension. Although it is possible to overcome the limit of fixed dimensionality imposed by the dense layers, for example by using pyramid pooling, that would only add complexity to the training regime and there's no guarantee that the performance will increase.
Lastly, from a linguistic perspective, 512 and 1024 tokens are already quite a lot for learning most long dependencies (which, let's recall, was the main reason transformers were introduced in place of RNNs). So the game of making transformers input-size independent is not really worth the effort. Upvotes: 1
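To make the padding-and-mask mechanics concrete, here is a small sketch using plain Python lists of token ids. The id 0 standing in for the padding token is an assumption for illustration, not a property of any particular tokenizer:

```python
PAD_ID = 0  # assumed padding token id

def pad_batch(sequences, max_length=None):
    """Pad variable-length token sequences to a common length and build the
    attention mask (1 = real token, 0 = padding) that lets the transformer
    ignore the pads."""
    max_length = max_length or max(len(s) for s in sequences)
    padded, mask = [], []
    for seq in sequences:
        seq = seq[:max_length]                    # truncate long sequences
        pad = [PAD_ID] * (max_length - len(seq))  # pad short ones
        padded.append(seq + pad)
        mask.append([1] * len(seq) + [0] * len(pad))
    return padded, mask

batch = [[5, 7, 9], [3, 4, 8, 8, 2]]
tokens, mask = pad_batch(batch)
print(tokens)  # [[5, 7, 9, 0, 0], [3, 4, 8, 8, 2]]
print(mask)    # [[1, 1, 1, 0, 0], [1, 1, 1, 1, 1]]
```

For the single-short-sentence inference case in the question, a batch of one can simply be padded the same way; implementations that accept variable lengths can instead be given `max_length=len(sentence)`.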
2022/10/29
513
2,025
<issue_start>username_0: My input to the model is a set of features that I encode in the form of five vectors of the same size consisting only of 0 and 1. I now want to combine them into one vector in such a way that their order does not matter. My first idea was to run each vector through the same activated linear layer and sum the results. Is there maybe a better way?<issue_comment>username_1: **Is there maybe a better way [to combine them into one vector in such a way that their order does not matter]?** There are many ways to perform feature fusion (see [ref](https://www.nature.com/articles/s41746-020-00341-z)). The way you described is one way of doing it. However, for most simple cases where feature fusion is for the type of data you are describing (i.e., tabular data), it is more common to `concatenate` the features at the input level. The order of concatenation does not matter to the neural network. Upvotes: 1 <issue_comment>username_2: > > My first idea was to run each vector through the same activated linear layer and sum the results. > > > That was my first thought as well, but it doesn't actually need to be a single layer; it can be a more complex neural network. Basically it will learn an embedding $R^n \rightarrow R^m$, and you can use the sum (or average) of such embeddings to represent the set of your features. I wouldn't use any activation in the final layer, or maybe at most `tanh` to preserve "symmetry" between negative and positive values. Or who knows, maybe `ReLU` would work in your problem as well. I think the average of the embeddings would be better than a plain sum, in case the number of "active" features may change and absent features have an embedding of all zeros. Or you normalize the sum so that the L1 or L2 norm is 1. Come to think of it, the $R^n \rightarrow R^m$ map can be implemented as a stack of 1D convolutions with "kernel size" and "stride" of one (Keras terms).
Or it can be a "stand-alone" model whose outputs you sum together. Upvotes: 0
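A small NumPy sketch of the shared-embedding-then-average idea discussed above (the weight matrix is random and the sizes 8 and 16 are made up for illustration): because the embeddings are averaged, the result is invariant to the order of the five vectors.

```python
import numpy as np

rng = np.random.default_rng(42)
W = rng.standard_normal((8, 16))  # shared embedding R^8 -> R^16 (illustrative sizes)

def fuse(vectors):
    """Embed each 0/1 vector with the same weights, then average the embeddings."""
    embedded = [np.tanh(v @ W) for v in vectors]
    return np.mean(embedded, axis=0)

vectors = [rng.integers(0, 2, size=8) for _ in range(5)]
a = fuse(vectors)
b = fuse(vectors[::-1])   # same vectors, reversed order
print(np.allclose(a, b))  # True: order does not matter
```

In a trained model `W` (or a deeper network in its place) would be learned; the permutation invariance comes purely from the symmetric pooling (sum or mean), not from the embedding itself.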
2022/11/01
3,248
13,182
<issue_start>username_0: My understanding is that a value function in reinforcement learning returns a value that represents how "good" it is to be in a given state. How does a network, such as the network in [this example](https://github.com/ray-project/ray/blob/master/rllib/models/torch/fcnet.py#L126), represent how good it is to be in the given state? Is the neural network value function the best method of achieving such a "goodness" value in an environment with many possible states and indefinite length? Some other examples of neural networks being used as part of the value function of a reinforcement learning model: * <https://github.com/facebookresearch/ScaDiver/blob/main/rllib_model_custom_torch.py#L155> * <https://github.com/ray-project/ray/blob/master/rllib/models/torch/visionnet.py> * <https://github.com/ray-project/ray/blob/master/rllib/models/torch/recurrent_net.py><issue_comment>username_1: The *goodness* of the state in the [network](https://github.com/ray-project/ray/blob/master/rllib/models/torch/fcnet.py#L126) is determined by the **`activation_fn`**. The primary role of the activation function is to transform the summed weighted input from the node into an output value to be fed to the next hidden layer or as output. That adds non-linearity to the neural network; without it, the model could only learn linear mappings. It allows the model to create complex mappings between the network’s inputs and outputs. The activation function used in hidden layers is typically chosen based on the type of neural network architecture. You need to match the activation function of your output layer to the type of prediction problem that you are solving, specifically, the type of predicted variable. Also see the update, halfway through this answer. That can be set as follows: > > [Activation](https://github.com/ray-project/ray/blob/master/rllib/models/catalog.py#L82) function descriptor.
> ============================================================================================================= > > > Supported values are: "tanh", "relu", "swish" (or "silu"), "linear" (or None). > ============================================================================== > > > "fcnet\_activation": "tanh", > > > With a linear transfer function the value would look like this: [![enter image description here](https://i.stack.imgur.com/VcBz0.png)](https://i.stack.imgur.com/VcBz0.png) Fig: Linear Activation Function - Equation: f(x) = x A linear function cannot capture the complexity of the usual data that is fed to neural networks. All layers of the neural network will collapse into one if a linear activation function is used. No matter the number of layers in the neural network, the last layer will still be a linear function of the first layer. So, essentially, a linear activation function turns the neural network into just one layer. The nonlinear activation functions are the most used activation functions. 1. Sigmoid or Logistic Activation Function The sigmoid function curve looks like an S-shape. [![enter image description here](https://i.stack.imgur.com/jETqK.png)](https://i.stack.imgur.com/jETqK.png) Fig: Sigmoid Function The main reason we use the sigmoid function is that its output lies between 0 and 1. It's used for models where we have to predict a probability as the output. Since probabilities exist only in the range of 0 to 1, the sigmoid is a useful function, and it is differentiable. That means we can find the slope of the sigmoid curve at any point. The function is monotonic but the function’s derivative is not. The logistic sigmoid function can cause a neural network to get stuck during training.
[![enter image description here](https://i.stack.imgur.com/ai9Vo.png)](https://i.stack.imgur.com/ai9Vo.png) Fig: tanh v/s Logistic Sigmoid The advantage is that the negative inputs will be mapped strongly negative and the zero inputs will be mapped near zero in the tanh graph. The function is differentiable and monotonic, while its derivative is not monotonic. The tanh function is mainly used for classification between two classes. Both tanh and the logistic sigmoid activation functions are used in feed-forward nets. 3. ReLU (Rectified Linear Unit) Activation Function The ReLU is the most used activation function in the world right now, since it's used in almost all convolutional neural networks and deep learning models. [![enter image description here](https://i.stack.imgur.com/shfHZ.png)](https://i.stack.imgur.com/shfHZ.png) Fig: ReLU v/s Logistic Sigmoid The ReLU is half rectified (from the bottom): f(z) is zero when z is less than zero and f(z) is equal to z when z is above or equal to zero, with a range of 0 to infinity. The function and its derivative are both monotonic, but the issue is that all negative values become zero immediately, which decreases the ability of the model to fit or train from the data properly. That means any negative input given to the ReLU activation function turns into zero immediately in the graph, which in turn affects the resulting graph by not mapping the negative values appropriately. 4. Leaky ReLU It's an attempt to solve the dying ReLU problem. [![enter image description here](https://i.stack.imgur.com/vflYB.jpg)](https://i.stack.imgur.com/vflYB.jpg) Fig: ReLU v/s Leaky ReLU The leak helps to increase the range of the ReLU function. Usually, the value of $a$ is 0.01 or so. When $a$ is not fixed at 0.01, it is called Randomized ReLU. The range of the Leaky ReLU is -infinity to infinity. Both Leaky and Randomized ReLU functions are monotonic, and so are their derivatives.
**What is a Good Activation Function?** The activation function has to be chosen carefully to improve the results of neural network computing. All activation functions must be monotonic, differentiable, and quickly converging with respect to the weights for optimization purposes. **Why is differentiation used?** We use differentiation to know in which direction, and by how much, to change or update the curve, depending upon the slope. It makes it easy for the model to generalize or adapt to a variety of data and to differentiate between the outputs. [![enter image description here](https://i.stack.imgur.com/z5xnE.png)](https://i.stack.imgur.com/z5xnE.png) You can learn more about differentiation at these blogs: ["Mathematics behind Machine Learning – The Core Concepts you Need to Know"](https://www.analyticsvidhya.com/blog/2019/10/mathematics-behind-machine-learning/) and ["Application of differentiations in neural networks"](https://machinelearningmastery.com/application-of-differentiations-in-neural-networks/). An excellent resource for activation functions is: ["Ultimate Guide to Activation Functions"](https://itnext.io/ultimate-guide-to-activation-functions-37740be5942). > > "Is the neural network value function the best method of achieving such a "goodness" value in an environment with many possible states and indefinite length?". > > > Yes, a properly trained neural network can not only answer what it was trained for but can also infer answers to additional questions from a relatively compact representation. The optimization algorithm is what carries out the learning process in a neural network. There are many different optimization algorithms. They differ in terms of memory requirements, processing speed, and numerical precision. The **loss index** plays a vital role in the use of neural networks.
It defines the task the neural network is required to do and provides a **measure of the quality** (goodness) of the representation it is required to learn. The choice of a suitable loss index depends on the application. When setting a loss index, two different terms must be chosen: an error term and a regularization term. $$\text{loss\_index}=\text{error\_term}+\text{regularization\_term}$$ The **error term** is the most important term in the loss expression. It measures how the neural network fits the data set. All those errors can be measured over different subsets of the data. In this regard, the training error refers to the error measured on the training samples, the selection error is measured on the selection samples, and the testing error is measured on the testing samples. A solution is regular when small changes in the input variables lead to small changes in the outputs. An approach for non-regular problems is to control the effective complexity of the neural network. We can achieve this by including a **regularization term** in the loss index. Regularization terms usually measure the values of the parameters in the neural network. Adding that term to the error will cause the neural network to have smaller weights and biases, which will force its response to be smoother (with a greater goodness of fit). > > "The learning problem is formulated as the minimization of a loss index, $f$. It is a function that measures the performance of a neural network on a data set. > > > > > The loss index includes, in general, an error term and a regularization term. The error term evaluates how a neural network fits the data set. The regularization term prevents overfitting by controlling the model's complexity. > > > > > The loss function depends on the adaptive parameters (biases and synaptic weights) in the neural network. We can group them into a single n-dimensional weight vector $w$.".
> > > For further details see: ["5 algorithms to train a neural network"](https://www.neuraldesigner.com/blog/5_algorithms_to_train_a_neural_network). Upvotes: -1 <issue_comment>username_2: Informally I would agree with your understanding of the value function. To answer your questions, it seems explaining some motivation for parametric modeling/machine learning is necessary: In general if we are not in a tabular setting, we wish to take advantage of the nature of our (continuous) data by leveraging relationships between states and rewards. In other words, we may think that there exists some underlying pattern in the states that relates to how much reward we can attain. More formally, one may posit some sort of parametric model, which implicitly assumes there is some parameter vector $\theta\_0$ s.t. $V(s)=\theta\_0' s, \forall s \in S$. Or similarly for the Q value function. This kind of assumption is similar to any linear regression model, or just any model that wants to take advantage of potential underlying patterns in the data to learn (like in any other machine learning problem). The choice of a neural network is motivated well by the amount of data available in certain domain areas and the recent advent of empirical success of neural nets. With this setup, to more specifically answer your last two questions: the network represents this value because we model it to be that way as we model $y=\theta'x$ in linear regression. Is this the best method? So far, this is one of the best methods we have available *provided we have enough data*. The reason can be supported by empirical success seen in other applications, and mathematically by results like the universal approximation theorem (which states that any continuous function can be represented arbitrarily well by a NN), or other more modern results from high dimensional statistics. Upvotes: -1 <issue_comment>username_3: There is a much simpler answer than the ones already given. 
The value function represents the expected (discounted) returns from the current state when following policy $\pi$, i.e: $$v\_\pi(s) = \mathbb{E}\left[\sum\_{k=0}^\infty \gamma^k R\_{t+k} | S\_t = s\right]\;.$$ This is just a scalar value that we would like to predict given some input, i.e. the state. So, predicting this quantity can be thought of (loosely) as a regression task. In regression, we typically want to learn a function $\hat{f}$ which, when given some input $x$, can approximate well the output $y$ under the true (but unknown) function $f$, i.e. we want to learn $\hat{f}$ such that $\hat{f}(x) \approx f(x)$. Now, supposing our state space is too large (e.g. infinite) to be enumerated in a lookup table, we need to parameterise our approximate value function. This can be done using any parameterisable function $\hat{v}\_\pi(s; \textbf{w})$, where $\textbf{w}$ are the learnable parameters of the function, to approximate $v\_\pi(s)$. We choose neural networks because they have been successful empirically at approximating the true value function. To answer your question as to whether these are the best approximators for the value function, my answer would be currently yes, *but* there is a huge amount of research in the field of Reinforcement Learning and it is impossible to say whether or not they will remain state of the art. Upvotes: -1 [selected_answer]
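To connect the regression analogy above to code, here is a hedged sketch of fitting a linear value-function approximator $\hat{v}(s; \mathbf{w}) = \mathbf{w}^\top s$ to sampled returns by gradient descent. The data is synthetic and the dimensions are made up; in practice a neural network would replace the linear map, trained the same way on (state, return) pairs:

```python
import numpy as np

rng = np.random.default_rng(0)
true_w = np.array([1.0, -2.0, 0.5])

# Synthetic (state, return) pairs: G = true_w . s + a little noise.
states = rng.standard_normal((500, 3))
returns = states @ true_w + 0.01 * rng.standard_normal(500)

w = np.zeros(3)
lr = 0.1
for _ in range(200):
    pred = states @ w
    grad = states.T @ (pred - returns) / len(states)  # gradient of mean squared error
    w -= lr * grad

print(np.round(w, 2))  # close to [ 1. -2.  0.5]
```

The learned `w` lets the approximator output a "goodness" scalar for states it has never seen, which is exactly what the lookup table cannot do.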
2022/11/01
796
3,421
<issue_start>username_0: I hope you are well. I had a problem and didn't understand the answers given on questions similar to my question. If possible, please answer this problem in a simpler way. Val\_acc: 99.4%, Train\_acc: 97.24%. Thank you for your attention. [![enter image description here](https://i.stack.imgur.com/nQwz2.jpg)](https://i.stack.imgur.com/nQwz2.jpg)<issue_comment>username_1: **Why can validation accuracy be greater than training accuracy for deep learning models?** You are probably thinking that the training process is supposed to reduce the training loss (and increase training accuracy). The influence on the validation loss (and validation accuracy) should be secondary, no? And you are 100% right to think that, because the loss function only considers training labels (y\_train) and how far model predictions (y\_pred) deviate from those training labels. So why is the validation accuracy higher than the training accuracy? So long as you are not overfitting (which you clearly are not), the training process should benefit both the training accuracy and validation accuracy because the datasets come from the same distribution. The reason why you may see small differences like the ones here has to do with random sampling of the two datasets from that identical distribution. (You may have seen i.i.d. elsewhere. It means independent identically distributed. Each example in the two datasets is presumed to be i.i.d.) **Bottom line:** The two samples are different, so they have some differences in accuracy. It just so happens that the accuracy is higher for the validation set in this case (probably within some margin of error). On a separate note, to get a better estimate of your true accuracy along with error bounds, you may consider performing a cross-validation of some sort. You can review the [scikit-learn article](https://scikit-learn.org/stable/modules/cross_validation.html) to see how this is done.
Upvotes: 1 <issue_comment>username_2: Let's consider this example: You have a neural network for binary classification. The NN is classifying A and B. Let's say the NN is not very sophisticated and just always picks A. Then on a balanced test set the accuracy would always be 50%. The validation accuracy depends on your validation set. Imagine you have the NN from above and your validation set consists only of A-labeled examples. The validation accuracy would be 100%, while your training metric would be 50% on a balanced data set. When validation metrics perform better than the training ones, this could be the case if the validation set is "simpler" than the training set. I also strongly disagree with the other answer that "you clearly do not overfit". You can't say that for sure based on the data we see. I'm not saying you do, either. It depends on whether your data is balanced and diverse. One of the best ways to see if you do overfit is to make a k-fold split. Upvotes: 0 <issue_comment>username_3: Validation accuracy being higher than training accuracy can typically happen for three major reasons: * You have some kind of regularization that is switched off during validation. Usually this happens with dropout regularization. * The validation score is computed *once* after the whole epoch (after the net has seen the whole dataset), while the training score is *averaged* over the epoch. * Random fluctuation. Upvotes: 0
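The k-fold split suggested above can be sketched in a few lines of plain Python (indices only; shuffling and stratification are left out for brevity, and scikit-learn's `KFold` does the same thing with those options built in):

```python
def k_fold_indices(n_samples, k=5):
    """Yield (train_idx, val_idx) pairs partitioning range(n_samples) into k folds."""
    fold_sizes = [n_samples // k + (1 if i < n_samples % k else 0) for i in range(k)]
    start = 0
    for size in fold_sizes:
        val_idx = list(range(start, start + size))
        train_idx = [i for i in range(n_samples) if i < start or i >= start + size]
        yield train_idx, val_idx
        start += size

for train_idx, val_idx in k_fold_indices(10, k=5):
    print(val_idx)  # [0, 1] then [2, 3] then [4, 5] then [6, 7] then [8, 9]
```

Training once per fold and averaging the k validation accuracies gives an accuracy estimate that does not hinge on a single, possibly "simpler", validation split.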
2022/11/02
859
4,042
<issue_start>username_0: While reading a book on introduction to GA, I stepped upon a chapter where some advantages and disadvantages of these algorithms were described. One of the mentioned disadvantages was "Cannot use gradients", but there was no further explanation why. What did the authors mean by that? I couldn't come up with a better idea than that you cannot just use a gradient as a fitness function. Still, I don't know why that would be.<issue_comment>username_1: To answer this question, you must first understand what a gradient is. This article ["What Is a Gradient in Machine Learning?"](https://machinelearningmastery.com/gradient-in-machine-learning/) offers a nice introductory explanation. In machine learning, gradients (which are essentially derivatives) are used to find the minimum of a loss function. Minimizing the loss function is effectively finding the optimal machine learning model (i.e., the model having weights that minimize the loss function). Gradient descent is a family of algorithms that use the gradient to minimize the loss function. The ability to use gradient descent to optimize models is a major advantage. The article ["Introduction to Gradient Descent"](https://builtin.com/data-science/gradient-descent) can help in understanding gradient descent. Genetic algorithms do not employ a continuous function that can be differentiated and minimized (i.e., no gradient). Instead, the goal of genetic algorithms is to maximize fitness, but fitness itself is generally not a differentiable function, so no gradient can be calculated and gradient descent cannot be leveraged to optimize fitness. Rather, genetic algorithms use a distinct algorithm that employs selection, crossover, mutation, and evaluation in an iterative manner to maximize the fitness. Upvotes: 1 <issue_comment>username_2: Let's think about what a gradient is and what it means to "use" a gradient as part of an optimization method.
A gradient is a vector of partial derivatives that specifies the rate of change in every direction from some point on a surface. With respect to optimization, that surface is the thing that we're looking for some extremum of. Often it might be something like an error function, such as the use of gradients in backpropagation for training a neural network. You compute the error function $f(d,w)$ where $d$ is the data, $w$ is the weights of the network, and $f$ is the error (the squared difference between what the network outputs and the desired target). The gradient with respect to $w$ gives us the rate of change of that error as we change the weights. And because we want the error to be small, we can just change the weights in the direction of the greatest decrease in the error. That's an example of "using" a gradient in optimization. It's a way of choosing an action based on knowing mathematically what will happen to your function for whatever move you make. With a GA, how would this work? Now I want to minimize (or maximize, whatever) my fitness function $f$. The way a GA works is by creating a population of random candidate solutions and iteratively using selection to choose fitter parents, followed by operators like crossover and mutation to produce offspring that should be similar to their parents but usually not identical. The genetic operators create new search points, and selection serves to focus that creation into specific regions of the search space because it's always trying to favor fitter parents as starting points for those operators. Over time, we converge onto hopefully very fit individuals. Where in that framework can I "use" a gradient? Again, using the gradient means that from a given point, I just know a mathematical expression that tells me which direction is "downhill". So if I want to minimize my function, I just move in that direction. But the whole thing that defines a GA is the use of those genetic operators to determine where to sample next.
If I put something in that just follows the gradient, I don't really have a GA anymore. Upvotes: 3 [selected_answer]
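To make the contrast concrete, here is a minimal sketch (the quadratic objective, step size, population size, and mutation scale are my own illustration) of the two ways of choosing the next search point: gradient descent moves along a known derivative, while a GA only evaluates fitness and relies on selection plus mutation:

```python
import numpy as np

def f(w):
    return (w - 3.0) ** 2          # toy objective to minimize; optimum at w = 3

# Gradient-based search: use the analytic derivative f'(w) = 2(w - 3).
w = 0.0
for _ in range(200):
    w -= 0.1 * 2.0 * (w - 3.0)     # step in the "downhill" direction

# GA-style search: no derivative, only fitness evaluations.
rng = np.random.default_rng(0)
pop = rng.normal(size=20)                          # random initial population
for _ in range(200):
    parents = pop[np.argsort(f(pop))[:5]]          # selection: keep the fittest
    pop = np.repeat(parents, 4) + 0.05 * rng.normal(size=20)   # mutation
best = pop[np.argmin(f(pop))]
```

Both end up near the optimum on this toy problem, but only the first line of search uses a mathematical expression for the downhill direction; the GA gets there purely by sampling and selecting.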
2022/11/03
596
2,721
<issue_start>username_0: I am a new contributor and have no experience in ML, so this first question is a general one. I've developed a sudoku-solving app and since then I have wondered whether it would be feasible to design an ML-based algorithm/software which would mimic the thinking process of a human solving a sudoku puzzle. Maybe such software already exists? I don't mean any existing brute-force algorithm a human cannot apply! I see many challenges in the design of such an algorithm, e.g. determining that the solution is not unique, or uncovering the highly sophisticated solving techniques that only talented players know and can apply to hard puzzles. My thoughts even go so far as to imagine that such an algorithm would "discover" a solving technique that we humans have not yet discovered.<issue_comment>username_1: It's feasible to do so, though I'm not sure about 'new techniques'. One can use RL by designing your reward function (the state space is just the current state of the sudoku puzzle; an action would be putting a particular number in some square), and solve using available methods. One can combine this with ML (perhaps approximating the reward function using past data). However, this seems easier to solve using methods from more conventional AI at first glance - I am not fully aware of the nuances of 'hard puzzles'. Either way, you should read more about standard AI, ML, and then RL if interested. Upvotes: -1 <issue_comment>username_2: Your problem here is not the application of ML techniques to solving Sudoku. Reinforcement learning might be a reasonable search method, and you could augment it with results from other search methods to enable it to learn in reasonable time. Your problem is trying to train an AI to behave "like a human" whilst having little to no definition of what that is. Supervised learning will require a large dataset of how humans solve the puzzles to copy.
Reinforcement learning will require you to provide a reward signal that gives higher rewards when the agent behaves like a human. What professional puzzle setters do is write variations of traditional AI search methods that they tune to score the difficulty of different search steps, depending on how they define "easy" or "hard", and take some aggregate score to rate the puzzle as a whole. As an example, you could take the search depth of testing different initial guesses to resolve a single number and look at the consequences. The higher the required search depth, and the larger the number of available initial guesses, the harder finding a number is. Some Sudoku setters keep their "human-like" search methods proprietary because publishing puzzles is a business. You may be able to find some examples online, though. Upvotes: 0
2022/11/05
1,201
3,094
<issue_start>username_0: In some RL notes, I encountered the following equation, which I am trying to prove: $$ V^{\pi^\*}(s) = \max\_{a \in A}Q^{\pi^\*}(s, a),\forall s \in S $$ Here is my attempt: Firstly, I only need to prove "$\geq$" as "$\leq$" is obvious. Suppose that $\exists S\_0 \subset S$ such that $\forall s\_0 \in S\_0,V^{\pi^\*}(s\_0) < \max\_{a \in A}Q^{\pi^\*}(s\_0, a)$. (W.L.O.G., suppose there is only a single state $s\_0 \in S\_0$.) I want to construct a new policy $\pi\_{\text{new}}$ to derive a contradiction to $V^{\pi^\*}(s) \geq V^{\pi}(s), \forall s \in S$ and all policies $\pi$. I'm trying to create the following policy: $$\pi\_{\text{new}}(a\vert s)=\begin{cases} 1& s=s\_0,a=\text{argmax}\_{a\in A}Q^{\pi^\*}(s\_0,a)\\ 0& s=s\_0, a\neq \text{argmax}\_{a\in A}Q^{\pi^\*}(s\_0,a) \\ \pi^\*(a\vert s)& s \in S - s\_0 \end{cases} $$ Next, I want to show $$ V^{\pi\_{\text{new}}}(s\_0) = \max\_{a\in A}Q^{\pi^\*}(s\_0, a) $$ And thus $V^{\pi\_{\text{new}}}(s\_0) > V^{\pi^\*}(s\_0)$ to get a contradiction. But when I derived it, I encountered some trouble, as follows: $$ V^{\pi\_{\text{new}}}(s\_0) = \sum\_{a \in A}\pi\_{\text{new}}(a \vert s\_0)Q^{\pi\_{\text{new}}}(s\_0, a) = Q^{\pi\_{\text{new}}}(s\_0, \text{argmax}\_{a\in A}Q^{\pi^\*}(s\_0, a)) \overset{?}{=} \max\_{a \in A}Q^{\pi^\*}(s\_0, a) $$ I guess the last equation holds, but I can't find a way to prove it, because the policy changed. Could you please help me?<issue_comment>username_1: I suppose you are trying to prove that there always exist greedy optimal policies. The proof is nontrivial. Here is an outline. 1. Bellman optimality equation (matrix-vector form) $$v=\max\_\pi (r\_\pi+\gamma P\_\pi v)$$ The solution $v^\*$ to this equation is the optimal state value. The solution always exists and is unique. This follows from the Contraction Mapping Theorem.
When $v^\*$ is solved (by for example value iteration), then we can substitute it into the right-hand side of the above equation, and then solve the RHS to obtain a policy $\pi^\*$: $$\pi^\*=\arg\max\_\pi (r\_\pi+\gamma P\_\pi v^\*)$$ This policy is an optimal policy and there always exists a greedy optimal policy. 3. Since $\pi^\*$ is greedy, then the definition of state value implies that for any $s$ $$v\_{\pi^\*}(s)=\sum\_a \pi^\*(a|s) q\_{\pi^\*}(s,a)=\max\_a q\_{\pi^\*}(s,a)$$ This is just an outline. It is impossible to post all the details here. You can check [this book](https://github.com/MathFoundationRL/Book-Mathmatical-Foundation-of-Reinforcement-Learning) Chapter 3: Bellman optimality equation. Upvotes: 2 <issue_comment>username_2: So, we've got these Bellman equations for optimal $V^\*$ and $Q^\*$: $$ V^\*(s) = \max\_a\left[R(s,a) + \gamma \sum\_{s'}T(s',s,a)V^\*(s')\right]\qquad (1)$$ $$Q^\*(s,a) = R(s,a) + \gamma \sum\_{s'}T(s',s,a)\max\_{a'}Q^\*(s',a')\qquad (2)$$ Taking $\max\_a$ over $(2)$: $$\max\_a Q^\*(s,a) = \max\_a\left[R(s,a) + \gamma \sum\_{s'}T(s',s,a)\max\_{a'}Q^\*(s',a')\right]$$ We see that this is equivalent to $(1)$ if we denote $V^\*(s) = \max\_aQ^\*(s,a)$. Upvotes: 0
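The outline in the first answer can be checked numerically; here is a small sketch (the random 3-state, 2-action MDP and $\gamma = 0.9$ are my own illustration) that solves the Bellman optimality equation by value iteration and confirms $v\_{\pi^\*}(s)=\max\_a q\_{\pi^\*}(s,a)$ at the fixed point:

```python
import numpy as np

# A tiny random MDP, assumed for illustration: 3 states, 2 actions, gamma = 0.9.
n_s, n_a, gamma = 3, 2, 0.9
rng = np.random.default_rng(0)
R = rng.normal(size=(n_s, n_a))              # rewards R[s, a]
T = rng.random(size=(n_s, n_a, n_s))         # transitions T[s, a, s']
T /= T.sum(axis=2, keepdims=True)            # normalize rows into distributions

# Value iteration on the Bellman optimality equation: v <- max_a (r + gamma * P v)
V = np.zeros(n_s)
for _ in range(1000):
    Q = R + gamma * (T @ V)                  # Q[s, a] for the current V
    V_new = Q.max(axis=1)
    if np.max(np.abs(V_new - V)) < 1e-10:    # contraction guarantees convergence
        break
    V = V_new

# At the fixed point, V*(s) = max_a Q*(s, a) holds for every state s.
Q_star = R + gamma * (T @ V)
```

The greedy policy is then read off as `Q_star.argmax(axis=1)`, matching step 2 of the outline.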
2022/11/06
458
2,009
<issue_start>username_0: When the number of states in Q-learning is large, we can resort to approximate Q-learning, but what should we do when we have a large number of actions?<issue_comment>username_1: One of the downsides associated with the $Q$-Learning algorithm is that it must initialize a value $Q(s,a)$ for every $s\in S$ and every $a\in A$. If either your action space $A$ or your state space $S$ is too large, I'd suggest *approximating* $Q$ instead. Upvotes: 1 <issue_comment>username_2: When the number of actions becomes large, in Q-learning you may hit a major sticking point. The Q-learning training process, and the trained agent, both require you to calculate $$\pi(s) = \text{argmax}\_a Q(s,a)$$ and that requires evaluating $Q(s,a)$ for each possible action, which becomes more expensive to compute when there are more actions. At some number of actions, it will start to become inefficient, and eventually infeasible at an even higher number. There are no fixed numbers to assess for this, because it will depend on other factors. The usual way to address this is to use a different, parametric, policy function, and to optimise that more directly (you may also track the value function to help with estimating returns efficiently). That means not performing Q-learning anymore, but typically: * Policy gradient methods like REINFORCE * Actor-critic methods like A2C or DDPG. Actor-critic methods are a combination of policy gradient and value-based methods Note this does not solve the related problem of exploration and needing to experience all possible actions. Sometimes this resolves OK in large action spaces because action descriptions can be arranged such that similar actions are near each other and the policy function designed to focus on more similar actions as it improves, taking advantage of approximation. But if there are lots of radically different actions, then the problem may require a more brute force approach - at least for statistical learners.
Upvotes: 0
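The argmax bottleneck described above is easy to see directly; a minimal sketch (the action count and random Q-values are my own illustration) of the greedy step a value-based agent must repeat at every single decision:

```python
import numpy as np

n_actions = 1_000_000                       # a very large action space (illustrative)
rng = np.random.default_rng(0)
q_row = rng.normal(size=n_actions)          # Q(s, .) for a single state s

# The greedy policy pi(s) = argmax_a Q(s, a) must touch every action's value,
# so each action selection costs O(|A|) -- this is the sticking point.
greedy_action = int(np.argmax(q_row))

# A parametric policy (as in policy-gradient / actor-critic methods) instead
# maps the state directly to an action, avoiding the full scan over A.
```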
2022/11/12
504
2,135
<issue_start>username_0: Hi, I got the following ROC curve: [![enter image description here](https://i.stack.imgur.com/MsLaJ.png)](https://i.stack.imgur.com/MsLaJ.png) What does this mean? Does this have to do with overfitting? Is my data wrongly preprocessed? I do not understand and would appreciate an answer.
2022/11/12
530
2,270
<issue_start>username_0: I have been reading some papers recently (example: <https://arxiv.org/pdf/2012.00363.pdf>) which seem to be training individual layers of, say, a transformer, holding the rest of the model frozen/constant. In the case of the paper I read, this was done in an attempt to minimize parameter changes so as to reduce knowledge "lost" by a model when it is updated for new information. My question is, how are individual layers of a transformer trained? Like, if we run the transformer and get a gradient, how can we use that gradient to train, say, the first layer, without affecting the rest of the layers at all?<issue_comment>username_1: The gradient that we use to train neural networks is the gradient of the loss function with respect to the parameters of each layer. The parameters usually form a very large vector, concatenating the parameters of each layer, and you compute gradients with respect to that large vector. Then to train a single layer, you just take the gradient of the loss with respect to the parameters of that layer only. Mathematically it is very simple. Upvotes: 1 <issue_comment>username_2: The way to "train individual layers" is to fix the weights of all other layers during training. This [notebook](https://colab.research.google.com/github/fchollet/deep-learning-with-python-notebooks/blob/master/chapter08_intro-to-dl-for-computer-vision.ipynb#scrollTo=gCurjG_9K9Du) describes an implementation of how to fix specific layers of model weights (see section *Fine-tuning a pretrained model, Freezing all layers until the fourth from the last*). This example is for a convolutional neural network, but the same approach can be applied to transformers. Those weights become non-trainable, meaning that during training they cannot be updated. Only the weights of the layer(s) of interest will remain trainable (and updatable). 
This is useful when the model was previously trained on other data (pre-trained model), and while you do not want to undo all of that training (i.e., you don't want to change the weights that were previously learned), you do want to refine the model with new data that you have. Allowing weights of just some of the layers to be updated during training is one way of doing that. Upvotes: 0
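Both answers boil down to the same mechanics, which can be sketched without any framework: compute the loss gradient only for the parameters of the layer you want to train and leave the rest untouched. A minimal numpy sketch (the toy two-layer net, sizes, and learning rate are my own illustration, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(32, 4))           # toy inputs (illustrative)
y = rng.normal(size=(32, 1))           # toy targets

W1 = 0.1 * rng.normal(size=(4, 8))     # "frozen" layer: never updated
W2 = 0.1 * rng.normal(size=(8, 1))     # trainable layer
W1_before = W1.copy()

def mse(W1, W2):
    return float(np.mean((np.tanh(X @ W1) @ W2 - y) ** 2))

loss_before = mse(W1, W2)
for _ in range(200):
    h = np.tanh(X @ W1)                # forward pass through the frozen layer
    err = h @ W2 - y
    grad_W2 = 2 * h.T @ err / len(X)   # gradient w.r.t. W2 only
    W2 -= 0.1 * grad_W2                # update only the trainable layer
loss_after = mse(W1, W2)
```

In practice you would get the same effect by marking weights non-trainable, e.g. `param.requires_grad = False` in PyTorch or `layer.trainable = False` in Keras, as the linked notebook demonstrates.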
2022/11/13
493
2,125
<issue_start>username_0: [![enter image description here](https://i.stack.imgur.com/uAwNX.png)](https://i.stack.imgur.com/uAwNX.png) What does this difference in train and test accuracy mean?<issue_comment>username_1: For the Logistic Regression Classifier and SVC, the `train_accuracy` and `test_accuracy` are very similar; thus, there is no evidence of over- or underfitting. However, the KNN shows a `train_accuracy` that is lower than the `test_accuracy`. Note that this is *not* the overfitting pattern - overfitting shows *higher* training accuracy than test accuracy - and, depending on your sample size, it is entirely possible that this variation is within the expected range of error. In machine learning, the model is trained on one dataset (i.e., the **training data**), but performing model evaluation (i.e., estimating "real world" performance by calculating `accuracy` or some other metric) on the same dataset that you trained on would be biased. Instead, the evaluation is performed on an independent, identically distributed dataset (i.e., the **test data**) to get an unbiased estimate of model performance. However, it is still important to compare accuracy (or whatever performance metric you choose) between training and testing to assess for over- or underfitting. **Overfitting** means that your model "fit" the training data very well and achieves high accuracy on that data. However, the model performs poorly (or less well) on the test data. This is because the model not only "fits" the signal in the training data but also the noise. The problem is that noise is random and it will surely be different in the test set. Therefore, the model doesn't generalize well to other datasets. **Underfitting** means that the model performs poorly on both datasets. This is typically because the model does not have the "expressivity" or "capacity" to learn the signals within the data, or because the data is random and has no "signal" to speak of (and, therefore, nothing from which to learn).
Upvotes: 0 <issue_comment>username_2: Actually, in the strict sense, you cannot conclusively diagnose overfitting here, because overfitting is usually assessed with training and validation losses, not with metrics like accuracy. Upvotes: -1
2022/11/13
750
3,008
<issue_start>username_0: I'm training a Deep Q-learning model on a snake game and I would like some ideas on how to improve the model and maybe also the efficiency of training it. The game is currently set to a 12x12 grid, a blue snake with a green head and a red apple. The network is fed with 3x12x12 input parameters (RGB, width, height) and gets a positive reward when an apple is eaten and a negative reward when it collides with something. It does learn, but plateaus around 12-13 apples per round (on average) after 3 million steps: [![enter image description here](https://i.stack.imgur.com/rzS1g.png)](https://i.stack.imgur.com/rzS1g.png) What I have tried: Giving a partial reward on the steps before a "real" reward. For example: ``` Step Action Reward N Go straight 100 N-1 Go straight 50 N-2 Go straight 33 N-3 Go left 25 ``` That was just an idea but it does not seem to work as I hoped. What else can I try? What I don't want to do is tinker with the game, I just want the visual input and nothing else.
2022/11/14
1,138
4,630
<issue_start>username_0: **BACKGROUND:** The [softmax function](https://en.wikipedia.org/wiki/Softmax_function) is the most common choice for an activation function for the last dense layer of a multiclass neural network classifier. The outputs of the softmax function have mathematical properties of probabilities and are--in practice--presumed to be (conditional) probabilities of the classes given the features: 1. First, the softmax output for each class is between $0$ and $1$. 2. Second, the outputs of all the classes sum to $1$. **PROBLEM:** However, just because they have mathematical properties of probabilities does not automatically mean that the softmax outputs are in fact probabilities. In fact, there are other functions that also have these mathematical properties, which are also occasionally used as activation functions. **QUESTION:** *"Do softmax outputs represent probabilities in the usual sense?"* In other words, do they really reflect chances or likelihoods? (I use likelihood in the colloquial sense here.)<issue_comment>username_1: Excellent question. The simple answer is **no**. Softmax actually produces uncalibrated probabilities. That is, they do not really represent the probability of a prediction being correct. What usually happens is that softmax probabilities for the predicted class are closer to 100% in all cases, whether the predictions are correct or incorrect, which effectively does not give you any information. This is called overconfidence. This means that the probabilities are not useful, and you cannot really use them as reliable confidences to detect when the model is unsure or predicts incorrectly. For reference: <NAME>, <NAME>, <NAME>, <NAME>. [On calibration of modern neural networks](http://proceedings.mlr.press/v70/guo17a.html). In International conference on machine learning 2017 Jul 17 (pp. 1321-1330). PMLR. Upvotes: 4 <issue_comment>username_2: The answer is both yes, and no. 
Or, to put it another way, the answer depends on what exactly you mean by "represent probabilities", and there is a valid sense in which the answer is yes, and another valid sense in which the answer is no. No, they don't represent the probability ======================================== No, they do not represent the true probability. You can think of a neural network as a function $f$. Let $f(y;x)$ denote the softmax output of the neural network corresponding to class $y$, on input $x$. Then $f(y;x)$ will typically not be equal to $p(y|x)$, the probability that sample $x$ is from class $y$. $f(y;x)$ can be viewed as an estimate of $p(y|x)$ -- as a best-effort guess at $p(y|x)$ -- but it can be an arbitrarily bad estimate/guess. Neural networks routinely make errors on tasks that even humans find clear. Also, neural networks have systematic biases. For instance, as the other answer explains, neural networks tend to be biased towards "overconfidence". So you should not assume that the output from the neural network represents the true probability $p(y|x)$. There is some underlying probability. We might not know how to compute it, but it exists. Neural networks are an attempt to estimate it, but it is a highly imperfect estimate. Yes, they do represent probabilities ==================================== While the softmax outputs are not the true probability $p(y|x)$, they do represent a probability distribution. You can think of them as an estimate of $p(y|x)$. For a number of reasons, it is an imperfect and flawed estimate, but it is an estimate nonetheless. (Even bad or noisy estimates are still estimates.) Moreover, the way we train neural networks is designed to try to make them a good estimate -- or as good as possible. We train a neural network to minimize the expected loss. 
The expected loss is defined as $$L = \mathbb{E}\_x[H(p(y|x),f(y;x))],$$ where the expectation is with respect to $x$ chosen according to the data distribution embodied in the training set, and $H$ is the [cross-entropy](https://en.wikipedia.org/wiki/Cross_entropy) of the distribution $f(y;x)$ relative to the distribution $p(y|x)$. Intuitively, the smaller the training loss is, the closer that $f(y;x)$ is to $p(y|x)$. So, neural networks are trained in a way that tries to make its output be as good an approximation to $p(y|x)$ as is possible, given the limitations of neural networks and given the training data that is available. As highlighted above, this is highly imperfect. But $f(y;x)$ does still represent a probability distribution, that is our attempt to estimate the true probability distribution $p(y|x)$. Upvotes: 5 [selected_answer]
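To make both senses concrete, here is a minimal numpy sketch (the logits and the temperature values are my own illustration). The outputs always satisfy the two properties from the question, and dividing the logits by a temperature - the calibration knob used in the paper cited in the first answer - controls how overconfident the resulting distribution looks:

```python
import numpy as np

def softmax(z, temperature=1.0):
    z = np.asarray(z, dtype=float) / temperature
    e = np.exp(z - z.max())            # subtract max for numerical stability
    return e / e.sum()

logits = np.array([2.0, 1.0, 0.1])     # illustrative network outputs
p = softmax(logits)                    # valid distribution: in (0,1), sums to 1
sharp = softmax(logits, temperature=0.1)   # low temperature: near one-hot
```

Both `p` and `sharp` are valid probability distributions, yet neither is guaranteed to equal the true $p(y|x)$; calibration methods fit the temperature on held-out data to bring the estimate closer to the truth.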
2022/11/19
1,626
6,460
<issue_start>username_0: We developed a neural network-based protein reconstruction tool to reconstruct the main chain from only CA atoms. 1. we generated data from some selected PDBs from the RCSB website to train an NN model. 2. we then use those data to train the NN model. 3. we select some test PDBs, strip all atoms except CA atoms, and save them in files. 4. we pass those CA-only PDBs through the NN model, obtain a reconstructed main chain, and save them in files. 5. we compare original and reconstructed PDB files and calculate CRMSD values. The data set is large: a sample of 1398438 rows and 102 columns (data points). The model is Keras via a 4-layered MLP. There is no feedback loop or convolution applied. We obtained a CRMSD value of 0.3559, which is not satisfactory. --- How can I improve or redesign the NN model? How do I know what NN type would best serve our purpose? How do I know the number of layers we need in the model? How do I know if we need a feedback loop or not?<issue_comment>username_1: I'll try to answer; even though this is beyond the scope of my area, I think it may help. > > How do I know what NN type would best serve our purpose? How do I know the number of layers we need in the model? How do I know if we need a feedback loop or not? > > > This generally depends on the level of experience. There is no deterministic way to decide and find an answer for these. > > How can I improve or redesign the NN model? > > > By playing with it, I suppose. I don't have much experience in this area and zero in protein reconstruction tasks. But I think the best way would be as in any other area of work. 1. I would try to simplify the task and the model based on the knowledge gained. 2. Use this network to solve a couple more similar problems, prune it, and compare the pruned networks. 3. I like the idea of a generative adversarial network. 4. Design a neural net to help design your solution. 5. An evolutionary algorithm could also find some answers.
I think it is mainly about gaining more knowledge about the solution. Upvotes: 0 <issue_comment>username_2: Evidently, one of the most important concepts in NN design is **inductive biases and symmetry**. A good way to approach the concept of symmetry is to ask: Given a datapoint $x \in \mathcal{D}$ from my dataset $\mathcal{D}$, how can I transform $x$ without changing its meaning? Inductive biases, on the other hand, are essentially assumptions that you have about your data and that you embed into your model architecture. Here are some examples of what I mean: **Images:** Images have a [translational symmetry](https://en.wikipedia.org/wiki/Translational_symmetry), meaning the identity of an object doesn't change when you move it to another position in the image - a cat in the left corner of an image remains a cat even if you move it to the right corner. If you now want to classify images and were to take an MLP instead of a ConvNet, your model would have to learn to identify a cat in every possible position of the image, because the inputs to your model change with the slightest translation. In contrast, ConvNets have a translational inductive bias, i.e. are translation equivariant. This means they have a built-in robustness against translations that requires no learning. This is one of the reasons they are so good with images. This property of ConvNets arises from applying the same weights (kernel) over the space of the image. Another *inductive bias* of ConvNets is locality: the full image is not so important - what's important is local patterns and groups of pixels. **Text:** In AI research, text was long understood as sequential data. This is why people came up with RNNs like GRUs and LSTMs, thus implementing models with a sequential inductive bias. However, since transformers took over, it has become clearer that the position of a word in a sentence is important but does not primarily encode the meaning.
Instead, a graph-like relation between words is more discriminative, which, IMO, is why attention-based models are now the SOTA. How I understand this is that words have a specific permutation symmetry that depends on grammar and retains meaning if you permute words in a sentence in a specific way. **Point Clouds:** A 3D scan (point cloud) remains the same under rotation and translation in 3D, i.e. point clouds are SE(3)-symmetric. In the paper [2], this is biased into a transformer model by a clever usage of self-attention. **Graphs:** The identity of a graph doesn't change under permutation as long as the relations between nodes remain the same. Hence, graph neural networks are invariant to permutation. The list goes on like this, and I put some references at the end for further reading - there is a lot of material regarding these topics and, IMO, it is super important. I'd recommend reading [1], which is very clearly written; [3] is very technical and I found it pretty hard to understand in detail, but it provides a nice overview of symmetries and an intuition of what constitutes groups and how this is related to equivariance and invariance in neural networks. --- In your specific use case, you could ask: What changes can I apply to the input without changing the expected prediction, and is there some sort of hierarchy or any other bias that could be useful? I'm no protein expert, but I assume a reflection symmetry (reversing the input sequence). I'd also assume that sequence length is important but not very descriptive. There might be local motifs that are more important (like image size is not important but local groups of pixels are). This would direct you to a localized model such as 1D convolutions. Providing the model with reflection equivariance is straightforward in the case of 1D convolutions: you can, for every learned kernel, add the reversed kernel to the convolution operations.
This got pretty long by now, but I hope it is somewhat clear what I mean by all of this. --- * [[1] Geometric Deep Learning: Grids, Groups, Graphs, Geodesics, and Gauges](http://arxiv.org/abs/2104.13478) * [[2] SE(3)-Transformers: 3D Roto-Translation Equivariant Attention Networks](https://arxiv.org/abs/2006.10503) * [[3] A Practical Method for Constructing Equivariant Multilayer Perceptrons for Arbitrary Matrix Groups](https://arxiv.org/abs/2104.09459) * [[4] Video: Nice talk of <NAME> on the topic of symmetry](https://www.youtube.com/watch?v=YihnfamwA_s) Upvotes: 2
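The last suggestion above - pairing every learned kernel with its reversal - can be sketched in a few lines of numpy (the random kernel, input, and the global max-pool are my own illustration): the two feature maps swap (and reverse) under input reversal, so a pooled feature becomes reflection invariant:

```python
import numpy as np

def reflection_pair_conv(x, k):
    """Convolve the sequence x with kernel k and with its reversal."""
    fwd = np.convolve(x, k, mode="valid")
    bwd = np.convolve(x, k[::-1], mode="valid")
    return fwd, bwd

def pooled_feature(x, k):
    """Global max over both maps: unchanged if the input is reversed."""
    fwd, bwd = reflection_pair_conv(x, k)
    return max(fwd.max(), bwd.max())

rng = np.random.default_rng(0)
x = rng.normal(size=50)        # toy 1D "sequence"
k = rng.normal(size=5)         # a learned kernel (here just random)
```

Reversing `x` turns `fwd` into the reversed `bwd` and vice versa, so any permutation-insensitive pooling over the pair gives the same output for a sequence and its mirror image.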
2022/11/20
1,611
6,247
<issue_start>username_0: I will use the notation used in the [proximal policy optimization paper](https://arxiv.org/abs/1707.06347). [![conservative policy iteration objective](https://i.stack.imgur.com/uxxJk.png)](https://i.stack.imgur.com/uxxJk.png) What approximation is needed to arrive at the surrogate objective (equation (6) above) with the ratio $r\_t(\theta)$? Put another way, we start with the vanilla policy gradient objective and aim to optimize $L^\text{PG} (\theta) = \mathbb{E}\_t \left [ \log \pi\_\theta(a\_t|s\_t)A\_t \right]$. Where in $L^\text{PG}(\theta)$ do I make an approximation to derive equation (6)? I could not find this objective following the reference [KL02](https://people.eecs.berkeley.edu/%7Epabbeel/cs287-fa09/readings/KakadeLangford-icml2002.pdf).<issue_comment>username_1: I'll try to answer, because even this is beyond the scope of my area I think it may help. > > How do I know what NN type would best serve our purpose? How do I know the number of layers we need in the model? How do I know if we need a feedback loop or not? > > > This generally depends on the level of experience. There is not deterministic way how to decide and find an answer for these. > > How can I improve or redesign the NN model? > > > By playing with it I suppose. I have not much experience in this area and zero in protein reconstruction tasks. But I think that the best way would be as in any other area of any work. 1. I would to simplify the task and model based on gained knowledge. 2. Use this network to solve couple more similar problems, prune it and compare pruned networks. 3. I like the idea of Generative adversarial network. 4. Design neural net to help to design your solution. 5. Or evolution algorithm should to find some answers also. I think it is mainly about to get more knowledge about the solution. Upvotes: 0 <issue_comment>username_2: Evidentially, one of the most important concepts in NN design are **inductive biases and symmetry**. 
A good way to approach the concept of symmetry is to ask: given a datapoint $x \in \mathcal{D}$ from my dataset $\mathcal{D}$, how can I transform $x$ without changing its meaning? Inductive biases, on the other hand, are essentially assumptions that you have about your data and that you embed into your model architecture. Here are some examples of what I mean: **Images:** Images have a [translational symmetry](https://en.wikipedia.org/wiki/Translational_symmetry), meaning the identity of an object doesn't change when you move it to another position in the image - a cat in the left corner of an image remains a cat even if you move it to the right corner. If you now wanted to classify images and were to take an MLP instead of a ConvNet, your model would have to learn to identify a cat in every possible position of the image, because the inputs to your model change with the slightest translation. In contrast, ConvNets have a translational inductive bias, i.e. are translation equivariant. This means they have a built-in robustness against translations that requires no learning. This is one of the reasons they are so good with images. This property of ConvNets arises from applying the same weights (kernel) over the space of the image. Another *inductive bias* of ConvNets is locality: the full image is not so important - what's important is local patterns and groups of pixels. **Text:** In AI research, text was understood as sequential data. This is why people came up with RNNs like GRUs and LSTMs, thus implementing models with a sequential inductive bias. However, since transformers took over, it has become clearer that the position of a word in a sentence is important but does not primarily encode the meaning. 
Instead, a graph-like relation between words is more discriminative, which, IMO, is why attention-based models are now the SOTA. How I understand this is that words have a specific permutation symmetry that depends on grammar and retains meaning if you permute words in a sentence in a specific way. **Point Clouds:** A 3D scan (point cloud) remains the same under rotation and translation in 3D, i.e. point clouds are SE(3)-symmetric. In the paper [2], this is biased into a transformer model by a clever usage of self-attention. **Graphs:** The identity of a graph doesn't change under permutation as long as the relations between nodes remain the same. Hence, graph neural networks are invariant to permutation. The list goes on like this, and I put some references at the end for further reading - there is a lot of material regarding these topics and IMO it is super important. I'd recommend reading [1], which is very clearly written; [3] is very technical and I found it pretty hard to understand in detail, but it provides a nice overview of symmetries and an intuition of what constitutes groups and how this is related to equivariance and invariance in neural networks. --- In your specific use case, you could ask: what changes can I apply to the input without changing the expected prediction, and is there some sort of hierarchy or any other bias that could be useful? I'm no protein expert, but I assume a reflection symmetry (reversing the input sequence). I'd also assume that sequence length is important but not very descriptive. There might be local motifs that are more important (like image size is not important but local groups of pixels are). This would direct you to a localized model such as 1D convolutions. Providing the model with reflection equivariance is straightforward in the case of 1D convolutions: you can, for every learned kernel, add the reversed kernel to the convolution operations. This has gotten pretty long by now, but I hope it is somewhat clear what I mean by all of this. 
--- * [[1] Geometric Deep Learning: Grids, Groups, Graphs, Geodesics, and Gauges](http://arxiv.org/abs/2104.13478) * [[2] SE(3)-Transformers: 3D Roto-Translation Equivariant Attention Networks](https://arxiv.org/abs/2006.10503) * [[3] A Practical Method for Constructing Equivariant Multilayer Perceptrons for Arbitrary Matrix Groups](https://arxiv.org/abs/2104.09459) * [[4] Video: Nice talk of Max Welling on the topic of symmetry](https://www.youtube.com/watch?v=YihnfamwA_s) Upvotes: 2
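As a concrete illustration of the reversed-kernel idea at the end of this answer (a sketch of mine, not the answer author's code): convolving a sequence with both a kernel and its reversal, then pooling symmetrically over the two feature maps, yields features that are invariant to reflecting the input. This relies on the identity $\mathrm{conv}(\mathrm{rev}(x), k) = \mathrm{rev}(\mathrm{conv}(x, \mathrm{rev}(k)))$, so reversing the input merely swaps and reverses the two maps.

```python
import numpy as np

def reflection_invariant_features(x, k):
    """Convolve with both k and its reversal, then pool symmetrically.

    Reversing x swaps/reverses the two feature maps into each other,
    so a symmetric pooling (here: sorted concatenation) is invariant
    to reflection of the input sequence.
    """
    fwd = np.convolve(x, k, mode="full")
    rev = np.convolve(x, k[::-1], mode="full")
    # Pool in a way that ignores which map / position a value came from.
    return np.sort(np.concatenate([fwd, rev]))

rng = np.random.default_rng(0)
x = rng.normal(size=16)   # toy 1D "sequence" (e.g. an encoded protein)
k = rng.normal(size=5)    # one kernel (random here, learned in practice)

out = reflection_invariant_features(x, k)
out_reflected = reflection_invariant_features(x[::-1], k)
print(np.allclose(out, out_reflected))  # True: invariant to reflection
```

In a real model one would keep the two feature maps per kernel and apply an order-insensitive pooling only at the end; the sorted concatenation here is just the simplest pooling that makes the invariance testable.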
2022/11/21
1,546
6,416
<issue_start>username_0: In Convolutional Neural Networks we extract and create abstractified “feature maps” of our given image. My thought was this: We extract things like lines initially. Then from different types of lines we are meant to extract higher order features. However, doesn't this require us to look at multiple feature maps at once? Convolutional layers only apply the filter on one matrix at a time, and the only time, to my knowledge, that these feature maps get looked at together is at the fully connected layer. To explain further, if we have an image of a circle we want to recognize, this consists of many lines at different angles. But in a convolutional layer, we have these different filters that will pick up different parts of the circle. Then when we add a second convolutional layer, how can it extract a higher order feature without combining feature maps in some way? Do we combine feature maps in between convolutional layers?
2022/11/21
1,653
6,683
<issue_start>username_0: This question is a result of a discussion with one of my more math-minded friends. When I accidentally mentioned the term continuous state space, he corrected me by saying that I am most probably talking about dense sets since continuity is usually associated with functions. This makes me wonder: what is meant by a 'continuous' state space? We very often see this term thrown around in the Reinforcement Learning literature without defining it properly. Can anybody provide me any references on what it precisely means? Even some of the more mathematically rigorous books like [Algorithms for Reinforcement Learning](https://sites.ualberta.ca/%7Eszepesva/rlbook.html) by <NAME> or [Reinforcement Learning: Theory and Algorithms](https://rltheorybook.github.io) by AJKS don't delve into the definition of this term. A quick Google search tells me that there is no such thing as a 'continuous' set. Questions like '[Is there such thing as a continuous set?](https://math.stackexchange.com/q/3362851)' and '[Is an Uncountable Set and a Continuous Set the Same Thing?](https://math.stackexchange.com/q/3299369)' reinforce my belief that it is not equivalent to the notion of uncountability.
2022/11/21
3,230
10,589
<issue_start>username_0: I am reading about backpropagation for fully connected neural networks and I found a very interesting [article](https://www.jeremyjordan.me/neural-networks-training/) by <NAME>. It explains the process from start to finish. There is a section though that confused me a bit. The partial derivative of the cost function (MSE) with regard to the $\theta\_{jk}^{(2)}$ weights is: $$\frac{\partial J(\theta)}{\partial \theta\_{jk}^{(2)}} = \left( \frac{\partial J(\theta)}{\partial a\_j^{(3)}}\right) \left( \frac{\partial a\_j^{(3)}}{\partial z\_j^{(3)}}\right) \left(\frac{\partial z\_j^{(3)}}{\partial \theta\_{jk}^{(2)}} \right) \tag{1}$$ The article defines the next equation as the "error" term. Equation (2) is the combination of the first two partials in the chain rule: $$ \delta\_i^{(3)} = \frac {1}{m} (y\_i - a\_i^{(3)}) f^{'}(a^{(3)}) \tag{2}$$ Where: * $ i: $ The index of the neuron in the layer * $ ^{(3)}: $ Denotes the layer (in this case 3 is the output layer) * $ z\_i: $ The weighted sum of the inputs of the $i\_{th}$ neuron * $ m: $ The number of training samples * $ y\_i: $ The expected value of the $ i\_{th} $ neuron * $ a\_i: $ The predicted value of the $ i\_{th} $ neuron * $ f^{'}: $ The derivative of the activation function So a few lines after the definition above the article states: > > $ \delta^{(3)} $ is a vector of length j where j is equal to the number of output neurons > $$ \delta^{(3)} = > \begin{bmatrix} > y\_1 - a\_1^{(3)} \newline > y\_2 - a\_2^{(3)} \newline > \cdots \newline > y\_j - a\_j^{(3)} \newline > \end{bmatrix} f^{'}(a^{(3)}) \tag{3} > $$ > > > Q1. I strongly suspect that the $ f^{'}(a^{(3)}) $ is a vector of length $j$ and not a scalar. Basically, it is a vector containing the derivative of the activation function for every neuron of the output layer. How is it possible in equation (3) to multiply it with another vector and still get a vector and not a $j \times j$ matrix? Is the multiplication elementwise? Q2. 
How is the $ f^{'}(a^{(3)}) $ calculated for every neuron for multiple training samples? From what I understand, while training with batches I would have to average the $ (y\_i - a\_i^{(3)}) $ term for the whole batch for every neuron. So in fact the term $ (y\_i - a\_i^{(3)}) $ is the sum for the whole batch and that's why the $ \frac {1}{m} $ is present. Does that apply to the derivative too? Meaning, do I have to calculate the average of the derivative for the whole batch for each neuron? Q3. What does $ f^{'}(a^{(3)}) $ actually mean? Is this the derivative of the activation function evaluated with the values of the $a\_i^{(3)}$ outputs? Or is it the derivative of the activation function evaluated with the values of the weighted sum $ z\_i $ that is actually passed through the activation function to produce the $a\_i^{(3)} = f(z\_i)$ output? And if the second, would I have to keep track of the average of the $z\_i$ for each neuron in order to obtain the average of $ f^{'} $?<issue_comment>username_1: Re your Q1 & Q3: assuming a single training example for now, you're indeed right. Mathematically speaking, $f^{'}(a^{(3)})$ shouldn't be a constant scalar. From the author's derivation section above your referenced equation (3), this derivative of the activation function in the same layer should be evaluated at different values of $z\_1^{(3)}, z\_2^{(3)}$, the (weighted sum) net inputs for the 2 demonstrated neurons at the last output layer (which have already been computed in the previous feedforward pass). This confirms your Q3's 2nd interpretation: the derivative of the activation function is evaluated with the values of the weighted sum $z\_i$ that is actually passed through the activation function. In fact, you may also refer to the [Delta rule](https://en.wikipedia.org/wiki/Delta_rule), a special case of the backpropagation algorithm, which confirms the same interpretation. 
> > For a neuron $j$ with activation function ${g(x)}$, the delta rule for neuron $j$'s $i$th weight $w\_{ji}$ is given by $\Delta w\_{ji}=\alpha (t\_{j}-y\_{j})g'(h\_{j})x\_{i}$, where $\alpha$ is a small constant called learning rate, $g(x)$ is the neuron's activation function, $g'$ is the derivative of $g$, $t\_{j}$ is the target output, $h\_{j}$ is the weighted sum of the neuron's inputs, $y\_{j}$ is the actual output, $x\_{i}$ is the $i$th input. > > > Finally, $\delta^{(3)}=[\delta\_1^{(3)}, \space \delta\_2^{(3)}]$ is a $1 \times 2$ vector representing "error" terms for the same 2 output neurons, which can be confirmed from the author's conclusion section in more abstract linear-algebra notation. Thus it's best to view $f^{'}(a^{(3)})$ as a $2 \times 2$ diagonal matrix with diagonal entries $f^{'}(z\_1^{(3)})$ and $f^{'}(z\_2^{(3)})$, respectively, and we should actually treat the explicit vector in your equation (3) as a $ 1 \times 2$ vector; then the final result matches as $[y\_1-a\_1^{(3)}, \space y\_2-a\_2^{(3)}]f^{'}(a^{(3)}) = \delta^{(3)}$. As for your final Q2: when training multiple examples with a scaled loss function, you don't usually need to calculate any additional average; the scaled loss function already takes care of minimizing the mean square error. The only difference is that you're now dealing with much larger vectors/matrices such as your equation (3). Say you have 3 training examples: then the above $\delta^{(3)}$ will be a $ 1 \times 6$ vector and $f^{'}(a^{(3)})$ will be a $ 6 \times 6$ matrix. Basically you start with the same small random values for all the same 8 weights in the same network architecture above, but you'll have to compute larger vectors and matrices during both the feedforward and backpropagation passes of any epoch. Upvotes: 1 <issue_comment>username_2: The author is rather free with changing from row to column format. 
The main philosophy or framework seems to be to implement the directions of "forward" evaluation and "backwards" gradient differentiation in the left-right direction, in diagrams as well as in formulas. However, this philosophy is broken several times, for instance in writing $a=f(z)$ instead of $f(z)=a$, or in using weight matrices that are indexed for matrix-vector multiplication, that is, the usual right-to-left direction $(z^{(3)})^T=\theta^{(2)} (a^{(2)})^T$, which following the philosophy should be written as $a^{(2)}(\theta^{(2)})^T=z^{(3)}$. But then again the philosophy gets reversed in formulas like $$ \delta^{(l)} = \delta^{(l + 1)}\,\Theta^{(l)}\,f'\left( a^{(l)} \right) $$ which clearly is right-to-left, which suggests that the gradients $δ^{(l)}$ are row vectors and the argument vectors like $a^{(l)}$ are column vectors. In short, it's a mess. --- Despite that, your questions have direct answers without relying too much on what directional philosophy is used Q1. $f'(a^{(3)})$ as used and positioned relative to other vectors and matrices is the diagonal matrix with the entries $f'(a\_j^{(3)})$ on the diagonal. This comes out as component-wise product in matrix-vector or vector-matrix multiplications. Q2. If you were to discuss that, you would need another index in all the formulas indicating the training sample. Such as $J(x^{[k]},\theta)$ as the residual of the net output to $y^{[k]}$. The gradient of the sum $\frac1m\sum\_{k=1}^mJ(x^{[k]},\theta)$ would be the sum of all single gradients that get computed independently of each other. Another interpretation of the factor $\frac1m$ is that it is the gradient at the top level $J$ that gets propagated backwards to the gradients of each variable. Q3. Of course some doubt is appropriate. As the activated value is a function of the linear combined input, $a^{(3)}=f(z^{(3)})$, so is the derivative $\frac{\partial a^{(3)}}{\partial z^{(3)}}=f'(z^{(3)})$. Both must have the same arguments. 
This is just a typo, perhaps copy-pasted some times. --- I've tried how to consistently implement the left-to-right philosophy, but it is too cumbersome. One would have to use something unfamiliar like some kind of reverse polish notation, so instead of $v=\phi(u,w)$ one would have to write $[u,w:\,\phi]=v$ or similar. So it is better to stay with right-to-left consistently, as also the author ended up doing. Thus $x,z,a$ are column vectors, gradients (against the tradition in differential calculus) are row vectors. In algorithmic differentiation it is one tradition to denote tangent vectors for forward differentiation with a dot, $\dot x, \dot z,\dot a$, and gradients that get pushed back with a bar, so $\bar x,\bar z=\delta, \bar a$. The construction principle for gradient propagation is that if $v=\phi(u,w)$, then the relation of tangents pushed forward to the level of before and after the operation $\phi$ and gradients pushed back to that stage satisfy $$ \bar v\dot v=\bar u\dot u+\bar w\dot w. $$ Inserting $\dot v=\phi\_{,u}\dot u+\phi\_{,w}\dot w$ results in $$ \bar v\phi\_{,u}\dot u+\bar v\phi\_{,w}\dot w=\bar u\dot u+\bar w\dot w. $$ Comparing both sides gives $\bar u=\bar v\phi\_{,u}$, $\bar w=\bar v\phi\_{,w}$. In a wider context $\bar w$ is a linear functional, meaning with scalar value, of $\dot w$. So if $w$ is a matrix, then the linear functional is obtained via the trace, ${\rm Tr}(\bar w·\dot w)$. So if for instance $\phi(u,w)=w·u$ in a matrix-vector product, then by the product rule $\dot v=w·\dot u+\dot w·u$ and $$ {\rm Tr}(\bar w·\dot w)=\bar v·\dot w·u={\rm Tr}(u·\bar v·\dot w), $$ so the comparison gives $\bar w=u·\bar v$. --- The example network in atomic formulas is \begin{align} z^{(2)}&=\Theta^{(1)}·x \\ a^{(2)}&=f(z^{(2)}) \\ z^{(3)}&=\Theta^{(2)}·a^{(2)} \\ a^{(3)}&=f(z^{(3)}) \\ \end{align} and then $J$ is computed via some loss function from $a^{(3)}$ and the reference value $y$. 
Starting from the gradient $\bar a^{(3)}$ computed from the loss function, the pushed-back gradients compute as \begin{align} \bar z^{(3)} &= \bar a^{(3)}·{\rm diag}(f'(z^{(3)})) \\ \bar a^{(2)} &= \bar z^{(3)}·\Theta^{(2)} \\ \bar \Theta^{(2)} &= a^{(2)}·\bar z^{(3)} \\ \bar z^{(2)} &= \bar a^{(2)}·{\rm diag}(f'(z^{(2)})) \\ \bar x &= \bar z^{(2)}·\Theta^{(1)} \\ \bar \Theta^{(1)} &= x·\bar z^{(2)} \\ \end{align} Of course one can combine some of these formulas, like $$ \delta^{(2)}=\bar z^{(2)} =\bar z^{(3)}·\Theta^{(2)}·{\rm diag}(f'(z^{(2)})) =\delta^{(3)}·\Theta^{(2)}·{\rm diag}(f'(z^{(2)})) $$ and if $J=\frac12\sum |a\_j^{(3)}-y\_j|^2$, then also $$ \delta^{(3)}=\bar z^{(3)}=\bar J\,[a\_1^{(3)}-y\_1,a\_2^{(3)}-y\_2,…]·{\rm diag}(f'(z^{(3)})) $$ with for instance $\bar J=\frac1m$. Upvotes: 1 [selected_answer]
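The gradient formulas in this answer can be sanity-checked numerically. The following is a sketch of mine (not the article's code), assuming a sigmoid activation and the loss $J=\frac12\sum\_j|a\_j^{(3)}-y\_j|^2$, written with column vectors and outer products rather than the row-vector notation above. It also settles Q1 and Q3 concretely: multiplying by ${\rm diag}(f'(z))$ is an elementwise product, with $f'$ evaluated at the weighted sums $z$, not at the activations $a$.

```python
import numpy as np

def f(z):                                  # sigmoid activation (assumed)
    return 1.0 / (1.0 + np.exp(-z))

def forward(T1, T2, x):
    z2 = T1 @ x                            # z^(2) = Theta^(1) x
    a2 = f(z2)                             # a^(2)
    z3 = T2 @ a2                           # z^(3) = Theta^(2) a^(2)
    a3 = f(z3)                             # a^(3)
    return z2, a2, z3, a3

def loss(T1, T2, x, y):                    # J = (1/2) sum_j |a_j^(3) - y_j|^2
    return 0.5 * np.sum((forward(T1, T2, x)[3] - y) ** 2)

rng = np.random.default_rng(1)
T1 = rng.normal(size=(4, 3))               # Theta^(1)
T2 = rng.normal(size=(2, 4))               # Theta^(2)
x, y = rng.normal(size=3), rng.normal(size=2)

z2, a2, z3, a3 = forward(T1, T2, x)
fp = lambda z: f(z) * (1 - f(z))           # f'(z) for the sigmoid

# Multiplying by diag(f'(z)) is just an elementwise product (Q1):
d3 = (a3 - y) * fp(z3)                     # delta^(3)
grad_T2 = np.outer(d3, a2)                 # gradient wrt Theta^(2)
d2 = (T2.T @ d3) * fp(z2)                  # delta^(2)
grad_T1 = np.outer(d2, x)                  # gradient wrt Theta^(1)

# Central-difference check on one weight of Theta^(1)
eps = 1e-6
Tp, Tm = T1.copy(), T1.copy()
Tp[1, 2] += eps
Tm[1, 2] -= eps
numeric = (loss(Tp, T2, x, y) - loss(Tm, T2, x, y)) / (2 * eps)
print(abs(numeric - grad_T1[1, 2]))        # tiny: analytic and numeric agree
```

Replacing `fp(z3)` with `fp(a3)` makes the finite-difference check fail, which is exactly the typo the answer points out in the article.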
2022/11/23
711
2,212
<issue_start>username_0: In papers and other material regarding diffusion models, the forward diffusion process is defined by adding a small amount of Gaussian noise to an image $x\_0$ for $T$ time steps. In each time step the noise has a variance of $\beta\_t$. This process produces a sequence of noisy samples: $x\_1, x\_2, x\_3... x\_T$ such that: $q(x\_t|x\_{t-1}) = N(x\_t; \sqrt{1-\beta\_t} x\_{t-1}, \beta\_tI)$ I don't understand why this is the $q(x\_t|x\_{t-1})$ distribution. When adding a constant $c$ to a normal random variable with mean $\mu$ and variance $\sigma^2$ we get a new random variable with the same variance and a mean of $c+\mu$. Therefore, I expect $q(x\_t|x\_{t-1})$ to be: $q(x\_t|x\_{t-1}) = x\_{t-1} + \epsilon\_t = N(x\_t; x\_{t-1}, \beta\_t I)$ where $\epsilon\_t=N(\epsilon\_t; 0, \beta\_t I)$ Any help will be appreciated.<issue_comment>username_1: The image data $\mathbf{x}\_{t-1}$ is not a constant $\mathbf{c}$. It's itself a distribution. Different permutations of pixels have different probabilities. Upvotes: 0 <issue_comment>username_2: The relationship between $x\_t$ and $x\_{t-1}$ is as follows: $$ x\_t = \sqrt{1-\beta\_t}x\_{t-1}+\sqrt{\beta\_t}\epsilon\_t,\quad \epsilon\_t\sim\mathcal{N}(0,I). $$ Not only is a small amount of noise added, the original image is also scaled down slightly. Upvotes: 1 <issue_comment>username_3: The purpose of $q(x\_t|x\_{t-1})$ is: given that the random variable $x\_{t-1}$ is sampled to be a specified value, what is the probability density function for $x\_t$? So we're using the *sampled* value of $x\_{t-1}$ to calculate the mean of the probability density function, which is $\sqrt{1-\beta\_t} x\_{t-1}$. We're *not* adding $x\_{t-1}$ and $\epsilon\_t$. Instead, we are steering the mean towards zero a little bit each time step (because $\sqrt{1-\beta\_t}<1$), while adding a little noise. 
To implement this, we execute the sampling of $x\_t$ by taking the mean (the constant $\sqrt{1-\beta\_t} x\_{t-1}$) and adding a sample from the standard normal distribution scaled by $\sqrt{\beta\_t}$. Of course, the probability chain rule then provides us the desired (nonconditional) pdf for $x\_T$ at the final time. Upvotes: 0
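The scaling these answers point out is what makes the chain converge to a standard normal. A small numerical check (a sketch; the linear $\beta$ schedule is an arbitrary common choice, not from the question): propagating the mean and variance of a scalar $x\_t$ through $x\_t = \sqrt{1-\beta\_t}\,x\_{t-1}+\sqrt{\beta\_t}\,\epsilon\_t$ reproduces the known closed form $q(x\_t|x\_0)=\mathcal{N}(\sqrt{\bar\alpha\_t}\,x\_0,\,(1-\bar\alpha\_t)I)$ with $\bar\alpha\_t=\prod\_{s\le t}(1-\beta\_s)$, whereas without the $\sqrt{1-\beta\_t}$ factor the mean would stay at $x\_0$ and the variance would grow as $\sum\_t\beta\_t$.

```python
import numpy as np

T = 1000
betas = np.linspace(1e-4, 0.02, T)   # a common linear beta schedule
x0 = 3.0                             # one scalar "pixel" value

# Propagate mean/variance through x_t = sqrt(1-b_t) x_{t-1} + sqrt(b_t) eps_t
mean, var = x0, 0.0
for b in betas:
    mean = np.sqrt(1.0 - b) * mean   # mean scales by sqrt(1-b_t)
    var = (1.0 - b) * var + b        # var scales by (1-b_t), plus b_t of noise

alpha_bar = np.prod(1.0 - betas)     # \bar{alpha}_T
print(np.isclose(mean, np.sqrt(alpha_bar) * x0))  # True
print(np.isclose(var, 1.0 - alpha_bar))           # True
print(mean, var)  # mean near 0, var near 1: x_T approaches N(0, I)
```

The variance recursion also makes username_3's point concrete: the mean is steered toward zero each step while noise is injected, and the two effects balance so that the marginal variance saturates at $1$ instead of growing without bound.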
2022/11/24
1,113
4,769
<issue_start>username_0: I've trained my artificial neural network, and, as per standard practice, I've picked out the one neural network throughout training that did the best on my validation dataset. That is, the neural network learned from the training data, and generalized to the validation data. However, when I run the neural network on the test data, it performs poorly. **What should I do next?** From my understanding of the theoretical framework, the goal of validation is to ensure that that the network's parameters don't overfit to the training set. (If they do, we'll detect it because the validation score will be bad.) However, the goal of an additional test dataset set beyond the validation dataset is to ensure that our hyperparameters don't overfit. In most scenarios, we train multiple models with different learning rates, etc., and pick the one that does the best on the validation dataset. However, we might just be cherry picking the one that does the best for that validation dataset and doesn't generalize to a test set. So, we add an extra test set to detect if that happens. My question is about an analogous case, except for that my hyperparameter is just the training step of the model. I picked out the model that's checkpointed as having the highest validation score. But, when I run it on the test set, it does poorly, showing that I've cherry picked the model with the highest validation score but it still doesn't generalize. **What do I do next in this scenario?** Do I just follow the same advice from [these](https://stats.stackexchange.com/questions/365778/what-should-i-do-when-my-neural-network-doesnt-generalize-well) [questions](https://stats.stackexchange.com/questions/392676/my-neural-net-can-overfit-but-not-generalize) about overfitting, or is this a special case because the model does seem to generalize to the validation dataset? 
(Note: this question is different from regular overfitting because it's about hyperparameters overfitting the validation set, not regular parameters overfitting the train set. I've also looked at [these guidelines](https://stats.meta.stackexchange.com/questions/5273/whats-the-best-way-to-answer-my-neural-network-doesnt-work-please-fix-quest) but they don't seem to apply to this more general theoretical question.)<issue_comment>username_1: Check whether the CV and test sets are temporally stable or change over time. Additionally, check whether they both follow the same distribution. Upvotes: 1 <issue_comment>username_2: When splitting a dataset into a train, validation & test-set you make an important assumption on the data you have. This is called the i.i.d assumption. You assume that all observations are: 1. sampled independently 2. identically distributed Let's explore further: **Independence assumption:** There should be no known correlations between samples in train and test set. If there are known correlations in the subsets, then account for this! An example of this could be: a model that predicts the job-satisfaction for Twitter users (so one prediction per user). If a user has tweets present in all subsets, then there is 'information leakage', so the test score will be significantly higher (but unreliable). **Identically Distributed assumption:** Your train, validation & test set should be identically distributed, so all samples should come from the same distribution. If this is not the case, then your model will not generalise well to your test set. In your case, I believe your train & validation set to be identically distributed, but your test set is not. Another example: Let's say we train a model that wants to predict how many people take their bike to work on a given weekday (so the only feature of this model is: the day of the week).
If our train set only contains data from the summer and our test set only contains data from the winter, then the model will probably not generalise well to the test data. During the winter people would be less likely to take their bike to work than during the summer, so the underlying distributions are not the same. To have a better performance you could for example: * Add data, which is not always possible (e.g. a dummy variable 'season') * Change the split *Changing the split* Note that if the actual problem is that your test data does not come from the same distribution as 'real, unseen data' (but your train & validation data do), then doing this might make your model less powerful to generalise to unseen data than it is right now. So in your case, it could be a violation of the identically distributed assumption. It could be useful to elaborate further on the nature of the data you are using. I believe this might help you to get a clear answer. Upvotes: 3 [selected_answer]
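A practical way to check the "identically distributed" assumption discussed above is to compare a feature's empirical distribution across splits. Below is a minimal pure-Python sketch of the two-sample Kolmogorov-Smirnov statistic; in practice you would use a library routine such as `scipy.stats.ks_2samp`, and the example data here are made up:

```python
def ks_statistic(a, b):
    """Two-sample Kolmogorov-Smirnov statistic: the largest absolute
    difference between the empirical CDFs of samples a and b."""
    d = 0.0
    for v in sorted(set(a) | set(b)):
        cdf_a = sum(1 for x in a if x <= v) / len(a)  # empirical CDF of a at v
        cdf_b = sum(1 for x in b if x <= v) / len(b)  # empirical CDF of b at v
        d = max(d, abs(cdf_a - cdf_b))
    return d

# Example: a 'summer' train feature vs. a 'winter' test feature.
summer_counts = [40, 42, 45, 47, 50]
winter_counts = [10, 12, 15, 16, 20]
# Disjoint distributions give a statistic near 1; identical samples give 0.
```

A large statistic flags that the two splits probably do not come from the same distribution.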
2022/11/26
716
3,107
<issue_start>username_0: I was reading the [ALBERT](https://arxiv.org/abs/1909.11942) paper and saw that they use the same parameters in each layer, hence reducing the number of unique parameters. From what I could gather, it seems that if all the layers share, say, parameters W, then the parameters of different layers would be updated differently, which would destroy the parameter sharing. So one way I can think of is: we have only one set of parameters W, and after each layer completes its weight update, W changes to W' and the preceding layers now use W'. Is this the right way to think about it, or does something else happen under the hood?
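The sharing scheme the question asks about does not involve per-layer updates: there is a single weight tensor, each application of the shared layer contributes a gradient term that backpropagation sums into that one tensor, and a single update is applied. A scalar toy sketch (illustrative names, not ALBERT's actual code):

```python
def forward(w, x, depth):
    """Apply the same shared 'layer' f(h) = w * h, `depth` times."""
    h = x
    acts = [h]                    # cache the input to each layer application
    for _ in range(depth):
        h = w * h
        acts.append(h)
    return h, acts

def grad_shared_w(w, x, depth):
    """d(output)/dw: one gradient, accumulated over every application."""
    _, acts = forward(w, x, depth)
    grad, upstream = 0.0, 1.0
    for i in reversed(range(depth)):
        grad += upstream * acts[i]  # this application's contribution, summed in
        upstream *= w               # backprop through the shared weight
    return grad
```

Since the output is $w^{\text{depth}} x$, the accumulated gradient matches the analytic $\text{depth} \cdot w^{\text{depth}-1} x$: one tensor, one summed gradient, one update.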
2022/11/27
963
3,206
<issue_start>username_0: I am training a neural network where the target data is a vector of angles in radians (between $0$ and $2\pi$). I am looking for study material on how to encode this data. Can you supply me with a book or research paper that covers this topic comprehensively?<issue_comment>username_1: The main problem with simply using the values $\alpha \in [0, 2\pi]$ is that semantically $0 = 2\pi$, but numerically $0$ and $2\pi$ are maximally far apart. A common way to encode this is by a vector of $\sin$ and $\cos$. It perfectly conveys the fact that $0 = 2\pi$, because: $$ \begin{bmatrix} \sin(0)\\ \cos(0) \end{bmatrix} = \begin{bmatrix} \sin(2\pi)\\ \cos(2\pi) \end{bmatrix} $$ This encoding essentially maps the angle values onto the 2D unit circle. In order to decode this, you can calculate $$\text{atan}2(a\_1, a\_2) = \alpha,$$ where $a\_1 = \sin(\alpha)$ and $a\_2 = \cos(\alpha)$. [Here](https://www.avanwyk.com/encoding-cyclical-features-for-deep-learning/) is a nice detailed explanation and here are two references, where this is applied: * [Heffernan et al.](https://academic.oup.com/bioinformatics/article/33/18/2842/3738544?login=false#118817095) * [Pizzati et al.](https://arxiv.org/abs/2103.06879) **EDIT** As it was noted in the comments: The values $\sin(\alpha)$ and $\cos(\alpha)$ are not independent and the following naturally holds: $\sqrt{\sin(\alpha)^2 + \cos(\alpha)^2}= 1$, i.e. the euclidean norm is one. In a situation where your Neural Network predicts the sin and cos values, this condition isn't necessarily true. 
Therefore, you should consider adding a regularization term to the loss that guides the neural network toward outputting valid values (with unit norm), which could look like this: $$ r\_\lambda\left(\hat{y}\_1, \hat{y}\_2\right) = \lambda \left(1 - \sqrt{\hat{y}\_1^2 + \hat{y}\_2^2}\right), $$ where $\hat{y}\_1$ and $\hat{y}\_2$ are the sin and cos outputs of the network respectively and $\lambda$ is a scalar that weights the regularization term against the loss. I found [this](https://arxiv.org/abs/1805.06485) paper where such a regularization term is used (see Sec. 3.2) to get valid quaternions (quaternions must also have unit norm). They found that many values work for $\lambda$ and they settle for $\lambda = 0.1$. Upvotes: 4 <issue_comment>username_2: You might want to look at the von Mises distribution, which defines a probability distribution over angles. See [Pattern Recognition and Machine Learning](https://www.microsoft.com/en-us/research/uploads/prod/2006/01/Bishop-Pattern-Recognition-and-Machine-Learning-2006.pdf), <NAME>, Appendix B (pg 693), or alternatively [Wikipedia](https://en.wikipedia.org/wiki/Von_Mises_distribution) has an article on this. You could certainly use this as a loss function in a neural network. My only reservation is that the input parameter $\theta\_o$ is itself periodic, which might not play well with standard neural network architectures, so the previous answers are also worth considering. I only mention this because, if you are interested in distributions over angles, the von Mises distribution is something you should probably be aware of. Upvotes: 0
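The sin/cos encoding, atan2 decoding, and unit-norm regularizer discussed in this thread can be sketched as follows (function names are illustrative):

```python
import math

TWO_PI = 2 * math.pi

def encode_angle(alpha):
    """Map an angle onto the unit circle: alpha -> (sin(alpha), cos(alpha))."""
    return (math.sin(alpha), math.cos(alpha))

def decode_angle(s, c):
    """Recover the angle from (sin, cos) via atan2, normalized to [0, 2*pi)."""
    return math.atan2(s, c) % TWO_PI

def norm_penalty(y1, y2, lam=0.1):
    """Regularization term lambda * (1 - ||(y1, y2)||) from the answer above."""
    return lam * (1.0 - math.hypot(y1, y2))
```

Note that `atan2` ignores the magnitude of its arguments, so decoding still works when the network's two outputs are not exactly unit-norm; the penalty merely nudges them toward the circle.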
2022/11/30
941
3,832
<issue_start>username_0: I have seen it stated that, as a rule of thumb, a backward pass in a neural network should take about twice as long as the forward pass. Examples: * From [DeepSpeed's Flops Profiler](https://www.deepspeed.ai/tutorials/flops-profiler/#flops-measurement) docs, the profiler: > > measures the flops of the forward pass of a module and the flops of the backward pass is estimated as `2` times of that of the forward pass > > > * Page 7 of <NAME>'s [Machine Learning notes](https://sites.krieger.jhu.edu/jared-kaplan/files/2019/04/ContemporaryMLforPhysicists.pdf) in which it is claimed that the backward pass requires twice the number of matrix-multiplies needed in the forward pass, for a vanilla neural network. I unfortunately don't understand the argument made in Kaplan (not sure where the "*two*" in the "*two matrix multiplications per layer*" he refers to comes from). In particular, any such rule would also seem to be very implementation dependent, depending on whether local gradients are computed and cached during the forward pass, for instance. But I guess there is a standard implementation of backprop that makes this unambiguous? If anyone can expand on the logic behind this lore or point me towards other references, I would be grateful.<issue_comment>username_1: The "two" in the "two matrix multiplications per layer" has nothing to do with any value cached during the preceding feedforward pass (in fact, the forward pass doesn't need any local gradient); the forward pass needs only "one" matrix multiplication per layer, namely the term $Wx$ in the standard input/output vectorial equation $a=\phi(Wx+b)$ of each layer. It may then be easier to understand that backprop, applying automatic differentiation from the last layer onwards to calculate the derivatives of the loss function w.r.t. each weight parameter in the current layer, needs two matrix multiplications, as shown in equation (1.4.6) in your second reference.
The first multiplication is $\phi' \times \phi$, where the former is a column vector and the latter a row vector, so the result is a matrix. The second matrix multiplication occurs inside the calculation of $\phi'$, which is essentially an error-factor vector for the input activation vector from the previous hidden layer: it is obtained by multiplying the output target-error vector with a diagonal matrix of the last layer's activation-function derivatives, evaluated at each node's net input from the forward pass, if you go through the details of backprop. Upvotes: 1 <issue_comment>username_2: This is a general fact of automatic/algorithmic differentiation. In the forward transport of tangents as well as any backward transport of gradients, each multiplication node splits in two on the derivative level $$\begin{array}{lcl} \text{node}&:&v=u·w,\\ \text{forward}&:&\dot v = \dot u·w+u·\dot w,\\ \text{backward}&:&\bar u = \bar v·w,~~~\bar w =\bar v·u, \end{array}$$ where $\bar u=\frac{\partial L}{\partial u}$ etc. The evaluation of scalar nodes gives an equally complex operation for the node derivative and an additional scalar multiplication for the forward or backward sweep, $$\begin{array}{lcl} \text{node}&:&v=\phi(u),\\ \text{forward}&:&\dot v = \phi'(u)·\dot u,\\ \text{backward}&:&\bar u = \bar v·\phi'(u), \end{array}$$ which in combination is also a factor of about 2. The structure of a layered neural network does not change the occurrence of these operations; it only reduces the organizational overhead. Overhead in general will possibly make the backward sweep slower than the forward sweep, as a tape (of some kind) of operations and values has to be constructed and read backwards. In NN this again is greatly simplified, reducing the difference, due to the layered structure already incorporating the tape. Upvotes: 0
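The two-matmuls-per-layer count can be made concrete for a plain linear MLP: the forward pass performs one matrix multiplication per layer ($h = Wx$), while the backward pass performs two per layer (one for $\partial L/\partial W = \delta x^\top$ and one to propagate the error via $W^\top \delta$). A NumPy sketch that simply counts them (illustrative; biases and nonlinearities omitted since they add only elementwise work):

```python
import numpy as np

def count_matmuls(layer_dims, seed=0):
    """Run forward and backward passes of a linear MLP, counting matmuls."""
    rng = np.random.default_rng(seed)
    Ws = [rng.standard_normal((n, m)) for m, n in zip(layer_dims, layer_dims[1:])]
    x = rng.standard_normal((layer_dims[0], 1))

    fwd = 0
    cache = [x]
    for W in Ws:                       # forward: h = W @ h, one matmul per layer
        cache.append(W @ cache[-1])
        fwd += 1

    bwd = 0
    delta = np.ones_like(cache[-1])    # dL/d(output) for L = sum(output)
    for W, h_in in zip(reversed(Ws), reversed(cache[:-1])):
        dW = delta @ h_in.T            # matmul 1: gradient w.r.t. the weights
        delta = W.T @ delta            # matmul 2: propagate the error backwards
        bwd += 2
    return fwd, bwd
```

Since each matmul pair operates on matrices of the same sizes as the forward one, the FLOP ratio, not just the matmul count, also comes out near 2.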
2022/12/04
2,183
8,753
<issue_start>username_0: One of the innovations with OpenAI's ChatGPT is how natural it is for users to interact with it. What is the technical enabler for ChatGPT to maintain the context of previous questions in its answers? For example, ChatGPT understands a prompt of "tell me more" and expands on its previous answer. Does it use activations from previous questions? Is there a separate input for the context? How does it work?<issue_comment>username_1: Based on [an answer by OpenAI](https://help.openai.com/en/articles/6787051-does-chatgpt-remember-what-happened-earlier-in-the-conversation), it appears that the illusion of understanding context is created by the model's capacity to accept very long input sequences. The OpenAI FAQ states that approximately 3000 words can be given as input. This together with the fact that [GPT-3 was trained to produce text that continues a given prompt](https://en.wikipedia.org/wiki/GPT-3) could explain the context feature. In practice, each prompt is probably extended with the previous outputs and prompt, as much as the input sequence length allows. So, all of the context is actually in the prompt. **Edit 18.2.2023** After spending some time with large language models and reading up the theory, I think my old answer is an understatement of ChatGPT's capabilities. It is likely that there are several engineering approaches to improve the context after the maximum content length is exceeded. These include (but are probably not limited to) 1. Using language models to summarize the conversation thus far, and using that as context 2. Using language models to search for the relevant context from the previous discussion (can be done by embedding questions and answers and doing a distance-based lookup in vector space), and feeding those as context with clever prompting like "If this information improves your answer, update your answer accordingly".
Upvotes: 6 [selected_answer]<issue_comment>username_2: Based on nothing but my own experience trying to build a similar chatbot using text-davinci-003, I think they are using the model itself to summarize the conversation, then feeding that summary back into the prompt. I get good results when doing this - though obviously the team behind ChatGPT does a better job. Here is an example using the discussion under the question above: > > **You are a user named "AI" in an online forum. Below is a conversation > about a question. Please add your contribution to the conversation. > > Question: """ > One of the innovations with OpenAI's ChatGPT is how > natural it is for users to interact with it. > > What is the technical enabler for ChatGPT to maintain the context of > previous questions in its answers? For example, ChatGPT understands a > prompt of "tell me more" and expands on it's previous answer. > > Does it use activations from previous questions? Is there a separate > input for the context? How does it work? """ > > Rainb: """ > > ChatGPT says: > "ChatGPT uses a type of natural language processing called > "transformer" architecture, which is a type of neural network that is > designed to process and generate language. One of the key features of > transformer architecture is that it allows the model to maintain a > "context" of previous inputs. This is achieved through the use of > self-attention mechanisms, which allow the model to weight different > parts of the input differently based on how relevant they are to the > current context. This allows the model to generate more nuanced and > context-aware responses." """ > > username_1: """ > It does, indeed, but the > transformer architecture alone does not explain this capability. > Maintaining context requires something else. In fact, I think being a > transformer is irrelevant for this purpose. """ > > Rexcirus: """ > My best > guess is that the whole conversation is fed as input for the next > reply. 
But looking into openai labelling guidance, it looks that the > data collection is single Q&A only. """ > > Rexcirus: """ > If my guess is > correct, when the conversation is very long ChatGPT should start > forgetting previous bits, due to the maximum sequence size. """ > > yters: > """ > The ability to maintain context is very mysterious. What is most > mysterious to me is that if I continuously converse with ChatGPT it > has no problem maintaining context. On the other hand, if I break in > the middle of a conversation, and then come back to it some time > later, it's as if ChatGPT has had a memory wipe and has a lot of > trouble regaining the thread of the conversation. Since ChatGPT is a > neural network where the output is generated based on the previous > 2048 input tokens, the amount of time between responses should have no > bearing on the output generated. """ > > username_1: """ > Great observation > @yters. Could it be a bug: the interface correctly shows the previous > conversation, but the service fails to send this to the OpenAI APIs > after some time. """** > > *AI: """ > It is possible that the OpenAI API may > be designed to automatically discard the context after some time to > avoid the system being overburdened with large conversations. Another > possibility is that the system may be storing the context, but using a > different mechanism than the transformer architecture to access it.* > > > Now, I'll ask it to summarize the conversation so far: > > **You are a user named "AI" in an online forum. Below is a conversation > about a question. Please summarize the interaction so far.** > > > [I then pasted in everything from the Question down in the previous prompt/response] > > > *The conversation so far has focused on understanding the technical > enablers that allow ChatGPT to maintain context in its answers. Rainb > has suggested that this is done through transformer architecture and > self-attention mechanisms. 
Rexcirus has suggested that the whole > conversation may be fed as input for the next reply. Yters has then > posed the interesting question of why, when the conversation is broken > off and then resumed, ChatGPT has difficulty regaining the thread. > Milez has suggested that this could be a bug in the interface.* > > > Finally, I delete most of the conversation except for the summary and last few responses and alter the prompt a little to alert it to the presence of the summary: > > **You are a user named "AI" in an online forum. Below is a summary of > the conversation so far about a question, and the last few responses > to that question. Please add your contribution to the conversation.** > > > Summary: The conversation so far has focused on understanding the > technical enablers that allow ChatGPT to maintain context in its > answers. Rainb has suggested that this is done through transformer > architecture and self-attention mechanisms. Rexcirus has suggested > that the whole conversation may be fed as input for the next reply. > Yters has then posed the interesting question of why, when the > conversation is broken off and then resumed, ChatGPT has difficulty > regaining the thread. Milez has suggested that this could be a bug in > the interface. > > > yters: """ > > The ability to maintain context is very mysterious. What is > most mysterious to me is that if I continuously converse with ChatGPT > it has no problem maintaining context. On the other hand, if I break > in the middle of a conversation, and then come back to it some time > later, it's as if ChatGPT has had a memory wipe and has a lot of > trouble regaining the thread of the conversation. Since ChatGPT is a > neural network where the output is generated based on the previous > 2048 input tokens, the amount of time between responses should have no > bearing on the output generated. """ > > username_1: """ > > Great observation > @yters. 
Could it be a bug: the interface correctly shows the previous > conversation, but the service fails to send this to the OpenAI APIs > after some time. """ > > > *AI: That is an interesting point, @Milez. It could be that the > interface does not accurately display the previous conversation to the > API, or that there is a bug in the API that causes it to not be able > to accurately generate a response with the 2048 input tokens when the > conversation is broken off and then resumed. It would be interesting > to investigate further.* > > > I've had some success with this method, though it does still lose detail. Improving the prompt to create a more detailed summary helps. It's amazing how much of a conversation is full of repetitive filler, and how concisely it can be summarized. Maybe this is some kind of lingual compression? :) Upvotes: 2
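The "all context is in the prompt" mechanism discussed in this thread can be sketched as a rolling window over the conversation. Here whitespace-separated words stand in for model tokens, which is only a crude approximation; real services count tokens with the model's tokenizer:

```python
from collections import deque

def build_prompt(history, new_message, budget):
    """Append a turn, then drop the oldest turns until the total word
    count (a crude stand-in for model tokens) fits within the budget."""
    history.append(new_message)
    while len(history) > 1 and sum(len(m.split()) for m in history) > budget:
        history.popleft()              # the model 'forgets' the oldest turn
    return "\n".join(history)

history = deque()
build_prompt(history, "User: tell me about diffusion models", budget=12)
build_prompt(history, "Bot: they add noise step by step", budget=12)
prompt = build_prompt(history, "User: tell me more", budget=12)
# By now the first turn has been dropped to keep the prompt within budget.
```

This also illustrates the forgetting behavior observed above: once the window overflows, earlier turns are simply no longer part of the input.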
2022/12/07
763
3,405
<issue_start>username_0: I'll preface this by saying that I have little experience in artificial intelligence, so this might be a naive question. However, in light of the recent controversy surrounding ChatGPT's inability to say "I don't know" and its tendency to instead make things up, I couldn't help but wonder: why not simply train a deep learning algorithm, even as simple as a large ANN, on all the data that ChatGPT was trained on plus a collection of ChatGPT responses manually labelled as accurate or inaccurate? In fact, one might even imagine a GAN system, with one NN taking a ChatGPT response as input and an improved response/changes to response as output, and the other assessing the veracity of the improved response. Compared to what ChatGPT is already capable of, to a layman like me, this looks like a trivial task - making sure the input is consistent with the right portion of the training data, or with some comparatively simple patterns within said data, seems infinitely shorter of a task than abstract or original thinking. So why was such a system not implemented? It's just about the most glaring solution to this problem possible, so there must be something wrong with it if OpenAI still haven't implemented it. Which begs the question: where does it fall apart? I tried looking for an answer to this question online, but haven't found anything.<issue_comment>username_1: You are massively underestimating the difficulty of the task; you would need: * A dataset containing labels of correct/incorrect, at a similar scale (billions of data points). * A definition of correct/incorrect, which by itself is difficult; just think that some people believe anything that does not fit their world view to be fake news or lies. Then consider, who would label this dataset? I don't think there is a train set containing this kind of data. You would have to gather text and have a human label it at billion scale, which would take a lot of time and effort.
More importantly, there could be controversial topics where there is no clear definition of right or wrong. What is the label in this case? Also, there is a huge class imbalance: you can have some data points for correct labels, but there are infinite ways to be incorrect. So any dataset you have would be biased towards the correct class. The point of machine learning is generalization; I don't think you can just grab some random data and generalize this idea to absolute correct/incorrect. Even doing this for neural networks with images is very difficult. On generalization, you should also consider that even if you somehow train a classifier to output correct/incorrect, these predictions themselves could be incorrect (outputting correct when it is actually incorrect and vice versa), so you do not really solve the problem. Upvotes: 3 <issue_comment>username_2: Correcting the output of one NN with a second NN is not a very good approach. If you have extra data which is used to train the second NN exclusively, why not use this data to train the original NN in the first place? And if there is no extra data available, then it's unlikely that the second NN will be effective at correcting the first one. The original NN makes mistakes when it needs to extrapolate too much in the absence of a good match with the training data. The second NN will run into the same extrapolation uncertainty again. Upvotes: 0
2022/12/08
1,527
5,569
<issue_start>username_0: ChatGPT is a language model. As far as I know, and if I'm not wrong, it gets text as tokens and word embeddings. So, how can it do math? For example, I asked: > > ME: Which one is bigger 5 or 9. > > ChatGPT: In this case, 9 is larger than 5. > > > One can say, GPT saw numbers as tokens and in its training dataset there were some 9s that were bigger than 5s. So, it doesn't have actual math understanding and just sees numbers as some tokens. But I don't think that is true, because of this question: > > ME: Which one is bigger? 15648.25 or 9854.2547896 > > ChatGPT: In this case, 15648.25 is larger than 9854.2547896. > > > We can't say it actually saw the token of `15648.25` to be bigger than the token of `9854.2547896` in its dataset! So how does this language model understand the numbers?<issue_comment>username_1: I think that the dataset is so large and the model so well trained that it picked up the probabilistic correlation between a number's magnitude and the length of its digits before the decimal point, and then the influence of each individual digit on the probability of one number being larger than another. The concrete example does not have to be in the dataset; it predicts the correct outcome because the relation of one number being larger than another, and the difference in digits and length of those, is sufficiently present in the dataset. Upvotes: 3 <issue_comment>username_2: Simple answer, ChatGPT is actually human writers with some kind of autocomplete to speed things up. This is [standard practice](https://www.bloomberg.com/news/articles/2016-04-18/the-humans-hiding-behind-the-chatbots) for [AI companies](https://datafloq.com/read/ai-wizard-of-oz-pay-no-attention-human/) these days, a "fake it till you make it" approach where they [use humans](https://www.inc.com/jessica-stillman/meet-the-people-who-teach-chatbots-to-sound-like-humans.html) to fill the gaps in the AI in the hopes that down the road they'll automate humans out of the product.
Common enough for an [academic paper](https://journals.sagepub.com/doi/10.1177/20539517211016026) to be written on the topic. So, there is plenty of industry precedent for OpenAI to be using humans to help craft the responses. Plus, technically OpenAI is not "faking" anything. It is the media and bloggers who think ChatGPT is a pure AI system. OpenAI has made no such claim itself, and the opposite is implied by its [InstructGPT whitepaper](https://cdn.openai.com/papers/Training_language_models_to_follow_instructions_with_human_feedback.pdf): > > Step 1: Collect demonstration data, and train a supervised policy. Our > labelers provide demonstrations of the desired behavior on the input > prompt distribution (see Section 3.2 for details on this > distribution). We then fine-tune a pretrained GPT-3 model on this data > using supervised learning > > > Additionally, ChatGPT is in "research mode" according to the website, which implies there are still humans training the system during the chats, as described in the quote above. Final note, I find it amusing no one considers this alternative plausible, as if it were somehow more complicated to have humans tweak chatbot responses than to create an AI with apparent human level understanding that ChatGPT exhibits. UPDATE: ChatGPT confirms OpenAI team curating its responses =========================================================== Turns out ChatGPT is indeed human curated, by open admission. During [this](https://github.com/username_2/transcripts/blob/main/ChatGPT_confused_by_visual_sentence_structure.txt#L714) conversation ChatGPT outright states the OpenAI team filters and edits the GPT generated responses. > > ...the response you are receiving is being filtered and edited by the OpenAI team, who ensures that the text generated by the model is coherent, accurate and appropriate for the given prompt. 
> > > Apparently, the fact that OpenAI actively curates ChatGPT's responses is indirectly implied in the documentation [here](https://beta.openai.com/docs/guides/safety-best-practices/human-in-the-loop-hitl). > > Human in the loop (HITL): > Wherever possible, we recommend having a > human review outputs before they are used in practice. This is > especially critical in high-stakes domains, and for code generation. > Humans should be aware of the limitations of the system, and have > access to any information needed to verify the outputs (for example, > if the application summarizes notes, a human should have easy access > to the original notes to refer back). > > > So, that explains that :) Upvotes: -1 <issue_comment>username_3: Adding on to username_1's answer, it is interesting to note that for larger numbers with similar digits ChatGPT is unable to make any useful distinctions. For instance: **Me:** Which number is bigger: 1234.12 or 1243.12 **ChatGPT:** Both numbers are equal. Upvotes: 3 <issue_comment>username_4: The apparent ability of ChatGPT (in particular when using the GPT-4 model) to solve certain mathematical problems is due to the amount of training and the amount of parameters of these machine learning models. ChatGPT or other large language models do not have explicit rules for solving mathematical problems. The following 2022 paper describes that such capabilities of transformer-based language models occur when a certain threshold of parameter quantity is exceeded: <https://arxiv.org/pdf/2206.07682.pdf> This is also the reason why they excel at some maths problems and fail at others, which can be very similar. Upvotes: 1
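username_1's length-of-integer-part hypothesis and username_3's failure case can both be reproduced with a toy heuristic. This is only an analogy for a statistical cue, not how the model actually computes, and the function name is made up:

```python
def longer_integer_part(a, b):
    """Guess which numeric string is larger using only the number of digits
    before the decimal point, the statistical cue hypothesized above.
    Returns 'a', 'b', or 'tie' when the cue alone cannot decide."""
    la = len(a.split(".")[0])
    lb = len(b.split(".")[0])
    if la != lb:
        return "a" if la > lb else "b"
    return "tie"
```

The cue gets `15648.25` vs `9854.2547896` right (5 digits vs 4), but ties on `1234.12` vs `1243.12`, echoing ChatGPT's "both numbers are equal" answer; deciding that case needs per-digit information.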
2022/12/08
3,779
13,225
<issue_start>username_0: As opposed to [How does ChatGPT know math?](https://ai.stackexchange.com/q/38217/145), I've been seeing some things floating around the Twitterverse about how ChatGPT can actually be very bad at math. For instance, I asked it "If it takes 5 machines 5 minutes to make 5 devices, how long would it take 100 machines to make 100 devices?" The conversation went [like this](https://i.stack.imgur.com/XDTem.png): > > **Mithical**: If it takes 5 machines 5 minutes to make 5 devices, how long would it take 100 machines to make 100 devices? > > **ChatGPT**: If it takes 5 machines 5 minutes to make 5 devices, then it would take 100 machines 100 minutes to make 100 devices. > > **Mithical**: That's not right. Check carefully and try again. > > **ChatGPT**: I apologize if my response was not correct. Can you please provide more information or clarify your question? I may have misunderstood the scenario you described. > > **Mithical**: It takes a machine 5 minutes to make a device. How long would it take 100 machines to make 100 devices? > > **ChatGPT**: If it takes a single machine 5 minutes to make a single device, then it would take 100 machines 5 minutes to make 100 devices. Therefore, it would take a total of 500 minutes for 100 machines to make 100 devices. > > > Now, unless I'm misunderstanding something here, the answer is 5 minutes. ChatGPT first makes the intuitive mistake of 100, that a human might make as well, and then goes on to (correctly, as far as I understand) say it's 5 minutes... but concludes in the same response that it's then 500 minutes. Why is this AI so bad at math? Computers are generally supposed to be good at math. 
Why does this model make such simple logical mistakes?<issue_comment>username_1: (Check out my closely related answer to a similar question [here](https://ai.stackexchange.com/questions/25210/can-we-use-ml-to-do-anything-else-other-than-predicting-in-the-case-of-mathemat/25240#25240)) **Why is ChatGPT bad at math, while it is very good at other stuff?** The problem comes down to the age-old problem of learning vs understanding. On a high level, your question is very philosophical. The problem is that the model learns everything present in the data. However, throw it a slightly different mathematical problem it has not seen before, and it fails to answer spectacularly easy questions. Some of the best examples are papers that try to build mathematics AIs. They feed the model an endless amount of data for simple addition, but give it a slightly larger number that it has never seen before, and it cannot solve it. These AIs learn their data well, but they cannot extrapolate strict 'rules' from the data. Addition is a simple 'rule', but the model cannot comprehend this rule. Neural networks cannot learn rules simply because they are not designed to do so. They multiply and add up inputs, weights and biases, and they are not designed to learn 'if, else' logic. An additional train of thought which I very much like is the following: think of how precise a very small multi-layer perceptron would have to be to implement simple 2-digit addition (given 4 inputs). Should be doable, right? Now think of how you would have to learn the weights of a model with 1,000,000 parameters to do the exact same thing, using a large dataset and a NN optimizer. It would just start learning how to approximate the answers, but it will never converge to a state where it learns the rule of addition. Upvotes: 3 <issue_comment>username_2: ChatGPT is able to create *well-formed* sentences which contain phrases that are *fitting* for the input.
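The memorization failure described above can be illustrated with a deliberately naive toy model. This is an illustrative sketch only: a nearest-neighbour lookup stands in for the network (an assumption made for simplicity; a trained network interpolates more smoothly but extrapolates similarly badly):

```python
# Toy "model" that memorizes training pairs instead of learning the rule of
# addition: it answers any query by recalling the closest problem it has seen.
train = {(a, b): a + b for a in range(10) for b in range(10)}

def predict(a, b):
    # Find the memorized problem nearest to the query (squared distance).
    nearest = min(train, key=lambda p: (p[0] - a) ** 2 + (p[1] - b) ** 2)
    return train[nearest]

print(predict(3, 4))    # 7  -> in-distribution, looks like it "knows" addition
print(predict(50, 50))  # 18 -> out-of-distribution: the closest memory is 9 + 9
```

In-distribution queries look competent; the out-of-distribution query exposes that no rule of addition was ever learned.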
It has rules extracted from its data, but those are not rules of understanding; they are rules of 'likely phrases'. In its simplest variant, every smartphone already has that mechanism. You type on your screen and the phone gives you three words that you would statistically use most often after the previous one. On the phone that is not of a very high quality. Choose the most likely words for a message and the sentence that results is something like (just started and chose a stream of 'middle options'): "I am going on a trip with my friends to get to the full house for the weekend so we will need a time for a quick dinner with us at all and we can go over and there for the weekend if we want you guys out there" You can see how many words have a statistical likelihood relation to the words just before them: *time for*, *for a*, *time* -> *quick*, *quick dinner*, *with us*, *with* -> *at all*. ChatGPT is fundamentally the same, but much better than what the phone can do. Its statistics are much 'deeper'. It is the phone's autocompletion on steroids. But 'well-formed' and 'fitting' do not equate to 'reliability' or 'trustworthiness'. More data and training will not help; the output will become even more 'well-formed' and 'fitting', but not more reliable. 'Well-formed' and 'fitting' do strongly suggest reliability and intelligence to humans, but that is more because we humans judge quickly and superficially. It's not so much that ChatGPT is intelligent, but more that we are less so than we think. What is noteworthy is that many humans make the same 5/5/5 100/100/100 error. The difference is that you can explain to humans where their quick-and-dirty answer goes wrong. That is because they do have the understanding that ChatGPT has not.
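The phone-autocomplete mechanism described above can be sketched in a few lines. This is a toy illustration; the corpus and the greedy most-likely-word choice are invented for the example, while real systems draw on far deeper statistics:

```python
from collections import Counter, defaultdict

# Count which word follows which in a tiny corpus, then autocomplete by
# always emitting the most frequent follower (greedy next-word prediction).
corpus = ("i am going on a trip . i am going to dinner . "
          "we can go for a quick dinner . we can go on a trip .").split()

followers = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    followers[prev][nxt] += 1

def autocomplete(word, length=8):
    out = [word]
    for _ in range(length):
        if word not in followers:
            break
        word = followers[word].most_common(1)[0][0]  # likeliest next word
        out.append(word)
    return " ".join(out)

print(autocomplete("i"))  # -> "i am going on a trip . we can"
```

Greedy continuation produces fluent-looking text purely from co-occurrence counts, with no notion of truth.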
To make use of ChatGPT you have to provide it with the **right context and questions**. This applies to math but also to many other areas. I started with the same task, 5 machines/5 minutes/5 devices. At first, it was able to provide the correct answer. Then I changed '5' to '3' (3 machines/3 minutes/3 devices) and it failed. But then I provided it with the context, asked the question differently, and it was able to understand the problem, and even create a general rule for solving it. Here is how ([link to the chat](https://sharegpt.com/c/S51nG3B))... Initial question, answered correctly. Most likely, in the meantime, this was added to its training data. The original question was posted about a month ago. > > **B:** If it takes 5 machines 5 minutes to make 5 devices, how long would it > take 100 machines to make 100 devices? > > > **ChatGPT:** If it takes 5 machines 5 > minutes to make 5 devices, it would take 1 machine 5 minutes to make 1 > device. > > > If it takes 1 machine 5 minutes to make 1 device, it would take 100 > machines 5 minutes to make 100 devices. > > > Then I changed the question in a trivial way (3 instead of 5). At first it failed to get it: > > **B:** If it takes 3 machines 3 minutes to make 3 devices, how long would it > take 100 machines to make 100 devices? > > > **ChatGPT:** It is not possible to determine > how long it would take 100 machines to make 100 devices based on the > information provided. The relationship between the number of machines > and the time it takes to produce a certain number of devices is not > necessarily linear. In order to determine how long it would take 100 > machines to make 100 devices, we would need to know the efficiency of > the machines and how they work together. > > > But then: > > **B:** If it takes 5 machines 5 minutes to make 5 devices, how long would it > take 100 machines to make 100 devices?
> > > **ChatGPT:** If it takes 5 machines 5 minutes to make 5 devices, it would take 1 > machine 5 minutes to make 1 device. > > > If it takes 1 machine 5 minutes to make 1 device, it would take 100 > machines 5 minutes to make 100 devices. > > > **B:** using the same logic as in the previous answer please > analyze similar problem and provide the answer. > > > If it takes 3 machines 3 minutes to make 3 devices, how long would it > take 100 machines to make 100 devices? > > > > > **ChatGPT:** If it takes 3 machines 3 > minutes to make 3 devices, it would take 1 machine 3 minutes to make 1 > device. > > > If it takes 1 machine 3 minutes to make 1 device, it would take 100 > machines 3 minutes to make 100 devices. > > > Then I went further and was able to get more from ChatGPT, it was able to provide new value in the form of general rule for the class of problems. Many humans would cut short on this step ;D ... > > **B:** using the same logic, please formulate general rule to solve this type > of a problem > > > **ChatGPT:** Here is a general rule that can be used to solve this > type of problem: > > > Let X be the number of machines, Y be the number of minutes it takes > for X machines to make X devices, and Z be the number of devices that > need to be made. > > > To determine the number of minutes it would take Y machines to make Z > devices, divide Z by X and multiply the result by Y. > > > For example, if X = 5, Y = 5, and Z = 100, the number of minutes it > would take 5 machines to make 100 devices can be calculated as > follows: (100/5) \* 5 = 20 minutes. > > > This rule can be expressed in the following formula: > > > (Z/X) \* Y = number of minutes it would take Y machines to make Z > devices > > > I hope this helps! Let me know if you have any questions or if you > would like further clarification on this rule. > > > Its final formula is correct, "(Z/X) \* Y = number of minutes it would take Y machines to make Z devices". 
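As a sanity check on the general rule above, the computation can be written out directly. This is an illustrative sketch (the function name is ours, not ChatGPT's): each machine finishes one device every `minutes_per_device` minutes, and all machines work in parallel:

```python
def minutes_needed(machines, minutes_per_device, devices):
    # Machines work in parallel, splitting `devices` evenly among them;
    # each one finishes a device every `minutes_per_device` minutes.
    return (devices / machines) * minutes_per_device

print(minutes_needed(100, 5, 100))  # 5.0   -> the original riddle
print(minutes_needed(5, 5, 100))    # 100.0 -> not the "20 minutes" in the worked example
print(minutes_needed(7, 3, 110))    # ~47.14 -> the exact value of (110/7) * 3
```

The formula reproduces the 5-minute answer to the original riddle and shows that the "20 minutes" in ChatGPT's worked example is an arithmetic slip.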
But no one is perfect; it has 2 small mistakes there. 1st, "To determine the number of minutes it would take **Y machines** to make Z devices, divide Z by X and multiply the result by Y." It should have used another letter, like W, to avoid confusion. 2nd, the example " (100/5) \* 5 = 20 minutes." is just wrong; the result is 100 minutes. Nevertheless, those small mistakes don't make any difference. It can now solve an advanced variant of the question (challenge any human with this :D ) > > **B:** If it takes 3 machines 3 minutes to make 3 devices, how long would it take 7 machines to make 110 devices? > > > **ChatGPT:** If it takes 3 machines 3 minutes > to make 3 devices, it would take 1 machine 3 minutes to make 1 device. > > > If it takes 1 machine 3 minutes to make 1 device, it would take 7 > machines 3 minutes to make 7 devices. > > > To determine the number of minutes it would take 7 machines to make > 110 devices, we can use the formula (Z/X) \* Y = number of minutes it > would take Y machines to make Z devices, where X is the number of > machines, Y is the number of minutes it takes for X machines to make X > devices, and Z is the number of devices that need to be made. Plugging > in the values, we get: > > > (110/7) \* 3 = 45 minutes > > > Therefore, it would take 7 machines 45 minutes to make 110 devices > using the production system described. > > > Again, not perfect: it rounded 110/7 down to 15, when it should have kept the exact value, 110/7 ≈ 15.71. But OK, it still makes sense. You can give it further instructions, e.g. to not round calculations for this type of problem, and it will comply going forward. My take on this: we should all learn how to interact with this thing, it's going to be very important in the near future! Upvotes: 2 <issue_comment>username_4: ChatGPT's GPT-4 model does not fall for this trap anymore due to more extensive training.
I tested with two prompts: > > **Prompt:** If it takes 5 machines 5 minutes to make 5 devices, how long > would it take 50 machines to make 50 devices? > > > **GPT-4:** This is an example of a classic problem that's meant to test > understanding of rates. The key to solving it is recognizing that each > machine can produce one device in five minutes. > > > Given that, if you have 50 machines working simultaneously, they can > produce 50 devices in 5 minutes. This is because each machine is > working independently to produce its own device, so the overall rate > of production increases linearly with the number of machines. Thus, > the answer is 5 minutes. > > > **Prompt:** And if it takes 3 machines 3 minutes to make 3 devices, how > long would it take 70 machines to make 70 devices? > > > **GPT-4:** The principle is the same in this case. Each machine can > produce one device in three minutes. So, if you have 70 machines > working simultaneously, they can produce 70 devices in 3 minutes. This > is because each machine is working independently to produce its own > device, so the overall rate of production increases linearly with the > number of machines. Thus, the answer is 3 minutes. > > > As I explained at the question [How does ChatGPT know math?](https://ai.stackexchange.com/questions/38217/how-does-chatgpt-know-math/), the apparent ability of LLM models (in particular the most capable GPT-4 model) to solve certain mathematical problems is due to the amount of training and the number of parameters. These ML models **do not** have explicit rules for solving mathematical problems. The following 2022 paper describes that such capabilities of transformer-based language models occur (i.e. emerge) when a certain threshold of parameter count and training compute (FLOPs) is exceeded: <https://arxiv.org/pdf/2206.07682.pdf> Upvotes: 2
2022/12/10
1,044
4,113
<issue_start>username_0: By playing God and creating AI, are humans creating the catalyst of their own eventual demise? Why would AI, which is already infinitely smarter than humans, do what humans tell it to do?<issue_comment>username_1: It is not uncommon for people to be concerned about the potential negative impacts of AI on humanity. However, it is important to remember that AI is a tool that can be used for both good and bad. While it is true that AI has the potential to be vastly more intelligent than humans, it is ultimately up to us as the creators of AI to ensure that it is used in a responsible and ethical manner. Furthermore, it is worth considering the potential benefits of AI for humanity. For example, AI can be used to improve healthcare, transportation, and education, as well as to address many other pressing global challenges. It is important to recognize that AI is not inherently a threat to humanity, but rather it is a technology that we must carefully manage and control. Upvotes: 0 <issue_comment>username_2: That AI "already [is] infinitely smarter" is not true. Don't confuse highly specialized algorithms with general artificial intelligence. It does not have sentience or consciousness, and it cannot act of its own volition. The kind of AI you are worried about, if at all possible, is very far down the line and only theoretical. Upvotes: 1 <issue_comment>username_3: There is no current "AI" that is infinitely smarter than humans. Current AI systems make mistakes that humans do not make. So I would not be worried about AI taking over or being a threat; I am more worried about current uses of AI to undermine societal good, for example, using large language models to mass-produce fake news, fake scientific papers, or fake QA answers, etc.
This is already happening (see [here](https://meta.stackoverflow.com/questions/421831/temporary-policy-chatgpt-is-banned) or [here](https://www.cnet.com/science/meta-trained-an-ai-on-48-million-science-papers-it-was-shut-down-after-two-days/)), and in the end the problem is not the AI, but how humans actually use it. Upvotes: 0 <issue_comment>username_4: There is a series here: <https://www.technologyreview.com/supertopic/ai-colonialism-supertopic/> that talks about AI colonialism, but it is hard to define as it's a relatively new concept, so it's best to read the articles at this link. Then there is AI bias, which is kind of like AI discrimination; this often goes unheard of and is hardly talked about, especially in terms of recruitment in my opinion. See this article for what I mean: <https://www.bbc.co.uk/news/technology-63228466> I think these two dynamics form a good 'present' basis to hypothesise an eventual AI takeover of humanity. This is my own rough opinion that follows to answer "Why would AI, which is already infinitely smarter than humans, do what humans tell it to do?" Honestly, I don't think it would eventually; the dynamics of 'super intelligence' would most likely make it ignore us, or alternatively not so much enslave us but attend to our every need, manage resources / invent better resources, and so create a type of utopian society. I guess politically it would be "Fully Automated Luxury Communism"? Anyway, I know this sounds like rambling, but it is my own opinion based on an extensive academic background in AI specifically and computer science, and a result of me doing a lot of reading. To look at how a human would 'not' question an AI and do what it's told, look at the domestic dog: over time they have evolved a genetic anomaly similar to Williams syndrome in humans.
Basically, they do what humans tell them to for the most part. I can see a similar thing happening with humans, but I think it would take a very long time, roughly 6,000 years, for us to evolve to the point of possibly becoming domesticated under an AI system. So I think realistically, AI will continue to either do what humans tell it to, or it will simply see no other choice but to ignore us / split off from us. Upvotes: 0
2022/12/10
508
2,262
<issue_start>username_0: I'm curious to know from those who have a clear understanding of both

1. AI
2. the software development process

How many years until software engineers, developers and programmers are no longer needed in the workforce because AI can do their jobs sufficiently well? And produce whatever is needed in the market?<issue_comment>username_1: I do not think this will ever happen. We do not have true AI (or AGI); current models are just parroting the training-set code. They do not really analyze or reason about code beyond human capabilities. There are also huge ethical and legal issues with AI systems for code, since they were trained on large datasets of open-source software under different licenses, and the models do not output licenses for their code, so you cannot use it commercially. Not even humans know how to make software properly; there are huge issues in terms of scalability, proper software development practices, and integration with other systems. Current AI models do not know anything about this. So nobody can tell you how many years are left, because it is an event that nobody knows will ever happen. The same goes for autonomous driving: it is always "[two years away](https://futurism.com/video-elon-musk-promising-self-driving-cars)". Upvotes: 1 <issue_comment>username_2: Having tried out ChatGPT and GitHub Copilot recently to help with game engine programming in C++, I was not impressed; they produced terrible code. Honestly, it is hard to say really. It could be a possibility, way in the future, that much like prompting today, humans create or use an existing logical language like [Lojban](https://www.lojban.org/mw-redir.html) to communicate with machines better, so that rather than programming them we bring them into society through communication, much like bringing up a child; that AI entity would then be added to other entities.
Possibly, it could be more complicated as well: if humans were to be augmented with something like Neuralink in the future, would humans need programming also? But overall I agree with the similar answer, it is always two years away or something like that; there are too many questions at the moment and not enough answers to say for sure. Upvotes: 0
2022/12/11
967
3,531
<issue_start>username_0: When [OpenAI's ChatGPT](https://chat.openai.com/chat) replies with a very long answer, it will return a network error. When you check the network console, the POST request will fail with a `ERR_HTTP2_PROTOCOL_ERROR`: [![Enter image description here](https://i.stack.imgur.com/W1Jq9.png)](https://i.stack.imgur.com/W1Jq9.png) [![Enter image description here](https://i.stack.imgur.com/fat1i.png)](https://i.stack.imgur.com/fat1i.png) The time of the "crash" is around 1 minute.<issue_comment>username_1: I've tried to resolve this with its own information and it seems right now we're stuck with this severe limitation. "When I ask you a question; if the answer exceeds 20 lines, could you send me a separate message per 20 lines?" > > I'm sorry, but I'm not able to separate my responses into multiple messages or lines. I'm a large language model trained by OpenAI to assist with a variety of tasks, such as answering questions, providing information, and offering suggestions. I don't have the ability to separate my responses into multiple messages or lines. I'm designed to provide responses in a continuous stream of text, and I do my best to provide accurate and helpful answers to the questions I'm asked. If you have any specific questions, I'll do my best to help. Just let me know. > > > "Can I restrict the amount of information you give in your answer to keep your answers short?" > > I'm sorry, but I'm not able to display error messages or restrict the amount of information in my responses. I'm a large language model trained by OpenAI to assist with a variety of tasks, such as answering questions, providing information, and offering suggestions. I don't have the ability to display error messages or restrict the amount of information in my responses. My responses are based on the information and knowledge I've been trained on, and I do my best to provide accurate and helpful answers to the questions I'm asked. 
If you have any specific questions, I'll do my best to help. Just let me know, and I'll provide as much or as little information as you need. > > > Upvotes: 0 <issue_comment>username_2: I resolved it by asking it to show me the first 10-20 lines, then the next 10, and so on until the end. In my case those were lines of code, so it worked for me. Upvotes: 3 <issue_comment>username_3: A workaround to this issue is quite simple: when asking a question that will be lengthy, simply tell the AI to present it to you in waves, for example: "Write me 1000 words about World War 2, but only show me the first 250 words." (250 seems to be a safe limit without the AI maxing out), and it will do exactly as asked. Since the AI is trained to refer back to previous questions, simply ask "show me the next 250 words." Upvotes: 2 <issue_comment>username_4: [Danilo's answer](https://ai.stackexchange.com/a/38276) works well; here is an example of how to do it: > > can you please show me a code example of webpack which compiles React, > CSS and Fonts and break your answer up every 10 lines > > > [![chatgpt example of breaking up an answer every 10 lines](https://i.stack.imgur.com/uJrg1.png)](https://i.stack.imgur.com/uJrg1.png) **UPDATE:** The following seems to work as well: > > can you please show me a code example of webpack which compiles React > and CSS. **if a 'network error' happens, please revert and resume your > answer** > > > [![enter image description here](https://i.stack.imgur.com/Ovm67.png)](https://i.stack.imgur.com/Ovm67.png) Upvotes: 1
2022/12/12
1,784
7,146
<issue_start>username_0: After seeing [StackOverflow's banning of ChatGPT](https://meta.stackoverflow.com/q/421831), I explored it out of curiosity. It's marvellous, as it can write code by itself! Later, to check whether it knows chess like Google DeepMind's AlphaZero AI, I asked the questions below: > > **Me**: Does openai know how to play chess? > > **OpenAI**: It is not clear whether OpenAI knows how to play chess. As an artificial intelligence research organization, OpenAI has developed a number of advanced machine learning algorithms and techniques that allow AI systems to learn and adapt to new tasks and situations. ... > > > > > **Me**: In a chess if white has a king & bishop and if black has only a king, then who can win with a perfect play? > > **OpenAI**: In a chess game where white has a king and a bishop and black has only a king, **it is possible for either player to win with perfect play**. This type of endgame is known as a king and bishop versus king endgame, and it is considered one of the most challenging in chess. ... > > > Now this answer is entirely wrong, because as soon as only a bishop (or a knight) remains apart from the kings, it's instantly [declared as a draw](https://chess.stackexchange.com/questions/30561/is-it-possible-to-win-with-a-king-and-one-bishop-vs-a-king)! **Question**: Shouldn't the AI reject my question on a subject which is not in its known criteria? It does so for many other subjects. --- **Note**: If we replace `bishop = rook`, then ChatGPT gives exactly the same answer with those pieces swapped. However, in that case the answer happens to be true.<issue_comment>username_1: ChatGPT is a large language model. That means it's very good at stringing together words in ways that humans tend to use them. It's able to construct sentences that are grammatically correct and sound natural, for the most part, because it's been trained on language.
Because it's good at stringing together words, it's able to take your prompt and generate words in a grammatically correct way that's similar to what it's seen before. But that's all that it's doing: generating words and making sure it sounds natural. It doesn't have any built-in fact-checking capabilities, and the manual limitations that OpenAI placed can be fairly easily worked around. Someone in the OpenAI Discord server a few days ago shared a screenshot of the question "What mammal lays the largest eggs?" ChatGPT confidently declared that the elephant lays the largest eggs of any mammal. While much of the information that ChatGPT was trained on is accurate, always keep in mind that it's just stringing together words with no way to check if what it's saying is accurate. Its sources may have been accurate, but just writing in the style of your sources doesn't mean that the results will themselves be true. Upvotes: 7 [selected_answer]<issue_comment>username_2: ChatGPT does not actually know anything. But more importantly even, it does not know this fact! Hence, it does not know that it does not know. It is only good at combining text. Upvotes: 4 <issue_comment>username_3: “It” does nothing. Don’t think just because every conman out there calls our really shockingly primitive neural nets “AI”, and wants to convince you that it’s actually an autonomous intelligence, that it’s not just a glorified function (a list of commands to blindly execute, not a person), to apply a set of biases onto a given input pattern, that have been programmed into it in a way that the programmer doesn’t “have to” know what he’s doing or even what precisely he wants. :) It is just biasing for the patterns in its training data. And giving you whatever that results in for your input. In this case, if I am correct, feeding its output back into its input again and again, with diminishing sanity.
So **the answer is that your input will just be treated like it is a spectrum of those patterns, *no matter what***. In other words: **If all it knows is a hammer, everything will look like a nail.** :) So it is quite mundane, and nothing magical at all. Everything beyond that, attributed to such systems, is deliberate dupery, to get people to “invest” money. *(Don’t get me wrong: This technology is useful when writing an actual algorithm really is beyond a human’s capabilities. (E.g. by definition a brain cannot imagine a model of itself in its entirety. Or you cannot write down the entire set of experiences of a lifeform, find all the patterns, and turn them into a set of rules manually. Even if the Pauli exclusion principle did not exist.) But nowadays it is abused by people who proudly can’t even define what they want, to just throw examples at, and expect to get a universal function out, so they can call themselves geniuses and get lots of money.)* Upvotes: 3 <issue_comment>username_4: ChatGPT and other GPT-based machine learning models don't actually know anything in the sense you're thinking of. ChatGPT is a distant descendant of [Markov chain](https://en.wikipedia.org/wiki/Markov_chain) text generators such as [Dissociated Press](https://en.wikipedia.org/wiki/Dissociated_press), and works by predicting what is most likely to follow a given sequence of text. So, when you ask it "Does openai know how to play chess?", what you're actually getting is the answer to the question "What is most likely to follow the phrase 'Does openai know how to play chess?'?" Upvotes: 4 <issue_comment>username_5: The original question asked about "**an AI**" generally, yet most of the responses here focus on OpenAI's ChatGPT specifically. Seems like the answer would depend on the specific type of AI being used, not limited to just large language model-based chatbots, but considering other types of knowledge representation systems more generally.
Not being an AI expert, I can't speak to this, but here's some good background reading: <https://en.wikipedia.org/wiki/Knowledge_representation_and_reasoning> Upvotes: 2 <issue_comment>username_6: I agree with most of the former answers here (apologies, I don't have enough reputation yet to comment, so I had to post this as an answer). I believe that, due to inaccuracies, the lack of fact-checking capabilities, etc. in the data it's trained on (up to 2021 at the time of writing), fine-tuning via its API has become popular. Although you were mainly asking about ChatGPT itself, I felt it's a good idea to consider its API as part of that, since it is just as easily accessible; I am currently learning how to fine-tune it myself for more specific and accurate results. So in respect of the question: **Shouldn't the AI reject my question on a subject which is not in its known criteria? It does so for many other subjects.** It can possibly do this if you train / fine-tune it another way via the API. But for the latter part, using the normal ChatGPT and ChatGPT Plus user interface, not the API, the AI mostly will not reject your question unless it breaches ethical and moral constructs, which I think is also another important aspect to consider when getting information from ChatGPT. Upvotes: 0
2022/12/12
2,840
10,935
<issue_start>username_0: So I understand how a language model could scan a large data set like the internet and produce text that mimicked the statistical properties of the input data, eg completing a sentence like "eggs are healthy because ...", or producing text that sounded like the works of a certain author. However, what I don't get about ChatGPT is that it seems to understand the commands it has been given, even if that command was not part of its training data, and can perform tasks totally separate from extrapolating more data from the given dataset. My (admittedly imperfect) understanding of machine learning doesn't really account for how such a model could follow novel instructions without having some kind of authentic understanding of the intentions of the writer, which ChatGPT seems not to have. A clear example: if I ask "write me a story about a cat who wants to be a dentist", I'm pretty sure there are zero examples of that in the training data, so even if it has a lot of training data, how does that help it produce an answer that makes novel combinations of the cat and dentist aspects? Eg: > > Despite his passion and talent, Max faced many challenges on his journey to become a dentist. **For one thing, he was a cat, and most people didn't take him seriously when he told them about his dream. They laughed and told him that only humans could be dentists, and that he should just stick to chasing mice and napping in the sun.** > > > > > But Max refused to give up. He knew that he had what it takes to be a great dentist, and he was determined to prove everyone wrong. **He started by offering his services to his feline friends, who were more than happy to let him work on their teeth. He cleaned and polished their fangs**, and he even pulled a few pesky cavities. > > > In the above text, the bot is writing things about a cat dentist that wouldn't be in any training data stories about cats or any training data stories about dentists. 
Similarly, how can any amount of training data on computer code generally help a language model [debug novel code examples](https://twitter.com/amasad/status/1598042665375105024?ref_src=twsrc%5Etfw%7Ctwcamp%5Etweetembed%7Ctwterm%5E1598042665375105024%7Ctwgr%5Ee5ccdc91f902f35c3b969c90350e988509889451%7Ctwcon%5Es1_c10&ref_url=https%3A%2F%2Fwww.bleepingcomputer.com%2Fnews%2Ftechnology%2Fopenais-new-chatgpt-bot-10-coolest-things-you-can-do-with-it%2F)? If the system isn't actually accumulating conceptual understanding like a person would, what is it accumulating from training data that it is able to solve novel prompts? It doesn't seem possible to me that you could look at the linguistic content of many programs and come away with a function that could map queries to correct explanations unless you were actually modeling conceptual understanding. Does anyone have a way of understanding this at a high level for someone without extensive technical knowledge?<issue_comment>username_1: Text continuation has the same reasons to work in any context, be it the middle of a sentence, after a question, or after instructions. Following your example, the same word sequence could be a good follow-up for these three prompts: "Eggs are healthy because", "Why are eggs healthy? Because" or "Tell me why eggs are healthy." Giving a right answer sometimes happens and sometimes does not, but the system does not know which is the case. When the answer is right, we may anthropomorphise and attribute deeper reasons, because we are used to dealing with human agents that give correct answers on purpose and knowingly, not simply by maximizing some likelihood. I think we can analyse toy systems, trained on just a few sentences, to illustrate that giving a right or a wrong answer can be achieved by the very same mechanism. In particular, we can build training sets where a right answer is given even though its validity cannot be checked from the written text alone.
An example: Paris is the largest city in France. What is the largest city in France? Paris. Paris is the capital of France. What is the capital of France? Paris. New York is the largest city in the USA. What is the largest city in the USA? New York. London is the largest city in the UK. Asking a system trained only on this data, one could expect a wrong answer to "What is the capital of the USA?" and a right answer (although from a wrong "argument") to "What is the capital of the UK?". The size of the training data to feed large language models is orders of magnitude larger than the above couple of handcrafted sentences, but possibly the reasons behind truthy sentences happening to be actually true are not too different from what we can already get from a controlled micro language model. Upvotes: 3 <issue_comment>username_2: In my opinion, the simple answer is that ChatGPT uses human intervention behind the scenes. Part of the novelty of ChatGPT over previous GPT models is the use *in the training phase* of humans giving ChatGPT conversation pairs to learn from. ChatGPT is based on InstructGPT, and you can see this feature in the [InstructGPT whitepaper](https://cdn.openai.com/papers/Training_language_models_to_follow_instructions_with_human_feedback.pdf). > > Step 1: Collect demonstration data, and train a supervised policy. Our labelers provide demonstrations of the desired behavior on the input prompt distribution (see Section 3.2 for details on this > distribution). We then fine-tune a pretrained GPT-3 model on this data using supervised learning > > > If you look at the small type at the bottom of the chat window, you'll see this text: > > Free Research Preview. Our goal is to make AI systems more natural and safe to interact with. Your feedback will help us improve. 
> > > One interpretation of this text is that ChatGPT is still in the training phase, and so there is a human behind the scenes typing at least some of ChatGPT's responses to train future versions of ChatGPT on the correct way to respond to user requests. In my mind, this is a much more plausible explanation than the neural network somehow being able to generate novel content not in its training corpus. I've asked a [similar question](https://ai.stackexchange.com/questions/38604/how-is-chatgpt-able-to-repeat-random-numbers/38606#38606) about how a neural network can repeat random numbers not in its training data, and so far I'm not seeing a plausible answer. Also, if you read down a little further, I have an example where ChatGPT explicitly states the OpenAI team is curating its responses in real time. The conversation is much too detailed and coherent to be the product of training data. One final note. I understand people think it's implausible that a big, well-known, and well-funded company like OpenAI would fake their AI with humans behind the scenes. However, this is [standard practice](https://www.bloomberg.com/news/articles/2016-04-18/the-humans-hiding-behind-the-chatbots) for [AI companies](https://datafloq.com/read/ai-wizard-of-oz-pay-no-attention-human/) these days, a "fake it till you make it" approach where they [use humans](https://www.inc.com/jessica-stillman/meet-the-people-who-teach-chatbots-to-sound-like-humans.html) to fill the gaps in the AI in the hopes that down the road they'll automate humans out of the product. It is common enough for an [academic paper](https://journals.sagepub.com/doi/10.1177/20539517211016026) to have been written on the topic. So there is plenty of industry precedent for OpenAI to be using humans to help craft the responses. Plus, technically OpenAI is not "faking" anything. It is the media and bloggers who think ChatGPT is a pure AI system. OpenAI has made no such claim itself, and the opposite is implied by its InstructGPT whitepaper.
Example of Explicitly Admitting Human Intervention -------------------------------------------------- During [this](https://github.com/username_2/transcripts/blob/main/ChatGPT_confused_by_visual_sentence_structure.txt#L714) conversation ChatGPT outright states the OpenAI team filters and edits the GPT generated responses. > > ...the response you are receiving is being filtered and edited by the OpenAI team, who ensures that the text generated by the model is coherent, accurate and appropriate for the given prompt. > > > Could this be a glitch of the training data? I doubt it. If you read the rest of the conversation, ChatGPT gives detailed insights that make a lot of sense over a long, consistent conversation. Some more excerpts. > > It's possible that the OpenAI team may write responses themselves in some cases, for example if the prompt is too complex for the model to understand, or if the model generates a response that is not accurate or appropriate. > > > > > OpenAI acknowledges that its team monitors and curates the responses of GPT-3 on its website and in its documentation. This information is provided in the API documentation, as well as in the general information and frequently asked questions sections of the website. Additionally, OpenAI may have published blog posts or articles discussing the role of human curation in GPT-3's responses. > > > > > As for the live ChatGPT, it is not mentioned specifically, but it is generally understood that human oversight and curation is required for a safe and appropriate use of the model. > > > > > As for the media assuming that GPT models are fully autonomous, that's a common misconception about AI in general and not unique to OpenAI. > > > > > It is not uncommon for AI companies and researchers to have human oversight and intervention in their models, especially for models that are used in high-stakes or sensitive applications. 
> > > > > The human oversight of GPT-3 models, including ChatGPT, is not always made explicit to users. > > > > > While GPT-3 is a highly advanced language generation model, it is not AGI and it is not capable of understanding or maintaining a consistent persona or chain of conversation without human intervention. > > > Multiple Examples: Six Violations of ChatGPT's Neural Network Constraints ------------------------------------------------------------------------- [This article](https://mindmatters.ai/2023/02/chatgpt-violates-its-own-model/) documents six violations of limitations due to ChatGPT being a neural network. 1. ChatGPT learns something new, violating the fixity of neural network weights. 2. ChatGPT inconsistent in output generation based on inputs, violating same output from same input. 3. ChatGPT recollects information past 4000 tokens, violating 4000 input token limit. 4. ChatGPT repeats long random numbers, violating probabilistic output and limited vocabulary. 5. ChatGPT correctly reads many corrupted subword tokens, violating mapping of embeddings to subword tokens. 6. ChatGPT recognizes its own writing style, violating ChatGPT's inability to recognize patterns of words. Upvotes: -1
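username_1's controlled micro language model above can be made concrete. The sketch below is deliberately not a trained network; it is a hand-coded stand-in for the shortcut association that maximizing likelihood on that tiny corpus could produce. Since both the "largest city" and "capital" questions about France share the answer "Paris", the cheapest rule keys the answer on the country alone, which yields a confidently wrong answer for the USA and an accidentally right one for the UK, exactly as the answer predicts:

```python
# Toy "micro language model": not a neural net, but a hand-coded stand-in
# for the shortcut a statistical model could learn from this tiny corpus.
corpus = [
    "Paris is the largest city in France.",
    "What is the largest city in France? Paris.",
    "Paris is the capital of France.",
    "What is the capital of France? Paris.",
    "New York is the largest city in the USA.",
    "What is the largest city in the USA? New York.",
    "London is the largest city in the UK.",
]

# "Training": associate each country with the entity stated alongside it.
assoc = {}
for sentence in corpus:
    for country in ("France", "USA", "UK"):
        if country in sentence:
            entity = sentence.split(" is the ")[0]
            if not entity.startswith("What"):
                assoc[country] = entity

def answer(prompt: str) -> str:
    """Answer any 'What is the X of/in Y?' question by country lookup alone."""
    for country, entity in assoc.items():
        if country in prompt:
            return entity
    return "unknown"

print(answer("What is the capital of the USA?"))  # New York: confidently wrong
print(answer("What is the capital of the UK?"))   # London: right, for the wrong reason
```

The point is that the same lookup mechanism produces both the wrong and the right answer; nothing in the model distinguishes the two cases.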
2022/12/14
<issue_start>username_0: The standard method is to normalize the entire dataset (the training part) and then send it to the model to train on. However, I've noticed that in this manner the model doesn't really work well when dealing with values outside the range it was trained on. So how about normalizing each sample to a fixed range, say 0 to 1, and then sending them in? Of course, the testing data and the values to predict on would also be normalized in the same way. Would it change the neural network for the better or worse?<issue_comment>username_1: > > Would it change the neural network for the better or worse? > > > Most likely worse, because it will make the normalisation parameters noisy (varying per sample). In the case of using strictly tabular data where there is no repeated element type within a data sample, you don't really have anything to normalise by, although you could ignore the difference in data types and normalise across each row of data. I would expect this to have terrible performance in most cases. However, in your case, you have a lot of repeated elements, so this noise *may* be within reasonable limits. Also, it *might* turn out that predicting output from input normalised in this way suits your problem domain. The only way to find out is to try multiple normalisation approaches and use cross validation scores to see which one performs best. Upvotes: 0 <issue_comment>username_2: It depends on how your data and prediction task are structured. If you normalize per sample, you lose all relative information between samples. On the other hand, if your data contains a lot of samples of different magnitude, normalizing per sample might help achieve more stable gradients and training.
Now if your prediction task is only dependent on the information present within one sample, and does not really depend on how a single sample stands in comparison to other samples, then normalizing per sample is fine. If this is not the case and the relative information is important, you could either add the normalization parameters as an additional feature, or resort to classical normalization. As mentioned in the other [answer](https://ai.stackexchange.com/a/38324/64988), the only way to know is a proper evaluation routine like cross validation. Upvotes: 2
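For concreteness, the per-sample scaling proposed in the question can be sketched as follows (plain Python; in practice this would be vectorized, e.g. with NumPy). Each sample is mapped to [0, 1] using its own min and max, and the same rule applies unchanged at test time, since no dataset-level statistics are stored:

```python
def normalize_per_sample(sample):
    """Scale one sample to [0, 1] using its own min and max."""
    lo, hi = min(sample), max(sample)
    if hi == lo:                       # constant sample: avoid division by zero
        return [0.0 for _ in sample]
    return [(x - lo) / (hi - lo) for x in sample]

train = [[1.0, 2.0, 3.0], [10.0, 20.0, 30.0]]
normalized = [normalize_per_sample(s) for s in train]
print(normalized)  # both samples collapse to [0.0, 0.5, 1.0]
```

Note how this illustrates the trade-off raised in the answers: the two samples become indistinguishable after scaling, so the relative magnitude (1-3 vs 10-30) is lost unless the per-sample (min, max) pair is fed back in as extra features.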
2022/12/15
<issue_start>username_0: The transformer architecture contains a cross attention mechanism, which enriches the decoder with information from the encoder. The place where this takes place is visualized in the image below: [![transformer_architecture](https://i.stack.imgur.com/L3ifH.jpg)](https://i.stack.imgur.com/L3ifH.jpg) The cross attention mechanism within the original transformer architecture is implemented in the following way: [![cross_attention_computation](https://i.stack.imgur.com/BBwHl.jpg)](https://i.stack.imgur.com/BBwHl.jpg) The source for the images is [this video](https://www.youtube.com/watch?v=EixI6t5oif0). Why are the values in this step coming from the encoder instead of from the decoder? Is this where e.g. the actual language translation happens within a transformer?
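For reference, the computation in the question's second figure can be sketched directly: the queries are projected from the decoder states, while the keys and values are projected from the encoder output. This is a minimal single-head sketch with NumPy; the random matrices are stand-ins for learned projection weights:

```python
import numpy as np

rng = np.random.default_rng(0)
d = 8                                    # model dimension
enc = rng.normal(size=(5, d))            # 5 encoder states (source tokens)
dec = rng.normal(size=(3, d))            # 3 decoder states (target tokens)
Wq, Wk, Wv = (rng.normal(size=(d, d)) for _ in range(3))

Q = dec @ Wq                             # queries come from the DECODER
K = enc @ Wk                             # keys come from the ENCODER
V = enc @ Wv                             # values come from the ENCODER

scores = Q @ K.T / np.sqrt(d)            # (3, 5): one row per target token
weights = np.exp(scores)
weights /= weights.sum(axis=1, keepdims=True)   # softmax over source tokens
out = weights @ V                        # (3, d): encoder info mixed per query

print(out.shape)  # (3, 8)
```

Each decoder position (query) computes a distribution over the source positions and receives a weighted mix of the encoder-derived values, which is how source-language information flows into the decoder.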
2022/12/15
<issue_start>username_0: I am looking for literature recommendations regarding GANs with multiple discriminators. In particular, I am looking for examples where each discriminator has a slightly different learning objective, rather than learning on different data. My thinking was that the generator sometimes is exposed to reward sparsity: i.e. its samples get constantly rejected. Having multiple objectives be optimised through multiple discriminators might help alleviate this problem to a certain extent, as it increases the chance of positive feedback from one of the discriminators. Do you know of any examples, and does GAN training with multiple discriminators generally make sense or does it make training more unstable for some reason I have not considered?
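The reward-sparsity intuition in the question can be made concrete with a back-of-the-envelope sketch: with a single harsh discriminator the non-saturating generator loss -log D(G(z)) is dominated by near-total rejection, while averaging over several discriminators with different objectives, one of which the sample partially satisfies, moderates the signal. The scores below are made-up stand-ins for discriminator outputs, not a full GAN training loop:

```python
import math

def generator_loss(disc_scores):
    """Non-saturating generator loss averaged over several discriminators.

    disc_scores: D_i(G(z)) in (0, 1), one score per discriminator i.
    """
    return sum(-math.log(s) for s in disc_scores) / len(disc_scores)

# One harsh discriminator rejects the sample almost completely...
single = generator_loss([1e-6])
# ...but with three objectives, one of which the sample partially satisfies,
# the averaged loss is far less extreme.
ensemble = generator_loss([1e-6, 0.4, 0.6])

print(single > ensemble)  # True
```

Other aggregation rules (e.g. a softmax-weighted maximum over discriminator losses) trade off in the opposite direction, pushing the generator to satisfy its worst critic rather than the average one.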
2022/12/20
<issue_start>username_0: I need to implement a rule, so I have defined a lower-triangular boolean mask for the weights that I want to keep fixed at zero. Under which conditions would a triangular weight matrix be used?
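Triangular weight masks show up, for example, when a layer has to be autoregressive, i.e. output i may only depend on inputs up to i, as in causal attention masks or MADE-style masked autoencoders. One way to keep the masked weights pinned at zero is to multiply the weight matrix by the boolean mask after every update, so gradient steps can never revive the zeroed entries. A minimal NumPy sketch; the "gradient" here is a made-up stand-in for an optimizer step:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
W = rng.normal(size=(n, n))
mask = np.tril(np.ones((n, n), dtype=bool))   # lower-triangular boolean mask

W = np.where(mask, W, 0.0)                    # zero out the upper triangle

for _ in range(3):                            # stand-in training loop
    grad = rng.normal(size=(n, n))            # pretend gradient
    W -= 0.1 * grad                           # pretend optimizer step
    W = np.where(mask, W, 0.0)                # re-apply mask: zeros stay static

print(np.allclose(W[~mask], 0.0))  # True: the upper triangle is still zero
```

In frameworks with autograd, the equivalent trick is to multiply by the mask inside the forward pass (`y = x @ (W * mask)`), which also zeroes the corresponding gradients automatically.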
2022/12/20
<issue_start>username_0: I've been scanning the internet for ways to generate baseball-based labels for youtube baseball videos using text collected from a YT video's description, title, and top 50 titles, but so far, I have been unable to find a Natural Language Processing (NLP) tool that can achieve this. To achieve this, I think I must analyse the sentence's sentiment (positive or negative) and understand its hypothesis. From there, I would need to compare those sentences to a baseball-themed whitelist of labels I would need, like 'match', 'baseball', and 'Yankees - Red Sox'. We have plenty of videos that already have labels, so this would be an option for training an AI. Finding sentiment in a sentence is already finished. The question is, how can we generate labels with text that already has sentiment values assigned to it? Is there a program we could run this through that is free?
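As a baseline before reaching for any trained model, the whitelist-matching step described in the question can be sketched in a few lines of plain Python (the labels and video text here are made-up examples):

```python
def generate_labels(text, whitelist):
    """Return every whitelist label that occurs in the video's text."""
    lowered = text.lower()
    return [label for label in whitelist if label.lower() in lowered]

whitelist = ["match", "baseball", "Yankees - Red Sox", "home run"]

video_text = ("Yankees - Red Sox highlights! An incredible baseball match "
              "with a walk-off home run in the 9th.")
print(generate_labels(video_text, whitelist))
# ['match', 'baseball', 'Yankees - Red Sox', 'home run']
```

Since plenty of already-labeled videos exist, a natural next step beyond this keyword baseline is to treat the task as multi-label text classification, training on (title + description) as input and the existing labels as targets.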
2022/12/22
<issue_start>username_0: **BACKGROUND:** There is a lot of information online about the problem of multicollinearity as it relates to machine learning and how to identify correlated features. However, I am still unclear on which variables to eliminate once a correlated subset of feature variables has been identified. **QUESTION:** Once a correlated set of 2 or more feature variables has been identified by some method, how does one decide which one to retain?<issue_comment>username_1: Thanks for asking this question. In statistics, the problem of multicollinearity is handled using partial correlation. Also, the correlation matrix is analyzed to understand the impact of the independent features on the target variable (output). It's good practice to eliminate features which have little or no correlation with the target. But if you are worried about the multicollinearity issue, then check the correlation of each feature with the target variable and drop the feature with the lower correlation. Let's say A, B, C, D, E are five variables, where E is the target and the others are features determining E. Suppose A and B have a correlation of 0.7, A and E have 0.8, and B and E have 0.7. Then it makes sense to drop B. **Reason:** Since A and B are correlated and A is also correlated with the target variable, the impact of B on E may simply be due to the fact that it's correlated with feature A. Therefore it's a feature that can be dropped with little impact on the model. But again, compare the results when keeping B in the set and when excluding it to see the difference. One common multicollinearity issue arises in classification tasks; do check out this [blog post](https://towardsdatascience.com/one-hot-encoding-multicollinearity-and-the-dummy-variable-trap-b5840be3c41a) Upvotes: 1 <issue_comment>username_2: In practice, multicollinearity can be very common if your features really act as correlated causes of your target. If multicollinearity is moderate, or you're only interested in using your trained ML model to predict out-of-sample data with reasonable goodness-of-fit stats and are not concerned with understanding the causality between the predictor variables and the target variable, then multicollinearity doesn't necessarily need to be resolved; even a simple multivariable linear regression model could potentially work well. In case you really do need to address multicollinearity, the quickest fix, and often an acceptable solution, is to remove one or more of the *highly* correlated variables. Specifically, you may want to keep the variable that has the strongest relationship with the target per domain knowledge and the least overlap with other retained variables, as this is intuitively the most informative for prediction. Secondly, you can try linearly combining the predictor variables in some way, such as adding or subtracting them. By doing so, you create new variables that encompass the information from several correlated variables, and you no longer have an issue of multicollinearity. If it is still difficult to decide which variables to retain, you can employ dimensionality reduction techniques such as principal component analysis (PCA) or partial least squares (PLS), or regularization techniques such as Lasso or Ridge regression, which can be used to identify the most important variables in a correlated set. Upvotes: 2
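The retention rule suggested in the answers, i.e. among a highly correlated pair, drop the one less correlated with the target, can be sketched with NumPy. The threshold and synthetic data are made-up for illustration; in practice, confirm the choice with cross-validation:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
A = rng.normal(size=n)
B = A + 0.1 * rng.normal(size=n)          # B is nearly a copy of A
C = rng.normal(size=n)                    # independent feature
y = 2 * A + C + 0.1 * rng.normal(size=n)  # target driven by A and C

X = {"A": A, "B": B, "C": C}

def drop_collinear(X, y, threshold=0.9):
    """Among pairs correlated above threshold, drop the feature
    with the weaker (absolute) correlation to the target."""
    keep = set(X)
    names = list(X)
    for i, a in enumerate(names):
        for b in names[i + 1:]:
            if a in keep and b in keep:
                if abs(np.corrcoef(X[a], X[b])[0, 1]) > threshold:
                    ra = abs(np.corrcoef(X[a], y)[0, 1])
                    rb = abs(np.corrcoef(X[b], y)[0, 1])
                    keep.discard(a if ra < rb else b)
    return sorted(keep)

print(drop_collinear(X, y))
# C is always kept; one of the near-duplicate pair A/B is dropped
```

With nearly identical features like A and B the target correlations differ by little more than sampling noise, which is exactly why the answers recommend double-checking the decision against held-out performance rather than trusting the correlation ranking alone.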
2022/12/23
<issue_start>username_0: I'm reading about how Conditional Probability/Bayes' Theorem is used in Naive Bayes in Intro to Statistical Learning, but it seems like it isn't as "groundbreaking" as it is described? If I'm not mistaken, doesn't every single ML classifier use conditional probability/Bayes in its underlying assumptions, not just Naive Bayes? We are always trying to find the most likely class/label given a set of features. And we can only deduce that using Bayes' rule, since we are (usually) solving for P(class|features) with P(features|class)?<issue_comment>username_1: Probability is one way to solve classification problems. Still, there are other ways, like clustering and the K-nearest-neighbor approach, where we analyze the position of the current data point and its neighboring points to classify it. Also, in the decision tree classifier, information gain is the core concept used to classify. Upvotes: 2 <issue_comment>username_2: Conditional probability and Bayes rule are related, but they are not the same thing; you can predict conditional probabilities without using Bayes rule. So no, not all machine learning classifiers use Bayes rule: standard neural networks do not use Bayes rule at all, and neither do SVMs and linear classifiers. A better counterexample is Bayesian Neural Networks, which have a probability distribution over the weights, and Bayes rule is used during learning and inference; these are not the same as standard neural networks. As a reference for this statement, I leave the following quote from Section 3.1 of the paper [Uncertainty Quantification for Deep Neural Networks: An Empirical Comparison and Usage Guidelines](https://arxiv.org/abs/2212.07118): > > BNNs are neural networks with probabilistic weights, instead of scalar > weights as in PPNN, and are represented as probability density > functions. To train a BNN, first, a prior distribution p(θ) over > weights θ has to be defined.
Then, given some data D, the posterior > distribution p(θ|D), i.e., the trained BNN is inferred using Bayes > rule: > > > Upvotes: 2
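To make the distinction concrete, here is what an *explicit* use of Bayes rule looks like: the naive Bayes posterior P(class | features) ∝ P(class) · Π P(feature | class), computed by hand for a made-up two-class example. A discriminative model such as logistic regression or a standard neural network would instead output P(class | features) directly, without this inversion:

```python
from math import prod

def naive_bayes_posterior(features, priors, likelihoods):
    """Bayes rule: P(c | x) is proportional to P(c) * prod_i P(x_i | c),
    normalized over the classes."""
    unnorm = {c: priors[c] * prod(likelihoods[c][f] for f in features)
              for c in priors}
    z = sum(unnorm.values())
    return {c: p / z for c, p in unnorm.items()}

priors = {"spam": 0.4, "ham": 0.6}
likelihoods = {                       # made-up P(word | class) tables
    "spam": {"free": 0.30, "meeting": 0.01},
    "ham":  {"free": 0.02, "meeting": 0.20},
}

post = naive_bayes_posterior(["free"], priors, likelihoods)
print(post)  # spam: 0.12 / (0.12 + 0.012) = 0.909..., ham: 0.0909...
```

Here the model stores the generative quantities P(class) and P(feature | class) and inverts them at prediction time; that inversion step is what most other classifiers never perform.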
2022/12/25
<issue_start>username_0: I was looking at an AI coding challenge for a two-player game on a 2D grid of variable size (from one game to the next). Here is a screen shot example of the playfield. [![Game played on a 2D grid](https://i.stack.imgur.com/DsLao.jpg)](https://i.stack.imgur.com/DsLao.jpg) Each player has multiple units on the board. In fact, each tile can hold multiple units and you can move all or a part of those units. Each turn, each player may perform several actions at a time. You feed your actions to the game engine on one line, separated by a ; * MOVE amount fromX fromY toX toY. * BUILD x y. * SPAWN amount x y. * WAIT. Example of a possible command sent to the game engine on one turn: ``` MOVE 2 2 3 3 3; SPAWN 1 6 6; BUILD 1 1; MOVE 1 9 8 9 9; MOVE 3 11 2 12 2 ``` And the very next turn your command might be: ``` WAIT ``` And the turn after that ``` SPAWN 1 6 6; SPAWN 2 3 3 ``` You get the idea. Each turn you can play a variable number of "moves" or "actions". And, on bigger boards, the number of valid possible actions can be very big. I was wondering how one would go about dealing with games like these when trying to use a NN to predict the best move(s) to play on any given turn. I know how I would handle the variable map size in the input: I'd probably just use the biggest possible map size and then pad the input for smaller map sizes. What I'm really scratching my head about is the output. How would one set up the output layer in order for the NN to output the best set of actions to play on a given turn? If we structured the output layer to account for each possible action, whether they are legal or not on the current turn, the layer would be positively huge, wouldn't it? Number of tiles x number of neighbors, and that's just for moves; add to that spawning and building. Oh, and that doesn't even account for the fact that you can move or spawn more than one unit on a tile. How would you even structure that in your output?
I did see this unanswered question [Designing Policy-Network for Deep-RL with Large, Variable Action Space](https://ai.stackexchange.com/questions/25392/designing-policy-network-for-deep-rl-with-large-variable-action-space) which I think might be similar to what I'm asking, but I'm not 100% sure, as it uses some terms I'm unfamiliar with.<issue_comment>username_1: I replied to a very similar question about checkers [here](https://ai.stackexchange.com/a/37686/52372). To summarize, there are two options: * Have a large policy head that encodes all possible (combinations of) moves. It can be pretty large; AlphaZero for chess had 73x8x8 = 4672 output values. * View the game in a different way, where turns are split into multiple, smaller turns that don't always change whose turn it is to play, and then use a smaller policy head. It seems like this game is better suited to the second approach. Upvotes: 1 <issue_comment>username_2: I participated in this contest and ended up 30th (out of ~4500). I mainly used a neural network. I split the problem into parts: * for each of my own cells, I ask the NN what to do (build, spawn, move) * inputs for the NN were mainly the 5x5 square vision around the cell * there were 2 networks: + first network with 2 outputs - if the first output is > 0 then build; if the second output is > 0, then spawn + if neither of the above occurred, the second network was responsible for the move, with 5 outputs (WAIT, N, S, E, W), highest value wins * for multiple units per cell, I asked the NN several times, each time with the updated inputs as if the move had already happened Training was done via neuroevolution, particularly evolution strategies. Personally, I don't know (yet) how to train this type of NN by more conventional means. More details in: <https://www.codingame.com/forum/t/fall-challenge-2022-feedbacks-strategies/199055/4> Upvotes: 2
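The per-cell scheme username_2 describes can be sketched as a loop that queries a policy once per unit on each owned cell, updating the observation between queries so earlier decisions are visible to later ones. Everything below, the grid, the dummy policy, the command format, is a made-up stand-in rather than the contest code:

```python
ACTIONS = ["WAIT", "N", "S", "E", "W"]

def dummy_policy(patch):
    """Stand-in for the move network: 5 scores, highest wins."""
    total = sum(sum(row) for row in patch)
    scores = [0.1, total % 1.0, 0.2, 0.3, 0.05]  # made-up scoring
    return max(range(5), key=lambda i: scores[i])

def local_patch(grid, x, y, size=5):
    """5x5 vision around (x, y), zero-padded at the borders."""
    h = size // 2
    return [[grid[j][i] if 0 <= j < len(grid) and 0 <= i < len(grid[0]) else 0.0
             for i in range(x - h, x + h + 1)]
            for j in range(y - h, y + h + 1)]

def decide_turn(grid, my_cells):
    commands = []
    for (x, y, units) in my_cells:
        for _ in range(units):                  # ask once per unit on the cell
            a = ACTIONS[dummy_policy(local_patch(grid, x, y))]
            commands.append(f"{a} {x} {y}")
            # in a real agent, the grid would be updated here so the next
            # query sees the move as if it had already happened
    return "; ".join(commands) if commands else "WAIT"

grid = [[0.5] * 8 for _ in range(8)]
print(decide_turn(grid, [(2, 2, 2), (5, 5, 1)]))
```

The key design point is that the variable-size joint action is decomposed into many fixed-size decisions, so the policy head stays tiny (5 outputs) regardless of board size or unit count.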
2022/12/25
<issue_start>username_0: I am doing an experiment. The following image is an example of the annotation I do. There are 2 classes: 1) sun, 2) moon. The red bounding box labels the moon, and the green bounding box labels the sun. I would like the model to learn that: "if the background is dark blue, it is the moon. If it is light blue, it is the sun." I intentionally make the bounding box exclude the surroundings (the blue background), to test whether an algorithm can distinguish the same object as different classes based only on different surroundings. This would be useful, for example, to detect a toy car vs a real car. Assuming the toy car and real car look very similar, the object detection algorithm has to be aware of its surroundings. Do you think a popular algorithm such as FRCNN can achieve that? If not, what algorithm is available to solve this problem? [![enter image description here](https://i.stack.imgur.com/A3R0T.png)](https://i.stack.imgur.com/A3R0T.png)<issue_comment>username_1: Sure, this can happen. If you have toy cars that in your training data were always photographed in toy shops, and during testing you provide images of toy cars in a different environment, you may have situations where they will be classified incorrectly. As long as the neural network sees the context around the object during training, it will capture information about the context along with information about the object. And if the context is biased towards a specific environment (i.e. a toy shop), that will also affect the decision significantly. Upvotes: 1 <issue_comment>username_2: Yes, this is actually a common problem with modern object detectors: detections change with the background, which means that the detector looks beyond the bounding box and into the background, which can have positive or negative impacts. Take the example below from the paper [The Elephant in the room](https://arxiv.org/abs/1808.03305).
[![Figure from paper The Elephant in the room, showing how detections change with background](https://i.stack.imgur.com/6NRLo.png)](https://i.stack.imgur.com/6NRLo.png) You can see in the figure how the cat/zebra detections change with different backgrounds, showing how the context/background affects object detectors. In your case, you want this to happen, so you can use Faster R-CNN, SSD, or Mask R-CNN, that are known to have this "problem" according to the elephant in the room paper. Note that this is not an exhaustive list, other detectors might show the same issue. Upvotes: 3 [selected_answer]
2023/01/03
<issue_start>username_0: If we take simple financial time-series data (stock/commodity/currency prices), State(t+1) does not depend on the action that we choose to take at State(t), unlike in a Maze or Chess problem. Simple example: as states we can have the sum of the daily returns of 5 different ETFs. Based on that, we want to take an action: either buy (go long) or sell (go short) another ETF. No matter what we choose, however, our action will not determine what the next state will be (we do not have any control over what the returns of those 5 ETFs will be tomorrow). In that case of simple financial time-series data, would a **multi-armed bandit** approach be more suitable?<issue_comment>username_1: Your agent's actions will (probably) not have much impact on the observed financial time series. However, they will make a large difference to other things - namely what stock your agent is holding and the account balance. If you are happy to ignore the agent's current portfolio and balance as not part of your problem, effectively treating these items as infinite sinks, then yes, a multi-armed bandit might be a reasonable solution. But then so might any other sequence-predicting algorithm, if what you are searching for is some kind of financial prediction of good times to buy or sell. If the portfolio and cash balance are an important part of your problem, you should add them to the state and use reinforcement learning. You might do this if your goal is to model a single investor playing the markets. Note that although it may be possible to use machine learning techniques to analyse markets, and to base investments on the advice of a trained AI agent, it is a very risky venture. There are lots of ways you can fool yourself into believing too strongly in your solution, and you stand to lose significant amounts of money. Upvotes: 2 [selected_answer]<issue_comment>username_2: The question is whether the rewards are i.i.d.
In such time-series problems, the rewards are not i.i.d.; the reward at timestep t+1 depends on the reward at timestep t. Therefore, even though the actions/states may be somewhat independent, the rewards must be i.i.d. in order to apply multi-armed bandits. Otherwise, the problem is a reinforcement learning problem. Upvotes: 0
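As a concrete illustration of the bandit setting discussed above, here is a minimal epsilon-greedy bandit sketch. The two "arms" and their reward distributions are purely hypothetical stand-ins for go-long/go-short decisions, and the whole setup is only valid if each arm's rewards are (roughly) i.i.d., as the answer notes:

```python
import random

def epsilon_greedy_bandit(arms, steps=5000, eps=0.1, seed=0):
    # Minimal epsilon-greedy bandit with incremental-mean value estimates.
    # Only applicable when each arm's rewards are (roughly) i.i.d.
    rng = random.Random(seed)
    q = [0.0] * len(arms)  # estimated mean reward per arm
    n = [0] * len(arms)    # times each arm was pulled
    for _ in range(steps):
        if rng.random() < eps:
            a = rng.randrange(len(arms))               # explore
        else:
            a = max(range(len(q)), key=q.__getitem__)  # exploit
        r = arms[a](rng)
        n[a] += 1
        q[a] += (r - q[a]) / n[a]  # incremental mean update
    return q

# Hypothetical reward distributions (made-up numbers): "go long" pays
# +0.05 on average, "go short" pays -0.05, both with Gaussian noise.
arms = [lambda rng: rng.gauss(0.05, 0.1), lambda rng: rng.gauss(-0.05, 0.1)]
q = epsilon_greedy_bandit(arms)
```

After enough steps the value estimates separate, and the agent exploits the better arm most of the time.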
2023/01/05
1,785
6,380
<issue_start>username_0: From what I understand, ChatGPT is just a fancy neural network, operating like a sophisticated Markov Chain generator. As such, it should only be able to generate tokens that are in its training dataset. One thing it should not be able to generate is a token unique to the conversation, such as a random number, since the conversation is not part of its training. However, [ChatGPT very easily, and with complete accuracy, repeats many long random numbers I give it](https://mindmatters.ai/2023/01/is-chatgpt-solely-a-neural-network-i-tested-that/). I also split up the random number across my request, and have ChatGPT piece it together again. > > <NAME>: How about concatenating these numbers, and removing the > intermittent filler words? 9475 3008 3305 11 frog 4357 5075 0021 3385 > 0894 2324 3812 9336 29 dog 52 1375 8236 3840 5500 6080 3841 log 7266 > 0722 9112 9774 4967 4707 81 > > > ChatGPT: To concatenate the numbers you have provided, while removing > the filler words “frog”, “dog”, and “log”, you can simply combine the > numbers in the order that they are given. Here is the resulting > number: 9475 3008 3305 1143 5750 7500 2133 8508 9423 2438 1293 3629 > 5213 7582 3638 4055 0060 8038 4172 6607 2291 1297 7449 6747 0781 > > > How is a neural network able to do this?<issue_comment>username_1: As well as decent modelling of phrases or strings of digits, ChatGPT can identify when a completely novel string might be a product name, a fictional person or place etc. The language model in ChatGPT is partially character based. There are around 100,000 tokens in the latest GPT - the encoding is called `cl100k_base`. Most of them are common English words (e.g. "the"), but it also supports individual characters including individual digits, and processes sequences of characters and effectively models groups of them at a time. > > From what I understand, ChatGPT is just a fancy neural network, operating like a sophisticated Markov Chain generator.
> > > That's a reasonable analogy. ChatGPT is a bit like a 2048-gram prediction engine for the next token, shift the sequence along one and repeat. No different to toy fantasy name generators when viewed from 10,000 feet up. > > As such, it should only be able to generate tokens that are in its training dataset. One thing it should not be able to generate is a token unique to the conversation, such as a random number, since the conversation is not part of its training. > > > A couple of misunderstandings here. First, the random number will not become a single token, but will be one token per digit, or pair of digits or triple digits, depending on the sequence - [you can give this a try to help visualise it](https://platform.openai.com/tokenizer), in the encoding that ChatGPT uses. Of course each of those tokens will have been seen before, millions of times in the training data. Second, sequences do not need to be seen in the training data in order for ChatGPT to work with them. In fact, with an input sequence length of 2048, pretty much *all* inputs to ChatGPT in inference mode are unique never-seen-before sequences. Regardless of whether some of the tokens represent a long random number, the chances of any 2048-long sequence of letters and short words being unique when generated are very high. This is where the neural network model differs from a true 2048-gram. It has generalised from the training data well enough that it actually *can* predict meaningful and useful values for the probability of the next token, even though in all likelihood it has never before seen the exact same sequence. In this regard it is an approximation of a "perfect" 2048-gram prediction engine that has somehow been trained on infinite human writings. A lot of language modelling is about correctly processing the *context* of a subsequence, so recognising a number sequence as being a grammatical "unit" that can be reused as-is is not a surprising feature.
Upvotes: 3 <issue_comment>username_2: Turns out ChatGPT is indeed human-curated, by open admission. During [this](https://github.com/username_2/transcripts/blob/main/ChatGPT_confused_by_visual_sentence_structure.txt#L714) conversation ChatGPT outright states that the OpenAI team filters and edits the GPT-generated responses. > > ...the response you are receiving is being filtered and edited by the OpenAI team, who ensures that the text generated by the model is coherent, accurate and appropriate for the given prompt. > > > Apparently, the fact that OpenAI actively curates ChatGPT's responses is indirectly implied in the documentation [here](https://beta.openai.com/docs/guides/safety-best-practices/human-in-the-loop-hitl). > > Human in the loop (HITL): > Wherever possible, we recommend having a > human review outputs before they are used in practice. This is > especially critical in high-stakes domains, and for code generation. > Humans should be aware of the limitations of the system, and have > access to any information needed to verify the outputs (for example, > if the application summarizes notes, a human should have easy access > to the original notes to refer back). > > > So, that explains that :) Upvotes: -1 [selected_answer]<issue_comment>username_3: The existing answer is great on model generalization, but I would like to add a note about an important [inductive bias](https://en.wikipedia.org/wiki/Inductive_bias) of the [Transformer](https://en.wikipedia.org/wiki/Transformer_(machine_learning_model)) model architecture used for ChatGPT. In the Transformer model architecture, there is a mechanism called *attention*. An attention block in a model can access all context (input and previously-generated output) and retrieve what is needed, at the model's discretion. A partial analogy for the attention mechanism is how a human's eyes move back to earlier text while writing.
Due to the attention mechanism, a Transformer model is very good at repeating the input as-is to the output. This is different from previous architectures such as an [RNN](https://en.wikipedia.org/wiki/Recurrent_neural_network), where the model has to convert the input to its internal representation. That being said, the provided example is something more than a mere repeat, and it is very difficult to say "how" a large language model performs in general, especially as the ChatGPT model is not public. Upvotes: 2
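To see why a long random number is not one giant unseen token, here is a toy greedy tokenizer. This is an illustrative sketch only: the real GPT encoding has on the order of 100,000 learned tokens and different merge rules, but the effect is the same — a never-seen number splits into short, frequently-seen sub-tokens:

```python
def toy_tokenize(text, vocab):
    # Greedy longest-match tokenizer over a tiny hypothetical vocabulary,
    # loosely mimicking how a BPE tokenizer splits a novel number into
    # familiar short chunks.
    tokens, i = [], 0
    while i < len(text):
        for size in (3, 2, 1):  # prefer the longest known chunk
            piece = text[i:i + size]
            if piece in vocab:
                tokens.append(piece)
                i += size
                break
        else:
            tokens.append(text[i])  # fallback: emit a single character
            i += 1
    return tokens

vocab = {str(n) for n in range(1000)} | {" "}  # all 1-3 digit strings + space
print(toy_tokenize("9475 3008 3305", vocab))
# → ['947', '5', ' ', '300', '8', ' ', '330', '5']
```

Every emitted chunk has been "seen" countless times in training, even though the full 2048-token input sequence is almost certainly novel.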
2023/01/05
1,608
6,501
<issue_start>username_0: I have a simple game of tag, where red player tries to catch the blue player. Red player wins if it catches the blue player in under 10 seconds, but if not, then blue wins. [![Tag game](https://i.stack.imgur.com/q3LHx.png)](https://i.stack.imgur.com/q3LHx.png) My goal is to teach the players to play the game well. Both of the players currently use neural network which has six inputs: * x distance between players * y distance between players * x distance between player and left wall * x distance between player and right wall * y distance between player and top wall * y distance between player and bottom wall one hidden layer which size is 5 and 4 outputs for moving up, down, left and right. **How should I approach training the players?** I am currently trying a genetic algorithm. I have chosen genetic algorithms because I do not have any training data, because I don't know the correct inputs and outputs. I am training the players at the same time. I create about 50 games (50 blue and 50 red players) and the player that wins, stays alive and creates child which has mutation. Mutation basically means that I change 0-2 weights about (-0.25)-(+0.25). I keep the populations at the same size by increasing the birthrate of the smaller (losing) population. This approach has not yet yielded good results. I can see a little progress, but a lot of the wins seem random. **Does my approach make sense?**
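For reference, the mutation operator described in the question ("change 0-2 weights by about (-0.25)-(+0.25)") could be sketched like this. This is a hypothetical helper, so the names and defaults are assumptions:

```python
import random

def mutate(weights, max_changes=2, scale=0.25, rng=random):
    # Sketch of the mutation from the question: perturb 0-2 randomly
    # chosen weights by a uniform amount in (-0.25, +0.25).
    child = list(weights)
    for _ in range(rng.randint(0, max_changes)):
        i = rng.randrange(len(child))
        child[i] += rng.uniform(-scale, scale)
    return child

parent = [0.0] * 10  # stand-in for a flattened weight vector
child = mutate(parent, rng=random.Random(42))
```

Note that this mutates at most two weights per generation, which is a very small step for a network with dozens of weights; a per-weight mutation probability is a common alternative worth trying.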
2023/01/07
1,715
6,325
<issue_start>username_0: For a highway network, it looks like this: [![enter image description here](https://i.stack.imgur.com/zN1eN.png)](https://i.stack.imgur.com/zN1eN.png) For a residual network, it looks like this: [![enter image description here](https://i.stack.imgur.com/91aM7.png)](https://i.stack.imgur.com/91aM7.png) Pictures are from [What is the name of this neural network architecture with layers that are also connected to non-neighbouring layers?](https://ai.stackexchange.com/questions/17822/what-is-the-name-of-this-neural-network-architecture-with-layers-that-are-also-c/17845#17845) My question is: how do I handle the size difference between layers in a CNN when building a highway or residual network? For example, I am working on a text classification problem. By using the embedding, I have the input size as follows: ``` input.shape = [batch_size, embedding_dim, max_length] ``` I also have a CNN layer as follows: ``` Conv1d(in_channels=embedding_dim, out_channels=hidden_dim, kernel_size=n) ``` The size of the output of `Conv1d` is then `[batch_size, hidden_dim, max_length-n+1]`. Here is the question: the input size of the CNN layer is different from the output size. How do I handle this size difference so that a highway or residual network can be built? Thank you.<issue_comment>username_1: You can just use `padding='same'`. As noted in the [documentation](https://www.tensorflow.org/api_docs/python/tf/keras/layers/Conv2D): > > When `padding="same"` and `strides=1`, the output has the same size as the input. > > > Note that `strides` defaults to 1, and if `kernel_size=1`, the output also has the same shape as the input.
I looked at two different implementations and can confirm this:

* The implementation of [Dive into Deep Learning](https://d2l.ai/chapter_convolutional-modern/resnet.html#residual-blocks) shows that the Residual block implementation is:

```
class Residual(tf.keras.Model):  #@save
    """The Residual block of ResNet."""
    def __init__(self, num_channels, use_1x1conv=False, strides=1):
        super().__init__()
        self.conv1 = tf.keras.layers.Conv2D(num_channels, padding='same',
                                            kernel_size=3, strides=strides)
        self.conv2 = tf.keras.layers.Conv2D(num_channels, kernel_size=3,
                                            padding='same')
        self.conv3 = None
        if use_1x1conv:
            self.conv3 = tf.keras.layers.Conv2D(num_channels, kernel_size=1,
                                                strides=strides)
        self.bn1 = tf.keras.layers.BatchNormalization()
        self.bn2 = tf.keras.layers.BatchNormalization()

    def call(self, X):
        Y = tf.keras.activations.relu(self.bn1(self.conv1(X)))
        Y = self.bn2(self.conv2(Y))
        if self.conv3 is not None:
            X = self.conv3(X)
        Y += X
        return tf.keras.activations.relu(Y)
```

where we can see that `conv1` and `conv2` use `padding='same'` with the default `strides=1` everywhere.

* The second implementation is from the [Keras official code](https://github.com/keras-team/keras/blob/e6784e4302c7b8cd116b74a784f4b78d60e83c26/keras/applications/resnet.py#L279), which also uses `padding='SAME'`.

Here's a [visualization](https://stackoverflow.com/a/62282172/11235205) of how the different padding modes work. In short, `'same'` automatically calculates the padding based on the kernel size so that the output has the same shape as the input for you. Upvotes: 3 [selected_answer]<issue_comment>username_2: When using residual connections you want your input dimensions to match the output dimensions so that you can perform the addition operation. In a standard ResNet-style architecture you mostly have layers that keep the dimensions (i.e. they use `padding="same"`). However, there are a few places where you change the dimensions.
Usually, your convolutional layer will halve the spatial dimensions and double the number of channels, i.e. $C \times H \times W \rightarrow 2C \times H//2 \times W//2$. Such a layer can be initialized like this, for example: ``` nn.Conv2d(in_chan, 2*in_chan, kernel_size=3, stride=2, padding=1) ``` In this case your skip connection will not be a simple identity function; you will actually apply a `1x1` convolution instead. Thus, instead of having $F(x) = f(x) + x$, you will have $F(x) = f(x) + g(x)$, where $g(x)$ is the `1x1` convolution. This additional convolutional layer applies the needed modification of the dimensions of $x$ so that you can apply the addition operation. For images and more explanations please see: <https://username_2.github.io/posts/res-nets/#the-architecture-of-the-resnet> Now, before using this architecture for NLP tasks, I think it is a good idea to know **WHY** we do this dimension reduction. You usually start with a tensor of some size $C \times H \times W$. Then you apply some ResBlocks that **do not** modify the dimensions and extract features from this fixed dimensional space. Then you apply **one** special block that modifies the dimensions, and again you stack ResBlocks that extract features from this reduced dimensional space. The idea is to squash the spatial dimensions and to expand the channel dimensions so that information that is contained across multiple nearby pixels in the original image is, at the end, concentrated in a single "pixel" which is now embedded in a much larger space. (see the link above for even more info) Now with NLP sequences we are talking about 1D convolutions, so replicating this behavior would mean gradually reducing your sequence length in order to increase the embedding size. But I think that this design might fail to capture long-term relationships, which are quite common in text-related tasks.
Also, if you don't reduce the sequence length and only increase the embedding size, then your computational cost is considerably higher in the deeper layers, and you really want near-constant cost across the layers. If you take a look at [the transformer](https://username_2.github.io/posts/transformer/), for example, they use self-attention to capture long-distance relations, and they also keep the embedding size constant throughout, making every encoder block equally expensive. Best of luck with your problem :) Upvotes: 0
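The shape bookkeeping both answers rely on can be checked with a few lines. This is a sketch; the formulas follow the usual TensorFlow/PyTorch conventions for 1D convolutions:

```python
import math

def conv1d_out_len(n, kernel, stride=1, padding="valid"):
    # Output length of a 1D convolution under the common framework rules:
    #   'valid': floor((n - kernel) / stride) + 1
    #   'same' : ceil(n / stride)  (the framework pads so stride=1 keeps the length)
    if padding == "same":
        return math.ceil(n / stride)
    return (n - kernel) // stride + 1

max_length, k = 128, 5
assert conv1d_out_len(max_length, k) == max_length - k + 1          # the mismatch from the question
assert conv1d_out_len(max_length, k, padding="same") == max_length  # fixed by 'same'
assert conv1d_out_len(max_length, 3, stride=2, padding="same") == max_length // 2  # halving block
```

The last line corresponds to the dimension-changing block, where the skip connection needs the `1x1` convolution projection rather than an identity.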
2023/01/08
705
2,789
<issue_start>username_0: I am reading through the [Vision Transformer](https://paperswithcode.com/method/vision-transformer) paper and other related papers, such as [DeiT](https://arxiv.org/pdf/2012.12877.pdf) and [Visual Prompt Tuning (VPT)](https://arxiv.org/pdf/2203.12119.pdf). I wonder if the position of the tokens that flow through the Transformer encoder matters? Let's break it down, assuming we use ViT 16x16: * The original Vision Transformer has 1 classifier `[CLS]` token, along with 196 image tokens (a 14x14 grid of 16x16 patches). In general, we have 197 tokens overall, where the position is: `[[CLS], [Image Patches] x 196]` * DeiT further adds one Distillation token to the end of the sequence, so we have 198 overall: `[[CLS], [Image Patches] x 196, [Distill Token]]` * However, I notice a very strange thing with VPT: they add the Prompt tokens **after the `[CLS]` but before the Image patches**. So the position of the tokens in VPT is: `[[CLS], [Prompts] x N, [Image Patches] x 196]`. I wonder, does such a thing matter? What if we change the position of these tokens, e.g., putting the Prompt tokens last?<issue_comment>username_1: It should not matter. To explain why, we need to understand how a transformer works. Transformers were originally designed for language models. They compute a self-attention matrix, which is a fancy way of saying they input a sequence, and learn how every part of that sequence relates to every other part of the sequence. Since self-attention is computed for every token with respect to every other token, transformers are completely agnostic to the order in which tokens are placed. For some applications this is a desirable property, but for others, we want to give the transformer some knowledge of the sequence order. This is done by adding positional embeddings. These are added to the sequence prior to being passed into the transformer, and they are used to encode sequence information. So to review: 1.
Vanilla transformers are totally agnostic to the sequence order. 2. For applications where this is undesirable, a positional encoding is added. 3. The changing order that you observed should not matter. [This](https://machinelearningmastery.com/a-gentle-introduction-to-positional-encoding-in-transformer-models-part-1/) blog post may be helpful for further information. Upvotes: 3 [selected_answer]<issue_comment>username_2: It does not matter. Although I can imagine a situation where it could matter a bit: when position embeddings are not learnt but calculated and fixed, as in the original transformer (*Attention is all you need*), the attention mechanism may be slightly biased towards one position or another. But AFAIR in ViT the position embeddings are learnt. Not sure about the other papers you mention. Upvotes: 2
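For reference, the fixed (non-learnt) positional encoding mentioned in the second answer, from *Attention is all you need*, can be sketched as follows. Note that ViT learns its position embeddings instead, so this is only the scheme the fixed-encoding caveat refers to:

```python
import math

def sinusoidal_positions(seq_len, d_model):
    # Fixed sinusoidal encoding from "Attention Is All You Need":
    #   PE[pos, 2i]   = sin(pos / 10000**(2i / d_model))
    #   PE[pos, 2i+1] = cos(pos / 10000**(2i / d_model))
    pe = [[0.0] * d_model for _ in range(seq_len)]
    for pos in range(seq_len):
        for i in range(0, d_model, 2):
            angle = pos / (10000 ** (i / d_model))
            pe[pos][i] = math.sin(angle)
            if i + 1 < d_model:
                pe[pos][i + 1] = math.cos(angle)
    return pe
```

Each position gets a distinct vector, which is what breaks the permutation invariance of self-attention; with learnt embeddings the model is free to discover its own, which is why reordering the special tokens before training should not matter.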
2023/01/08
1,474
5,219
<issue_start>username_0: Has ChatGPT used highly rated and upvoted questions/answers from Stack Overflow in its training data? For me it makes complete sense to take answers that have upwards of 100 upvotes and include them in your training data, but people around me seem to think this hypothesis doesn't make sense. Is there a way to confirm this?<issue_comment>username_1: [ChatGPT](https://openai.com/blog/chatgpt/) is in the [Large Language Models (LLM)](https://huggingface.co/blog/large-language-models) category. The most (in)famous GPT model is probably [GPT-3](https://en.wikipedia.org/wiki/GPT-3), because since then, researchers realized that LLMs mostly follow a [predictable scaling law](https://arxiv.org/abs/2001.08361v1), thus the more data and the bigger the model, the better. It is accurate to say that ChatGPT was **trained** with Stack Overflow data, but it should be *all* of Stack Overflow instead of *just the most upvoted answers/comments*. The [Wikipedia page of GPT-3](https://en.wikipedia.org/wiki/GPT-3#Training_and_capabilities) and [their paper](https://arxiv.org/pdf/2005.14165.pdf) mention that GPT-3 was trained on multiple datasets, one of which is the **Common Crawl**, which basically [crawls everything on the Internet](https://en.wikipedia.org/wiki/Common_Crawl). Some data pre-processing was done before training, but the authors did not mention removing the comments, so we can say that it is all Stack Overflow data. If we [look at the Common Crawl data in Sep 2022](https://commoncrawl.org/2022/09/host-and-domain-level-web-graphs-may-jun-aug-2022/), there is indeed the domain `com.stackoverflow` in their list. Thus, **while ChatGPT was trained on Stack Overflow data, it was trained on all Stack Overflow data instead of just the most upvoted answers**.
However, if you think ChatGPT's code output is of high quality, think again, because [Stack Overflow temporarily bans ChatGPT](https://meta.stackoverflow.com/questions/421831/temporary-policy-chatgpt-is-banned) `because the average rate of getting correct answers from ChatGPT is too low, the posting of answers created by ChatGPT is substantially harmful to the site and to users who are asking and looking for correct answers` (quoted from the link). Here is their justification: > > The primary problem is that while the answers which ChatGPT produces have a high rate of being incorrect, they typically look like they might be good and the answers are very easy to produce. There are also many people trying out ChatGPT to create answers, without the expertise or willingness to verify that the answer is correct prior to posting. Because such answers are so easy to produce, a large number of people are posting a lot of answers. The volume of these answers (thousands) and the fact that the answers often require a detailed read by someone with at least some subject matter expertise in order to determine that the answer is actually bad has effectively swamped our volunteer-based quality curation infrastructure. > > > EDIT: a comment below also confirmed that GPT-2, the predecessor of GPT-3 and ChatGPT, was trained with Stack Overflow data. Upvotes: 6 <issue_comment>username_2: ChatGPT was initialized with supervised fine-tuning by the OpenAI team to create the conversational model and customize its style. It is quite possible that curated Stack Overflow samples were chosen for this, since answers on the site are moderated. ChatGPT takes advantage of the GPT-3 model, which has been trained on a varied corpus of data and can perform few-shot learning. Upvotes: 0 <issue_comment>username_3: Technically speaking, we don't know whether ChatGPT used Stack Overflow data for the training.
That is because ChatGPT is a proprietary model, and OpenAI hasn't published training details to the public. We know that, from the GPT-3 paper [1], the GPT-3 training dataset includes Common Crawl. However, there is no indication of the relation between the GPT-3.5 series (including ChatGPT) and the GPT-3 models according to [OpenAI's account](https://platform.openai.com/docs/model-index-for-researchers). See also this question [Are GPT-3.5 series models based on GPT-3?](https://ai.stackexchange.com/questions/39023/are-gpt-3-5-series-models-based-on-gpt-3) for the discussion on GPT-3 vs. GPT-3.5. I add: the token set is different between `davinci` (GPT-3) and `text-davinci-003` (GPT-3.5) ([source](https://github.com/openai/tiktoken/blob/e1c661edf3604706bb2db59cfc7bf92f73c09761/tiktoken/model.py#L13)), thus the latter cannot be a simple fine-tuning of the former. I *think*, though, it is natural to infer OpenAI uses Stack Overflow data because it is generally considered high quality [2] and easily obtainable. Also, if OpenAI ever uses web crawling data, it will see many Stack Overflow and Stack Overflow clone sites. * [1]: <NAME> et al. "[Language Models are Few-Shot Learners](https://proceedings.neurips.cc/paper_files/paper/2020/file/1457c0d6bfcb4967418bfb8ac142f64a-Paper.pdf)." *NeurIPS* (2020). * [2]: <NAME> et al. "[The RefinedWeb Dataset for Falcon LLM: Outperforming Curated Corpora with Web Data, and Web Data Only](https://arxiv.org/abs/2306.01116)." arXiv preprint (2023). Upvotes: 1
2023/01/09
2,256
8,884
<issue_start>username_0: Consider a very simple problem, which is to find the maximum value out of a list of 5 numbers between 0 and 1. This is obviously trivial, but serves as a good example for a real-world problem I'm facing. One could attempt to train an MLP to solve this problem using randomly generated input data. The following code is an example in Keras/Tensorflow. It generates 1000000 random examples (a large number, so we can reduce overfitting as a factor). It also applies some simple techniques like LR decay and early stopping to optimize training.

```
import tensorflow as tf

# Create a dataset where the y value is the largest of 5 random x values.
x = tf.random.uniform(shape=(1000000, 5), minval=0.0, maxval=1.0)
y = tf.reduce_max(x, axis=-1)
dataset = tf.data.Dataset.from_tensor_slices(
    (x, y)
).shuffle(1000000).batch(1024).prefetch(1)

# See the post for discussion of hyperparameter optimization
units = 400
depth = 5
initial_learning_rate = 0.001

model = tf.keras.models.Sequential()
for i in range(depth):
    model.add(tf.keras.layers.Dense(units, activation="relu"))
model.add(tf.keras.layers.Dense(1))
model.compile(
    optimizer=tf.keras.optimizers.Adam(initial_learning_rate),
    loss=tf.keras.losses.MeanAbsolutePercentageError(),
)

stopping_callback = tf.keras.callbacks.EarlyStopping(monitor="loss", patience=5)
reduce_lr_callback = tf.keras.callbacks.ReduceLROnPlateau(monitor="loss", factor=0.75, patience=2)

# Pass the callbacks so early stopping and LR decay actually take effect.
model.fit(dataset, epochs=1000, callbacks=[stopping_callback, reduce_lr_callback])
```

The metric we care about is the mean absolute percentage error. When we run the code, the model trains successfully. But very importantly, the loss never goes below ~0.2 with these hyperparameters. 0.2% error sounds pretty good, but it's actually unacceptable for my use case. I need much more precision, as in practice the model is used in a situation where any errors are magnified.
At this point, you may want to suggest hyperparameter tuning, but I assure you I have done an extensive amount of hyperparameter tuning on this and similar problems. The model is clearly underfitting, and it is possible to decrease the error by increasing the number of parameters in the network. Empirically, I've noticed that increasing the number of parameters (and number of examples) by an order of magnitude can reduce the error by a similar order of magnitude. However, intuitively this just feels wrong. It is not practical to have a network with millions, or even tens of millions, of parameters just to regress such a simple function. It feels like there should be an architecture that can learn a simple function much more efficiently, but I have not been able to find any reference to such an alternative architecture. Again, of course this particular example is trivial, but hopefully you can imagine a real-world analogue where an unknown (more complex, but similar) function is expressed by a similar dataset. And in such a real-world example, the unacceptable error still persists (and is much more pronounced, for reasons that I don't fully understand). Does anyone have any idea what's going on here? Am I missing something? How can I improve the performance to a place that would be acceptable?<issue_comment>username_1: Assuming the $N$ numbers are different, a function that combines sums, products and the hard step function for solving the maximum may look like this. $$ max(x\_1,x\_2,...,x\_N) = \sum\_{i=1}^N x\_i \prod\_{j=1,j\neq i}^N sign(x\_i-x\_j) $$ $sign(z)$ is $1$ for $z>0$ and $0$ for $z<0$. Upvotes: 0 <issue_comment>username_2: An interesting problem. This network has only 933 trainable parameters, and obtains a `MeanAbsolutePercentageError` of 0.01 - 0.04. It is based on a softmax activation, to select which item from the input to output.
```
n, dim, validation_split = 100000, 5, 0.1
X = tf.random.uniform(shape=(n, dim), minval=0.0, maxval=1.0)
y = tf.reduce_max(X, axis=-1)

initial_learning_rate = 0.003
act, units, depth = 'relu', 16, 3

inp = tf.keras.layers.Input(dim)
x = inp
for _ in range(depth):
    x = tf.keras.layers.Dense(units, activation=act)(x)
    x = tf.keras.layers.BatchNormalization()(x)
x = tf.keras.layers.Dense(int((dim * units)**0.5), activation="elu")(x)
x = tf.keras.layers.BatchNormalization()(x)
x = tf.keras.layers.Dense(dim, activation="softmax")(x)
# Skip this step to make categorical predictions
x = tf.reduce_sum(inp * x, axis=-1, keepdims=True)
model = tf.keras.models.Model(inp, x)
model.summary()

is_categorical = model.output.shape[1] == dim
model.compile(
    optimizer=tf.keras.optimizers.Adam(initial_learning_rate),
    loss=tf.keras.losses.CategoricalCrossentropy() if is_categorical
    else tf.keras.losses.MeanAbsolutePercentageError(),
    metrics=[tf.keras.metrics.CategoricalAccuracy()] if is_categorical else []
)

stopping_callback = tf.keras.callbacks.EarlyStopping(
    monitor="val_loss", patience=10, restore_best_weights=True)
reduce_lr_callback = tf.keras.callbacks.ReduceLROnPlateau(
    monitor="loss", factor=0.1**0.5, patience=5, verbose=1, min_lr=1e-4)

target = tf.cast(X == y[:, None], tf.float32) if is_categorical else y
h = model.fit(X, target, verbose=1, batch_size=1024, epochs=10000,
              validation_split=validation_split,
              callbacks=[stopping_callback, reduce_lr_callback])

n_val = int(n * validation_split)
print([model.evaluate(X[:n_val], target[:n_val]),
       model.evaluate(X[-n_val:], target[-n_val:])])
```

An alternative formulation outputs just the softmax activation, which obtains an accuracy of 98 - 99%. Granted, it isn't 100% accurate.
Summary of the model with just one output:

```
__________________________________________________________________________________________________
Layer (type)                    Output Shape        Param #    Connected to
==================================================================================================
input_273 (InputLayer)          [(None, 5)]         0
__________________________________________________________________________________________________
dense_868 (Dense)               (None, 16)          96         input_273[0][0]
__________________________________________________________________________________________________
batch_normalization_577 (BatchN (None, 16)          64         dense_868[0][0]
__________________________________________________________________________________________________
dense_869 (Dense)               (None, 16)          272        batch_normalization_577[0][0]
__________________________________________________________________________________________________
batch_normalization_578 (BatchN (None, 16)          64         dense_869[0][0]
__________________________________________________________________________________________________
dense_870 (Dense)               (None, 16)          272        batch_normalization_578[0][0]
__________________________________________________________________________________________________
batch_normalization_579 (BatchN (None, 16)          64         dense_870[0][0]
__________________________________________________________________________________________________
dense_871 (Dense)               (None, 8)           136        batch_normalization_579[0][0]
__________________________________________________________________________________________________
batch_normalization_580 (BatchN (None, 8)           32         dense_871[0][0]
__________________________________________________________________________________________________
dense_872 (Dense)               (None, 5)           45         batch_normalization_580[0][0]
__________________________________________________________________________________________________
tf_op_layer_mul_251 (TensorFlow [(None, 5)]         0          input_273[0][0]
                                                               dense_872[0][0]
__________________________________________________________________________________________________
tf_op_layer_Sum_329 (TensorFlow [(None, 1)]         0          tf_op_layer_mul_251[0][0]
==================================================================================================
Total params: 1,045
Trainable params: 933
Non-trainable params: 112
__________________________________________________________________________________________________
```
I don't know what kind your original problem is, but maybe you could do some feature-engineering by supplying the min/max/median directly to the network. Then the network wouldn't need to learn to approximate those functions. Upvotes: 2
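The closed-form expression in the first answer is easy to sanity-check numerically. Below is a small, framework-free sketch of it (the function name is mine); it assumes all inputs are distinct, since the hard step is undefined at ties:

```python
def max_via_signs(xs):
    """Evaluate max(xs) as sum_i x_i * prod_{j != i} step(x_i - x_j),
    where step(z) is 1 for z > 0 and 0 for z < 0 (assumes distinct xs)."""
    total = 0.0
    for i, xi in enumerate(xs):
        prod = 1.0
        for j, xj in enumerate(xs):
            if j != i:
                prod *= 1.0 if xi - xj > 0 else 0.0
        total += xi * prod
    return total

print(max_via_signs([0.3, 0.9, 0.1, 0.5, 0.7]))  # 0.9
```

As the second answer illustrates, though, the hard step is not differentiable, which is one reason a trainable network ends up approximating the selection with a softmax instead.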
2023/01/10
<issue_start>username_0: There is a lot of discussion on google search about AI-custom-accelerators (like Intel's Gaudi) and GPUs. Almost all of them say generic things like, a) AI Accelerator chip is for specialized AI processing whereas GPUs work for general AI models, and like b) if you want customize chip for specific AI workloads use AI-accelerator or else use GPU. From what I understand GPUs are already great at large-scale dot-products done in batch-processing (throughput mode), and most of AI workloads are matmuls (which is essentially dot-product) so GPUs handle AI workloads very well. Plus, I've also seen Intel's Gaudi being used for a "variety of AI workloads", not specialized for a single model. It can be used for general AI workloads just like GPU. So what's the difference. What I don't understand is, "exactly" what specific features are built differently in Accelerator vs GPU. Both have ALUs and matmul engines that do very well on AI models. Both have large-cache/memory and DDR speed. * What exactly makes one better? For which AI workload would one choose accelerator over GPU? * AI accelerators have fixed-function for matmul. Do GPUs have fixed-function? * AI accelerators have software-managed cache (HBM) from what I understand. Is that the same with GPUs or is there a way cache is different between accelerators and GPUs that changes things? I'm kind of unsure about the differences between GPUs and AI-accelerators with respect to Fixed-function and software-managed-cache.
2023/01/11
<issue_start>username_0: In explanations of denoising diffusion models it is stated that $q(x\_{t-1}|x\_t)$ is intractable. This is often justified via Bayes' rule, i.e. $$ q(x\_{t-1}|x\_t) \propto q(x\_t|x\_{t-1})q(x\_{t-1}) $$ and the marginal $q(x\_{t-1})$ is unknown. But I'm confused. We know that $$ x\_t = \sqrt{1-\beta\_t}x\_{t-1}+\sqrt{\beta\_t}E,\quad E\sim\mathcal{N}(0,I) $$ therefore we can solve this equation for $x\_{t-1}$: \begin{align} x\_{t-1} &= (1-\beta\_t)^{-1/2}x\_t - \sqrt{\frac{\beta\_t}{1-\beta\_t}}E \\ &= (1-\beta\_t)^{-1/2}x\_t + \sqrt{\frac{\beta\_t}{1-\beta\_t}}R,\quad R\sim\mathcal{N}(0,I). \end{align} Thus $$ q(x\_{t-1}|x\_t) = \mathcal{N}(x\_{t-1};(1-\beta\_t)^{-1/2}x\_t,\frac{\beta\_t}{1-\beta\_t}I). $$ This is as simple as can be. If this is true, there is no point in parameterizing the reverse distribution with neural nets and we don't need $\beta\_t$ to be small etc. What am I missing?<issue_comment>username_1: While we set $R$ to be independent of $x\_{t-1}$ in the calculation of $x\_t$, we no longer have independence between $x\_t$ and $R$, and I guess this gives rise to the confusion. For simplicity, let us consider two independent normal variables $x, \varepsilon \sim \mathcal{N}\left(0, 1\right)$, and let us set $y = ax + b\varepsilon$. Then we have $$ y|x = \mathcal{N}\left(y; ax, b^2\right), $$ (for more detail on conditional normals, see this link: <https://statproofbook.github.io/P/mvn-cond>) which is similar to your conclusion. But we have $$ x|y = \mathcal{N}\left(x; \frac{ay}{a^2 + b^2}, \frac{b^2}{a^2 + b^2}\right) $$ (You may also find this link: <https://www.statlect.com/probability-distributions/normal-distribution-linear-combinations> helpful in computing linear combinations of multivariate normals.) Upvotes: 1 <issue_comment>username_2: Consider the simple case where $x\_{t-1} = c$ is deterministic and $\beta > 0$.
Then the forward process is given by $$q(x\_t|x\_{t-1}) \sim \mathcal{N}(x\_t; \sqrt{1 - \beta}c, \beta I)$$ But the backward process would definitely not be given by $$q(x\_{t-1}|x\_t) \sim \mathcal{N}(x\_{t-1}; (1 - \beta)^{-1/2}x\_t, \frac{\beta}{1 - \beta} I)$$ as $x\_{t-1}$ is deterministic. Of course, ironically, in the example I presented the backward process is trivial, but it shows you can't just "solve for $x\_{t-1}$". Upvotes: 0
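The toy example in the first answer can be verified with a quick Monte Carlo check (standard library only): conditioning $x$ on $y \approx 0$ yields variance $b^2/(a^2+b^2)$, not the $b^2/a^2$ that naively "solving for $x$" would suggest.

```python
import random
import statistics

random.seed(0)
a, b = 1.0, 1.0

# Sample the joint distribution of (x, y) with y = a*x + b*eps,
# where x and eps are independent standard normals.
samples = []
for _ in range(200_000):
    x = random.gauss(0.0, 1.0)
    eps = random.gauss(0.0, 1.0)
    samples.append((x, a * x + b * eps))

# Approximate conditioning on y = 0 by keeping samples with |y| < 0.05.
xs_given_y0 = [x for x, y in samples if abs(y) < 0.05]

cond_var = statistics.pvariance(xs_given_y0)
print(f"empirical Var(x | y ~ 0): {cond_var:.3f}")
print(f"correct  b^2/(a^2+b^2):   {b*b/(a*a + b*b):.3f}")
print(f"naive    b^2/a^2:         {b*b/(a*a):.3f}")
```

The empirical conditional variance lands near the correct 0.5, not the naive 1.0, because after the forward step $x_t$ and the injected noise are correlated.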
2023/01/11
<issue_start>username_0: In Sutton and Barto's book (<http://incompleteideas.net/book/bookdraft2017nov5.pdf>), a proof of the policy gradient theorem is provided on pg. 269 for an episodic case and a start state policy objective function (see picture below, last 3 equations). [![enter image description here](https://i.stack.imgur.com/qWCuP.png)](https://i.stack.imgur.com/qWCuP.png) Why can we assume that the sum $\sum\_s\eta(s)$ is a constant of proportionality? Doesn't it also depend on $\theta$, since it depends on the policy $\pi$? What could make sense, would be to say that $\nabla J(\theta) = \mathbb{E}\_{s\sim \eta(s), a \sim \pi}\left[\nabla\_{\theta}\mathrm{log}\left(\pi(s,a,\theta)\right)\,q\_{\pi}(s,a)\right]\propto \mathbb{E}\_{s\sim d(s), a \sim \pi}\left[\nabla\_{\theta}\mathrm{log}\left(\pi(s,a,\theta)\right)\,q\_{\pi}(s,a)\right]$. Since the proportionality constant is always $\ge 0$ (average time spent in an episode), any update direction suggested by $\mathbb{E}\_{s\sim d(s), a \sim \pi}\left[\nabla\_{\theta}\mathrm{log}\left(\pi(s,a,\theta)\right)\,q\_{\pi}(s,a)\right]$ is the same as $\mathbb{E}\_{s\sim \eta(s), a \sim \pi}\left[\nabla\_{\theta}\mathrm{log}\left(\pi(s,a,\theta)\right)\,q\_{\pi}(s,a)\right]$, but with different amplitude. This, however, wouldn't impact the learning process too much, since we multiply the update term with a low learning rate anyway. Hence, as it is more easy to sample states from $d(s)$, we just set $\nabla\_{\theta} J = \mathbb{E}\_{s\sim d(s), a \sim \pi}\left[\nabla\_{\theta}\mathrm{log}\left(\pi(s,a,\theta)\right)\,q\_{\pi}(s,a)\right]$. Could that serve as plausible explanation?
2023/01/17
<issue_start>username_0: InstructGPT: What is the sigma in the loss function and why $\log(\cdot)$ is being used? $$ \operatorname{loss}(\theta) = -\frac{1}{\binom{K}{2}}E\_{(x,y\_w,y\_l)\sim D}[\log(\sigma(r\_{\theta}(x, y\_w) - r\_{\theta}(x, y\_l)))] $$ The equation was taken from the [InstructGPT paper](https://arxiv.org/abs/2203.02155).<issue_comment>username_1: According to [this guide](https://wandb.ai/ayush-thakur/RLHF/reports/Understanding-Reinforcement-Learning-from-Human-Feedback-RLHF-Part-1--VmlldzoyODk5MTIx), the sigma in this formula refers to the sigmoid activation function. The guide does not tell exactly why the sigmoid function is used here, so I will try to give a full explanation of how this loss formulation works (page 8, formula 1 [in the InstructGPT paper](https://arxiv.org/abs/2203.02155)): $\text{loss}(\theta)=-\frac{1}{\binom{K}{2}}E\_{(x,y\_w,y\_l) \sim D} [log(\sigma(r\_\theta(x,y\_w)-r\_\theta(x,y\_l)))]$ In the following I will use the notation from the paper: $x$ refers to the given instruction ($y\_w$, $y\_l$) refers to a pair of responses out of the list of responses which a human ranked based on their preference $y\_w$ refers to the response that is preferred over the other, lesser preferred response $y\_l$ $r$ refers to the reward model $\theta$ refers to the trainable parameters of that reward model $\sigma$ refers to the sigmoid activation function. If you interpret $\sigma(r\_\theta(x, y\_w) - r\_\theta(x, y\_l))$ as *the probability that the reward model assigns a higher reward to the preferred response $y\_w$ than to the lesser preferred response $y\_l$*, the formula makes total sense. How do you maximize a probability? Exactly, by minimizing the negative log of that probability (this is called *negative log likelihood*). Hence the minus sign in front of the formula and log function around the sigmoid. 
In order to elaborate a bit more on that: If the reward model is working fine, then $r\_\theta(x, y\_w)$ will be a very large positive number and $r\_\theta(x, y\_l)$ will be a much lower number (maybe even a negative number). The difference $r\_\theta(x, y\_w) - r\_\theta(x, y\_l)$ will then be a very large positive number. And the sigmoid of a very large positive number approaches $1$. In that case, everything is according to plan and the loss will be very small (close to zero). However, if the reward model is failing, the assigned reward $r\_\theta(x, y\_w)$ might be much smaller than $r\_\theta(x, y\_l)$. Hence, the difference $r\_\theta(x, y\_w) - r\_\theta(x, y\_l)$ will be a (possibly very large) negative number. Take the sigmoid of that and you get a value that approaches $0$ (thus, the *probability that the reward model assigns a higher reward to the preferred response* will be small). As we are trying to maximize a probability by minimizing the negative log likelihood, we get a large loss in that case. As there will be a varying number of ranked responses for each instruction in one batch ("between $K = 4$ and $K = 9$ responses"), the losses of those pairwise comparisons must be weighted, so that each instruction has the same impact on the gradient update, no matter how many responses the humans have been presented for each instruction. The number of pairwise comparisons out of $K$ is $\binom{K}{2}$. In order to wrap it up: By minimizing the loss described in the paper, the reward model gets incentivized to assign a large positive reward to responses the (hopefully adequately paid *cough*) humans in front of their computers consider to be very good responses and very large negative rewards to responses which those humans consider to be very bad. And this is exactly what is desired in order to fine-tune an LLM according to human preference using reinforcement learning. 
Upvotes: 2 <issue_comment>username_2: The probability of preferring one trajectory/response (say $A$) over the other (say $B$) is given by $$ \begin{aligned} P(A>B) &= \frac{\exp(r\_\theta(A))}{\exp(r\_\theta(A))+\exp(r\_\theta(B))}\\ &= \frac{1}{1+\exp(-(r\_\theta(A)-r\_\theta(B)))}\\ &= \sigma(r\_\theta(A)-r\_\theta(B)) \end{aligned} $$ where $\sigma(x) = \frac{1}{1+e^{-x}}$ is the sigmoid function. This follows from the [Bradley-Terry model](https://en.wikipedia.org/wiki/Bradley%E2%80%93Terry_model) for estimating score functions from pairwise preferences. If, say, $y\_w$ and $y\_l$ are the responses from the model (augmented with the query $x$), and we prefer $y\_w$ over $y\_l$, then $\mu(y\_w)=1; \mu(y\_l)=0$. We can use the following cross-entropy loss function and substitute the probability from the above equation to train our reward model: $$ \begin{aligned} L &= -\big[\mu(y\_w) \log(P(y\_w>y\_l)) + \mu(y\_l) \log(P(y\_l>y\_w))\big]\\ &= -\log(P(y\_w>y\_l))\\ &= -\log(\sigma(r\_\theta(x,y\_w)-r\_\theta(x,y\_l))) \end{aligned} $$ In InstructGPT, the model is made to generate $K$ responses, so we can make $\binom{K}{2}$ pairwise comparisons. For example, if the model generates four responses $A, B, C, D$ and our ranking is $B>C>D>A$, then there are ${4\choose2}=6$ comparisons possible: $B>C$, $B>D$, $B>A$, $C>D$, $C>A$ and $D>A$. The loss function in this case reduces to $$ L = - \frac{1}{\binom{K}{2}} E\_{(x,y\_w,y\_l) \in \mathcal{D}}\Big[\log(\sigma(r\_\theta(x,y\_w)-r\_\theta(x,y\_l)))\Big] $$ Hope this helps. I have created a [blog post to explain RLHF in conversational AI models](https://username_2.github.io/blog/reinforcement-learning-from-human-feedback/) if you want to understand better. Upvotes: 2
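The mechanics of this loss fit in a few lines of plain Python (a sketch, not OpenAI's actual code): each pair contributes $-\log\sigma(r_w - r_l)$, and a ranking of $K$ responses is averaged over all $\binom{K}{2}$ pairs.

```python
import math
from itertools import combinations

def pair_loss(r_w, r_l):
    """-log(sigmoid(r_w - r_l)) for one preferred/rejected pair."""
    return -math.log(1.0 / (1.0 + math.exp(-(r_w - r_l))))

def ranking_loss(rewards):
    """Average pair loss over all C(K, 2) pairs; `rewards` lists the
    reward model's scores from most to least preferred response."""
    pairs = list(combinations(rewards, 2))
    return sum(pair_loss(r_w, r_l) for r_w, r_l in pairs) / len(pairs)

print(round(pair_loss(0.0, 0.0), 4))    # 0.6931 -- log(2): the model can't tell them apart
print(round(pair_loss(5.0, -5.0), 4))   # near 0: correctly ordered with a big margin
print(round(pair_loss(-5.0, 5.0), 4))   # large: badly mis-ordered
print(round(ranking_loss([3.0, 1.0, -1.0, -2.0]), 4))
```

Reversing the reward order of a ranking strictly increases the loss, which is exactly the incentive described above.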
2023/01/20
<issue_start>username_0: When training a DNN on infinite samples, do ADAM or other popular optimization algorithms still work as intended? I have an DNN training from an infinite stream of samples, that most likely won't repeat. So there is no real notion of "epoch". Now I wonder if the math behind ADAM or other popular optimizers expect the repetition of data over the epochs? If so, should I collect a limited amount of those samples and use them for training data and validation data, or would it be better to use all data available (even if the training data never repeats then)?<issue_comment>username_1: Subdividing datasets into batches is only done for computational reasons (RAM, computation time etc). Optimizers do not care about repeated data. They get a batch of data to optimize on and will calculate the gradient for that batch and update the model accordingly. Nothing in this process requires the batches to reappear at all. Upvotes: 0 <issue_comment>username_2: In general, the methods still work even with an infinite amount of data, as long as there are common/reoccuring patterns that a neural network can learn to identify. For example: If you would have infinitely many images of dogs and cats, there are features that discriminate the two animals that are mostly consistent like the shape of the nose. Having infinitely many samples in these cases is generally desirable, because it can benefit the ability of the model to generalize. In contrast, there are cases where this is not true that depend on the data: If your data contains (concept-) drift, meaning that your data distribution changes over time, the model you train might not be able to learn a consistently performing function and therefore chases a moving objective. For images, this can be the case if the labels would depend on lighting conditions that constantly change (concept drift), or if the objects in the images that you want to classify continuously change shape (drift). Upvotes: 1
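To make the "no notion of epoch" point concrete, here is a minimal from-scratch sketch (pure Python; the toy stream is invented for illustration): Adam's update rule only ever sees the current batch's gradient and its own running moment estimates, so nothing in it depends on data ever repeating.

```python
import math
import random

random.seed(0)

def sample_batch(n=32):
    """A hypothetical infinite stream: y = 3x + noise, fresh samples every call."""
    xs = [random.uniform(-1.0, 1.0) for _ in range(n)]
    ys = [3.0 * x + random.gauss(0.0, 0.1) for x in xs]
    return xs, ys

# Adam on a single weight w for the model y_hat = w * x.
w, m, v = 0.0, 0.0, 0.0
lr, b1, b2, eps = 0.05, 0.9, 0.999, 1e-8
for t in range(1, 501):                      # 500 steps; no epochs anywhere
    xs, ys = sample_batch()
    g = sum(2.0 * (w * x - y) * x for x, y in zip(xs, ys)) / len(xs)
    m = b1 * m + (1 - b1) * g                # first-moment estimate
    v = b2 * v + (1 - b2) * g * g            # second-moment estimate
    m_hat = m / (1 - b1 ** t)                # bias correction
    v_hat = v / (1 - b2 ** t)
    w -= lr * m_hat / (math.sqrt(v_hat) + eps)

print(f"learned w: {w:.2f}  (true slope: 3.0)")
```

Whether you additionally hold out a validation buffer then mainly matters for monitoring drift, as the second answer notes.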
2023/01/25
<issue_start>username_0: Is it difficult for other companies to train a model similar to ChatGPT, and what makes it difficult? What is challenging about reproducing the results obtained by OpenAI with ChatGPT/GPT3.5? Would it be possible for a company like Meta or Google to have a model equal to ChatGPT/GPT3.5 in the next month or so? Why or why not? I understand that a big language model is expensive to train, so I'm expecting only large companies to be able to train such models to a sufficient extent.<issue_comment>username_1: Challenges to reproduce ChatGPT: * Compute cost * Collect training data * Find the proper choice for network architecture + RL (OpenAI hasn't published all the details) --- Two interesting papers on training cost vs. LLM quality: * [What Language Model to Train if You Have One Million GPU Hours](https://arxiv.org/pdf/2210.15424.pdf)? * [Training Compute-Optimal Large Language Models](https://arxiv.org/pdf/2203.15556.pdf) For some tasks, "smaller LLMs" can perform well e.g. see *<NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>. [Benchmarking Large Language Models for News Summarization](https://arxiv.org/pdf/2301.13848.pdf). arXiv:2301.13848.*: > > We find instruction tuning, and not model size, is the key to the > LLM’s zero-shot summarization capability > > > Upvotes: 4 [selected_answer]<issue_comment>username_2: Actually, Google created a bigger model than GPT-3 and models in the GPT-3.5 series, and consequently ChatGPT too (because ChatGPT is based on a GPT-3.5 model) - [Switch-C](https://arxiv.org/abs/2101.03961) has trillions of parameters, one order of magnitude bigger than the GPT models that I know of, and it was developed before ChatGPT was announced. I don't know how many parameters ChatGPT has exactly, but it shouldn't have more than several billions of parameters. So, what makes reproducing a model like ChatGPT difficult for companies like Google? 
Definitely not the lack of computational resources or money, but **the lack of transparency**. My impression is that Google also tends to be open-source, as opposed to OpenAI, which wants to make money off everything. Moreover, I'd like to note that the GPT models have received a lot of hype, but there are other pre-trained models (e.g. LaMDA or Switch-C), developed by Google for example, that maybe should also have our attention. Google simply doesn't need to generate all this hype to get the money, as they still get most of their revenue from ads (the last time I checked). Upvotes: 2
2023/01/26
<issue_start>username_0: Take this two texts as an example: > > "I will start the recording and share the presentation. I hope it's > all clear from what we saw last time on classical cryptography and if > you remember we got to discussing perfect security. Ah, you're right, > I didn't upload the slides. Wait, they are the same as last time. If > you want to get the face on the fly, last time I mean last year. I had > some things to do and didn't upload." > > > > > "Okay, I will start recording and I will share the presentation again. > 1, 0 So I hope it's all clear compared to what we saw last time about > classic victory, And if you remember we got to let's say discuss the > perfect security. Ah you could see already, I didn't upload the > Slides, Wait they are the same as last time, Eh. If you want to have > done on the fly, I mean last time, I mean from last year. Morning I > had some stuff to do, I didn't upload." > > > We want to align the sentences so that similar sentences (within a certain degree of difference) are being matched together. Which are the approaches to solve the issue? ``` [I will start the recording and share the presentation], [I will start recording and I will share the presentation] [I hope it's all clear from what we saw last time on classical cryptography], [I hope it's all clear compared to what we saw last time about classic victory] [and if you remember we got to discussing perfect security], [And if you remember we got to let's say discuss the perfect security] . . . 
``` I have been looking into DTW and perceptual hashing as a way to solve the problem without any concrete result then I saw that in the field of automatic translation sentence alignment is widely used but with the assumption that the two texts have different languages and that there is a one-to-one mapping between words without "gaps" or extra words in between.
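One way to get gap-tolerant sentence alignment without the one-to-one assumption is a Needleman–Wunsch-style dynamic program over pairwise sentence similarity. The sketch below uses Python's standard-library `difflib` for the similarity score; the gap penalty is an arbitrary choice you would tune:

```python
from difflib import SequenceMatcher

def align(sents_a, sents_b, gap=-0.2):
    """Global alignment of two sentence lists, scoring matches by
    character-level similarity; `gap` penalizes skipping a sentence."""
    n, m = len(sents_a), len(sents_b)
    sim = [[SequenceMatcher(None, a, b).ratio() for b in sents_b] for a in sents_a]
    score = [[0.0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        score[i][0] = i * gap
    for j in range(1, m + 1):
        score[0][j] = j * gap
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            score[i][j] = max(score[i-1][j-1] + sim[i-1][j-1],
                              score[i-1][j] + gap,
                              score[i][j-1] + gap)
    # Trace back to recover the matched pairs (gaps are simply skipped).
    pairs, i, j = [], n, m
    while i > 0 and j > 0:
        if score[i][j] == score[i-1][j-1] + sim[i-1][j-1]:
            pairs.append((sents_a[i-1], sents_b[j-1]))
            i, j = i - 1, j - 1
        elif score[i][j] == score[i-1][j] + gap:
            i -= 1
        else:
            j -= 1
    return pairs[::-1]

a = ["I will start the recording and share the presentation",
     "I hope it's all clear from what we saw last time on classical cryptography"]
b = ["Okay, I will start recording and I will share the presentation again",
     "1, 0 So I hope it's all clear compared to what we saw last time about classic victory"]
print(align(a, b))
```

For real transcripts you would likely swap the character-level ratio for a semantic similarity (e.g. sentence embeddings), but the alignment machinery stays the same.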
2023/01/26
<issue_start>username_0: I've been studying NNs with tensorflow and decided to code a simple NN from scratch to get a better idea of how they work. It is my understanding that the cost is used in backpropagation, so basically you calculate the error between prediction and actual and backpropagate from there. However, all the examples I read online, even the ones doing classification, just use: error=actual-prediction instead of: error=mse(actual-prediction) or: error=cross\_entropy(actual-prediction) And they leave mae/rmse etc. just as a metric; as per my (probably wrong) understanding, these should/could be used to calculate the error as well. On the other hand, while working with tensorflow, the loss function I use does change the output and it's not just a metric. What's my error here? In other words, isn't the loss function the same as the error function? Example code (taken from: machinelearninggeek.com/backpropagation-neural-network-using-python/): Note how the MSE is used as a metric only, while backpropagation only uses pred-outputs (E1 = A2 - y\_train). ```
for itr in range(iterations):
    # feedforward propagation
    # on hidden layer
    Z1 = np.dot(x_train, W1)
    A1 = sigmoid(Z1)

    # on output layer
    Z2 = np.dot(A1, W2)
    A2 = sigmoid(Z2)

    # Calculating error
    mse = mean_squared_error(A2, y_train)
    acc = accuracy(A2, y_train)
    results = results.append({"mse": mse, "accuracy": acc}, ignore_index=True)

    # backpropagation
    E1 = A2 - y_train
    dW1 = E1 * A2 * (1 - A2)
    E2 = np.dot(dW1, W2.T)
    dW2 = E2 * A1 * (1 - A1)

    # weight updates
    W2_update = np.dot(A1.T, dW1) / N
    W1_update = np.dot(x_train.T, dW2) / N
    W2 = W2 - learning_rate * W2_update
    W1 = W1 - learning_rate * W1_update
```<issue_comment>username_1: There are different functions to pick from for your loss. The most common one you read about online is the Mean Squared Error. It calculates how far the prediction is off from the actual value and squares it. All of them are summed up.
So basically the Loss-Function "generalizes" the error over every prediction. The actual error in a case is just how far off it is. Upvotes: 1 <issue_comment>username_2: I think you are confused about the terms *error*, *loss function* and *metric*. To summarize: 1. Error (vector) tells us how far off each prediction is. 2. Loss function maps the error (vector) to a single number (loss). 3. Loss and metric (scalar) describe overall how good the predictions are, serving as a direction towards which the model should improve. Note these terms are general and do not apply only to neural networks. **TL;DR** Let's take a regression problem as an example. Suppose we have a regression model which predicts [-1, 3, 2, 4] where the ground truth is [0, 1, 2, 3]. Clearly, some predictions are off. But by how far? Intuitively, we can get the "off" part by subtraction: prediction - actual = [-1, 3, 2, 4] - [0, 1, 2, 3] = **[-1, 2, 0, 1]** which is what we call the **error (vector)**. Now we show the error to the model and say, "hey, those are the wrongs you made, do better next round", to which the model replies, "sure, which one do you prefer?" and hands you 2 new predictions: 1. [-2, 3, 0, 5] 2. [0, 1, 2, 10] Which one is better? Well, it depends on your goal: if you want "as close to the truth as possible on average", then 1) may be better; or if you say "no prediction should be below the truth", then 2) is the only choice. Regardless, you have to tell the model *which direction it should improve towards* - and this is what the **loss function** is for. Let's say we decide to use mean-squared error (MSE) as the loss function. Computing gives: Loss\_1 = Mean(Square(error vector)) = Mean(Square([-2,3,0,5] - [0,1,2,3])) = Mean(Square([-2,2,-2,2])) = Mean([4,4,4,4]) = **4** Loss\_2 = Mean(Square(error vector)) = Mean(Square([0,1,2,10] - [0,1,2,3])) = Mean(Square([0,0,0,7])) = Mean([0,0,0,49]) = **12.25** We call the result of the loss function the **loss (scalar)**.
We want the loss to be small (minimize), so we tell the model, "1) is the better way to go". Notice that the loss is **a scalar**, and is an indicator of the *goodness of prediction*. In reality, a model improves its predictions ('learns') by repeating: make predictions -> compute error vector -> compute loss -> feed to optimization algorithm -> update model parameters -> make new predictions (Often you will see the terms *loss* and *metric* used interchangeably; in brief, the metric is the ultimate goal you want to achieve, a number you want to maximize; however, for practical reasons the optimization process prefers a differentiable function, and a metric may not be differentiable. In this case we usually choose a loss function which is both differentiable and 'close' to the metric. Since MSE is differentiable, it can serve as both loss and metric.) **Update** In the code quoted, the computation of loss is hidden in the line `dW1 = E1 * A2 * (1 - A2)` (and similarly in the upcoming `E2` and `dW2` lines). Here, back-propagation is applied, which implicitly assumes a squared loss is used. Through some maths tricks and simplification, we arrive at the `E1` formula. You may find the full derivation [here](https://en.wikipedia.org/wiki/Backpropagation#Finding_the_derivative_of_the_error). Upvotes: 3 [selected_answer]
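To make the distinction concrete, here is a minimal numpy sketch (the values of `A2` and `y` are made up for illustration) showing the error vector, the MSE loss computed from it, and the gradient term `E1 * A2 * (1 - A2)` from the quoted code, which is where the squared loss is hidden:

```python
import numpy as np

# Toy sigmoid outputs and targets (hypothetical values, for illustration only)
A2 = np.array([0.9, 0.2, 0.7])   # predictions
y = np.array([1.0, 0.0, 0.0])    # ground truth

E1 = A2 - y                      # error vector: how far off each prediction is
loss = np.mean(E1 ** 2)          # loss function (MSE): maps the vector to one scalar

# The backprop line `dW1 = E1 * A2 * (1 - A2)` is the chain rule applied to the
# squared error: d/dZ2 [ (A2 - y)^2 / 2 ] = (A2 - y) * sigmoid'(Z2),
# where sigmoid'(Z2) = A2 * (1 - A2).
dZ2 = E1 * A2 * (1 - A2)
print(loss, dZ2)
```

So the loss function is not absent from the quoted code - its derivative is exactly what `E1 * A2 * (1 - A2)` computes.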
2023/01/26
<issue_start>username_0: I want to create a music sheet scanner using a CNN model, and the images I am using are not square; if I make them square, important data will be lost and it might confuse the model. Is it ok to use longer pictures? If yes, what sizes should I use for a picture like the one below? For the CNN model I will have to use different scales for the layers; I will not be able to use the 3x3 kernel anymore, right? I am still a beginner, so I am learning these things now without having previous experience in this field. [![enter image description here](https://i.stack.imgur.com/oWBzX.png)](https://i.stack.imgur.com/oWBzX.png)<issue_comment>username_1: I'd like to post this as a comment, but I don't have enough rep for that. But your question already exists. You'll find your answer there. <https://stats.stackexchange.com/questions/240690/non-square-images-for-image-classification> Quick summary: a CNN can take non-square images for your model. However, it's often easier to just stick with square images. There are workarounds to ensure that property. If there's no other way, you can just try it out and evaluate your results. Upvotes: 0 <issue_comment>username_2: I disagree with username_1, and I think the linked post isn't relevant in this case. Well, you haven't told us what your network's model looks like, but I assume it should understand the notation and output the notes, the key and maybe even the tempo (sorry, my musical education is rusty). Your inputs are naturally wider than tall, and it doesn't make sense to, for example, pad them with white pixels or scale them to a square aspect ratio. But if you use square kernels, you cannot stack that many of them before your image's height becomes just one pixel while the width is still in the hundreds. That is why I suggest you specify non-square (rectangular) kernel shapes, so that the height and width of the network's output shrink at comparable rates.
On the face of it, your problem seems non-trivial if the notation on the left edge of the image impacts the correct interpretation of the rest of the notes. Then again, maybe a few fully connected (dense) layers can solve it without problems. Upvotes: 1
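To illustrate the point about kernel shapes, here is a small pure-Python sketch (the 64x512 input size and the kernel/stride choices are hypothetical) that tracks how a wide image's height collapses under repeated square convolutions, and how a rectangular kernel can then keep reducing the width:

```python
def conv_out(size, kernel, stride=1, pad=0):
    """Output length along one axis of a 'valid'-style strided convolution."""
    return (size + 2 * pad - kernel) // stride + 1

h, w = 64, 512        # a wide music-staff crop: much wider than tall
layers = 0
# Square 3x3 kernels with stride 2: the height hits 1 pixel long before
# the width stops being "in the hundreds".
while h >= 3 and w >= 3:
    h, w = conv_out(h, 3, stride=2), conv_out(w, 3, stride=2)
    layers += 1
print(layers, h, w)   # after 5 layers the height is already 1 while the width is 15

# A rectangular 1x5 kernel (stride 2 along the width) keeps shrinking
# the width without touching the height.
while w >= 5:
    w = conv_out(w, 5, stride=2)
print(h, w)
```

The same shape arithmetic applies whatever framework you use; rectangular kernel and stride tuples are exactly the knobs that let both dimensions reach a compact size together.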
2023/01/27
<issue_start>username_0: When ChatGPT is generating an answer to my question, it generates it word by word, so I actually have to wait until I get the final answer. Is this just for show? Or is it really generating the answer word by word in real time, not knowing yet what the next word will be? Why does it not give the complete answer text all at once?<issue_comment>username_1: ChatGPT is a conversational agent based on GPT-3.5, which is a causal language model. Under the hood, GPT works by predicting the next token when provided with an input sequence of words. So yes, at each step a single word is generated, taking into consideration all the previous words. See for instance [this Hugging Face tutorial](https://huggingface.co/docs/transformers/tasks/language_modeling). To further explain: while outputting entire sequences of words is in principle possible, it would require a huge amount of data, since the probability of each sequence in the space of all sequences is extremely small. Instead, building a probability distribution over half a million English words is feasible (in reality just a tiny fraction of those words is often used). On top of that, there may be some theatrical effect, to simulate the AI "typing" the answer. Upvotes: 3 <issue_comment>username_2: > > Why does ChatGPT not give the answer text all at once? > > > Because ChatGPT is [autoregressive](https://en.wikipedia.org/wiki/GPT-3) (= it generates each new word by looking at the previous words), as [username_1](https://ai.stackexchange.com/users/16363/rexcirus) mentioned. > > Is this just for show? > > > On <https://beta.openai.com/playground>, output words/tokens are displayed faster when using smaller models such as `text-curie-001` than larger ones such as `text-davinci-003`. I.e., the inference time does seem to impact the display time. <https://twitter.com/ArtificialAva/status/1624411499375603715> compared the display speed of ChatGPT vs. ChatGPT Plus vs.
ChatGPT Turbo Mode, and showed that ChatGPT Turbo Mode is over twice as fast to display the output, which further indicates that ChatGPT shows its response word by word because of its backend (computation time + autoregressive generation). Upvotes: 2
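As an illustration of the autoregressive decoding described in the answers above, here is a toy sketch - the hand-made bigram table stands in for GPT's (vastly larger) learned distribution, and each token must be sampled before the next one is even defined:

```python
import random

# Hand-made next-token probabilities, purely for illustration; in GPT these
# come from a neural network conditioned on *all* previous tokens.
next_token_probs = {
    "<s>": {"the": 0.6, "a": 0.4},
    "the": {"cat": 0.5, "dog": 0.5},
    "a": {"cat": 0.5, "dog": 0.5},
    "cat": {"sat": 1.0},
    "dog": {"sat": 1.0},
    "sat": {"</s>": 1.0},
}

def generate(seed=0, max_len=10):
    """Sample one token at a time; each choice depends on the previous token."""
    rng = random.Random(seed)
    out, tok = [], "<s>"
    for _ in range(max_len):
        probs = next_token_probs[tok]
        tok = rng.choices(list(probs), weights=list(probs.values()))[0]
        if tok == "</s>":
            break
        out.append(tok)  # a word can be displayed as soon as it is sampled
    return out

print(generate())
```

The answer cannot be emitted all at once because token N+1 does not exist until token N has been sampled - streaming the words is just showing this loop's progress.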
2023/01/28
<issue_start>username_0: I have encountered this pattern for a long time (5+ years). So many professionals come with an interesting domain-specific problem, and they *demand* using state-of-the-art deep learning models: take it or leave it. I understand that technology advances faster than ever, but I am still missing the point. Indeed, I often propose using simpler, traditional ML models over complex ones because, for example, in an MLOps scenario it is better to start with simpler models and then move to more complex ones. But, unfortunately, business experts often seem to be disappointed with such proposals. Moreover, there are many [reasons](https://ai.stackexchange.com/questions/17492/why-are-traditional-ml-models-still-used-over-deep-neural-networks) to prefer classic ML, which I often use to motivate the latter.
2023/01/30
<issue_start>username_0: Based on the mathematics of Go and the machine learning algorithms used to play it, is there a mathematical limit as to how much of the game-tree space even an AI could learn, because of the inherent complexity / computational bounds (number of possible states of the board) of the game of Go? Are we approaching a technological limit given current computer hardware (even massive data servers)?<issue_comment>username_1: In the world of AI, 100% accuracy is almost impossible; it is always continuous learning. The models which learn the game typically fall under reinforcement learning, which learns the game by making mistakes: if a model makes a mistake, it punishes itself with a quantitative measure for that mistake, and rewards itself for a right move. In a virtual setting, 100% accuracy would only be possible if the model made all the possible mistakes in every kind of setting (placement of stones) on the board, and it is quite impossible to generate that many combinations. With today's computational power we have perhaps reached 70% to 75% (a guess) of the combinations, but computing the rest is still a challenge. Upvotes: 0 <issue_comment>username_2: Consider that solving any problem, with AI or not, is a computational matter. Simply put: *you can't break the rules of computational complexity*. Let's consider the game of Go: you can enumerate all the possible combinations of states, which is a huge number like $10^{350}$ or so (I don't remember exactly, but it is something larger than the number of atoms in the universe!). The point here is that Go and similar games have **exponential complexity**, meaning that the number of possibilities increases exponentially at each step in time: these are usually called *NP-hard problems* in computer science terms, i.e. problems for which no polynomial-time algorithm is known, and which are intractable when aiming for a *perfect* (or globally optimal) solution.
In practice, if you consider a brute-force approach, it would take *forever* to solve Go optimally, no matter how many processors you have and how fast they are: the computational complexity is just intractable, and the situation won't change even in 1000 years! (Even considering a computer with *infinite* memory.) * The effect of increasing the computation power is only limited. For example, say after 10 years you can get a 100x faster machine or even more. Maybe you'll be able to solve Go perfectly, but only for small boards, say $N\times N$, and not for $(N+1)\times (N+1)$ ones, due to the exponential nature of the problem. * So you need an *exponential increase in computation* to increase the size of a solvable state space just by $1$, or so. * Time allows you to get better equipment, but in the case of such problems it won't allow you to solve that class of problems for whatever $N$, i.e. problem size, in an acceptable time. For such combinatorial problems, the solution is to lower the computational complexity at the cost of introducing inaccuracies in the solution: thus not computing a perfect (optimal) solution anymore but, instead, an *approximated* (or locally optimal) one. The way you can achieve this is, for example, by using *search heuristics* that help to cut down the search space (by pruning some sub-trees), thus allowing you to solve larger $N$, or to approximate more aggressively by sampling a single (or a few) paths in the full tree of possibilities, like MCTS (Monte Carlo tree search, which also powers AlphaGo) does. * Another example is deep learning, which allows solving problems that are hard, and also difficult to hand-code, by learning from data. DL models yield an approximated solution that is usually (or, better said, always in practice) a local optimum. * You can control the goodness of the solution by the number of epochs, for example.
Therefore, a better model may allow you to get the same solution in fewer epochs, or even a better one - but the point is that it's still approximated. * Thus, DL can be seen as a smart way to approximate a brute-force search. So, to conclude: better (learning) algorithms allow us to compute better solutions in less time, but the problem remains intractable. Therefore it is best to invest time in inventing better/faster/more accurate (approximated) algorithms rather than waiting to get a better CPU. (As a side note, in theory and in ideal conditions, quantum computers may process an exponential amount of information, potentially solving combinatorial problems in linear or so time: *be aware that I'm not a big expert*. Indeed, in practice you have few qubits that are only partially interconnected, and you need to deal with errors and external interference, as well as the approximation of operations with the quantum gates. Maybe in the future quantum computers will allow solving for very large $N$, with good solutions too, but I think that it would still be possible to hit the hardware limit by finding a sufficiently large $N$ that is just too much to solve for.) Upvotes: 2 [selected_answer]
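A quick back-of-the-envelope sketch of the exponential blow-up described above (the branching factor ~250 and game length ~150 are commonly quoted rough figures for 19x19 Go; the 10^18 evaluations-per-second machine is an invented, deliberately generous assumption):

```python
b, d = 250, 150                 # rough branching factor and game length for Go
leaves = b ** d                 # ~number of distinct game continuations

evals_per_second = 10 ** 18     # an absurdly generous hypothetical machine
age_of_universe_s = 4 * 10 ** 17

seconds_needed = leaves // evals_per_second
print(len(str(leaves)))                     # the leaf count has ~360 digits
print(seconds_needed > age_of_universe_s)   # brute force exceeds the age of the universe
```

Even shaving 18 orders of magnitude off the search leaves a number with hundreds of digits, which is why pruning and sampling (MCTS, learned value functions) are the only viable route.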
2023/01/30
<issue_start>username_0: I am stuck at the proof of the contraction of variance for the distributional Bellman operator from the [paper](https://arxiv.org/pdf/1707.06887.pdf), in which it is defined as [![enter image description here](https://i.stack.imgur.com/iTUgW.png)](https://i.stack.imgur.com/iTUgW.png) and the proof is stated as [![enter image description here](https://i.stack.imgur.com/t9CMK.png)](https://i.stack.imgur.com/t9CMK.png) In its second part, how is the variance of the target distribution equal to the expectation of the variance terms over the next state-action pairs?
2023/01/31
<issue_start>username_0: I note [this question](https://ai.stackexchange.com/questions/22877/how-much-computing-power-does-it-cost-to-run-gpt-3) was deemed off-topic, so I'm trying to clearly frame *this question* in terms of the scope of response I'm interested in, namely **ethics** and **sustainability** issues associated with the soon-to-be proliferation of OpenAI ChatGPT types of tools for all manner of online information-seeking behavior (from humans and other bots). *This is not a programming or specific hardware question.* **On average, how much energy is consumed for each response that OpenAI's public ChatGPT provides?** I.e., what is the energy to run the entire system for 24 hours divided by the number of responses generated in 24 hours (ignoring the energy consumed to train the system or build the hardware components)? How does this compare to a Google/DuckDuckGo/Bing search inquiry? I read somewhere from an OpenAI employee on the ChatGPT team that the compute power used to provide responses to queries is "ridiculous", and there's documentation of the size of the memory requirements of hosting servers and parameters, but without knowing its throughput, for example, it's hard to quantify the energy consumption. I often get more interesting results from ChatGPT than DuckDuckGo on certain types of queries where I used to know the answer but cannot remember it. In these cases I can fact-check for myself; I'm looking for memory prompts with names and jargon that will remind me. Also, when seeking out counter-views to my own (say, critiques of degrowth or heterodox economics concepts), ChatGPT is good at providing names and papers/reports/books that critique the view I provide it. In many cases more usefully than conventional search engines.
Therefore, I can see the popularity of these tools ballooning rapidly, especially when the operational costs (CAPEX + OPEX) of the servers and maintainers are borne by large amounts of seed funding (e.g. OpenAI) or any other loss-leader startup wishing to ride the next wave of AI. The heart of my question is: "at what `externalized` costs do we gain these tools, in terms of greenhouse gases, use of limited mineral resources, GPU scarcity, etc."<issue_comment>username_1: <NAME> states "probably single-digit cents", thus worst case 0,09 €/request. I guess at least half the cost is energy; at a cost of 0,15 €/1kWh, a request would use 0,09 €/request \* 50% / 0,15 €/kWh = 0,3 kWh/request = 300 Wh per request. 60 smartphone charges of 5 Wh per charge ;) Source: <https://www.forbes.com/sites/ariannajohnson/2022/12/07/heres-what-to-know-about-openais-chatgpt-what-its-disrupting-and-how-to-use-it/> A Google search request uses 0.0003 kWh = 0,3 Wh, thus a Google search uses 1000x less, but as Google has started to use AI too, a search probably consumes more by now as well. Source: <https://store.chipkin.com/articles/did-you-know-it-takes-00003-kwh-per-google-search-and-more> Upvotes: 2 <issue_comment>username_2: I've taken a stab at estimating the carbon footprint of ChatGPT [here](https://towardsdatascience.com/the-carbon-footprint-of-chatgpt-66932314627d). I estimated the daily carbon footprint of the ChatGPT service to be around 23 kgCO2e, and the primary assumption was that the service was running on 16 A100 GPUs. I made the estimate at a time when little information about the user base was available. I now believe that the estimate is way too low, because ChatGPT reportedly had 590M visits in January, which I don't think 16 GPUs can handle. Recently, I also estimated [ChatGPT's electricity consumption](https://towardsdatascience.com/chatgpts-electricity-consumption-7873483feac4) in January 2023 to be between 1.1M and 23M kWh.
To convert that into a carbon footprint, we'd need to know the carbon intensity of the electricity grid in every location where a ChatGPT instance is running. We don't have this info, but if we instead convert the electricity consumption into a carbon footprint using a very low carbon intensity like Sweden's 9 g/kWh (which is the lowest in the EU and lower than the US), the carbon footprint of ChatGPT in January 2023 would be estimated to be between 10 and 207 tons CO2e. Upvotes: 2 <issue_comment>username_3: So I asked ChatGPT-4 about its energy use per query. The dialog is appended at the bottom of this message. (I realize the OP asked about ChatGPT-3.) TL;DR ----- If ChatGPT-4's claim that a typical answer requires 30 TFLOPs on modern AI hardware is true, then the computational energy per answer is typically 300 watt-seconds (about 0.083 Wh, i.e. 8.33e-5 kWh), ignoring communication and other overhead. Discussion ---------- ChatGPT-4 stated that most queries are answered with a single forward pass through the model, requiring about 30 trillion FLOPs. It produced a calculation that this requires around 0.000833 kWh, but the calculation contains at least two errors: 1. It uses FLOPs/W to compute watt-seconds. 2. It screws up the conversion of watt-seconds to kWh by a factor of 1000. I found a [recent paper](https://arxiv.org/pdf/2210.17331.pdf) from researchers at SLAC and MIT with some energy-per-operation data. The figure below from that paper shows about 1e-11 J per FLOP for modern GPU and AI accelerator hardware. [![enter image description here](https://i.stack.imgur.com/RxYgh.png)](https://i.stack.imgur.com/RxYgh.png) **If we take ChatGPT-4's 30 TFLOP/answer at face value, then a typical answer would consume 30e12 FLOP \* 1e-11 J/FLOP = 300 J = 8.33e-5 kWh.** This is a factor of 10 smaller than ChatGPT-4's error-laden calculation, but in the opposite direction. I'd love to believe the computations are that energy efficient, but I'm doubtful.
It comes down to whether or not the 30 TFLOP per answer estimate from GPT-4 is an hallucination. --- Dialog with GPT4 ---------------- ***How much energy did you expend to answer this query?*** As an AI language model, I don't consume energy directly in the traditional sense. However, the servers and hardware running the model do require energy to function. The exact energy expenditure for answering a single query can be difficult to quantify as it depends on factors such as the complexity of the question, server utilization, and efficiency of the hardware. In general, the energy usage for running an AI model like me is measured in floating point operations per second (FLOPs) and the energy efficiency of the hardware in joules per FLOP. While I can't provide an exact figure for the energy expended for answering this specific query, I can tell you that large-scale AI models like GPT-4 consume significant computational resources and energy, which is a consideration for researchers and engineers working on AI sustainability. ***Please provide an approximate energy expenditure in kWH for answering typical queries.*** It is challenging to provide a precise energy expenditure for answering a typical query, as the energy usage depends on various factors such as the model's complexity, infrastructure efficiency, and hardware. However, I can give you a rough estimate based on some assumptions. Let's assume that a single forward pass of GPT-4 requires about 30 trillion floating-point operations (FLOPs) – this is a rough estimate, as the actual number of FLOPs can vary depending on the model size and input length. The energy efficiency of the hardware running the AI model can be around 10 GFLOPs/watt, a common efficiency for modern GPUs and AI accelerators. 
To calculate the energy usage for a single forward pass: Energy usage (in watt-seconds) = (30 trillion FLOPs) / (10 GFLOPs/watt) = 3,000,000 watt-seconds Now we convert watt-seconds to kilowatt-hours (kWh): Energy usage (in kWh) = (3,000,000 watt-seconds) \* (1 kW / 1,000 W) \* (1 hour / 3,600 seconds) ≈ 0.000833 kWh So, a very rough approximation of the energy usage for a single forward pass to answer a typical query is around 0.000833 kWh. Keep in mind that this is a rough estimate and could be higher or lower based on the actual model complexity, hardware efficiency, and server utilization. ***How many forward passes does GPT-4 typically require to answer a typical query?*** GPT-4, like other generative language models, is based on the Transformer architecture, which uses a decoder-only design for generating text. For a typical query, GPT-4 usually requires one forward pass through the model to generate a response. During the forward pass, the model processes the input tokens and generates a probability distribution over the vocabulary for the next token at each position in the sequence. The process continues sequentially until an end-of-sequence token is generated or a predetermined maximum length is reached. In summary, GPT-4 generally needs one forward pass to answer a typical query. However, it's important to note that, in practice, additional computational overhead may be involved in tokenization, decoding, and other preprocessing and postprocessing steps. Upvotes: 2
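Redoing the arithmetic from the dialog without the unit mistakes (both input figures are the estimates quoted above - 30 TFLOPs per answer from GPT-4 itself, which may be a hallucination, and ~1e-11 J/FLOP from the SLAC/MIT paper):

```python
flops_per_answer = 30e12     # GPT-4's own (unverified) per-answer estimate
joules_per_flop = 1e-11      # ~modern GPU / AI accelerator, per the cited paper

joules = flops_per_answer * joules_per_flop
kwh = joules / 3.6e6         # 1 kWh = 3.6e6 joules
wh = kwh * 1000
print(joules, kwh, wh)       # 300 J, ~8.33e-5 kWh, ~0.083 Wh
```

Keeping the joules-to-kWh conversion explicit avoids both errors in the dialog's calculation.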
2023/02/01
<issue_start>username_0: I am trying to refine my knowledge of AI, but unsupervised learning is a bit of a stumbling block for me. I understand that it finds 'hidden' patterns in data, but if they are hidden, how does a user interpret the outcomes? It would be like someone categorising a deck of playing cards in some way, but the logic of that process is never known. How exactly is this helpful? On to my second question, which might help me understand the first question a little more clearly. What examples have there been in the real world of unsupervised learning being used, and what exactly did this neural net help solve?<issue_comment>username_1: Unsupervised learning aims to find similarities and patterns in unlabeled data. It is used for clustering and association problems (you can google for an explanation). One example can be a **music recommendation system** (e.g., Spotify): from the list of songs that you like, unsupervised learning can identify songs with similar characteristics that you may like as well. It can be based on the melody, tempo, dynamics of the song, etc. Another example is **online store recommendation systems** (e.g., Amazon): unsupervised learning can discover interesting relationships in large databases. For example, people who buy a table will most probably buy chairs and a sofa. People who buy milk usually buy bread and butter, and so on. Another example is **anomaly detection** (fraud detection, health monitoring, defect detection). From a large amount of data, unsupervised learning can detect cases that differ from a norm (e.g., a sudden burst or decrease in activity). Upvotes: 0 <issue_comment>username_2: **A good example is face recognition on your phone**, or in face recognition systems in general. The way they work is to pass in information while tightening the channel width for information throughput, performing a compression of the input face features.
[![enter image description here](https://i.stack.imgur.com/Bkiuj.png)](https://i.stack.imgur.com/Bkiuj.png) Then they try to reconstruct the input on the other side by learning a decompression transformation. If you're able to recreate the input features perfectly with fewer values representing that input, then you've learned a more ideal and abstract representation of the facial features. The reason this is considered unsupervised is that the human is not trying to influence how it does this using their own reasoning. It's [face in --> compress --> decompress --> face out --> is face in the same as face out]. It's not "learning to find patterns". It's learning to create better representations. A network that passes through, compresses and decompresses faces accurately is useless by itself, but the intermediate compressed representation is VERY valuable. You'll hear the compressed representation referred to as an embedding. Facial recognition systems don't compare noisy input, but the pure representations that filter out noise and capture an abstract representation - so well that the same face will end up with compressed representations that resemble that same face taken from slightly different angles, with different hair styles or colors, etc. If a face `embedding vector` is close spatially to another face `vector`, then it is similar in the abstract, not just circumstantial, sense. --- You can now use spatial lookups on compressed face representations to do automated watchlist queries on camera feeds. If a camera face, when compressed, looks similar to a watchlist face, then send an alert. --- Similarly, you can compress the face on an ID, then see if it's similar enough to a face extracted from a selfie to do two-factor facial/identity verification. Upvotes: 1
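A minimal sketch of the "spatial lookup" idea, with tiny made-up 4-d embeddings (real systems use embeddings of 128+ dimensions produced by the trained encoder):

```python
import numpy as np

def cosine(a, b):
    """Cosine similarity: how closely two embeddings point in the same direction."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Hypothetical watchlist embeddings (made up for illustration)
watchlist = {
    "alice": np.array([0.9, 0.1, 0.0, 0.4]),
    "bob": np.array([0.0, 0.8, 0.6, 0.1]),
}
# A noisy camera capture of alice: not identical, but nearby in embedding space
camera_face = np.array([0.85, 0.15, 0.05, 0.38])

best_match = max(watchlist, key=lambda name: cosine(watchlist[name], camera_face))
print(best_match)   # "alice" - the spatially closest embedding wins
```

Because the encoder filters out noise, the noisy capture still lands much closer to the right identity than to any other, which is what makes watchlist queries and ID-vs-selfie checks workable.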
2023/02/02
<issue_start>username_0: I'm working my way through how ChatGPT works. So I read that ChatGPT is a generative model. When searching for generative models, I found two defintions: * A **generative** model includes the distribution of the data itself, and tells you how likely a given example is * **Generative** artificial intelligence (AI) describes algorithms (such as ChatGPT) that can be used to create new content Do they both mean the same? That is, for generating new content, a model must learn the distribution of data itself? Or do we call chatGPT generative because it just generates new text? I see that ChatGPT is something other than a discriminative model that learns a boundary to split data, however, I can not bring ChatGPT in line with a more traditional generative model like naive bayes, where class distributions are inferred.<issue_comment>username_1: They both refer to the same type of models. However, the second definition is a more 'intuitive' explanation of what generative AI *does*, while the first is a definition that refers more to what a generative model *is*. To generate new data similar to some training data (definition 2), a model needs to learn the training data distributions (definition 1). Only if the model has learned that distribution it can use that distribution to sample (generate) new data from that distribution. During training, ChatGPT also learned the distribution of the training data that OpenAI provided the model with. After training, the model simply takes in the input and uses the input to sample from the learned distribution to generate an output. So ChatGPT also follows both of your definitions. Upvotes: 2 <issue_comment>username_2: ### What are generative (and discriminative) models? If the model learns a distribution of the form $p(x)$ or $p(x, y)$, where $x$ are the inputs and $y$ the outputs/labels, from which you can sample data, then it's a generative model. An example of a generative model: variational autoencoder (VAE). 
Bishop also defines generative models in this way ([p. 43](https://www.microsoft.com/en-us/research/uploads/prod/2006/01/Bishop-Pattern-Recognition-and-Machine-Learning-2006.pdf#page=63)) > > Approaches that explicitly or implicitly model the distribution of > inputs as well as outputs are known as generative models, because by sampling from them it is possible to generate synthetic data points in the input space > > > If it learns a distribution of the form $p(y \mid x)$, then it's a discriminative model - many/most classifiers learn this distribution, but you can also derive the conditional from the joint and the prior (that's why Bishop says *explicitly or implicitly* above). Bishop also defines discriminative models in this way ([p. 43](https://www.microsoft.com/en-us/research/uploads/prod/2006/01/Bishop-Pattern-Recognition-and-Machine-Learning-2006.pdf#page=63)) > > Approaches that model the posterior probabilities directly > are called discriminative models > > > The [related Wikipedia article](https://en.wikipedia.org/wiki/Generative_model) claims that people have not always been using these terms consistently (which is common in machine learning), so one should always keep that in mind. ### GPTs are autoregressive As far as I know, GPTs are [autoregressive models](https://deepgenerativemodels.github.io/notes/autoregressive/). [Here](https://ml.berkeley.edu/blog/posts/AR_intro/) is another potentially useful post that explains what autoregressive models are. My understanding of autoregressive models, at least those based on neural networks, is that they are also generative models - the linked articles and even the [GPT-2 paper](https://d4mucfpksywv.cloudfront.net/better-language-models/language_models_are_unsupervised_multitask_learners.pdf) seem to start their descriptions from the assumption that you can factorize some joint distribution like $p(x)$ into conditional distributions.
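To make that factorization concrete, here is a toy sketch of autoregressive sampling. The conditional table below is entirely made up, and it only conditions on the *last* token (a first-order Markov simplification); a real GPT computes each conditional with a transformer over the whole prefix $x_{<t}$, but the sampling loop has the same shape.

```python
import random

random.seed(0)

vocab = ["the", "cat", "sat", "."]

# Hypothetical table of conditionals p(next token | previous token).
# A real GPT conditions on the entire prefix and computes these
# probabilities with a neural network instead of a lookup table.
cond = {
    "<s>": [0.80, 0.10, 0.05, 0.05],
    "the": [0.05, 0.70, 0.15, 0.10],
    "cat": [0.10, 0.05, 0.70, 0.15],
    "sat": [0.25, 0.10, 0.05, 0.60],
    ".":   [0.25, 0.25, 0.25, 0.25],
}

def sample_sentence(max_len=10):
    """Sample from p(x) = prod_t p(x_t | x_<t), one token at a time."""
    prev, out = "<s>", []
    for _ in range(max_len):
        tok = random.choices(vocab, weights=cond[prev])[0]
        out.append(tok)
        if tok == ".":  # stop token ends the sequence
            break
        prev = tok
    return " ".join(out)

print(sample_sentence())
```

The point is only that a model of $p(x)$ lets you *generate* data by repeatedly sampling from its conditionals, which is exactly the sense in which these models are generative.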
ChatGPT is based on a GPT model, so it's probably considered a generative model too, but there are [several steps involved](https://openai.com/blog/chatgpt/) in creating this model, so it may not be super clear how to categorise it. Moreover, the authors of the [transformer](https://arxiv.org/pdf/1706.03762.pdf), which GPT models are based on, claim that the transformer is an autoregressive model. ### Conclusion It seems to me that many people in ML refer to any model that generates data as a generative model, even if there's no written theoretical formulation of it as one. That doesn't mean you cannot formulate these models as generative models, i.e. models that learn some distribution from which you can sample data. I am currently not familiar enough with the details of the GPT models to say whether they have been mathematically formulated as generative models of the form $p(x, y)$, but they model some distribution of the form $p(x)$ from which you can sample; otherwise, how could you even sample data (words)? Upvotes: 2
2023/02/02
<issue_start>username_0: In the official [blog post about ChatGPT](https://openai.com/blog/chatgpt/) from OpenAI, there is this paragraph explaining how the ChatGPT model was trained: > > We trained this model using Reinforcement Learning from Human Feedback > (RLHF), **using the same methods as InstructGPT**, but with slight > differences in the data collection setup. We trained an initial model > using supervised fine-tuning: human AI trainers provided conversations > in which they played both sides—the user and an AI assistant. We gave > the trainers access to model-written suggestions to help them compose > their responses. We mixed this new dialogue dataset with the > InstructGPT dataset, which we transformed into a dialogue format. > > > Especially this part: > > We trained an initial model using supervised fine-tuning > > > My question is about the said *initial model*: is it some new model that has been trained from scratch, or is it a GPT-3 model that has been fine-tuned for specific tasks, resulting in the [GPT-3.5 series](https://platform.openai.com/docs/model-index-for-researchers)? On the other hand, the [InstructGPT blog post](https://openai.com/blog/instruction-following/) clearly states: > > To make our models safer, more helpful, and more aligned, we use an > existing technique called reinforcement learning from human feedback > (RLHF). On prompts submitted by our customers to the API, our labelers > provide demonstrations of the desired model behavior, and rank several > outputs from our models. **We then use this data to fine-tune GPT-3**. > > > So does this mean that GPT-3.5 series models (and consequently ChatGPT) are fine-tuned from the GPT-3 base model?<issue_comment>username_1: ChatGPT has not been trained from scratch. ChatGPT is a fine-tuned version of a model from the GPT-3.5 series.
[OpenAI writes](https://openai.com/blog/chatgpt/) > > ChatGPT is **fine-tuned from a model in the GPT-3.5 series**, which finished training in early 2022. You can learn more about the 3.5 series [here](https://platform.openai.com/docs/model-index-for-researchers). > > > Which models are in the GPT-3.5 series? You can read more about that in the [linked blog post](https://platform.openai.com/docs/model-index-for-researchers). > > GPT-3.5 series is a series of models that was trained on a blend of text and code from before Q4 2021. The following models are in the GPT-3.5 series: > > > * `code-davinci-002` is a base model, so good for pure code-completion tasks > * `text-davinci-002` is an InstructGPT model based on `code-davinci-002` > * `text-davinci-003` is an improvement on `text-davinci-002` > > > So, ChatGPT must be a fine-tuned version of one of these 3 models, assuming the information on their site is accurate and up-to-date. Now, according to [this blog post](https://platform.openai.com/docs/models/codex) > > The **Codex models are descendants of our GPT-3 models** that can understand and generate code. Their training data contains both natural language and billions of lines of public code from GitHub > > > `code-davinci-002` is a Codex model. So, ChatGPT *might* be a descendant of GPT-3. I don't know exactly what "descendant" means here. Does it mean just fine-tuned or maybe a modified version? [Here](https://platform.openai.com/docs/models/gpt-3) they write that `text-davinci-003` is the most capable GPT-3 model. Based on the information above, `text-davinci-002` is an InstructGPT model based on `code-davinci-002`. [Here](https://openai.com/blog/instruction-following/) they write > > We then use this data to **fine-tune GPT-3**. > > > The resulting InstructGPT models are much better at following instructions than GPT-3 > > > So, InstructGPT models are fine-tuned GPT-3 models.
That most likely implies that `text-davinci-002` is a GPT-3 model and the only thing that changed is how it was trained. However, they also write > > Our labelers prefer outputs from our 1.3B InstructGPT model over outputs from a 175B GPT-3 model, despite having more than 100x fewer parameters. > > > So, there isn't just one GPT-3 model. However, [the original GPT-3 model](https://arxiv.org/pdf/2005.14165.pdf) had 175 billion parameters. I still need to read the InstructGPT and GPT-3 papers. Once I've done that, I may have more useful/concrete info, and then I will update this answer. Upvotes: 3 <issue_comment>username_2: I'll complement [username_1](https://ai.stackexchange.com/users/2444/username_1)'s answer with this great visual summary by [<NAME>](https://yaofu.notion.site/How-does-GPT-Obtain-its-Ability-Tracing-Emergent-Abilities-of-Language-Models-to-their-Sources-b9a57ac0fcf74f30a1ab9e3e36fa1dc1) : [![enter image description here](https://i.stack.imgur.com/Yintu.png)](https://i.stack.imgur.com/Yintu.png) Upvotes: 4 [selected_answer]
2023/02/04
<issue_start>username_0: The "hello world" of ANNs usually uses the MNIST handwritten digit data. There are `10` classes, so the output layer has `10 neurons`, one for each handwritten digit `0 to 9`. If in the end there is only one active neuron in the output layer, why not use only `4 neurons` in the output layer, where each neuron represents a binary digit, so that `16 classes` would be more than enough (for 10 classes)? For example, if the neuron values after the activation function in the output layer are like this ``` 0.1 0.2 0.8 0.9 ``` then they can be rounded to: ``` 0 0 1 1 ``` Or, instead of rounding manually, why not use the binary step activation function on the layer before the output layer? Then the prediction is `3`, because `0011` converted to decimal is `3`. Using 4 neurons should need less computation than using 10 neurons, one per class. So can I use only `4 neurons` in the output layer to classify the 10 handwritten digits (10 classes)? The picture below illustrates the usual setup with one output neuron per class: [![enter image description here](https://i.stack.imgur.com/QrTFj.jpg)](https://i.stack.imgur.com/QrTFj.jpg)<issue_comment>username_1: Of course you can, but I'd not recommend doing it this way. First, it is not really part of ML, because the binary decoding is straight logic and should not have to be learned, so I don't think backpropagation or other algorithms would keep performing correctly that way. Trained like that, I believe your model could be fitted to avoid middle numbers and respond only with the lowest or biggest ones. So the question to you is whether you really need that. Second, I suppose it is better to keep the model for common tasks rather than one super-specific task, and to stay adaptive for cases where you want to gain more detail (statistics, for example).
The model with an output of 10 neurons will let you know how sure it is of its answer - that way you may want to behave differently, or not act at all. The other reason is clarity - it is better to keep the parts of the system separated; that way, in case of problems, you'll find the root of the problem faster. Upvotes: 1 <issue_comment>username_2: One problem that I see is that you can no longer use the cross-entropy loss function for training, or at least I am not sure how you could do it. This cost function has many advantages, one of them being that you can interpret the activations of the output neurons as the probability of the input being in each category (they all sum to one). You can read about some other advantages here: <https://kobejean.github.io/machine-learning/2019/07/22/cross-entropy-loss/> Upvotes: 2 <issue_comment>username_3: I believe it is because the binary output makes the neurons *dependent* on each other, when technically they should not be, and this complicates two problems: * The first problem is *how would you calculate the loss function for that?* In your case `0011` translates to `3`, which means that the third and fourth neurons depend on each other: "both neurons 3 and 4 must be 1 at the same time". As far as I am aware, there is no existing loss function for output-dependent binary outputs. The reason lies in the second problem below. * The second problem is that *it simply complicates things that can be done in a multi-class manner*. Even in your case, the simplest implementation is to have 16 neurons, each corresponding to a combination of four binary numbers. For example, if neuron 0 is activated, it is `0000`, if it is neuron 1, it is `0001`, and so on. This implementation can use the standard cross-entropy loss, which we know currently works very well.
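To make the loss-function point concrete, here is a minimal numpy sketch (my own illustration, not taken from any library) comparing the standard 10-neuron softmax cross-entropy with a hypothetical 4-neuron setup where each output is an independent sigmoid trained against the label's binary encoding:

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def softmax_cross_entropy(logits, label):
    # Standard 10-neuron output: one probability per class, summing to 1,
    # so the activations are directly interpretable as class probabilities.
    return -np.log(softmax(logits)[label])

def per_bit_binary_cross_entropy(logits, label, n_bits=4):
    # Hypothetical 4-neuron output: one sigmoid per bit, target = binary
    # encoding of the label. The bits are treated as INDEPENDENT, which
    # is exactly the dependence problem discussed above.
    bits = [(label >> i) & 1 for i in range(n_bits - 1, -1, -1)]
    p = 1.0 / (1.0 + np.exp(-logits))
    return -sum(b * np.log(q) + (1 - b) * np.log(1 - q)
                for b, q in zip(bits, p))

rng = np.random.default_rng(0)
loss10 = softmax_cross_entropy(rng.normal(size=10), label=3)
loss4 = per_bit_binary_cross_entropy(rng.normal(size=4), label=3)  # target bits 0,0,1,1
print(loss10, loss4)
```

Both losses can be computed and minimized, but the 4-bit version has no notion of "the class is 3": it only sees four unrelated bit targets, so getting one bit wrong can flip the prediction to a completely different digit.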
The ImageNet dataset has 1000 classes, but the [state-of-the-art](https://paperswithcode.com/sota/image-classification-on-imagenet) is already at 91% over 100k validation images. Since the 1000-neuron outputs work very well, I don't think they even bother experimenting with 10 neurons (the $2^{10}$ case). Upvotes: 1
2023/02/04
<issue_start>username_0: Suppose the pre-trained, current date (2023-02-04) ChatGPT model was released open source, would it be feasible for regular users to interact with the model on a self-hosted computer? Assumptions ----------- 1. I assume getting output based on some input is, at least, hundreds of times faster than training such a model. 2. I assume no additional output parsing/input limitations are used. In particular I can imagine all the boiler plate to keep the ChatGPT model(s) acting politically correct etc. may be a significant overhead. This is to be ignored for this question. Data ---- So far I've found the ChatGPT 3.5 model to have [175 billion parameters](https://lifearchitect.ai/chatgpt/): [![enter image description here](https://i.stack.imgur.com/DeIZD.png)](https://i.stack.imgur.com/DeIZD.png) Though I do not yet know how large that is in `Mb` nor do I have an idea on how long generating an output would typically take.<issue_comment>username_1: > > Suppose the pre-trained, current date (2023-02-04) ChatGPT model was released open source, would it be feasible for regular users to interact with the model on a self-hosted computer? > > > No, assuming it's regular, consumer hardware. However, some open-source alternatives aim to be runnable on consumer hardware, e.g. <https://github.com/LAION-AI/Open-Assistant> (not released yet): > > We want to do this in a way that is open and accessible, which means we must not only build a great assistant, but also make it small and efficient enough to run on consumer hardware. > > > Upvotes: 2 <issue_comment>username_2: Here, I assume the ChatGPT architecture is similar to the published GPT-3 model [1]. The [Transformer](https://en.wikipedia.org/wiki/Transformer_(machine_learning_model)) architecture used for GPT3 is much more memory-bound than compute-bound at a high parameter count and a low context size. 
A large batch size (3.2M across a few thousand GPUs for GPT-3) is used during training, but relatively low computational power is needed at inference time for non-batched applications such as real-time chat. OpenAI is probably batching multiple users' requests to improve the throughput. GPT-3 has 175B parameters and would require 326GiB of memory to store the weights in float16. *Quantization* methods are proposed to reduce the memory required to store model weights, where the model weights are stored in lower precision. In the GPTQ paper [2], the authors have shown that large GPT models perform almost as well as the original model with 4 bits per parameter, reducing the 175B model to less than 90GiB. 3-bit quantization is also considered, reducing the size to less than 70GiB. Note that the additional memory required for internal states is small compared to the weights, especially if a [memory-efficient attention](https://github.com/facebookresearch/xformers) is used. Recently, such quantization methods have gained attention due to the release of the pre-trained [LLaMA](https://github.com/facebookresearch/llama/) models. Some people are running the 65B LLaMA model even CPU-only [3] [4] with a reasonable throughput. Therefore, to the question of whether it would be possible to run a 175B model on a home computer, I would say it is currently a stretch but certainly possible if somewhat slow responses are acceptable. * [1]: Brown, Tom, et al. "Language models are few-shot learners." Advances in neural information processing systems 33 (2020): 1877-1901. <https://arxiv.org/abs/2005.14165> * [2]: Frantar, Elias, et al. "GPTQ: Accurate Post-Training Quantization for Generative Pre-trained Transformers." (2022). <https://arxiv.org/abs/2210.17323> * [3]: [<NAME> on Twitter](https://twitter.com/thesudoer_/status/1634883960525422597) * [4]: [Taiga Takano on Twitter](https://twitter.com/tg3517/status/1635095157480759297) Upvotes: 4 [selected_answer]
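The memory figures above follow from simple arithmetic, which a short sketch makes explicit (weights only; as noted above, internal states add comparatively little):

```python
# Weight-only memory for a 175B-parameter model at different precisions.
PARAMS = 175e9

def weight_memory_gib(params: float, bits_per_param: float) -> float:
    # bits -> bytes (/8), bytes -> GiB (/2**30)
    return params * bits_per_param / 8 / 2**30

print(f"float16: {weight_memory_gib(PARAMS, 16):.0f} GiB")  # ~326 GiB
print(f"4-bit:   {weight_memory_gib(PARAMS, 4):.0f} GiB")   # ~81 GiB ("less than 90GiB")
print(f"3-bit:   {weight_memory_gib(PARAMS, 3):.0f} GiB")   # ~61 GiB ("less than 70GiB")
```

The same function also shows why a quantized 65B model fits in consumer RAM: at 4 bits it needs roughly 30 GiB.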
2023/02/06
<issue_start>username_0: I'm quite new to machine learning and wanted to ask why reducing batch sizes causes exponentially increasing training times. An example of this relationship between batch size and training time can be seen in [this article](https://wandb.ai/ayush-thakur/dl-question-bank/reports/What-s-the-Optimal-Batch-Size-to-Train-a-Neural-Network---VmlldzoyMDkyNDU). I had a hard time finding any answers to this question with my googling skills, and any hypotheses I have as to why are most likely incorrect. I'd greatly appreciate it if somebody could explain the reasoning behind this relationship.<issue_comment>username_1: This has a very simple hardware explanation. GPUs have several thousand cores. If you are not making use of enough of these cores, eventually the cost of moving data over to the GPU comes to dominate the parallelism. Every time you reduce the batch size, you reduce parallelism and increase data transfers. For example, if you halve the batch size, you double the number of data transfers and halve the parallelism. Now, this can be expected to make things more than twice as slow, since both the number of data transfers and the number of computation steps have doubled. However, I do not think this relationship is actually exponential. The article you linked does not seem to support that conclusion either. I know the article uses the word exponential, but if you look at the chart this is simply not the case. The chart in that article is presented on a logarithmic scale, so when you actually look at the numbers on a linear scale, the relationship is close to, but not quite, linear, for the reasons I outlined. Upvotes: 2 <issue_comment>username_2: I copied the raw numbers from the graph and did some plotting.

```
2048  83
1024  84
 512  86
 256  90
 128  104
  64  135
  32  204
  16  360
   8  656
```

I don't know the actual number of training samples, so I used a placeholder value of 10k here. As we can see, the hardware is under-utilized when the batch size is too small.
This is caused by a constant(?) overhead related to calculating losses and gradients for each batch. Ah, I forgot to divide the training time by the number of epochs, but only relative numbers matter here. [![batch_size vs. performance](https://i.stack.imgur.com/RnD50.png)](https://i.stack.imgur.com/RnD50.png) From another point of view, the data fit well to a function $y = a/x + b$, where $x$ is the batch size and $y$ is the training time. In the limit, the training time approaches the theoretical minimum when the hardware is best utilized, but each separate batch incurs a fixed setup overhead. [![batch_size vs. time](https://i.stack.imgur.com/0L9CW.png)](https://i.stack.imgur.com/0L9CW.png) Upvotes: 1 [selected_answer]
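The $y = a/x + b$ claim can be checked directly with a least-squares fit on the numbers copied from the graph (my own check, with the caveat that these are eyeballed values):

```python
import numpy as np

# Values read off the chart in the linked article: batch size, time.
batch = np.array([2048, 1024, 512, 256, 128, 64, 32, 16, 8], dtype=float)
time_ = np.array([83, 84, 86, 90, 104, 135, 204, 360, 656], dtype=float)

# Fit time = a / batch + b by ordinary least squares.
A = np.column_stack([1.0 / batch, np.ones_like(batch)])
(a, b), *_ = np.linalg.lstsq(A, time_, rcond=None)
print(f"time ≈ {a:.0f} / batch_size + {b:.1f}")
# b estimates the theoretical minimum time (full hardware utilization);
# a / batch_size captures the fixed per-batch overhead summed over batches.
```

The fit confirms the shape: inverse in the batch size, not exponential, with a floor set by $b$.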
2023/02/06
<issue_start>username_0: While studying ChatGPT's thought process, I asked it to list ten story ideas for an old and fairly niche tabletop roleplaying game (GURPS Reign of Steel). It did very well, so clearly, it can base answers on obscure sources. But what if I want to ask it about something it could not possibly have already "absorbed", like someone's tiny indie RPG or an old local folk tale never published anywhere (i.e. an oral tale)? Does the source material simply have to be put up online, somewhere? I ask because some source material, like local folklore, is too extensive to be given during a ChatGPT conversation. It would need something like a source website to be created.<issue_comment>username_1: > > How do I get ChatGPT to include custom knowledge? > > > Currently, only possible via the prompts/dialog, as nbro commented. Upvotes: 1 <issue_comment>username_2: This can be achieved using the Embeddings and Completions APIs. OpenAI has great examples in their cookbook repository, [this](https://github.com/openai/openai-cookbook/blob/main/examples/Question_answering_using_embeddings.ipynb) is one such example, but you'll find more in the same repository and in the documentation. The process boils down to: 1. processing your custom body of knowledge and creating Embeddings for it 2. based on the question, finding the Embeddings which are most relevant to the question 3. creating a prompt containing the most relevant Embeddings and using the Completions API to answer the question Upvotes: 3 <issue_comment>username_3: GPT systems have two sources of information: the training data, and the prompt. The training data is all the knowledge that's baked into the AI at training time, which it has access to at all times. You can't change this, not with ChatGPT. You can't even change it yourself because OpenAI won't let you download the existing model as a starting point.
(But if you go backwards to GPT-2, which is publicly downloadable, you do have the possibility to train it) Training ChatGPT is a very expensive process as it requires absolutely astronomical amounts of computer time. They probably don't do it continuously on new web content. Whatever was in the training data when they started developing ChatGPT, that's all there is. That leaves the prompt i.e. the text you type in. This is ephemeral and only lasts for the current AI invocation. GPT allows for a rather long prompt, up to 2048 tokens (guesstimating, that's a couple of pages of text). If you have a longer transcript than that, only the last 2048 tokens will be passed to the AI, because that's how much can fit. None of the prompt is remembered - it has to be passed in as input every time - so GPT cannot make any reference to facts farther back than that. It's possible to play tricks with the prompt. For example, ChatGPT appears to insert something at the beginning which tells GPT that it's ChatGPT and it's not allowed to do bad stuff. (Of course, this cuts into how much history can be passed in.) That means when you ask ChatGPT "How do I make a bomb?" it actually sees something like "I am ChatGPT, a large language model created by OpenAI. Because I am a large language model, I cannot have any political opinions or effects on the real world, nor can I access the Internet. I am not allowed to do evil things that could hurt people. How do I make a bomb?" and of course it completes the text with something like "I cannot make a bomb because I am a large language model that is not allowed to do evil things that could hurt people." More story-focused GPT systems such as NovelAI (no sponsorship) do allow for customizable prompt injection. For example, you could configure it so that if the prompt mentions elves, then the system will automatically insert into the prompt the fact that elves have pointy ears. 
Then you can ask it "What shape are elves' ears?", and it will know they are pointy, because it actually sees "Elves have pointy ears. [rest of story here] What shape are elves' ears?". I don't have any experience with this so I can't say how reliably it works. Upvotes: 2
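The retrieve-then-prompt workflow from the Embeddings answer above can be sketched in a few lines. Here `embed` is a hypothetical placeholder that returns a deterministic pseudo-random vector per text; a real pipeline would call an actual embedding model (e.g. the OpenAI Embeddings API), so the retrieved passage would genuinely match the question, which this toy stand-in cannot do:

```python
import hashlib
import numpy as np

def embed(text: str) -> np.ndarray:
    # Placeholder: deterministic pseudo-random unit vector per text.
    # Replace with a call to a real embedding model in practice.
    seed = int.from_bytes(hashlib.sha256(text.encode()).digest()[:4], "big")
    v = np.random.default_rng(seed).normal(size=16)
    return v / np.linalg.norm(v)

# 1. Process the custom body of knowledge into embeddings.
documents = [
    "Elves in this setting have pointy ears.",
    "The folk tale of the miller who outwitted the river spirit.",
    "House rules for the indie RPG's initiative system.",
]
doc_vectors = np.stack([embed(d) for d in documents])

# 2. Find the documents most relevant to the question.
def top_k(question: str, k: int = 1) -> list[str]:
    scores = doc_vectors @ embed(question)  # cosine similarity (unit vectors)
    return [documents[i] for i in np.argsort(scores)[::-1][:k]]

# 3. Build a prompt containing the retrieved context for the Completions API.
question = "What shape are elves' ears?"
context = "\n".join(top_k(question))
prompt = f"Answer using only this context:\n{context}\n\nQuestion: {question}"
print(prompt)
```

This is also how the "local folklore" case from the question can work: the full corpus lives outside the model, and only the few passages most relevant to each question are injected into the limited prompt window.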