2023/02/07
<issue_start>username_0: In CLIP [1], the authors train a model to learn multi-modal (text, image) embeddings by maximizing the cosine similarity between text and image embeddings produced by text and image encoders. For the text encoder, the authors choose to use a variant of GPT-2, which is a decoder-only transformer, taking the activations of the highest layer of the transformer at the [EOS] token as the feature representation of the text (emphasis mine): > > The text encoder is a Transformer (Vaswani et al., 2017) with the architecture modifications described in **Radford et al. (2019)**. As a base size we use a 63M-parameter 12-layer 512-wide model with 8 attention heads. The transformer operates on a lower-cased byte pair encoding (BPE) representation of the text with a 49,152 vocab size (Sennrich et al., 2015). For computational efficiency, the max sequence length was capped at 76. The text sequence is bracketed with [SOS] and [EOS] tokens and the **activations of the highest layer of the transformer at the [EOS] token are treated as the feature representation of the text** which is layer normalized and then linearly projected into the multi-modal embedding space. > > > I found this pretty weird considering that they could have used an encoder (à la BERT), which to me seems better suited to act as an encoder than a decoder is. Perhaps they wanted to enable generative text capabilities, but they could've achieved that with an encoder-decoder architecture (à la T5) too. I was expecting ablations on the text-encoder architecture, motivating their choices, but found none. Any clue why they made these choices? References: =========== [1] <NAME> et al., ‘Learning Transferable Visual Models From Natural Language Supervision’, in Proceedings of the 38th International Conference on Machine Learning, Jul. 2021, pp. 8748–8763. Accessed: Feb. 07, 2023. [Online]. 
Available: <https://proceedings.mlr.press/v139/radford21a.html><issue_comment>username_1: I believe it is because decoder-only basically cuts the model size in half, and it has also been shown empirically to be better. In the original Transformer paper, the evaluation task was machine translation, for which, at that time, the encoder-decoder architecture was very successful. [This paper](https://arxiv.org/pdf/1801.10198.pdf) is probably the first to propose the decoder-only Transformer, in which they observe the following improvements: * It removes the encoder, which means half the parameters and hyperparameters. * It helps them with long input sentences. * The inputs of the encoder and decoder are the same, so [the encoder is probably redundant](https://github.com/openai/gpt-2/issues/157). A later [paper](https://arxiv.org/abs/1905.08836) also finds that decoder-only works better than the encoder-decoder architecture. One thing to note about this is that the encoder is bi-directional while the decoder is uni-directional. This nature fits GPT-2, which is an autoregressive language model. Related: * [This paper](https://arxiv.org/abs/2204.05832) may be related. It studies different encoder-decoder variants and finds that decoder-only performs better. * A [StackExchange](https://datascience.stackexchange.com/questions/65241/why-is-the-decoder-not-a-part-of-bert-architecture) question regarding the decoder in BERT. Upvotes: 1 <issue_comment>username_2: I think one potential explanation for their text-encoding choice is that they probably realised that an image will typically be captioned with more than one word (which on its own can be multiple tokens), so they need sentence embeddings of the text, not the token embeddings you get with BERT. Basically, image::sentence. You notice this when using the CLIP encoder in code - it gives you sentence embeddings, not token embeddings. 
Because they use a decoder-only transformer, which is autoregressive, they can take the feature representation at the EOS token as a sentence representation of the sentence that preceded the EOS token. Had they used an encoder, they would've had to think about some aggregation method to go from token embeddings to sentence embeddings (although, to be fair, this may have been as simple as using the CLS token representation). Upvotes: 0
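The EOS pooling described in the quote and in the answers above can be sketched in a few lines of numpy. Everything here (shapes, the random hidden states, the projection matrix, the layer-norm parameters) is made up for illustration; only the sequence of operations — pick the last-layer activation at the [EOS] position, layer-normalize it, linearly project it — follows the paper's description.

```python
import numpy as np

def eos_pool(hidden_states, eos_index, w_proj, gamma=1.0, beta=0.0, eps=1e-5):
    """Toy version of CLIP's text pooling: take the last-layer activation
    at the [EOS] position, layer-normalize it, then linearly project it
    into the joint embedding space. All names and shapes are illustrative."""
    h = hidden_states[eos_index]                 # (d_model,) activation at [EOS]
    h = (h - h.mean()) / np.sqrt(h.var() + eps)  # layer norm
    h = gamma * h + beta
    return h @ w_proj                            # (d_embed,) sentence embedding

rng = np.random.default_rng(0)
seq_len, d_model, d_embed = 5, 8, 4
hidden = rng.normal(size=(seq_len, d_model))     # pretend last transformer layer
w = rng.normal(size=(d_model, d_embed))          # learned projection (random here)
sentence_emb = eos_pool(hidden, eos_index=seq_len - 1, w_proj=w)
print(sentence_emb.shape)   # (4,)
```

Note how a whole caption collapses to a single vector, which is exactly the "sentence embedding, not token embeddings" behavior the answer points out.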
2023/02/08
<issue_start>username_0: I wonder if machine learning has ever been applied to space-time diagrams of [cellular automata](https://en.wikipedia.org/wiki/Cellular_automaton). What comprises a training set seems clear: a number of space-time diagrams of one or several (elementary) cellular automata. For a supervised learning task, the corresponding local rules may be given as labels, e.g. in Wolfram's notation. Another label could be the complexity class of the rule (according to some classification, e.g. Wolfram's). But I am more interested in unsupervised learning tasks. [![enter image description here](https://i.stack.imgur.com/1IUuN.png)](https://i.stack.imgur.com/1IUuN.png) I'm just starting to think about this topic and am not yet clear on what the purpose of such a learning task should be. It may be (unsupervised) classification, feature extraction, object recognition (objects = gliders), or hypothesis generation ("what's the underlying rule?"). Where can I start or continue my investigation? Has work already been done in this direction? Is it immediately clear by which ML technique the problem should be approached? Convolutional neural networks?<issue_comment>username_1: I'm afraid that the application is so obscure that this has not been done due to lack of interest. As a supervised learning classification problem it seems a fairly easy one; I bet you could build a computer vision classifier with high accuracy in an afternoon, by finetuning a pretrained convolutional neural network on a small dataset of examples. The latter are easy to generate programmatically. Perhaps a non-trivial example of ML + cellular automata is [this](https://distill.pub/2020/growing-ca/) work on Growing Neural Cellular Automata, showing how self-organising patterns can emerge from simple rules and regenerate lost limbs or entire body parts. 
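To back up the "easy to generate programmatically" point, here is a minimal sketch of a space-time diagram generator for elementary CAs (rule numbers in Wolfram's convention, periodic boundaries); width, steps, and seeding are arbitrary choices:

```python
import numpy as np

def space_time_diagram(rule, width=64, steps=64, seed_center=True):
    """Generate the space-time diagram of an elementary CA (Wolfram rule
    0-255) with periodic boundary conditions. Rows are time steps."""
    lookup = [(rule >> i) & 1 for i in range(8)]   # output for neighborhood i
    row = np.zeros(width, dtype=np.uint8)
    if seed_center:
        row[width // 2] = 1                        # single live cell in the middle
    diagram = [row]
    for _ in range(steps - 1):
        left, center, right = np.roll(row, 1), row, np.roll(row, -1)
        row = np.array([lookup[4 * l + 2 * c + r]  # neighborhood -> new state
                        for l, c, r in zip(left, center, right)], dtype=np.uint8)
        diagram.append(row)
    return np.stack(diagram)                       # shape (steps, width)

d = space_time_diagram(110)   # rule 110, the classic complex one
print(d.shape)                # (64, 64)
```

A labelled training set is then just a loop over rule numbers, pairing each diagram with its rule (or its Wolfram class, if you have a classification).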
Upvotes: 0 <issue_comment>username_2: I am very much interested in this and will soon start my research on it at Ghent University. I'm preparing some preliminary results on pattern recognition in elementary cellular automata using convolutional neural networks. I couldn't find much on this, but see e.g. [this](https://direct.mit.edu/isal/proceedings/isal2019/280/99182) (very limited) paper. Please keep me posted via [my ResearchGate page](https://www.researchgate.net/profile/Michiel-Rollier) if you make any progress :) Upvotes: 2 <issue_comment>username_3: As you say in your question, there are many directions "machine learning with cellular automata" can take. ### Classifying cellular automata space-time diagrams with ML I know of these two works that use neural networks to classify cellular automata into one of the 4 Wolfram classes: * <https://community.wolfram.com/groups/-/m/t/1417114> * <https://direct.mit.edu/isal/proceedings/isal2019/280/99182> [1] In general, predicting the Wolfram class is tricky because it is not a well-defined notion. This is part of the larger problem of classifying cellular automata behavior from their space-time diagram (or rule, etc.), for which there are also many non-ML approaches. For example, [2] uses compression to estimate the complexity of the CA, while [3] uses the asymptotic properties of the transient sizes. In related work [4], Gilpin used a convolutional neural network representation of CA to learn the rules. He estimates the complexity of a rule from "how hard it is to learn that rule." ### Cellular automata and neural networks @username_1 has pointed you to the very interesting "Growing neural cellular automata" paper. The same authors have recently been working on a continuous cellular automaton called Lenia, extending it to create [Particle Lenia](https://google-research.github.io/self-organising-systems/particle-lenia/). 
The whole idea behind these examples is to use a convolutional neural network to implement a CA-like system. This has many advantages, including being able to use all the NN properties and differentiability to "learn" things with the CA. There is a fundamental difference between the usual discrete CA and the neural network-based extensions because you move to continuous space. However, there are interesting connections to be made between the two models. ### Cellular automata as ML systems Another approach is using the CA as the basis of a ML system, harvesting its computations to make predictions. This is done with something called reservoir computing. This whole subfield is called reservoir computing with cellular automata (ReCA), and you might be interested in it. Here are a few papers to get you started (including one of mine) [5,6,7,8]. 1. <NAME>. Convolutional Neural Networks for Cellular Automata Classification. Artificial Life Conference Proceedings 31, 280–281 (2019). 2. <NAME>. Compression-Based Investigation of the Dynamical Properties of Cellular Automata and Other Systems. Complex Systems 19, (2010). 3. <NAME>. & <NAME>. Classification of Complex Systems Based on Transients. in 367–375 (MIT Press, 2020). doi:10.1162/isal\_a\_00260. 4. <NAME>. Cellular automata as convolutional neural networks. arXiv:1809.02942 [cond-mat, physics:nlin, physics:physics] (2018). 5. <NAME>. Reservoir Computing using Cellular Automata. arXiv:1410.0162 [cs] (2014). 6. <NAME>. & <NAME>. Deep Reservoir Computing Using Cellular Automata. arXiv:1703.02806 [cs] (2017). 7. <NAME>., <NAME>. & <NAME>. Benchmarking Learning Efficiency in Deep Reservoir Computing. in Proceedings of The 1st Conference on Lifelong Learning Agents 532–547 (PMLR, 2022). 8. <NAME>., <NAME>., <NAME>., <NAME>. & <NAME>. The Dynamical Landscape of Reservoir Computing with Elementary Cellular Automata. in ALIFE 2021: The 2021 Conference on Artificial Life (MIT Press, 2021). Upvotes: 3 [selected_answer]
2023/02/10
<issue_start>username_0: In [this question](https://ai.stackexchange.com/q/2922/5351) I asked about the role of knowledge graphs in the future, and in [this answer](https://ai.stackexchange.com/a/25788/5351) I found that *If curation and annotation are not sufficient, the knowledge base maybe cannot apply in AI.* ChatGPT [does not](https://levelup.gitconnected.com/what-is-chatgpt-openai-how-it-is-built-the-technology-behind-it-ba3e8acc1e9b) utilize a knowledge graph to understand or generate common sense, so I wonder how knowledge graphs can be utilized in the future. Will they be replaced by LLMs?<issue_comment>username_1: It is correct that curation and annotation are crucial to knowledge graphs. At the same time, such annotation has been accumulating intensely in a few areas like medicine and manufacturing (some publicly, and some internally within organizations) - partly accelerated by the need for data interoperability and standardization within those industries. So while knowledge graphs may not be very ready yet for generic use cases, some form of knowledge graphs/ontologies has already been utilized for a long time in the domains mentioned earlier. Besides that, the current active research on knowledge graph generation/inference will potentially increase the breadth and depth of such graphs in a more scalable way. Upvotes: 0 <issue_comment>username_2: A couple of days ago, <NAME> from Inbenta [posted](https://www.linkedin.com/posts/jtorras_google-chatgpt-activity-7028807867218477057-ucyq) that ChatGPT fails at classifying a particular integer as prime, while their chatbot nails it. But the goal of a chatbot is in no way factoring integers, is it? Some weeks ago, Stephen Wolfram [suggested](https://writings.stephenwolfram.com/2023/01/wolframalpha-as-the-way-to-bring-computational-knowledge-superpowers-to-chatgpt/) some combination of ChatGPT and WolframAlpha, a curated engine for computational intelligence. 
A wealth of domains could benefit from integrating preexisting knowledge into the conversational skill of transformers. As a simple example, take "explain how 30 is 2x3x5", where the verified information plugged in as a prompt may be obtained from a curated system and the natural-language exposition could finally be written by a conversational system. I don't foresee knowledge being absorbed by LLMs, but rather some form of combination between both techniques. Consider the times tables, the chemical elements, or lots of other well-known and established pieces of knowledge. Is there any advantage in serializing all that structured information into text, only to train on it afterwards by gradient descent? Not to mention algorithms, from Viterbi to Quicksort to the Fast Fourier Transform. Those look like specialized intelligence modules to be interfaced by large language models, rather than (re)learned from scratch. Upvotes: 2
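The division of labour in the "explain how 30 is 2x3x5" example can be made concrete with a toy: a curated engine computes the verified fact, and a separate layer wraps it in language. Here a plain string template stands in for the conversational model, which is of course an oversimplification; both function names are made up for this sketch.

```python
def prime_factors(n):
    """The curated, verified side (the 'WolframAlpha' role): exact
    trial-division factorization, no learning involved."""
    factors, d = [], 2
    while d * d <= n:
        while n % d == 0:
            factors.append(d)
            n //= d
        d += 1
    if n > 1:
        factors.append(n)
    return factors

def explain(n):
    """The conversational side: phrase the verified fact. A real system
    would hand this as a prompt to an LLM instead of using a template."""
    fs = prime_factors(n)
    return (f"{n} factors as {' x '.join(map(str, fs))}, so it is "
            + ("prime." if len(fs) == 1 else "composite."))

print(explain(30))   # 30 factors as 2 x 3 x 5, so it is composite.
```

The point of the split is that the factorization can never be "hallucinated": only the wording is left to the language model.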
2023/02/14
<issue_start>username_0: The seminal [Attention is all you need](https://proceedings.neurips.cc/paper/2017/file/3f5ee243547dee91fbd053c1c4a845aa-Paper.pdf) paper introduces Transformers and implements the attention mechanism with "queries, keys, values", in an analogy to a retrieval system. I understand the whole process of multi-head attention and such (i.e., what is done with the Q, K, V values and why), but I'm confused about **how these values are computed in the first place**. AFAICT, the paper seems to completely leave that out. Both Figure 2 of the paper and the equations explaining attention and multi-head attention start with Q, K, V already there: [![enter image description here](https://i.stack.imgur.com/t6qJz.png)](https://i.stack.imgur.com/t6qJz.png) The answers regarding the origin of Q, K, V I've found so far haven't satisfied me: * In this [similar question](https://stats.stackexchange.com/questions/421935/what-exactly-are-keys-queries-and-values-in-attention-mechanisms), the [accepted answer](https://stats.stackexchange.com/a/424127/201218) says "*The proposed multihead attention alone doesn't say much about how the queries, keys, and values are obtained, they can come from different sources depending on the application scenario.*". * I also see some answers (e.g. [this one on the same question](https://stats.stackexchange.com/a/463320/201218)) which say that Q, K and V are the result of multiplication of the input embedding with some matrices. This is also what is shown in the popular blog post [The Illustrated Transformer](http://jalammar.github.io/illustrated-transformer/): [![enter image description here](https://i.stack.imgur.com/8QJ6pl.png)](https://i.stack.imgur.com/8QJ6pl.png) Why isn't the computing of Q, K, V - be it "left to the application" or "multiplication with matrices" - made more clear in the paper, at the very least for the task of language translation for which they show some results and so obviously did compute Q, K, V in some way? 
If it is matrix multiplication, are these matrices ($W^Q$, etc. in the figure of the blog post) trained with backprop jointly with the rest of the network, or pretrained? What are the resulting shapes of Q, K, V?<issue_comment>username_1: *(OP auto-answer) After having dug further in and read more papers on attention, and with help from Chillston in the comments, I think I've got it narrowed down to an issue of confusing notation. If anyone thinks this is not the right answer, please don't hesitate to submit another one, which I'll mark as correct if I think it's better.* --- Q, K and V values *are* defined in the paper, and they *do* come from multiplication with learnt matrices. Those matrices are $W^Q\_i$, $W^K\_i$ and $W^V\_i$, defined in section 3.2.2 of the paper. The confusion stems from the fact that the notation used in the multihead attention equation and in Figure 2 (right) of the paper is wrong/confusing. The equation would be clearer if it read: [![enter image description here](https://i.stack.imgur.com/cR99V.png)](https://i.stack.imgur.com/cR99V.png) And Figure 2 right could be modified accordingly: [![enter image description here](https://i.stack.imgur.com/9ahfh.png)](https://i.stack.imgur.com/9ahfh.png) In this new notation, $X$ and $Y$ are the inputs to the current attention unit. * For self-attention, we'd have $X = Y$, which would both be the previous en/decoder block output (or the word embeddings for the first encoder block). * For cross-attention, $X$ would be the output of the last encoder block and $Y$ the output of the previous decoder block. 
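In this notation, a single attention head can be sketched as follows. The $W$ matrices are learned jointly with the rest of the network by backprop; here they are just random, and all sizes are made-up toy values:

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))  # numerically stable
    return e / e.sum(axis=-1, keepdims=True)

def attention_head(X, Y, W_Q, W_K, W_V):
    """One attention head in the answer's notation: queries come from Y,
    keys and values from X (with X == Y for self-attention)."""
    Q, K, V = Y @ W_Q, X @ W_K, X @ W_V        # THIS is where Q, K, V come from
    d_k = Q.shape[-1]
    weights = softmax(Q @ K.T / np.sqrt(d_k))  # (len_Y, len_X) attention weights
    return weights @ V                         # (len_Y, d_v)

rng = np.random.default_rng(0)
d_model, d_k = 16, 4
X = rng.normal(size=(6, d_model))              # e.g. encoder output, 6 tokens
Y = rng.normal(size=(3, d_model))              # e.g. decoder states, 3 tokens
W_Q, W_K, W_V = (rng.normal(size=(d_model, d_k)) for _ in range(3))
out = attention_head(X, Y, W_Q, W_K, W_V)
print(out.shape)   # (3, 4)
```

So Q, K, V have shapes (sequence length, $d\_k$): the projections reduce each token vector from $d\_{model}$ down to the per-head dimension.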
*Technically*, the way it's written in the paper could be correct but you need to consider that $Q, K, V$ refer to different tensors when they're written : * in the multihead(Q,K,V) equation where they represent inputs, ie what they call $V$ is $X$ in my suggested re-writing ; * in the attention(Q,K,V) equation where they represent "true" query/key/values, meaning inputs multiplied by projections matrices, ie what they call $V$ is $XW^V\_i$ in my suggested re-writing. Upvotes: 3 [selected_answer]<issue_comment>username_2: As I understand it (and I'm not an AI researcher, so any helpful comments from folks who know the topic better will be illuminating) the output of layer $l \in 1 ...\bf{L}$, $\bf{X}^l$, is [![enter image description here](https://i.stack.imgur.com/SKQyt.png)](https://i.stack.imgur.com/SKQyt.png) where $a\in 1...A$ is the head number, and $f$ is some function like RELU or whatever and the $\bf{b}$s are biases ($M$ is the attention mask and $d\_E$ is the size of the embedding). The first bit corresponds to [@username_1's correction](https://ai.stackexchange.com/a/39195/10649) (and the second bit is the FFN). (And $\underset{\mathsf{vocab}}{\mathsf{softmax}}\left(\bf{X}^L\bf{W}\_E^{-1}\right)$ is what's used in calculating cost). Upvotes: 0
2023/02/15
<issue_start>username_0: Sorry if that is a dumb question. I just started to learn about machine learning. I'm reading this book about neural networks: <http://neuralnetworksanddeeplearning.com/chap1.html#a_simple_network_to_classify_handwritten_digits> It explains how an artificial neural network classifies handwritten digits by applying weights and biases to inputs, where each input is assigned a value between 0 and 1, with 0 being white pixels and 1 black pixels. Now let's say an image containing only black pixels is input. If I understand it well, this would cause all neurons in the hidden layer to output 1, which means that all neurons in the output layer will output 1. This is not the intended behavior. What is the correct way to model an artificial neuron that outputs 0 when 1 is input and outputs 1 when 0 is input? Is this done by assigning a negative weight or using a different activation function?<issue_comment>username_1: > > Now let's say an image containing only black pixels is input. If I understand it well this would cause all neurons in the hidden layer to output 1 > > > Not necessarily. In a neural network, there are weights $W$, synapses if you will, connecting layers. These weights are randomly initialised, and can be negative. Furthermore, assuming a fully connected network, each hidden activation receives the activations from *all* neurons in the previous layer as input. A mixture of randomly negatively and positively scaled inputs, summed and passed through a non-linearity, could lead to either a positive or negative output. Hence an input image of black pixels (0) will not cause *all* the hidden neurons to also be black (0) or white (1). This of course holds for all subsequent layers too. > > What is the correct way to model an artificial neuron that outputs 0 when 1 is input and outputs 1 when 0 is input? > > > As @Minh-Long Luu points out, you probably don't want or need a neural network to do this... 
In fact, I'm not even sure if this is possible (assuming training via backprop), because it sounds like you're after a [Heaviside step function](https://en.wikipedia.org/wiki/Heaviside_step_function) for the non-linear activation, which would be discontinuous and therefore non-differentiable, making it impossible to train the NN with stochastic gradient descent. Perhaps a flipped [sigmoid](https://en.wikipedia.org/wiki/Sigmoid_function) centered at 0.5 would be more appropriate? Upvotes: 2 <issue_comment>username_2: This inverter is an interesting initial application of an artificial neuron. Yes, you need a negative weight and... no, the sigmoid suffices if you are content with a close enough approximation. See how an unrestricted optimization could progress from a very inexact approximation to the Heaviside step in the limit, flipped and shifted as Giulio suggests. [![enter image description here](https://i.stack.imgur.com/4gljW.gif)](https://i.stack.imgur.com/4gljW.gif) If you want to test other numbers, here is the [WolframAlpha link](https://www.wolframalpha.com/input?i=1%2F%281%2Bexp%28-100%281%2F2-x%29%29%29%3B%201%2F%281%2Bexp%28-10%281%2F2-x%29%29%29%3B%201%2F%281%2Bexp%28-%281%2F2-x%29%29%29). Upvotes: 0
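The flipped, shifted sigmoid from the WolframAlpha plots, $\sigma(k(1/2 - x))$, is exactly a single neuron with negative weight $-k$ and bias $k/2$; a quick sketch (the sharpness $k$ is an arbitrary choice):

```python
import math

def inverter(x, k=100.0):
    """Flipped sigmoid centered at 0.5, i.e. sigmoid(-k*x + k/2): a single
    'neuron' with negative weight -k and bias k/2. It is a smooth,
    differentiable approximation of a pixel inverter; larger k -> sharper."""
    return 1.0 / (1.0 + math.exp(-k * (0.5 - x)))

print(round(inverter(0.0), 3))  # 1.0 -- a white pixel (0) maps to black (1)
print(round(inverter(1.0), 3))  # 0.0 -- a black pixel (1) maps to white (0)
```

Because it is smooth everywhere, this variant stays trainable with gradient descent, unlike the Heaviside step.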
2023/02/16
<issue_start>username_0: Why do large language models (LLMs) need massive distributed training across nodes -- if the models fit in one GPU and a larger batch only decreases the variance of gradients? ============================================================================================================================================================================= tldr: assuming models that don't need sharding across nodes, why do we need (massive) distributed training if the models (e.g. CLIP, Chinchilla, even really large GPTs; e.g. CLIP fits in a V100 32GB) fit in one GPU and a larger batch only decreases the variance of gradients (but does not expose more tokens or parameter updates)? A larger batch doesn't necessarily mean we train on "more data/tokens" -- or at least that doesn't seem to be the case wrt SGD-like optimizers. --- Intuitively, it feels that if we had a larger batch size then we have more tokens to learn from -- but knowing some theory of optimization and what SGD-like algorithms actually do -- a larger batch size only actually decreases the variance of gradients. So to me it's not clear why massive distributed training is needed at all, unless the model is so large that it has to be sharded across nodes. In addition, even if the batch were "huge" -- we can only do a single gradient update per batch. I feel I must be missing something obvious, hence the question, given how pervasive massive distributed training is. In addition, some toy training curves with V100s & T5s show me there is very little if any benefit in additional GPUs [![enter image description here](https://i.stack.imgur.com/8pZTP.png)](https://i.stack.imgur.com/8pZTP.png) In addition, it seems from nanoGPT that small batch sizes are sufficient to train (reference <https://github.com/karpathy/nanoGPT>, but I did ask Karpathy directly to confirm <https://github.com/karpathy/nanoGPT/issues/58>). 
I am missing something obvious, but I wanted to clear this up in my head since it seems to be a foundational thing in training foundation models. Related to the previous, I've also been unsure about the role of the batch size in training LLMs compared to traditional deep learning. In traditional deep learning, when we used epochs to train a model, **the larger the batch size the quicker we could go through an epoch** -- so the advice I received (e.g. approximate advice by <NAME> at the Simons Institute deep learning tutorials) was to make the batch size large. Intuitively, the larger the batch size the more data the model sees per iteration. But mathematically this only really improves the variance of the gradient -- and it isn't immediately obvious that this is what we want (I've done experiments and seen papers where noisy gradients lead to better models). It is clear too that the larger the context size the better (for everything, but for the sake of this conversation it's better for training) -- whenever possible. But context size is totally different from batch size. So my question is, how does distributed training, especially at the node level, help at all if batch size isn't really the helping factor (which might be a wrong assumption)? So the only role for distributed training I see is if the model is too large to fit in 1 node -- since I'm arguing there is no point in making the batch size too large (I'd guess 32-64 is fine due to the CLT). What am I missing? Empirical answers are fine! Or any answers are fine! 
--- Related: * cross quora: <https://www.quora.com/unanswered/Why-do-large-language-models-LLMs-need-massive-distributed-training-across-nodes-if-the-models-fit-in-one-GPU-and-larger-batch-only-decreases-the-variance-of-gradients> * cross reddit: <https://www.reddit.com/r/learnmachinelearning/comments/113whxu/why_do_llms_need_massive_distributed_training/><issue_comment>username_1: I don't think the problem lies in the gradient or related stuff - the problem here is the *hardware limitation* of GPU VRAM. Sure, CLIP can fit in a single GPU, but what GPU are we talking about? I have done some experiments with CLIP, and I am pretty sure that with CLIP ViT-B/16, 224x224 images, and a batch size above 40, it is very easy to go out of memory on an RTX 3090. You can try it on Google Colab: take ViT-B/16, input a random tensor of size 224x224x3, then gradually scale up the batch size to see at which threshold it exceeds the VRAM of the given GPU. So in short, it is more about "we cannot fit it in a single GPU" rather than "it is about training stability". Upvotes: 2 <issue_comment>username_2: The large batch size is mostly to speed things up, as you point out. You could in principle train these models with batch size 1 and just do gradient accumulation, but then training would take muuuuch longer. Aside from tokens/second, empirically it has been found that larger batch sizes lead to faster convergence (<https://arxiv.org/abs/2212.14034>). The time and efficiency advantages provided by large batch sizes will often necessitate distributed training, as Minh points out, due to the VRAM limitations of individual GPUs. Good luck even just loading (no inference) anything larger than 15B on an A100 without optimization tricks. It also doesn't help that the transformer's self-attention mechanism has memory requirements that are quadratic in the input sequence length. 
When you’re training these models, you tend to pack several examples into a single input sequence such that the maximum length is reached, minimizing padding tokens and maximizing GPU usage (which would otherwise remain underutilized, which is a waste of time/money). The result of that is that you’re training with fully maxed-out sequence lengths, and hence maxing-out memory requirements, once again requiring distributed training typically. Upvotes: 1
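The gradient-accumulation equivalence mentioned above is easy to verify for a sum-type loss: summing micro-batch gradients reproduces the full-batch gradient exactly, so large batches buy speed, not different math. A toy linear-model check in numpy (all sizes arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(64, 5))   # full batch of 64 examples, 5 features
y = rng.normal(size=64)
w = rng.normal(size=5)

def grad(Xb, yb, w):
    """Gradient of the total squared error of a linear model on batch Xb."""
    return 2 * Xb.T @ (Xb @ w - yb)

g_full = grad(X, y, w)         # full-batch gradient in one go

# Same gradient accumulated over 8 micro-batches of 8 examples each,
# as one would on a memory-limited GPU.
g_acc = np.zeros_like(w)
for i in range(0, 64, 8):
    g_acc += grad(X[i:i+8], y[i:i+8], w)

print(np.allclose(g_full, g_acc))   # True
```

(With a *mean* loss, each micro-batch gradient would additionally be scaled by micro-batch size over full-batch size; the equivalence still holds.)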
2023/02/16
<issue_start>username_0: This might be a stupid question, and I might have read too much about neural networks and CNNs today, so my mind is a bit of a mess. But I get that neural networks contain neurons or nodes. They calculate a dot product and send the output further into the network. But what about CNNs? The initial convolution layer will use a kernel convolution to go over the binary pixel data and calculate a dot product based on the weights in the kernel/filter and the numbers from the binary pixel data. And after this we get a feature map; we can have several feature maps that find certain features or patterns, and we can pool and use other functions further on in the network to achieve certain predictions. But where are the neurons in the CNN? Aren't there "just" convolutional, pooling, and flattening layers, and a final fully connected network?<issue_comment>username_1: There are no stupid questions :) As @MuhammadIkhwanPerwira pointed out, the pixels themselves can be thought of as "neurons". His answer to your follow-up question is also valid: generally, yes, the pixels in the feature map can be thought of similarly to the neurons in the hidden layers of a fully-connected network, but this analogy starts to break down a bit when you introduce channels. The key difference from classical (fully-connected) neural nets is that convolution enables parameter sharing, so you no longer have "edges" (parameters) connecting each input node to each output node; rather, you build each output node (pixel) by sliding your (parametrised) kernel across the input nodes (pixels). Upvotes: 4 [selected_answer]<issue_comment>username_2: If you think about it too much, there aren't neurons in **any** neural network. There are just weight matrices and activation functions. In a traditional NN, we can take each column of the weight matrix, combined with an application of the activation function, and call that a neuron. 
In a CNN, the same weight matrix is used over and over, many times. If you like, you can call each time it's used a separate neuron (all having the same weights) or you can call it the same neuron being used multiple times. It doesn't matter what you call it, though, since the point is the convolution itself. Upvotes: 2
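The parameter sharing both answers describe is visible in a hand-rolled convolution: every output pixel ("neuron") is a dot product with the *same* kernel weights, whereas in a fully-connected layer each output would get its own weight vector. A single-channel, valid-padding sketch (the example image and kernel are made up):

```python
import numpy as np

def conv2d_valid(image, kernel):
    """Slide one shared kernel over the image (no padding, stride 1).
    Each output value is a dot product of the kernel with one patch."""
    kh, kw = kernel.shape
    oh, ow = image.shape[0] - kh + 1, image.shape[1] - kw + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(image[i:i+kh, j:j+kw] * kernel)
    return out

image = np.arange(16.0).reshape(4, 4)   # toy 4x4 "image"
edge = np.array([[1.0, -1.0]])          # tiny horizontal difference kernel
print(conv2d_valid(image, edge).shape)  # (4, 3): one shared pair of weights
```

So a 4x4 input with a 1x2 kernel produces 12 output "neurons" from just 2 parameters; a fully-connected layer with 12 outputs over 16 inputs would need 192.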
2023/02/19
<issue_start>username_0: I am working on a project where I am planning to convert RGB images to thermal images. I can convert to either near-infrared spectrum images or far-infrared spectrum images. I am planning on using generative networks for the task, specifically Pix2Pix. For training the GAN, there are datasets available with synchronized RGB and thermal images, like the MFNet dataset. I will be grateful if anyone can tell me whether this is even possible and, if it is, how close the generated images will be to actual thermal images.<issue_comment>username_1: You may check the SpeakingFaces paper by ISSAI. They did something similar with CycleGAN, only with faces though. Upvotes: -1 <issue_comment>username_2: **Assuming you create the perfect architecture** What your network will be learning will be correlations between objects/textures/colors and thermal signatures. Not the thermal signatures themselves. It will NOT work the same as a thermal camera. It will not give you actual thermal signatures. It will not reflect "this is how hot it is". It will only highlight those things that naturally change color due to heat. So it'd be more like a "highlighter" that tells you where something has changed visually due to heat. Examples of where it will not work: * It is common to use thermal cameras to find shorts on motherboards/circuits. Usually, the area where the short is occurring will be VERY hot and will show up well on a thermal camera, but not look like anything in the RGB spectrum [![enter image description here](https://i.stack.imgur.com/avuG1.png)](https://i.stack.imgur.com/avuG1.png) * Doing heat profiling for insulation in a room. Will not show up visually. Only on thermal bands. * Overheated pipes **Why** RGB measures red, green, and blue responses. Thermal is a measurement of its own, like red, green, or blue. RGB values measure how strong the signal is in various ranges of the electromagnetic spectrum. 
There's a range for red, green, blue, and thermal, respectively. [![enter image description here](https://i.stack.imgur.com/IXG5W.png)](https://i.stack.imgur.com/IXG5W.png) The thermal range can be found here: [![enter image description here](https://i.stack.imgur.com/TI2EZ.png)](https://i.stack.imgur.com/TI2EZ.png) **What you'll be learning** Your approach will be using objects and learned properties of those objects as a sort of lookup mapping between objects and thermal signatures. This will only work if objects visually reflect heat changes (which many don't). Upvotes: 1
2023/02/19
<issue_start>username_0: I am currently training a self-playing Monte-Carlo-Tree-Search (MCTS) algorithm with a neural network prior, and it seems to be working pretty well. However, one problem I have is when I compare my new iteration of the player against the previous version to see whether the new one is an improvement over the previous one. Ideally I want the two players to play 20 games of Tic-Tac-Toe against each other, with each being the first player in 10 of them. But what ends up happening is that each of those 10 games plays out identically (because the MCTS in each player is reset at the beginning of each game, and since they are playing to win, they both take the play with the highest probability, rather than randomly drawing actions based on the probabilities, so each player makes exactly the same decisions as it did in the previous game). So I understand why this isn't working, but I'm not sure what people commonly do to fix this problem. I could choose not to reset the MCTS between games, but that also feels like a weird fix, since the players are then still learning as the games are played, and game 10 would be quite different from game 1 - but maybe that is just how people normally do this?<issue_comment>username_1: The MCTS bandit phase chooses an action via the UCB1 algorithm. For a given $Q(s,a)$ and the visit counts of a non-leaf node, it chooses an action branch in the tree to descend via: $$ a = \arg \max\_{a'}\left[Q(s,a') + c \sqrt{\frac{\log N}{N(s,a')}}\right]$$ The $c$ parameter is the exploration temperature. If $c \gg Q$ then the second term always wins and the algorithm should be randomly exploring the whole game tree, without any value-based preference. So, one possible problem could be that your $c$ is too low. Another possible problem is a pretty common misconception about the way MCTS works. It could be that for a single actual in-game move you are performing one MCTS search step. 
That's not how it is supposed to work - MCTS can be considered as an *online* "improvement" algorithm for *offline* learned value function - one should be performing multiple MCTS searches and build a reasonably large search tree for every in-game move one performs. Quoting [Wikipedia](https://en.wikipedia.org/wiki/Monte_Carlo_tree_search): > > Rounds of search are repeated as long as the time allotted to a move remains. Then the move with the most simulations made (i.e. the highest denominator) is chosen as the final answer. > > > Upvotes: 0 <issue_comment>username_2: If I understand your question correctly, your goal is to figure out whether or not the quality of your neural network is improving as training progresses. The core issue seems to be that you do this by evaluating the playing strength of MCTS+DNN agents, but... the game of Tic-Tac-Toe is so simple, that an MCTS agent even with a completely random DNN (or no DNN at all) is likely already capable of optimal play. So, you cannot measure any improvement in playing strength. I would suggest one of the following two solutions: 1. Instead of evaluating the playing strength of an MCTS+DNN agent, just evaluate the playing strength of a raw DNN without any search. You could simply make this agent play according to the policy head of the network (proportionally or by maximising over its outputs), and ignore the value head. A pure MCTS (without neural network) could be used as the opponent to evaluate against (or you could evaluate against past versions of your DNN). 2. If you do want to continue evaluating MCTS+DNN agents, you could try to severely constrain the amount of computation the MCTS part is allowed to use, such that the MCTS by itself can no longer guarantee optimal play. For example, you could let your MCTS run only, say, 10 or 20 iterations for every root state encountered. Upvotes: 1 <issue_comment>username_3: I have to answer since I can't comment yet. 
I have the exact same problem, all evaluation games make the same moves again and again. I currently pick a random first action for both players, but it feels like there should be a better solution. I guess I could reuse the MCTS but that seems like an even worse hack. Upvotes: 0
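For readers who want to experiment with the exploration constant discussed in the first answer, here is a minimal, self-contained sketch of UCB1 action selection. The child-node representation (a list of dicts with a mean value `q` and a visit count `n`) is an assumption for illustration, not taken from any particular MCTS library:

```python
import math

def ucb1_select(children, c=1.4):
    """Pick the child maximizing Q + c * sqrt(ln(N) / n_child).

    `children` is a list of dicts with 'q' (mean action value) and
    'n' (visit count); unvisited children are expanded first.
    """
    total_visits = sum(child["n"] for child in children)
    best, best_score = None, -math.inf
    for child in children:
        if child["n"] == 0:
            return child  # always try unvisited actions first
        score = child["q"] + c * math.sqrt(math.log(total_visits) / child["n"])
        if score > best_score:
            best, best_score = child, score
    return best
```

With `c=0` selection is purely greedy, which reproduces the deterministic evaluation games described in the question; raising `c`, or sampling the final move proportionally to visit counts instead of maximizing, injects the variation needed to make evaluation games differ.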
2023/02/21
842
3,391
<issue_start>username_0: In Deep Learning and Transfer Learning, does layer freezing offer other benefits other than to reduce computational time in gradient descent? Assuming I train a neural network on task A to derive weights $W\_{A}$, set these as initial weights and train on another task B (without layer freezing), does it still count as transfer learning? In summary, how essential is layer freezing in transfer learning?
2023/02/22
783
3,217
<issue_start>username_0: Some time after starting a deep learning project, training output files (model weights, training configuration files) pile up. Naming all outputs and training files can become complicated if a clean naming convention is not used. There are some example naming styles below. How do you manage your training outputs and training files? ``` 'Outputs/run1/' 'Outputs/run2/' 'Outputs/run3/' . . . ``` ``` 'Outputs/20230222/' 'Outputs/20230223/' 'Outputs/20230224/' . . . ``` ``` 'Outputs/WithDropout/' 'Outputs/WithDropout_RotationTransform/' 'Outputs/WithDropout_RotationTransform_AdamOptimizer/' . . . ```<issue_comment>username_1: Depending on how many instances of models you train you can do one of the following: 1. For when the amount of models is still somewhat manageable: Generate a settings file together with the model file in which you store all the hyperparameters of the model. 2. For when it really gets out of hand with the amount of models: Generate a random unique number which you can use to name the model, and store the settings and the file name of all models in a csv document. You can then use the csv document to retrieve the correct model name corresponding to a set of settings and a result. It's a bit of a hassle to implement but it's worth it in the end ;) You can of course also mix and match the options if that suits your needs better. Unfortunately, I do not know a simple 'hack' which allows you to do this very easily. You can also try to add the model parameters in the name of the model itself, but in my experience this usually gets messy real fast once you realise 'oh I have to add this parameter as well', and 'oh this model does not have this parameter, but the other one does'. If you do something like bayesian optimization, a service such as WeightsAndBiases can keep track of all this stuff for you. 
The applicability of such a service is of course heavily dependent on your particular workflow. Upvotes: 2 <issue_comment>username_2: You could use the most important distinctions to build a folder structure. For example, I experiment with multiple model architectures (resnet, mobilenet, ...) and different types of classification (binary, multiclass and multilabel). Later in the development process it is possible new classes are added, so I include these parameters in the folder structure as well. Lastly, I use the date, like you showed in your example. For example: `models/resnet/multiclass_4/20230302/...` This would be a resnet multiclass model with 4 classes, saved 2 March 2023. After I save the model, I gather all the relevant training outputs and save them in a separate JSON file in the model directory. Finally, I add a custom field called "description" where I give a short explanation of what I tried compared to the previous training cycle. Disadvantages: * The description field will not help if you compare models which were not trained in succession. * You can't get a lot of information based on the folder structure. Advantages: * No endless series of subdirectories or chaotic directory names. * Minimal information in the description can help you remember which model it was. Upvotes: 0
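The bookkeeping described in these answers -- a settings file per run plus a flat index you can search later -- can be sketched in a few lines. The directory layout and field names below are illustrative assumptions, not a standard:

```python
import json
import uuid
from pathlib import Path

def save_run(output_dir, settings):
    """Create a uniquely named run directory, store the hyperparameters
    in a settings.json inside it, and append one line to a flat
    index.jsonl so runs can later be looked up by their settings."""
    run_id = uuid.uuid4().hex[:8]
    run_dir = Path(output_dir) / run_id
    run_dir.mkdir(parents=True, exist_ok=True)
    (run_dir / "settings.json").write_text(json.dumps(settings, indent=2))
    # one JSON object per line keeps the index trivially appendable
    with open(Path(output_dir) / "index.jsonl", "a") as f:
        f.write(json.dumps({"run_id": run_id, **settings}) + "\n")
    return run_dir
```

A model file (weights, checkpoints) would then be written into the returned `run_dir`; the index file plays the role of the CSV document from the first answer.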
2023/02/24
1,040
3,839
<issue_start>username_0: I asked ChatGPT a [question](https://chat.openai.com/chat/7e5fbd69-e102-44b3-9ca8-7df2c9b265b7), and its response included this statement: > > In "The Ruling Class" film, which was released in 1972, <NAME> played the role of Bishop Lampton. > > > Unlike many of its vague or weaselly responses, this is a direct statement of fact. But it is false and very easy to disprove, so I wondered how it came up with that misinformation? I asked it how, and it apologized and admitted it was wrong: > > Q: You say "<NAME> played the role of Bishop Lampton", when in fact it was Alastair Sim that played that role. Where did you get this misinformation? > > > A: I apologize for the error in my previous response. You are correct that <NAME> did not appear in "The Ruling Class" film and that Alastair Sim played the role of Bishop Lampton. > > > I apologize for any confusion or inconvenience that my previous response may have caused. As an AI language model, I strive to provide accurate information, and I will make sure to double-check my sources more carefully in the future. Thank you for bringing this to my attention. > > > But it avoided answering the question: > > Q: But my question was "Where did you get this misinformation?". > > > A: I apologize for the confusion. As an AI language model, I do not have personal experiences, so **I did not get the misinformation from anywhere**. I must have made a mistake in my response due to an error in processing or a misunderstanding of the question. > > > It wouldn't have just made it up in order to provide an amazingly good answer, so how can it give a false statement of fact and claim that it didn't get the information from anywhere? (I.e. what mechanism does it use to generate facts if they don't originate somewhere else?)<issue_comment>username_1: The reason is that ChatGPT is not a data retrieval system. 
In other words, it is not consulting a database, nor is it performing searches of the internet, to find answers to questions posed. Instead, it is generating novel answers based on the trained parameters within the network. In a sense, this is far more impressive. If you think about it, ChatGPT and large language models generally provide a very interesting method of compressing data. Of course, compression isn't the point of LLMs, but you can view this as one of the outcomes. How so? During training, the parameters are learned such that the model is able to reasonably predict the next most likely token (word/bigram/trigram/letter/symbol) based on all of the tokens that have come before it in that session, including any base prompt that is included silently. When you ask it to recite something like *The Lovesong of <NAME>,* it can do so very accurately... yet, it does not have a copy of Prufrock "memorized" somewhere. Instead, it is recreating Prufrock generatively. This is also why its answers, while incredibly confident, can be wildly incorrect. It is essentially generating the most likely token that comes next, not reasoning or thinking about what the text means. Upvotes: 4 [selected_answer]<issue_comment>username_2: Countless times I have caught it literally making things up that it does not know. For example, an hour ago I asked it if <NAME> says the line "I don't want to hurt you" in any of his movies. It first correctly returned no; then I wrote "try harder" and it apologized and said he says it in Batman and Robin (1997). I asked for the exact minute and it said 1:03; I looked, and there is no such line. Then it said it's at 1:26; I looked, and there is no line there either. Then it apologized again and said there is no such line. So it lied 3 times, totally making it up. I have personally caught it lying countless times. Upvotes: 1
2023/02/25
2,978
13,359
<issue_start>username_0: Sorry if this question makes no sense. I'm a software developer but know very little about AI. Quite a while ago, I read about the Chinese room, and the person inside who has had a lot of training/instructions how to combine symbols, and, as a result, is very good at combining symbols in a "correct" way, for whatever definition of correct. I said "training/instructions" because, for the purpose of this question, it doesn't really make a difference if the "knowledge" was acquired by parsing many many examples and getting a "feeling" for what's right and what's wrong (AI/learning), or by a very detailed set of instructions (algorithmic). So, the person responds with perfectly reasonable sentences, without ever understanding Chinese, or the content of its input. Now, as far as I understand ChatGPT (and I might be completely wrong here), that's exactly what ChatGPT does. It has been trained on a huge corpus of text, and thus has a very good feeling which words go together well and which don't, and, given a sentence, what's the most likely continuation of this sentence. But that doesn't really mean it understands the content of the sentence, it only knows how to chose words based on what it has seen. And because it doesn't really understand any content, it mostly gives answers that are correct, but sometimes it's completely off because it "doesn't really understand Chinese" and doesn't know what it's talking about. So, my question: is this "juggling of Chinese symbols without understanding their meaning" an adequate explanation of how ChatGPT works, and if not, where's the difference? And if yes, how far is AI from models that can actually understand (for some definition of "understand") textual content?<issue_comment>username_1: Yes, the [Chinese Room argument by <NAME>](https://en.wikipedia.org/wiki/Chinese_room) essentially demonstrates that at the very least it is hard to *locate* intelligence in a system based on its inputs and outputs. 
And the ChatGPT system is built very much as a machine for manipulating symbols according to opaque rules, without any grounding provided for what those symbols mean. The large language models are trained without ever getting to see, touch, or get any experience reference for any of their language components, other than yet more written language. It is much like trying to learn the meaning of a word by looking up its dictionary definition and finding that composed of other words that you don't know the meaning of, recursively without any way of resolving it. If you possessed such a dictionary and no knowledge of the words defined, you would still be able to repeat those definitions, and if they were received by someone who did understand some of the words, the result would look like reasoning and "understanding". But this understanding is not yours, you are simply able to retrieve it on demand from where someone else stored it. This is also related to the [symbol grounding problem](https://en.wikipedia.org/wiki/Symbol_grounding_problem) in cognitive science. It is possible to argue that pragmatically the "intelligence" shown by the overall system is still real and resides somehow in the rules of how to manipulate the symbols. This argument and other similar ones try to side-step or dismiss some proposed hard problems in AI - for instance, by focusing on behaviour of the whole system and not trying to address the currently impossible task of asking whether any system has subjective experience. This is beyond the scope of this answer (and not really what the question is about), but it is worth noting that The Chinese Room argument has some criticism, and is not the only way to think about issues with AI systems based on language and symbols. I would agree with you that the latest language models, and ChatGPT are good example models of the The Chinese Room made real. 
The *room* part that is, there is no pretend human in the middle, but actually that's not hugely important - the role of the human in the Chinese room is to demonstrate that from the perspective of an entity inside the room processing a database of rules, nothing need to possess any understanding or subjective experience that is relevant to the text. Now that next-symbol predictors (which all Large Language Models are to date) are demonstrating quite sophisticated, even surprising behaviour, it may lead to some better insights into the role that symbol-to-symbol references can take in more generally intelligent systems. Upvotes: 6 [selected_answer]<issue_comment>username_2: Yes it is a good analogy, as explained nicely by Neil. Regarding your second question: > > how far is AI from models that can actually understand (for some > definition of "understand") textual content? > > > Here's the catch: how do we know that we (humans) are not simply very sophisticated chinese rooms? For instance suppose that current AI models are improved so much that their performance is on par to human performance, without the current catastrophic failures, yet they retain the current model architectures. Now you have an apparent paradox: they are indistinguishible from humans and yet you know that they are not "understanding". Personal guess: It's chinese rooms all the way down. Upvotes: 4 <issue_comment>username_3: Searle's [Chinese room](https://en.wikipedia.org/wiki/Chinese_room) is **not** intended as a functional description of any real-world machine. Searle was a philosopher who created the Chinese room as a thought experiment to show what he considered an absurd conclusion of the [computational theory of mind](https://en.wikipedia.org/wiki/Computational_theory_of_mind). The intended absurdity is that the person inside the room doesn't understand anything about the inputs or outputs, but that when just looking at the room from the outside, the room (i.e. 
the system person+dictionary) appears to understand Chinese. To Searle, it was clear that there was no "understanding" located anywhere here, and so this system was clearly not equivalent to a human consciousness that actually understands Chinese. But "strong AI" computationalists believe that all that matters for consciousness is the inputs and the outputs. Since Searle considers the conclusion that the room is conscious absurd, this thought experiment is supposed to be a refutation of this computationalist viewpoint of consciousness. ChatGPT and other large language models are *not* a realization of Chinese rooms. There isn't a human in there who doesn't understand English and instead uses a dictionary or set of rules to translate inputs to outputs. The point of the Chinese room is that it is clear that a) the human doesn't understand Chinese and b) the dictionary/rules are just a book that isn't conscious in itself either, otherwise it doesn't work as a reductio ad absurdum. The point of the thought experiment is that it *eliminates* anything to which we could attribute understanding - but indeed one of the replies to Searle was that it was just the *room itself* that had understanding/consciousness, and the interplay between the human and the dictionary is just analogous to the way different regions of the human brain might interact to produce the overall "understanding". Instead, large language models consist of a big neural network that transforms the inputs to outputs. It's not two distinct entities like in the Chinese room - "rules storage", i.e. the dictionary, and "rules implementor", i.e. the human - it's one big algorithmic structure whose exact inner workings are often hard to explain for specific use cases. You may or may not assign the ability to "understand" to this network, but there are no identifiable substructures here as there are in Searle's room. 
These models, of course, raise much the same *questions* about consciousness and understanding that Searle's Chinese room does, but the room with its clear two-component structure bears no actual resemblance to how the [transformer networks](https://en.wikipedia.org/wiki/Transformer_(machine_learning_model)) underlying large language models work. You might argue that these models are not conscious in the same way that Searle's room is not conscious, but your argument for *why* they aren't conscious (or why they don't "understand" the language they're using) needs to be very different from Searle's argument. Upvotes: 4 <issue_comment>username_4: I suggest it makes all the difference in the world whether 'the knowledge' was acquired by parsing many examples to get 'a feeling for what's right' (AI/learning), or by detailed instructions (algorithmic). Can you say how the exposition here relates to the Question title? What point is there but that the AI system should and the algorithmic should not be able to expand its programmed capabilities? How could anyone's understanding Chinese matter? Do you know - or know of - anyone who believes ChatGPT understands anything? More importantly, anyone who can explain where most people's understanding comes from? Do you know of people who know how to choose words based on anything but what they've seen? Can you look again at '…it mostly gives answers that are correct? Is that less or more useful/worth-while than the answers most people give? How could understanding Chinese matter, unless you were specifically asking for translations? Are you? I suggest that juggling Chinese symbols without understanding their meaning is irrelevant to ChatGPT. If what you're really Asking is how far AI might be from 'understanding' anything, why not first define 'understanding'? Upvotes: 1 <issue_comment>username_5: The chinese room argument is useless because it can be applied to the brain as well. 
Replace the slit with sensory input, the handbook with the wiring of the brain and the activity of the agent inside the room with neuron activity. In the same way the argument demonstrates that the room has no understanding it demonstrates that the brain has no understanding. My personal assessment is that LLMs like chatGPT have a true understanding of the domain they were trained on. My reasoning is that the training forced the model to squeeze all the information it can utilize to make its predictions into the limited amount of its network parameters. In this regard the incorporation of understanding is a far more efficient usage of the available space than any other kind of data compression. Upvotes: 3 <issue_comment>username_6: As many have stated, the Chinese room analogy is intended to show that any hardware + software instance that relies on rules alone (logical operators on input symbols) cannot be said to have understanding. It is not a good analogy, and the argument does not apply well to trained neural networks. Neural networks (ChatGPT is actually two trained NNs -- an input-trained NN and output-trained NN) are produced as a result of extensive training (for ChatGPT, unsupervised training to generate a language model, and a couple of stages of supervised training on its output sentences). This training creates many billions of weights that are instantiated in the NN, and these weights are applied across the NN nodes as it 'processes' an input prompt. From a macro view, the NN code is written in the logic of computer language, so one might conflate this with the logical operations on input signals described in the Chinese Room analogy, concluding that a NN is nothing but logical operations. This is a mistake, in my view. A trained NN is different in kind from the purely operational program that Searle described. 
By evolving weightings through training, NNs encode an incredibly complex object that Searle could not have imagined when he created the analogy. Does a neural network then have some kind of understanding? I think that it's clearly not anything like human understanding, but I also believe that it is at least some kind of understanding. There are many aspects to this discussion, to say the least. One main objection to Searle's argument is that if you look at individual neurons in the brain, you will not find understanding there either, but the physical brain does give rise to consciousness in some way (unless you believe in some extra 'magic' that overlays the physical brain -- something that Searle, a physicalist, would not allow). Upvotes: 2 <issue_comment>username_7: Interesting discussion; I stumbled on it reflecting on a ChatGPT experiment I did. I asked it to decrypt the word "Hello" using the Vigenère cipher. When I asked it to provide only the answer, it simply guessed. When I let it explain all the steps, it gets to the answer easily. So to link that to the Chinese room: it seems that in the case of ChatGPT, the man in the middle starts clueless but eventually knows a bit of Chinese if there is a strong enough pattern within his answer. Maybe I'm knocking on the wrong door here, but I find this result very interesting: how much more powerful could these models be if we could first get them to write out the reasoning we want from them, as opposed to simply asking them to reason? TLDR: ChatGPT is similar to a Chinese room, but it would seem that the man in the middle has the capacity to "learn Chinese" if provided with instructions on how to do so within the string to translate. I find that fascinating! Upvotes: -1
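As an aside, the decryption task described in the last comment is mechanically trivial for a short program, which makes the contrast with the model's step-by-step behaviour interesting. A minimal Vigenère decryption sketch, using the standard letters-only key convention:

```python
def vigenere_decrypt(ciphertext, key):
    """Shift each letter back by the corresponding key letter (A = 0);
    non-alphabetic characters pass through and do not advance the key."""
    out = []
    key = key.upper()
    ki = 0
    for ch in ciphertext:
        if ch.isalpha():
            base = ord("A") if ch.isupper() else ord("a")
            shift = ord(key[ki % len(key)]) - ord("A")
            out.append(chr((ord(ch) - base - shift) % 26 + base))
            ki += 1
        else:
            out.append(ch)
    return "".join(out)
```

For example, the ciphertext "Rijvs" under the key "KEY" decrypts back to "Hello" -- exactly the kind of per-letter bookkeeping the model only gets right when asked to spell out its steps.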
2023/03/02
427
1,749
<issue_start>username_0: That is, if some of the inputs to a neural network can be calculated by a pre-determined function whose variables are other inputs, then are those specific inputs useless? For example, suppose there are three inputs, $x\_1$, $x\_2$ and $x\_3$. If $x\_3$ is determined by function $x\_3=f(x\_1,x\_2)$, then will $x\_3$ be useless for training a neural network?<issue_comment>username_1: No it is not useless. The relationship may not be obvious, and having the data will allow the network to learn this relationship. Further, even if is obvious, networks are so sample inefficient that more (non-noisy) data is always helpful. In fact, the common practice is to train for hundreds of epochs on the same exact samples - because we can not learn quickly enough from seeing them only once. That said, there are some cases where data is harmful. For example, if we have an imbalanced dataset, adding more samples to exacerbate that imbalanced may be a bad idea. But in general, this added data will still be of use. Upvotes: 3 [selected_answer]<issue_comment>username_2: There is a difference between adding more samples to the data (rows), and adding more features (columns). In this case we are talking about more features. If the function $f$ is trivial, feeding extra columns doesn't hurt but doesn't bring any benefits either. And the training is a bit slower, since extra gradients needs to be calculated. If it is non-trivial, it could be the case that this helps the network train faster and maybe you can even use a smaller network. It is common to pre-process data before feeding it to the network, for example transforming a time-feature into two season-features via `sin` and `cos` transformations. Upvotes: 0
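The `sin`/`cos` pre-processing mentioned in the second answer can be sketched as follows; hour-of-day is used here as an illustrative example of a cyclic feature, not something taken from the question:

```python
import math

def encode_hour(hour):
    """Map hour-of-day (0-23) onto the unit circle so that 23:00 and
    00:00 end up close together, unlike in the raw integer encoding."""
    angle = 2 * math.pi * hour / 24.0
    return math.sin(angle), math.cos(angle)
```

Feeding both components alongside (or instead of) the raw hour gives the network a representation in which midnight and 23:00 are neighbours; whether a redundant derived column like this helps or merely slows training is exactly the trade-off discussed in the answers above.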
2023/03/02
1,363
5,453
<issue_start>username_0: I want to program and train a voice cloner, in part to learn about this area of AI, and in part to use as a prototype of audio for testing and getting feedback from early adopters before recording in a studio with voice actors. For the prototype, I have a set of recordings from voice actors. I would like to record my voice, in English or other languages, then run a neural network and produce an audio with the same text, intonation and emotion but with roughly the actors' voices. It doesn't need to be perfect; 80% right and believable would be enough to get good feedback and reach a final version of the script before recording. I have 30 minutes to one hour of utterances from each voice I want to clone. The closest I have found is Resemble.ai, which has an [impressive video](https://www.youtube.com/watch?v=f075EOzYKog), but the public plan is only in English and other languages are prohibitively expensive. The engineer published a masters' thesis as an [open-source project](https://github.com/CorentinJ/Real-Time-Voice-Cloning), but this project does only text-to-speech, not speech-to-speech. Another startup is [play.ht](https://play.ht/voice-cloning/), but again it seems to be English-only. [This open source project](https://github.com/andabi/deep-voice-conversion) seems to do what I want, cloning Kate Winslet's voice, but it has no installation instructions and so I haven't tried yet. Can you recommend an open-source project, ideally in Python and Tensorflow, to roughly replace a voice with another? 
*Note*: This question is similar to [What is the State-of-the-Art open source Voice Cloning tool right now?](https://ai.stackexchange.com/questions/15501/what-is-the-state-of-the-art-open-source-voice-cloning-tool-right-now) , except that that question is old and the project mentioned only does text-to-speech, not speech-to-speech.<issue_comment>username_1: Tensorflow code <https://github.com/phiana/speech-style-transfer-vae-gan-tensorflow> it's the implementation of a 2021 paper. Speech style transfer, voice cloning or speech-to-speech synthesis are the keywords. Further research (looking at the state of the art) would yield some papers: * MIST-Tacotron: End-to-End Emotional Speech Synthesis Using Mel-Spectrogram Image Style Transfer * Expressive Neural Voice Cloning (<https://expressivecloning.github.io/>) It seems to be a research area without much activity though. Maybe you could add something interesting :) Upvotes: 1 <issue_comment>username_2: Additional projects that might be of interest: * [Neural Voice Cloning with a Few Samples - NeurIPS 2018 (<NAME>, <NAME>, <NAME>, <NAME>, <NAME>)](https://paperswithcode.com/paper/neural-voice-cloning-with-a-few-samples) A neural voice cloning system is introduced, using a few audio samples to create personalized speech interfaces. Two approaches are explored: speaker adaptation, which fine-tunes a multi-speaker model with cloning samples, and speaker encoding, which trains a separate model to infer new speaker embeddings from cloning audios. Both methods achieve good performance in terms of speech naturalness and similarity to the original speaker. Although speaker adaptation offers better naturalness and similarity, speaker encoding demands less cloning time and memory, making it suitable for low-resource deployment. 
* [Unet-TTS: Improving Unseen Speaker and Style Transfer in One-shot Voice Cloning, arXiv:2109.11115 [cs.SD]](https://github.com/CMsmartvoice/One-Shot-Voice-Cloning) In this paper, the authors present a novel one-shot voice cloning algorithm called Unet-TTS that has good generalization ability for unseen speakers and styles. Based on a skip-connected U-net structure, the new model can efficiently discover speaker-level and utterance-level spectral feature details from the reference audio, enabling accurate inference of complex acoustic characteristics as well as imitation of speaking styles into the synthetic speech. According to both subjective and objective evaluations of similarity, the new model outperforms both speaker embedding and unsupervised style modeling (GST) approaches on an unseen emotional corpus. * [ElevenLabs.io](https://beta.elevenlabs.io/) (Not Open Source, but has a free tier. Voice cloning becomes available in the Starter Tier, starting at 5$/month.) ElevenLabs initially built new text-to-speech models which rely on high compression and context understanding to render human speech ultra-realistically. Their tools aim to provide the necessary quality for voicing news, newsletters, books and videos. They also offer a suite of tools for **voice cloning** and designing synthetic voices. * [BeyondWords.io](https://beyondwords.io) (Not Open Source, but has a free tier and is a partner of the Open Voice Network, a non-profit industry association dedicated to making voice technology worthy of user trust and it operates as a directed fund of The Linux Foundation.) Voice cloning is part of the [enterprise plan](https://beyondwords.io/pricing/#all-features) with custom pricing and requires 2-8 hours of recorded utterances following their script. See an [example of original and cloned voice in English on YouTube](https://www.youtube.com/watch?v=qzVihUH4-4g). 
Although it sources non-English voices from partners such as Google and Amazon, it does not seem to support voice cloning in languages other than English. Upvotes: 2
2023/03/09
<issue_start>username_0: There are a few common image enhancements:

```
1. brightness -> r = s + b
2. negative -> s = 255 - r
3. contrast -> stretching (flexible) and thresholding (binary image)
4. smoothing -> generates blur; non-linear filters (min, max, median)
5. sharpening (high-pass filter), suitable for edge detection
```

How do I know what I should enhance? Is it brightness, contrast, or something else? And how do I know whether an image is better after the enhancement, i.e. how do I evaluate the result objectively instead of subjectively by my own perception? This is mainly for medical imaging such as USG, CT scans, etc. As far as I know, I can look at the histogram, but I still don't know what to look for in the histogram to evaluate the result.
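For illustration, the operations listed in the question can be sketched in NumPy on a toy image (the pixel values, the brightness offset, and the clipping behaviour are my own assumptions, not from the post):

```python
import numpy as np

img = np.array([[10, 100, 200],
                [50, 150, 250]], dtype=np.uint8)   # toy grayscale image

# 1. brightness: r = s + b (clipping to the valid 8-bit range is an assumption)
bright = np.clip(img.astype(int) + 40, 0, 255).astype(np.uint8)

# 2. negative: s = 255 - r
negative = 255 - img

# 3a. contrast stretching: map [min, max] linearly onto [0, 255]
lo, hi = img.min(), img.max()
stretched = ((img.astype(float) - lo) / (hi - lo) * 255).astype(np.uint8)

# 3b. thresholding: binary image
binary = np.where(img > 127, 255, 0).astype(np.uint8)

print(negative)
```

Smoothing and sharpening would additionally need a convolution (e.g. a median or Laplacian kernel), which is omitted here to keep the sketch dependency-free.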
2023/03/10
<issue_start>username_0: I have read several resources, including previously asked questions such as [this](https://stackoverflow.com/a/2499936/8243797). I have also read arguments related to intercepts needed to separate linearly separable data. If my neural network can perform feature transformation, what is the need for a bias term? Since the weights are learnt, my network can optimise to fit the data. For example, if my data is in the 2D coordinate plane, my equation without bias for a perceptron in the layer will be $W\_1X\_1 + W\_2X\_2$, where $X\_1$ is the `x` coordinate and $X\_2$ is the `y` coordinate, making $W\_1$ and $W\_2$ the coefficients of vectors along the `x` and `y` directions. Their linear combination will cover the whole plane, which allows my data to be separated by a line with 0 intercept. For example, if my weight is `1.0` for input `x`, and my bias is `0.1`, I might as well have weight $1+(0.1/\bar x)$ (or any other value descriptive of x) and `0` bias to get the same result. Similar things happen for the arguments related to activation mentioned in the marked solution to the referenced question. In such a scenario, why is the bias needed? Edit: A lot of the answers offer reasonable arguments for the perceptron/single-layer case, but the perceptron was just an example. Do they hold for deep neural networks as well, given that depth allows previous layers to better transform the inputs? As mentioned by some, a `0` input will truly cause a problem, which I agree with.<issue_comment>username_1: It's not strictly "needed." In fact, if you look at things like Keras, you will see that layers have a `use_bias` parameter, which defaults to True but which you can, of course, set to False. For an intuition about why bias is *useful* rather than required, consider a simple $\mathbb{R}^2$ example. Imagine that we have some data that we are attempting to fit a straight line to.
When generating our line, we can iteratively update the slope using something like gradient descent and completely ignore the y-intercept, or bias term. In the end we will find a line through the origin that has the same slope as a best-fit line through the data. If you have this mental picture, take it a step further. Using the bias term (y-intercept), we can then adjust that line up or down (bias the line up or down) by whatever amount is needed to minimize the loss. If you think about it, for any line $y=mx+b$, we could think of a specific line, say $y=4x$, as representing the fundamental line for all lines with that slope. Really, they are all the *same* line that we can slide up or down the y-axis to place them where we need them to be. Coming back to training a neural network, the bias, therefore, is not *required*, but can be very useful in allowing us to adjust the output of a neuron up or down as required to better fit the data, possibly easing the difficulty of training subsequent layers/neurons. Upvotes: 4 <issue_comment>username_2: If you have data generated from $y = 5\,x + 3$, how do you expect the simplest neural net, $y = w\_1 x$, to fit the data? That is why $b$ is useful. Upvotes: 3 <issue_comment>username_3: Let's write some code, shall we? First I'll generate two 2D Gaussian blobs with means at (0,0) and at (3,3) and sigma = 1.0. The points for the blob at (0,0) will be in class `y=0` and the second blob will have the class `y=1`.
``` import numpy as np x = np.concatenate([ np.random.normal(loc=0, scale=1, size=2*1000).reshape(-1,2), np.random.normal(loc=3, scale=1, size=2*1000).reshape(-1,2) ]) y = np.concatenate([np.zeros(1000), np.ones(1000)]) ``` We can plot it with something like `scatter(x[:,0], x[:,1], c=y)`: [![Enter image description here](https://i.stack.imgur.com/AQ9hs.png)](https://i.stack.imgur.com/AQ9hs.png) I'll use `torch`, so I convert these to torch tensors and use its data wrangling classes to shuffle and split into batches. ``` import torch x, y = torch.tensor(x).float() , torch.tensor(y).long() dataset = torch.utils.data.TensorDataset(x,y) dataloader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True) ``` Now let's make two neural networks - just a linear layer with 2D inputs and two outputs for each class. The only difference is that the first one will have `bias=False` and the second one `bias=True`. ``` model1 = torch.nn.Sequential(torch.nn.Linear(2, 2, bias=False)) model2 = torch.nn.Sequential(torch.nn.Linear(2, 2, bias=True )) ``` I assume that our networks return logits of the classes - so cross-entropy as a loss function. 
I've hacked together this pretty standard code that, given a `model` and an `optimizer`, goes through one epoch and returns the average loss:

```
loss_fn = torch.nn.CrossEntropyLoss()

def optimize_epoch(model, optimizer):
    total_loss = 0
    for n, (inputs, labels) in enumerate(dataloader):
        optimizer.zero_grad()
        outputs = model.forward(inputs)
        loss = loss_fn(outputs, labels)
        loss.backward()
        optimizer.step()
        total_loss += loss.item()
    return total_loss / (n + 1)
```

Running this optimization for 50 epochs and collecting the loss history:

```
optimizer1 = torch.optim.SGD(model1.parameters(), lr=0.01)
optimizer2 = torch.optim.SGD(model2.parameters(), lr=0.01)
losses1, losses2 = [], []
for _ in range(50):
    losses1.append(optimize_epoch(model1, optimizer1))
    losses2.append(optimize_epoch(model2, optimizer2))
plot(losses1, label="model1"); plot(losses2, label="model2"); legend()
```

You'll see a striking difference in performance between the two models: [![Enter image description here](https://i.stack.imgur.com/PWjlm.png)](https://i.stack.imgur.com/PWjlm.png) --- As per @Stef's request, here are the scatterplots of the predicted classes for each model, obtainable via `scatter(x[:,0], x[:,1], c=model1.forward(x).argmax(axis=1))`. [![enter image description here](https://i.stack.imgur.com/abq22.png)](https://i.stack.imgur.com/abq22.png) Here you can clearly see that the separating line for `bias=False` goes through `(0,0)`, in accordance with most of the other answers here. Upvotes: 4 <issue_comment>username_4: No matter what you make $W\_1$ and $W\_2$, if $X\_1$ is 0 and $X\_2$ is 0, then $W\_1X\_1+W\_2X\_2$ is 0, which (in a typical classification application) means the classifier is completely unsure which class it belongs to. Additionally, mirroring a point across the origin (that is, assigning $X\_1 \leftarrow -X\_1$ and $X\_2 \leftarrow -X\_2$) will also negate the output. The classifier is only able to provide classifications where one class is mirrored across the origin from the other class.
Adding a bias term - $W\_1X\_1+W\_2X\_2+W\_3$ - solves these problems. The classifier can draw any line to separate the two classes, not only lines that pass through the origin (0,0). Upvotes: 3 <issue_comment>username_5: Let's interpret each node of a layer as a transformation of sub-feature inputs into a certainty value for the presence (or absence) of a feature. Then, for example, a dense layer first looks at each of the sub-features independently. It classifies, for each sub-feature, whether it counts as evidence for the presence or absence of the feature (which the node wants to detect), expressed by the sign of the weight of the edge from the sub-feature to the node. The absolute weight of the edge then tells us how strong a piece of evidence the corresponding sub-feature is. Now, for some features it might be much easier to detect evidence for their absence than to find evidence of their presence. In such cases, the easiest way for the node to learn the feature might be to find lots of measures which are evidence for the absence of the feature it wants to detect, and then use the bias as a counter-term which decides whether the evidence for absence found by the sub-features is sufficient to rule out the existence of the feature. Upvotes: 0
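The $y = 5\,x + 3$ example above can also be checked numerically; this NumPy sketch (mine, not from the thread) compares the best possible zero-bias model $y = wx$ against a fit that includes an intercept:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, size=100)
y = 5 * x + 3                        # data generated with a non-zero intercept

# Without a bias: the single weight w minimising ||y - w*x||^2.
w = (x @ y) / (x @ x)
err_no_bias = np.mean((y - w * x) ** 2)

# With a bias: solve for slope and intercept jointly via least squares.
A = np.stack([x, np.ones_like(x)], axis=1)
(slope, intercept), *_ = np.linalg.lstsq(A, y, rcond=None)
err_bias = np.mean((y - (slope * x + intercept)) ** 2)

print(err_no_bias, err_bias)  # only the model with a bias can fit the data exactly
```

The bias-free model is stuck with a large residual error no matter what `w` is, while the biased fit recovers slope 5 and intercept 3 essentially exactly.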
2023/03/10
<issue_start>username_0: Hi, I'm trying to train an ANN model to classify images containing these characters: 0, 1, 2, 3, 4, T, X, S, e.g. [![enter image description here](https://i.stack.imgur.com/KnDvB.png)](https://i.stack.imgur.com/KnDvB.png) [![enter image description here](https://i.stack.imgur.com/oAryk.png)](https://i.stack.imgur.com/oAryk.png) etc., so something like the classification of records of the MNIST dataset, but using my generated images (not handwritten). I generate ~10000 records that have to be divided into the training set and the test set:

```
n = 10000
X_train, y_train = generate_typed(typed_alphabet_resized.astype('uint8'), int(n*0.8))
X_test, y_test = generate_typed_test(typed_alphabet_resized.astype('uint8'), int(n*0.3))
```

`generate_typed` and `generate_typed_test` are pretty much the same function; the sole difference is the augmentation applied to the images: for `generate_typed_test` the only augmentation applied is scale(1,3), while for `generate_typed`:

```
augmentation_seq = iaa.Sequential([
    iaa.Sometimes(0.3, iaa.GaussianBlur(sigma=(0, .1))),
    iaa.Sometimes(0.3, iaa.AdditiveGaussianNoise(loc=0, scale=(0.0, 0.05*255), per_channel=0.2)),
    iaa.Affine(scale=(1, 3)),
], random_order=True)
```

I did that to make the model as generalized as possible. But really, the kind of images the model is supposed to work on in the real world might just (sometimes) be zoomed in or slightly less centered. However, even if I use these two different functions, the images I get are very, very similar. Now, the problem is: I get higher accuracy on the test set than on the training set, and I really can't understand why, or how to fix it. It seems that the accuracy of my model doesn't matter: whether my model reaches around 80% or 95% accuracy, I still get 0.9997 on the test set (higher than the accuracy during training).
Note: Even if I apply the same type of transformations to the train and test sets, or apply no transformation to either, nothing changes. I'm not saying the model doesn't work, but it's wrong quite often. I don't know if this means overfitting or not, but I don't think so, because the loss and accuracy curves don't intersect each other: [![enter image description here](https://i.stack.imgur.com/w82jt.png)](https://i.stack.imgur.com/w82jt.png) Model:

```
model3 = Sequential([
    Flatten(input_shape=(28,28,1)),          # flatten the image
    Dense(20, activation='relu'),            # dense layer with ReLU activation
    Dropout(0.5),                            # randomly switch off neurons (to avoid overfitting)
    Dense(num_classes, activation='softmax') # dense layer with softmax activation (it's a classifier)
])

model3.compile(
    loss = 'sparse_categorical_crossentropy',
    metrics=['accuracy'],
    optimizer=Adam(lr=0.01)
)

X_train, X_val, y_train, y_val = train_test_split(X_train, y_train, test_size=0.1, random_state=42)

model3.fit(X_train, y_train, epochs=5, batch_size = 5, validation_data=(X_val, y_val), callbacks=[early_stop])

_________________________________________________________________
 Layer (type)                Output Shape              Param #
=================================================================
 flatten_34 (Flatten)        (None, 784)               0

 dense_71 (Dense)            (None, 20)                15700

 dropout_37 (Dropout)        (None, 20)                0

 dense_72 (Dense)            (None, 105)               2205

=================================================================
Total params: 17,905
Trainable params: 17,905
Non-trainable params: 0
```
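One thing worth checking here (an assumption on my part, not something stated in the thread): Keras computes the training-time `accuracy` metric while `Dropout(0.5)` is active, whereas validation and test accuracy are computed in inference mode with dropout disabled, so the reported training accuracy is systematically understated. A toy NumPy sketch of how random feature dropout alone lowers measured accuracy:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup: a "perfect" linear classifier evaluated with and without dropout.
X = rng.normal(size=(1000, 20))
w_true = rng.normal(size=20)
y = (X @ w_true > 0).astype(int)

def accuracy(drop_rate):
    # Randomly zero out input features, as dropout does during training.
    mask = rng.random(X.shape) >= drop_rate
    scores = (X * mask) @ w_true / (1 - drop_rate)   # inverted-dropout scaling
    return float(((scores > 0).astype(int) == y).mean())

print(accuracy(0.0), accuracy(0.5))  # dropout active => lower measured accuracy
```

To compare like with like, one could re-evaluate the training data after training with `model3.evaluate(X_train, y_train)`, which runs in inference mode and gives a number directly comparable to the test accuracy.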
2023/03/12
<issue_start>username_0: Before I describe my challenge, I want to point out that I have searched extensively online for "outlier image detection", "anomaly images detection", etc., but all returned results are about finding anomalies or outliers within an image (e.g., a defect in a machined part, a tear in fabric, etc.). This is not what I am looking for: I am looking to identify images (whole images) that depict an object that is different from the other images. For instance, I have a stream of images from a live camera. Each image depicts a single dog (different breeds) and each image is centered on the dog. A few images depict a cat, rather than a dog... I would like to detect these cat images (outliers). This is not a simple classification problem, where I have a labeled data set of cats and dogs... because I don't want to define the 'outlier'. In other words, the approach must be adaptable to the data: if I were to suddenly add a few images of trash, the model must flag those trash images as novel/outliers because in comparison to the number of dog images, trash images (like cat images) are pretty rare (or new). Would something like sequential k-means work for this problem? I thought about sequential k-means, since it operates on a data stream and will adapt to the number of clusters (i.e, k does not need to be pre-defined). What other approaches would work? Any resources (e.g., links to tutorials, GitHub repositories, specific algorithms, methodologies) that might help?<issue_comment>username_1: Hi there @pookie you can approach this problem using unsupervised anomaly detection techniques. One such technique is to use an autoencoder neural network. The idea is to train an autoencoder on a large set of dog images, and then use the trained network to encode (compress) each image into a lower-dimensional representation. The encoded representations should capture the key features of dog images. 
The network can then be used to reconstruct the original image from the encoded representation. Now, when a new image (e.g., a cat image or a trash image) is presented, the network will not be able to reconstruct it well because it has not seen that kind of image before. The reconstruction error will be higher for the novel image, which can be used to identify it as an outlier. You can use this approach to build a threshold-based anomaly detection system, where any image with a reconstruction error above a certain threshold is considered an outlier. Alternatively, you can use a more sophisticated anomaly detection algorithm like One-class SVM, Isolation Forest, or Local Outlier Factor. Another option is to use clustering techniques like k-means or DBSCAN to cluster the dog images and identify clusters with very few points as outliers. However, this approach may not work well if the outlier images are visually similar to dog images. Here are some resources that may help you get started: * Anomaly Detection with Autoencoders using Keras in Python: [machinecurve/anomalyDetectionAutoencoder-using-Keras](https://www.machinecurve.com/index.php/2020/02/18/how-to-do-anomaly-detection-with-autoencoders-using-keras-in-python/) * One-class SVM for anomaly detection: [scikit-learn/oneClassSVM](https://scikit-learn.org/stable/modules/generated/sklearn.svm.OneClassSVM.html) * Isolation Forest for anomaly detection: [scikit-learn/isolationForest](https://scikit-learn.org/stable/modules/generated/sklearn.ensemble.IsolationForest.html) * Local Outlier Factor for anomaly detection: [scikit-learn/localOutlierFactor](https://scikit-learn.org/stable/modules/generated/sklearn.neighbors.LocalOutlierFactor.html) If you found this helpful, don't forget to upvote :) Upvotes: 2 <issue_comment>username_2: I agree with username_1' answer, but as a preprocessing step I suggest to use some pre-trained image classification network without the last fully-connected layer. 
This will give you robust features which represent the characteristics of the input image. But since these representations are high-dimensional, I'd apply an autoencoder to these features. Actually, I have implemented and blogged about such an approach, but I'm not sure whether links to personal blogs are discouraged or not. Well, at least each answer should work as a stand-alone resource. In my case the images originated from videos, and I wanted to cluster either the frames themselves or video clips. [![video processing pipeline](https://i.stack.imgur.com/ELTcj.png)](https://i.stack.imgur.com/ELTcj.png) I used this for mountain biking videos, which resulted in this kind of 2D map: [![clustered videos](https://i.stack.imgur.com/XdCsr.jpg)](https://i.stack.imgur.com/XdCsr.jpg) Each line represents a different mountain biking / downhill clip. Anyway, in your case I'd use the 2D coordinates for outlier detection. They seem to contain enough interesting data, and are easy to visualize as well! Link to my blog article: [Image and video clustering with an autoencoder](https://nikonyrh.github.io/videoclustering.html). It also has examples on clustering frames from either <NAME>ger or <NAME> films. It could group conceptually similar images together in the 2D plane, from different films. Upvotes: 2
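As a minimal sketch of the reconstruction-error idea described in the answers above, here is a one-component linear "autoencoder" (plain PCA) on made-up 5-D feature vectors; the data, dimensions, and the 0.99-quantile threshold are assumptions for illustration, and a real pipeline would use image features as discussed:

```python
import numpy as np

rng = np.random.default_rng(0)

# Pretend these 5-D vectors are image features (e.g. from a pre-trained CNN).
# "Dog" features vary along one direction; "cat" features shift along another.
dogs = rng.normal(size=(500, 1)) * np.ones(5) + rng.normal(0, 0.1, size=(500, 5))
cats = rng.normal(0, 0.1, size=(5, 5))
cats[:, 0] += 3.0                      # cats deviate along a direction dogs never use

# Linear "autoencoder" (PCA with one component) fitted on the dog stream.
mean = dogs.mean(axis=0)
_, _, vt = np.linalg.svd(dogs - mean, full_matrices=False)
component = vt[0]

def reconstruction_error(x):
    code = (x - mean) @ component               # encode to 1-D
    recon = mean + np.outer(code, component)    # decode back to 5-D
    return np.linalg.norm(x - recon, axis=1)

# Flag anything whose error exceeds a high quantile of the training errors.
threshold = np.quantile(reconstruction_error(dogs), 0.99)
flags = reconstruction_error(cats) > threshold
print(flags)
```

A neural autoencoder replaces the SVD step with an encoder/decoder network, but the thresholding logic is the same; the threshold can be updated online as the stream evolves.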
2023/03/16
<issue_start>username_0: To me it looks like GPT-4 is based on GPT-3. On the other hand, there were rumors that the training of GPT-3 was done with errors, but retraining was impossible due to the costs.<issue_comment>username_1: GPT-4 is largely based on GPT-3. From the [GPT-4 Technical Report](https://arxiv.org/pdf/2303.08774.pdf): > GPT-4 is a Transformer-style model [39] The Transformer-style model originates from the paper [Attention Is All You Need](https://arxiv.org/abs/1706.03762), which lays the foundation of GPT, GPT-2, and GPT-3. *However*, there is one significant change: GPT-4 accepts images as inputs. A similar approach has been described in the paper [Learning Transferable Visual Models From Natural Language Supervision](https://arxiv.org/pdf/2103.00020.pdf) by OpenAI. We can safely predict that GPT-4 is based on GPT-3 + CLIP. I say *predict* because the authors decided not to publish the architecture or anything: > Given both the competitive landscape and the safety implications of large-scale models like GPT-4, this report contains no further details about the architecture (including model size), hardware, training compute, dataset construction, training method, or similar. However, we have no details regarding whether it was trained from scratch or uses any particular technique. Upvotes: 3 <issue_comment>username_2: GPT-4 is a transformer like GPT-3 and any other GPT. The training is certainly new: since the model has a different size, you cannot simply transfer GPT-3 weights into GPT-4 and continue training. The details of the implementation are currently not known; the published report about it is not a scientific paper about how GPT-4 works, and it basically states "we do not say how it works" (see below). The sizes of various components certainly differ, at least the number of weights in one part, but it could also be larger everywhere. The prompt length (context size) and the maximal output size increased a lot, from 4000 tokens to 8000 or even 32000 tokens.
I would expect there are some other minor differences. Other major differences can be the amount of training data, and the compute used for training. My personal speculation is that GPT-4 used a lot more compute for training, with possibly a similar amount of training data as GPT-3. --- From the "GPT-4 Technical Report", section 2: > > Given both the competitive landscape and the safety implications of > large-scale models like GPT-4, this report contains no further details > about the architecture (including model size), hardware, training > compute, dataset construction, training method, or similar. > > > Upvotes: 3 [selected_answer]
2023/03/17
<issue_start>username_0: I do not know at all how AI works. After checking out the first AI system openly available to the public, ChatGPT, I am curious whether systems like this could contribute to scientific theory in the future.<issue_comment>username_1: This has already happened: what directly comes to mind is AlphaTensor, which discovered novel algorithms for matrix multiplication. Matrix multiplication is a computationally expensive and frequently used operation, so it is safe to assume that over the past decades many humans have put effort into finding faster algorithms for it. There are also scientific papers written by an AI, but without a close look at these papers, I would hesitate to claim that they contain novel scientific principles or theories. Upvotes: 4 [selected_answer]<issue_comment>username_2: Yes, from first principles. Discovering new scientific theories is a mixture of following a set of mathematical logic principles and using creativity to come up with something new and useful. Regarding the first criterion, logic and math, it is clear that computers are superior to humans in carrying out long chains of logical inference, thanks to the precision of the hardware on which they are built. Creativity is often mentioned as something differentiating humans from machines, but actually machines have access to the highest level of creativity: random number generators. Nothing is more creative than just randomly picking an option! While current systems are still primitive in manipulating results and in having the big picture necessary to put them to use, the state of the art is rapidly advancing. For instance, given the recent advances in NLP, AIs will soon be able to chain together different concepts according to common patterns, such as the ones used by scientists. Chain-of-Thought techniques are a good example.
Now imagine gluing this NLP system to a source of truth, such as a tool (say, an equation solver or a microscope) able to test the proposed theory with an experiment. By brute-forcing over many hypotheses, you can imagine how these systems may eventually come up with relevant theories. Assuming the above, that is, that LLMs or ChatGPT-like systems are in principle able to formulate valid theories, are these systems in practice able to tame the combinatorial explosion in logic formulas that arises when probing non-trivial theories? I would lean toward saying no: models that are limited to predicting the next word would be inefficient at performing the smart reasoning required to avoid combinatorial explosions, even though you may imagine that clever tricks could be devised to remind the language models of the laws and mental models needed to trim the tree of possibilities at each step of the reasoning process. What could do it, then? I believe a substantial leap in capabilities would come from a fully multimodal model, able to also Chain-of-Thought reason with images, 3D representations, embodiments, and physics simulations. In a sense, these modalities summarise many words at once, cutting down a continuum of possible inference chains. This is often called Visual Thinking. Upvotes: 2
2023/03/19
441
2,003
<issue_start>username_0: I am using a model with linear activation in the hidden layer and non-linear activation in the output layer. Could you please help me understand whether such models exhibit linearity or not? The non-linear activation is required to scale the output.<issue_comment>username_1: You are speaking about generalized linear models (GLMs). Upvotes: 0 <issue_comment>username_2: If you have a `sigmoid` activation in your output layer, and linearities before, you can interpret it as a [logistic regression](https://en.wikipedia.org/wiki/Logistic_regression) model for binary classification (if the output dimension is one.) Also, in practice, having multiple *linear* hidden layers is a waste of resources because you can define an equivalent linear layer whose weight matrix is the product of those. Moreover, intermediate linear layers can be useful to "adjust" (i.e. expand or decrease) the output dimension of the previous layer (including the input layer.) In general, you want to have multiple non-linear layers (i.e. whatever layer with a non-linear activation on top) in the middle of your model (i.e. as hidden layers) because the activation of the output layer is often (if not always) problem dependent: e.g., in regression it can be linear. Having multiple non-linear functions in your neural net allows it to model (learn) more complex functions, exploiting the [universal function approximation](https://en.wikipedia.org/wiki/Universal_approximation_theorem) property (assuming a suitable architecture, loss function, and optimization procedure). Upvotes: 0 <issue_comment>username_3: A non-linear layer is a matrix multiplication followed by an activation function. A linear layer is just a matrix multiplication. You have a matrix multiplication, then another matrix multiplication, then an activation function. Two matrix multiplications in series are equivalent to one matrix multiplication, so your network is equivalent to one non-linear layer. Upvotes: 1
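The point that stacked linear layers collapse into a single one can be checked in a few lines of plain Python — a toy sketch with made-up shapes, no ML library assumed:

```python
import random

def matmul(A, B):
    # Multiply two matrices given as lists of rows.
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

random.seed(0)
W1 = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(4)]  # hidden layer: 3 -> 4
W2 = [[random.uniform(-1, 1) for _ in range(4)] for _ in range(2)]  # output layer: 4 -> 2
x = [[0.5], [-1.0], [2.0]]  # a column-vector input

# Applying the two linear layers in sequence...
two_layers = matmul(W2, matmul(W1, x))
# ...is the same as applying the single collapsed layer (W2 @ W1) once.
one_layer = matmul(matmul(W2, W1), x)

assert all(abs(a[0] - b[0]) < 1e-9 for a, b in zip(two_layers, one_layer))
```

The non-linear output activation is then applied on top of this single equivalent linear map, which is why the whole model behaves like one generalized linear model.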
2023/03/20
441
1,722
<issue_start>username_0: I asked ChatGPT (3.5 and 4) about current date and (s)he answered correctly. In subsequent conversation (s)he was not able to explain how (s)he has this knowledge. I always thought that the model only sees the conversation above and a pretrained NN is used. How is the information about current date injected into his/her knowledge?<issue_comment>username_1: For ChatGPT 3, the current date is inserted into a long pre-prompt, along with instructions like "this is a conversation between an AI chatbot and a human" plus "be nice" and "be truthful", which are part of the attempts to frame the next-word-predicting engine at the core of ChatGPT as a chatbot. OpenAI [confirm this approach in their general GPT documentation](https://platform.openai.com/docs/guides/chat/instructing-chat-models). Inherently, the core of ChatGPT - the GPT large language model - is *not* a chatbot. It has some resemblance conceptually to an image inpainting system — it predicts text that is likely, given preceding text. I expect the same is true of ChatGPT 4, but have not seen any confirmation of this. It is *feasible* in principle to alter architecture of the bots to have them reference external data sources, but I believe that for the current date, a pre-prompt will still be in use. Upvotes: 7 [selected_answer]<issue_comment>username_2: As an AI language model, ChatGPT does not have access to real-time information unless it's provided to it. Its knowledge cutoff date is September 2021. Therefore, it actually relies on the timestamp of the user's message to give the current date. Based on the user's message's timestamp, it assumes the current date. For example 2023-03-23 10:10:00 UTC. Upvotes: 0
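The pre-prompt mechanism described in the accepted answer can be sketched as follows. This is a hypothetical illustration — `build_preprompt`, the wording, and the prompt layout are all made up for the sketch and are not OpenAI's actual implementation:

```python
from datetime import date

def build_preprompt(user_message, today=None):
    # Hypothetical sketch: the service prepends framing instructions plus the
    # current date before the model ever sees the user's message.
    today = today or date.today().isoformat()
    system = (
        "This is a conversation between an AI chatbot and a human. "
        "Be nice and be truthful. "
        f"Current date: {today}."
    )
    return system + "\nUser: " + user_message + "\nAssistant:"

prompt = build_preprompt("What day is it?", today="2023-03-20")
```

The model then simply continues the text after "Assistant:", so the date it "knows" is whatever string was injected here.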
2023/03/21
572
2,592
<issue_start>username_0: We have been working on a deep learning problem for a few iterations now. We've been tweaking the preprocessing as we go. We've also been training models as we go. The people using the models for inference need to do the same preprocessing on their data as we used on our training data. Is there any standard way of 'versioning' the preprocessing code so that it can be linked to the models that used that version? I could use GitHub versions or tags, and maybe that's the best way. Or is there some other technique that is widely used?<issue_comment>username_1: Using GitHub versions and tags is certainly one way to go. However, there are more sophisticated techniques and tools you might want to consider: * **Data Version Control (DVC)**: DVC is a popular open-source tool for versioning data and preprocessing code, specifically designed for machine learning projects. It provides a Git-like interface to manage both data and code, allowing you to link specific versions of your data and preprocessing pipeline with corresponding model versions. You can learn more about DVC at <https://dvc.org/>. * **MLflow**: MLflow is an open-source platform for managing the end-to-end machine learning lifecycle. It includes tools for tracking experiments, packaging code into reproducible runs, and sharing and deploying models. With MLflow, you can track different versions of your preprocessing code, data, and models, and switch between them when needed. You can learn more about MLflow at <https://mlflow.org/>. * **Docker**: Another way to ensure consistency in your preprocessing is by using Docker containers. You can create a Docker image with the required dependencies and preprocessing code, and then use this image for both training and inference. This ensures that the same code and environment are used at each stage. You can then version the Docker images using tags, and store them in a container registry such as Docker Hub or Google Container Registry. 
You can learn more about Docker at <https://www.docker.com/>. Upvotes: 2 [selected_answer]<issue_comment>username_2: Experiment tracking is so important that it's very hard to work together as a team on models without it. It sounds like you'd want to track both your data lineage and your models. Here you can see my comparison of YOLOv5 and YOLOv8 models; everything is stored for all of these models: hyperparameters, metrics, confusion matrix, images, dependencies. I didn't store the code, but it's very easy to do. <https://www.comet.com/kristenkehrer/dogs-and-cats/view/new/panels> Upvotes: 0
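Alongside the tools above, a minimal do-it-yourself sketch of the core idea — linking a model artifact to an exact preprocessing version — is to store a hash of the preprocessing source alongside the model metadata. All names here are hypothetical:

```python
import hashlib

def fingerprint(source_text):
    # Hash the preprocessing source so any change yields a new version id.
    return hashlib.sha256(source_text.encode()).hexdigest()[:12]

preprocessing_v1 = "def preprocess(x):\n    return (x - 0.5) / 0.25\n"
model_card = {
    "model": "regressor-2023-03-21",  # made-up model identifier
    "preprocessing_version": fingerprint(preprocessing_v1),
}

def check(source_text, card):
    # At inference time, refuse to run if the preprocessing code drifted.
    return fingerprint(source_text) == card["preprocessing_version"]

assert check(preprocessing_v1, model_card)
assert not check(preprocessing_v1 + "# tweaked\n", model_card)
```

This is essentially what DVC and MLflow automate for you, with proper storage, lineage, and collaboration on top.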
2023/03/22
804
3,219
<issue_start>username_0: Before the advent of neural architectures, many AI domains (e.g. speech recognition and computer vision) used algorithms that consisted of a series of hand-crafted transformations for feature extraction. * In speech recognition everything to do with spectrograms or cepstra is done based on the mathematical theory of (discrete) Fourier transforms. * In computer vision, edge detectors like Sobel/Canny or feature descriptors like SIFT are all convolution-based. You can implement all of these with everyday procedural programs. All the intelligence of these methods is embedded within the steps of the program, and not in its parameters. One thing I have been wondering about is what you would call such algorithms that work "out of the box". My inclination is to call them *deterministic*, but this is clearly wrong considering most neural networks (not including e.g. VAEs) are deterministic functions of their input too. Perhaps some might call them *dumb*, but that's also not right, because these algorithms are very sophisticated and purposefully made. They're not *naive* like naive Bayes is naive. It's not because they aren't trained that they aren't *artificially intelligent*. *Traditional* or *classical* is too vague, because neural nets have existed for more than half a century and could be considered that too. The same goes for *algorithmic* and *procedural*. Is there standard terminology to delimit the set of such artificially-intelligent-but-not-data-driven algorithms?<issue_comment>username_1: How about **handcrafted**, as you mentioned? In the following question it is opposed to learned: <https://datascience.stackexchange.com/questions/54390/what-is-the-difference-between-handcrafted-and-learned-features> Upvotes: 2 <issue_comment>username_2: The word "Artificial intelligence" refers to machines being able to have intelligence like that of humans/animals.
The meaning of the word was even discussed on [this site](https://ai.stackexchange.com/questions/7838/what-is-artificial-intelligence). So it's up to your interpretation what counts as human/animal-like performance - you could even argue the examples you provide show human-like performance. There is this well-known Venn diagram showing the difference. [![ source ](https://i.stack.imgur.com/t8nTd.png)](https://i.stack.imgur.com/t8nTd.png) But personally I wouldn't call the examples you mention artificially intelligent. The algorithms you mention are just a set of complex instructions. > > All the intelligence of these methods is embedded within the steps of the program, and not in its parameters. > > > While intelligent authors made the algorithms, that's not what the word "artificially intelligent" refers to. > > It's not because they aren't trained that they aren't artificially intelligent. > > > Again, it's up to you what you perceive as intelligence like that of humans/animals. But if an algorithm has trainable weights, then it's classified as Machine Learning. A Machine Learning agent can still be so useless that no one would classify it as "Artificial intelligence". **I would just call the algorithms you refer to a "complex algorithm" or a "non-ML method".** Upvotes: 2
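For concreteness, here is the kind of "complex instructions, no trainable weights" algorithm the question is about — a minimal Sobel edge detector in plain procedural Python. This is a sketch; real implementations use optimized libraries:

```python
# The Sobel x-kernel is fixed by design, not learned from data.
SOBEL_X = [[-1, 0, 1],
           [-2, 0, 2],
           [-1, 0, 1]]

def conv2d_valid(image, kernel):
    # Plain procedural 2D convolution (technically cross-correlation), no padding.
    kh, kw = len(kernel), len(kernel[0])
    out = []
    for i in range(len(image) - kh + 1):
        row = []
        for j in range(len(image[0]) - kw + 1):
            row.append(sum(image[i + di][j + dj] * kernel[di][dj]
                           for di in range(kh) for dj in range(kw)))
        out.append(row)
    return out

# A tiny image with a vertical edge: dark on the left, bright on the right.
img = [[0, 0, 9, 9],
       [0, 0, 9, 9],
       [0, 0, 9, 9],
       [0, 0, 9, 9]]
edges = conv2d_valid(img, SOBEL_X)  # strong response where the edge is
```

All the "intelligence" — that a signed horizontal gradient highlights vertical edges — sits in the hand-chosen kernel values, i.e. in the steps of the program, not in parameters fitted to data.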
2023/03/22
473
1,774
<issue_start>username_0: Where do you insert layer norm in a residual block? After the addition or before the activation function (RELU in this case)?<issue_comment>username_1: Two things: * Layer Norm hadn't been invented yet when ResNet came out; ResNet uses the regular Batch Norm. * A model that uses Layer Norm in its residual blocks is ConvNeXt. Based on [this line](https://github.com/facebookresearch/ConvNeXt/blob/048efcea897d999aed302f2639b6270aedf8d4c8/models/convnext.py#L18), it applies LayerNorm after the first Conv layer, before the activation layer. Upvotes: 1 <issue_comment>username_2: Usually you insert the normalization layer (be it BatchNorm, LayerNorm or whatever) after the convolutional layer and before the activation layer, i.e. Conv + Norm + ReLU. The original ResNet applies the addition (skip connection) just before the last ReLU, but this design was revised in a [follow-up paper](https://arxiv.org/pdf/1603.05027.pdf) by the same authors. You can also see [here](https://username_2.github.io/posts/res-nets/#the-residual-block-empowering-deeper-networks) for figures and explanations. Regarding using skip connections with fully-connected layers, I think you can take a look at the transformer architecture. The transformer encoder (and decoder) architecture uses skip connections around fully connected layers. It also uses LayerNorm instead of BatchNorm. However, with this architecture there are again debates where exactly to place the LayerNorm layer. [This paper](https://arxiv.org/pdf/2002.04745.pdf) suggests using Pre-LayerNorm instead of Post-LayerNorm, because it stabilizes the training of the Transformer model. You can also see [here](https://username_2.github.io/posts/transformer/#encoder-block) for figures and explanations. Upvotes: 0
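The Conv + Norm + ReLU ordering, with the skip connection added just before the final activation (the original ResNet placement), can be sketched in plain Python. This is a toy 1D stand-in: `linear` replaces the conv layers, and the vector sizes are made up:

```python
import statistics

def layer_norm(x, eps=1e-5):
    # Normalize a vector to zero mean and unit variance.
    mu, sigma = statistics.mean(x), statistics.pstdev(x)
    return [(v - mu) / (sigma + eps) for v in x]

def relu(x):
    return [max(0.0, v) for v in x]

def linear(x, scale=2.0):
    # Stand-in for a conv layer: any fixed linear map works for the sketch.
    return [scale * v for v in x]

def residual_block(x):
    # Original ResNet ordering: Conv -> Norm -> ReLU -> Conv -> Norm,
    # then add the skip connection, then apply the final ReLU.
    out = relu(layer_norm(linear(x)))
    out = layer_norm(linear(out))
    out = [o + xi for o, xi in zip(out, x)]  # addition before the last ReLU
    return relu(out)

y = residual_block([1.0, -2.0, 3.0, 0.5])
```

The Pre-LN variant mentioned in the thread would instead normalize the block's input and add the skip connection last, with no activation after the addition.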
2023/03/22
543
2,067
<issue_start>username_0: Basically, I'm wondering if there are any small and simple problems that are: * complex enough to be **unsolvable** with a standard neural network without any hidden layer (ie. input -> output) * simple enough to be **solvable** with a standard neural network consisting of exactly one hidden node in exactly one hidden layer (ie. input -> one hidden node -> output) Can such problems even exist at all? If not, why?
2023/03/22
1,543
6,334
<issue_start>username_0: How can GPT 4 solve complex calculus and other math problems. I believe these problems require analytical reasoning and ability to compute numbers. Does it still use a LLM to complete this process or does it add on to this? [Here](https://openai.com/research/gpt-4) is the link to the official results published by OpenAI<issue_comment>username_1: There is a folk story about [<NAME>](https://en.wikipedia.org/wiki/Josiah_Willard_Gibbs) that goes something like: > > Being a famous scientist, Gibbs was a member of a number of scientific bodies. He was bored by those and never took a podium. Except for one time. The discussion was about redirecting some effort from teaching mathematics towards more effort at teaching foreign languages. Gibbs decided to give a speech that one time. He said: *"Mathematics is a language."* > > > I don't know if this story is true or not, but I share the attitude. Upvotes: 0 <issue_comment>username_2: As far as we know, GPT-4's core capabilities are still based mainly on a Large Language Model (LLM). If so, then the apparent capabilities to reason are a somewhat surprising emergent phenomenon from a well-trained sequence prediction engine that has been trained on large amounts of data, and has capacity to create highly complex rules that approximate a "next symbol oracle". Again, assuming this assertion is correct, then maths and logic capabilities of ChatGPT divide into a few different possibilities (these are not formal classifications, just my amateur analysis): * Rote learning of symbol associations. This is likely to occur with commonly-occurring items in the training data. Special values of trigonometry functions for example. * Things that look like logical reasoning, but are simply well-formed sentences that are on-topic. This is something we can easily be fooled by.
When ChatGPT gives an explanation for a thing, it may not have any representation of it beyond being in an "explainy" state, and generating text that fits. * Approximate rules and processes. The LLM is a complex neural network, and can in principle learn arbitrary internal functions in order to predict sequence continuation. It learns these statistically, and there will be limitations - for example it is unlikely that it could learn to produce cryptographic hashes given only examples. But it may really learn to add two numbers across a wide range of numerical values given thousands of examples. * Accurate rules and processes. Some rules in maths are very language-like and could be learned very well by an LLM. That could include some mathematical symbol manipulations such as variable substitution. I expect that all the above are occurring in some mix. For example, you could conjecture that there is a reasonable chance that GPT can internally count accurately up to some moderate number, and re-use that ability in different contexts to predict numerical symbols and associated words (e.g. `one` also has the representation `1`). It may also contain more than one such semi-accurate counter, using them in different contexts. The sheer quantity of training material - more than any single person could consume in their lifetime - plus the learning capacity of the neural network means that there are probably a lot of simple rote rules that are subject-dependent. However, in some areas the model will have a "deeper understanding" in that it will have learned reasonably complex manipulations, and used them to predict a sequence of symbols as accurately as possible, using as little of its learning capacity as possible (because it is being asked to predict text in a huge range of contexts, so benefits when it compresses its rules). GPT has not learned primarily by reasoning and from first principles though.
Its inner representations and logical units of work are likely to be quite alien to humans, and may freely combine grammar, sentiment, mathematical building blocks and other symbolic context in ways that could seem very odd if they could even be explained. This heady mix of things, which occurs in most neural networks during training, is one reason why it is unlikely that OpenAI have wired in separate logic modules for highly structured processing such as math symbols or calculations. Providing such modules is possible, but detecting when to use them, and how to wire them into the network are both hard problems. Upvotes: 2 <issue_comment>username_3: ChatGPT now uses Wolfram Alpha to deal with math as well as other factual information. <https://writings.stephenwolfram.com/2023/03/chatgpt-gets-its-wolfram-superpowers/> Upvotes: 2 <issue_comment>username_4: Large Language Models actually can do math. It's an "emergent" property, i.e. it appears only at larger scales. Understanding complex English language does require some analytical ability, which can carry over to math tasks like calculus and even arithmetic. Numbers can be represented as words, so it's definitely not unthinkable that an LLM could learn to add and subtract if it were to see enough examples. The graph below from the 2022 paper ["Emergent Abilities of Large Language Models"](https://arxiv.org/pdf/2206.07682.pdf) shows that these properties "spontaneously" emerge as models get larger. We're interested in subgraph (A) here. Up to 10^22 FLOPs, the models studied (the largest models available at the time) have basically no arithmetic ability, but scaling the models further rapidly improves their capabilities. We don't know the internals of GPT-4, but it should be larger than these models, so it was expected that it would be better at arithmetic.
It also goes the other way around, [Numeracy enhances the Literacy of Language Models](https://aclanthology.org/2021.emnlp-main.557.pdf) [![Emergent Properties of Large language models](https://i.stack.imgur.com/ITNQK.png)](https://i.stack.imgur.com/ITNQK.png) Upvotes: 4 <issue_comment>username_5: OpenAI's CEO [explicitly mentioned](https://www.youtube.com/live/outcGtbnMuQ?feature=share&t=1317) last month in the GPT-4 announcement video that GPT-4 isn't hooked up to a calculator. One can however install plugins on top of ChatGPT, which may connect it to some other resources such as Wolfram as mentioned in [username_3](https://ai.stackexchange.com/users/24014/jaume-oliver-lafont)'s answer. Upvotes: 2
2023/03/23
557
2,342
<issue_start>username_0: People often cite the universal approximation theorem as a reason for why neural networks are so effective at capturing patterns or features of various training data. However, this seems unremarkable to me, because something like Fourier series are also able to approximate almost any function between compact domains of Euclidean spaces. So my question is, what makes neural networks different from something like Fourier analysis where we can approximate any sufficiently nice function we like as well? Am I not understanding the universal approximation theorem, or are there justifications for the power of neural networks that go deeper than talk about approximation?<issue_comment>username_1: > > People often cite the universal approximation theorem as a reason for why neural networks are so effective at capturing patterns or features of various training data. > > > There is an opinion that this is completely missing the point. It doesn't really matter how well you can approximate your data - a lookup table that completely remembers your data can perform with 100% accuracy. What matters is the generalization capabilities of your functional family: how well it interpolates and extrapolates beyond its training observations. Speaking about the Fourier transform - I think one can even say that the [Gibbs phenomenon](https://en.wikipedia.org/wiki/Gibbs_phenomenon) can be considered as a very early example of people noticing such "bad generalization" behavior. Why deep neural networks are so (comparatively) good at generalization? There's still a discussion about it. There's the [Manifold hypothesis](https://en.wikipedia.org/wiki/Manifold_hypothesis), there's the [kernel machines](https://arxiv.org/abs/2012.00152) interpretation, there are studies of [training landscapes](https://www.sciencedirect.com/science/article/pii/S0370157321001290) of deep neural nets.
There are even takes that employ [criticality analysis](https://arxiv.org/abs/2106.10165) from physics. To me it looks like all these approaches are parts of a bigger picture that we are yet to see. Upvotes: 3 [selected_answer]<issue_comment>username_2: It's worth noting that the Fourier series analogy was used in early explorations of universal approximation theorems <https://ieeexplore.ieee.org/document/23903>. Upvotes: 2
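To make the comparison in the question concrete, here is a Fourier partial sum approximating a square wave in plain Python. Approximation quality improves with more terms — which is exactly why approximation power alone does not distinguish neural networks. The sampling interval deliberately stays away from the jumps, where the Gibbs overshoot mentioned above never goes away:

```python
import math

def square_wave(t):
    return 1.0 if math.sin(t) >= 0 else -1.0

def fourier_partial_sum(t, n_terms):
    # Partial sum of the square wave's Fourier series:
    # (4/pi) * sum over odd k of sin(k t) / k.
    return (4 / math.pi) * sum(math.sin(k * t) / k for k in range(1, 2 * n_terms, 2))

def max_error(n_terms, points=200):
    # Sample away from the jump discontinuities to avoid the Gibbs overshoot.
    ts = [0.3 + i * (math.pi - 0.6) / points for i in range(points + 1)]
    return max(abs(square_wave(t) - fourier_partial_sum(t, n_terms)) for t in ts)

errors = [max_error(n) for n in (5, 50, 500)]  # shrinks as terms are added
```

Both families can drive the approximation error to zero on the training region; the interesting question, as the answer argues, is what each family does *between and beyond* the observed points.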
2023/03/24
527
2,074
<issue_start>username_0: I am keen on creating a little project that can play a fairly basic 2D game (more complex than, say, snake but not as complex as Mario Kart) and would like some pointers on where to begin. I'm entirely new to any coding/programming but have a basic knowledge in machine learning. Is the first step to learn Python, or Unity? Then from there, what would I look at learning next? Any pointers are appreciated.
2023/03/27
724
2,486
<issue_start>username_0: In a NN regression problem, considering that MSE is squaring the error and the error is between 0 and 1, would it be pointless to use MSE as our loss function during model training? For example:

```
MSE = (y_pred - y_true) ^ 2

@ Expected model output range [0, 1]:
MSE = (0.1 - 0.01) ^ 2 = 0.0081
// Significantly larger error is less pronounced in the MSE output
MSE = (0.1 - 0.0001) ^ 2 = 0.00998001

@ Expected model output range [10, 20]:
MSE = (10 - 12) ^ 2 = 4
// Significantly larger error is more pronounced in the MSE output
MSE = (10 - 20) ^ 2 = 100
```

If it’s indeed useless for that range, would using RMSE allow us to use this loss function at the 0-1 range to benefit from its outlier sensitivity during training, or is there another loss that would mimic the effect of MSE for values between 0 and 1?
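A quick numerical check of the question's premise (a sketch with made-up numbers): squaring does shrink sub-1 errors in absolute terms, but it preserves their ordering, and rescaling the whole problem only multiplies the loss by a constant — so the location of the minimum, which is all training cares about, is unchanged.

```python
def mse(y_pred, y_true):
    return sum((p - t) ** 2 for p, t in zip(y_pred, y_true)) / len(y_true)

y_true = [0.10, 0.40, 0.80]
pred_a = [0.12, 0.38, 0.81]   # small errors
pred_b = [0.30, 0.10, 0.50]   # larger errors

# MSE values are tiny on a [0, 1] range, but the ordering is preserved:
# the worse predictor still gets the larger loss.
small = mse(pred_a, y_true)
large = mse(pred_b, y_true)

# Rescaling the whole problem by 100 multiplies every MSE by 100^2,
# leaving all comparisons between predictors unchanged.
scaled_small = mse([100 * p for p in pred_a], [100 * t for t in y_true])
```

RMSE is a monotone transform of MSE, so both rank candidate models identically; the choice mainly affects gradient magnitudes (which a learning rate can absorb), not which solution is optimal.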
2023/03/27
756
2,713
<issue_start>username_0: Assuming the user can set all parameters, including but not limited to the seed. Is the output deterministic? As in, the same set of inputs will create the same image?<issue_comment>username_1: Yes. By setting the seed you can control the reproducibility. See the [Guide to using seed in Stable Diffusion](https://getimg.ai/guides/guide-to-seed-parameter-in-stable-diffusion) [![enter image description here](https://i.stack.imgur.com/YaPEr.png)](https://i.stack.imgur.com/YaPEr.png) With all parameters fixed except for the seed, the output will have some degree of randomness. Using the prompt "dogs chasing cars in Alaska" three times I got the following on Stable Diffusion v1.5: [![enter image description here](https://i.stack.imgur.com/WrFPo.jpg)](https://i.stack.imgur.com/WrFPo.jpg) [![enter image description here](https://i.stack.imgur.com/rau8D.jpg)](https://i.stack.imgur.com/rau8D.jpg) [![enter image description here](https://i.stack.imgur.com/wMR1z.jpg)](https://i.stack.imgur.com/wMR1z.jpg) Upvotes: 2 <issue_comment>username_2: Even with all parameters fixed, I have gotten slightly different results with SD 1.4 at least when generating a batch of images. I haven't done extensive testing on this, but even a single sample is sufficient to show that the results weren't identical. The used code is a private fork from [neonsecret/stable-diffusion](https://github.com/neonsecret/stable-diffusion). Eight of the nine images were pretty much identical, but for some reason the middle row's right image showed some variation between runs on the hand. I also tried running a single image once, and this time the result was identical. SD 1.4, "Indian girl on a flower field, digital art, realistic, highly detailed, concept art", CFG 3, 50 steps, seed 2347886331, DDIM sampler. The gamma of difference images was set to 3.0 in GIMP, to enhance small deviations. 
[![SD girl on field comparison](https://i.stack.imgur.com/Nhajd.jpg)](https://i.stack.imgur.com/Nhajd.jpg) Actually it would be better to run a larger batch of identical seeds and see whether they are identical or not. I did these via copy-paste from a free SD website I run. It would be interesting to know if these results can be reproduced on different code-bases. My fork deviates quite a lot from the original one, due to several extra optimizations. Edit 1: The image comparison is done on lossy-compressed WebP images, with a quality setting of 95%. Upvotes: 0 <issue_comment>username_3: It depends. It is reproducible on the same hardware, but if you try running on different operating systems or different Torch devices (CPU and CUDA, CUDA and MPS), the results are not the same. Upvotes: 0
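The role of the seed can be illustrated with any pseudo-random generator; here plain Python's `random` stands in for the sampler's initial noise source (a sketch only — Stable Diffusion actually draws its latent noise via the Torch RNG, which is where the hardware/device dependence mentioned above comes in):

```python
import random

def sample_noise(seed, n=5):
    # Stand-in for the initial latent noise a diffusion sampler draws:
    # the same seed always yields the same "noise", hence the same image,
    # all other parameters being equal.
    rng = random.Random(seed)
    return [rng.gauss(0.0, 1.0) for _ in range(n)]

a = sample_noise(2347886331)
b = sample_noise(2347886331)  # identical to a
c = sample_noise(2347886332)  # a different seed gives different noise
```

Everything downstream of this draw is deterministic arithmetic in principle; in practice, floating-point kernels on different devices or code-bases can introduce the small deviations reported in this thread.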
2023/03/27
871
3,404
<issue_start>username_0: I believe that it is recommended to use a tiny bit of temperature with GPT-3, like 0.2 or so, even for noncreative tasks (I am not entirely sure why). Last I checked, and if I remember correctly, the examples from OpenAI on their GitHub page use 0 temperature. Is there any benefit in choosing a non-zero temperature for the ChatGPT API when the query does not request a creative task? If so, are there some categories or examples? [EDIT: to make the answer less subjective, perhaps I could ask what are the benefits of increasing the temperature in the ChatGPT API]<issue_comment>username_1: Temperature = 0 means deterministic. The same input gives the same output every time. If you are fine with fixed outputs, go for temperature = 0, since it's easier to debug. Upvotes: 0 <issue_comment>username_2: Like with many things, *it depends* on what you're trying to accomplish. The two major parameters you can tweak are: * [**`temperature`**](https://platform.openai.com/docs/api-reference/chat/create#chat/create-temperature) - Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. * [**`top_p`**](https://platform.openai.com/docs/api-reference/chat/create#chat/create-top_p) - An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top\_p probability mass.
So 0.1 means only the tokens comprising the top 10% probability mass are considered In terms of some reference values, the forum post on [Mastering Temperature and Top\_p in ChatGPT API](https://community.openai.com/t/cheat-sheet-mastering-temperature-and-top-p-in-chatgpt-api-a-few-tips-and-tricks-on-controlling-the-creativity-deterministic-output-of-prompt-responses/172683) provides the following examples: | Use Case | Temp | Top\_p | Description | | --- | --- | --- | --- | | Code Generation | `0.2` | `0.1` | Generates code that adheres to established patterns and conventions. Output is more deterministic and focused. Useful for generating syntactically correct code. | | Creative Writing | `0.7` | `0.8` | Generates creative and diverse text for storytelling. Output is more exploratory and less constrained by patterns. | | Chatbot Responses | `0.5` | `0.5` | Generates conversational responses that balance coherence and diversity. Output is more natural and engaging. | | Code Comment Generation | `0.3` | `0.2` | Generates code comments that are more likely to be concise and relevant. Output is more deterministic and adheres to conventions. | | Data Analysis Scripting | `0.2` | `0.1` | Generates data analysis scripts that are more likely to be correct and efficient. Output is more deterministic and focused. | | Exploratory Code Writing | `0.6` | `0.7` | Generates code that explores alternative solutions and creative approaches. Output is less constrained by established patterns. | **Note**: It is generally recommended that you set *either* temp or top\_p, but ***not both***. So for the table above, the values can be seen as roughly equivalent ways to achieve the desired outcome. Another reason you might include a higher temperature is if the user indicates the first response is wrong and you want to attempt again, but get a different response. Although re-prompting with the new information can also work as well. Upvotes: 2
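Mechanically, the temperature divides the logits before the softmax: low values sharpen the distribution toward the single most likely token (hence the near-determinism at 0), while high values flatten it. A minimal sketch with made-up logits:

```python
import math

def softmax_with_temperature(logits, temperature):
    # Divide logits by T before the softmax: T -> 0 approaches argmax
    # (deterministic); larger T flattens the distribution (more random).
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    z = sum(exps)
    return [e / z for e in exps]

logits = [2.0, 1.0, 0.2]  # hypothetical next-token scores
cold = softmax_with_temperature(logits, 0.2)  # top token dominates
warm = softmax_with_temperature(logits, 1.0)
hot = softmax_with_temperature(logits, 2.0)   # probabilities flatten out
```

Sampling from the "cold" distribution almost always picks the top token, which is why low temperatures suit code generation, while the "hot" one spreads probability mass for more varied, creative output.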
<issue_start>username_0: I know AI is primarily about training a machine with samples of input-output so that it learns the relations between the input and the output itself. What if I manually add the relations? Can that still be called a type of AI? If so, what type is that within the context of AI classification? In other words, do manually bound inputs and outputs belong to some "official" sort of modern AI, due to specific case(s) when we need such manual binding instead of training a machine? Or is AI exclusively about creating relations through machine learning? For example, if I save some text notes (output) and manually add multiple tags to each of the notes in order to find a note in the future by a few tags inputted (input), is such a search system a type of AI? Or can I only call it "intelligent", but not AI-like? Where is the "official" difference?<issue_comment>username_1: AI is not exclusive to machine learning. Back in the early days of AI, in the 1960's or thereabouts, they were programming rules-based systems. A lot of work was done on manipulating facts and using combinations of facts to derive new facts. In 1972 the [Prolog](https://en.wikipedia.org/wiki/Prolog) language was created, which does exactly this - the programmer defines a bunch of rules and then the interpreter finds ways to answer queries using the rules. (E.g. "Tom is a cat. All cats are animals. What animals exist?" -> "Tom" [but written in Prolog and not in English]) There was also work on natural language processing - you can see e.g. [SHRDLU](https://en.wikipedia.org/wiki/SHRDLU). It was like ChatGPT but it only knew about stacking blocks, and it was written in 1968 using Lisp. `PUT THE BLUE CYLINDER ON THE RED CUBE. WHERE IS THE BLUE CYLINDER?`. Today we're quite impressed to see ChatGPT answer this, but it's also easy to see how it could be done simply by adding enough rules to the system. SHRDLU would not only answer it, but also do it with a robot arm. 
Oh look, they [also made DALL-E in 1985](https://en.wikipedia.org/wiki/AARON). Nowadays, AI and Machine Learning are near-synonymous, because the other AI stuff - such as natural language processing - is no longer called "AI". Your tag search system is not AI in any sense of the word - it's just a tag search system. Such rigid mechanical systems were never called AI. Upvotes: 2 [selected_answer]<issue_comment>username_2: It seems to be a reasonable idea combining machine learning with predefined knowledge. A recent example of this symbiosis is [Wolfram plugin](https://writings.stephenwolfram.com/2023/03/chatgpt-gets-its-wolfram-superpowers/) to provide hard-coded mathematical and data information into the conversational capabilities of ChatGPT. This addition improves truthfulness in the enhanced hybrid (ML+handcrafted) AI system. Upvotes: 0
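The Prolog-style derivation in the first answer ("Tom is a cat; all cats are animals; what animals exist?") boils down to forward chaining: repeatedly applying rules to known facts until nothing new can be derived. A toy Python sketch of that idea (not an actual Prolog engine — the fact/rule representation here is invented for the example):

```python
# Facts are (predicate, subject) pairs; each rule says "everything that is
# a <premise> is also a <conclusion>", e.g. every cat is an animal.
facts = {("cat", "Tom")}
rules = [("cat", "animal")]

def forward_chain(facts, rules):
    """Repeatedly apply rules until no new facts can be derived."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for premise, conclusion in rules:
            for pred, subj in list(derived):
                if pred == premise and (conclusion, subj) not in derived:
                    derived.add((conclusion, subj))
                    changed = True
    return derived

all_facts = forward_chain(facts, rules)
print([subj for pred, subj in all_facts if pred == "animal"])  # ['Tom']
```

No statistics, no learned weights — the "intelligence" is entirely in the hand-written rules, which is exactly the classical symbolic-AI style the answer describes.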
<issue_start>username_0: I have a very rough understanding of the "attention/self attention" mechanism of transformer models and how this can be used to process a set of word vectors provided as an input/prompt to the encoder of a network and how this will produce "attention weights" for the word vectors based on positional encodings and some other learnable parameters (key/query/value transforms). And then these can be "fed" to the decoder part of the network which will also consider word vectors that have been produced by the decoder so far and influence word selection by paying special attention to particular word combinations. However LLMs clearly produce words in their output/response that do not occur anywhere inside the "prompt" text. So they must be using these "attention weights" to consider words from a wider vocabulary, which could be quite large. Is it the case that the decoder "considers" each possible word in its entire vocabulary when producing an output word? For example I'm imagining an input layer to a NN with several thousand nodes (one per word vector in dictionary) on the input then these are "combined" through some operation with attention weights (from the encoder and decoder "attention section") producing values for most word vectors that are very low (so below the threshold for some activation function) but each word is still "considered" to an extent? Or are only a subset of words considered in some way?
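For what it's worth, the last step works roughly as the question guesses: the decoder's output vector for the current position is projected through a linear "unembedding" layer that produces one raw score (logit) per vocabulary entry, and a softmax turns those logits into a probability for *every* token — so every word in the vocabulary is "considered" at every step, most of them with near-zero probability. A minimal numpy sketch with toy sizes and random weights (illustrative only, not any real model):

```python
import numpy as np

rng = np.random.default_rng(0)
d_model, vocab_size = 8, 1000                   # toy dimensions

hidden = rng.normal(size=d_model)               # decoder output at this position
W_out = rng.normal(size=(d_model, vocab_size))  # "unembedding" projection

logits = hidden @ W_out                         # one raw score per vocabulary token
probs = np.exp(logits - logits.max())
probs /= probs.sum()                            # softmax: a probability for EVERY token

next_token = int(np.argmax(probs))              # greedy decoding picks the max
```

There is no hard threshold: even the unlikely tokens keep a tiny nonzero probability, which is how sampling with temperature can occasionally produce them.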
<issue_start>username_0: Why do LLMs learn so fast during inference but, ironically, so slowly during training? That is, if you teach an AI a new concept in a prompt, it will learn and use the concept perfectly and flawlessly, through the whole prompt, after just one shot. Yet, if you train it on just a single sample, it will not influence its behavior at all - it will essentially forget. Why can't RNNs use whatever is happening during inference, rather than gradient descent, to update their weights and, thus, learn? In other words, can't the attention mechanism itself be used to update weights, rather than some cost function?<issue_comment>username_1: There is a huge difference between what happens with the information during training and during inference, and one cannot be used for the other. Let me start with an analogy to the human brain (which is not a very good analogy and has several flaws, but it gives an intuition I will later build on): If I tell you "*A palindrome is a number, a word or a sentence that reads the same forward and backward. One example palindrome is 15651*" then you will now know what a palindrome is and you will be able to work with this new concept. If I tell the same to a newborn baby, it will not. It takes years to bring a baby to the point that it is able to understand my previous sentence. Enough of the analogy. Let's have a look at RNNs: RNNs are neural networks with weights. Unlike some other networks, they have something that you can call an internal state. Weights and internal state are different: 
* The internal state serves as a memory that stores information about the previously processed input, e.g. a new concept that was explained earlier. 
* The weights define how new information changes the internal state and how input and internal state produce some output. 

So untrained neural networks typically have randomly initialized weights. New input will then cause arbitrary output and updates of the internal state. 
So if you give a new concept to an untrained neural network as an input, it will not learn the new concept, but will update the internal state into meaningless numbers and produce random gibberish as output. If you train the model, the weights are updated to serve a given purpose. This can be a relatively simple model, e.g. one that detects whether a tweet is positive or negative. In this case, the network would only be trained on tweets, and the internal state would only represent the positivity of the previous words and maybe whether the last word was "not", to distinguish "I am happy" from "I am not happy". Probably much more detailed and not so easy to interpret, but something like this. But if you build an LLM, you will train on much more heterogeneous data and for tasks that involve understanding new concepts. In this case the weights of the model will be learned in a way that lets the network process new concepts and store the essence of a concept in the internal state. **In short**: teaching the network new concepts (which is an update of the internal state) can only happen because a long prior training of the weights enables the LLM to do so. **Avoiding Backpropagation**: There is some recent work that explores new ways of training that avoid the costly backpropagation and instead perform two forward steps: [the forward-forward algorithm](https://arxiv.org/abs/2212.13345). To my knowledge it is not used for LLMs yet, and even if it were, one would still need to train on a huge amount of data to learn weights that allow the network to process new concepts as input. Upvotes: 4 <issue_comment>username_2: They are not "learning" during inference at all. Learning is the process of updating the weights of the model (to lower loss). This does not happen during inference. The model weights stay the same. When you are "teaching an AI a new concept", you are just giving it some context, which improves its ability to answer what happens next. But weights do not get updated. 
That is the computationally expensive part. In a meta context, I guess you can call this "learning", and this is effectively what is being done with Microsoft's new venture with GPT. They are letting it search things on the fly. Probably a lot of interesting research will surface using such techniques soon :) Upvotes: 3 <issue_comment>username_3: It's kind of like short-term memory versus long-term memory. Giving a language model a small amount of information at inference time allows it to use that information, and so you might say that the model has "learned" that information, but this "learning" isn't really useful in the long term. For RNNs, the problem is that the state vector only contains a limited amount of information. You can tell an RNN something once, but as you give it more information, it will forget what you told it previously. So if you have a large amount of information that you want your RNN to be able to access, then providing that information as input during inference won't do the trick; you need to train it. For transformers, the problem is that the amount of time it takes the model to process a token of input is proportional to the number of tokens it's already processed. If you have just a small amount of information that you want the transformer to learn, that's not a problem, but if you try to give a transformer a very large amount of information as input, that will make inference very slow. Note that language models *are* sometimes permanently "taught" things by means of input instead of training. For example, it's been reported that ChatGPT and Bing Chat have a hard-coded prompt that's always present at the beginning of the input, and which contains some information about what the developers want the model to do. Upvotes: 2 <issue_comment>username_4: As pointed out by others, what you call "learning" at inference, is nothing more than providing more context. 
The model can indeed memorize in its short-term memory, but that only works for the current task at hand. You suggest that we could make a model with an infinite contextual memory, but then it would mix up all tasks together. It would literally be like if you had to recite all the numbers you ever calculated or counted or saw before starting a new calculation. Hence, contextualization is only useful for short-term tasks, and it works only thanks to the slow learning phase you have to do the first time around, which is more formally called the "convergence process". So, what you are looking for is in fact to make the convergence process faster, and more precisely one-shot or zero-shot learning. If you don't just look at LLMs (Large Language Models) and RNNs (Recurrent Neural Networks), there are a lot of other AI models that can do one-shot or even zero-shot learning, such as memory models like the Gripon-Berrou neural network. One-shot learners can learn the first time they see an example, and generalize over it. Zero-shot learners can even learn without being presented examples, by generalizing from others, or by transferring knowledge from another field. For example, [Text2Video-Zero](https://github.com/Picsart-AI-Research/Text2Video-Zero) is a recently published text-to-video generator, which did NOT learn from any video, but instead reused the weights from Stable Diffusion trained on still images. What this algorithm does is cleverly generalize learning from still images into a coherent sequence of images with the same style, hence mimicking motion, with no additional cost. Of course, it's not completely zero-shot, because it has to be provided with a Stable Diffusion weights model first, but essentially zero-shot learning means that you can reuse one model that was made for one purpose for another purpose, for free (ie, you can directly infer, no need to re-learn anything). 
Technically, one/zero-shot learning typically requires another kind of architecture, more brain-like (ie, with discrete 0/1 synaptic weights). The long convergence processes are usually required by networks using floating weights (ie, the [McCulloch-Pitts neurons](https://towardsdatascience.com/mcculloch-pitts-model-5fdf65ac5dd1)). Floating weights are not at all biologically plausible; they are a mathematical abstraction that synthesizes several functions of biological neural networks into fewer abstractions that are more amenable to programming. Likewise, convolution layers in CNNs (convolutional neural networks) are another abstraction of how biological systems integrate big populations of neurons, but here we can use a much smaller population of artificial neurons, and use more optimized instruction sets to do the same work as the brain does. You have to keep in mind that for a lot of purposes in AI, current computers are much less efficient than the human brain, hence why all these highly synthetic reproductions, more optimized for the machine but very remote from how real biological systems work, are necessary. Here, long convergence (ie, long learning) is an unavoidable artifact of how we model our artificial neurons and synapses, with floating numbers instead of discrete (binary) ones, and with mathematical functions for integration instead of analog biological integration (which is both more fine-grained and simpler than numerical functions; see for example the videos by Veritasium about analog computers - biological systems have similar properties and advantages). RNNs are kind of the opposite approach and problem, because they use a more biologically plausible property, recursivity, but the problem is that we have a hard time defining artificial systems that are efficient at learning recursive networks. 
So here, it's the opposite of what can be observed with CNNs and LLMs: the long convergence is due to current science providing inefficient learning algorithms when recursivity is involved. The last few years saw tremendous progress on this, with very clever algorithms, but it's still very far from how biological systems can neatly manage recursivity. All that is to say that, to answer your question directly, the reason current LLM and RNN models can't learn in zero/one-shot from the get-go is that nobody has found a way to mathematically formulate such a model. Maybe someone will be able to in the near future, maybe it will take decades, but for now, it's the slow-convergence LLM and RNN models that work; they are the ones that provide you with the hyped tools such as ChatGPT. Personally, I think we won't get there until we find out how analog biological neural systems work, and then we need to develop new computer technologies to mimic those. There is already a lot of work towards these, with biological neuron reprogramming by RNA signalling or mixing them with silicon neurons, but it's still far from the "real deal". There are at least [hundreds of different types of neurons](https://nba.uth.tmc.edu/neuroscience/m/s1/chapter08.html), and there are many other neural cell types with not completely understood functions. We are far from fully understanding biological neural systems, but progress is continuous and steady. Disclaimer: I am both an AI researcher and a clinical neuroscientist and I studied some [computational neuroscience](https://en.wikipedia.org/wiki/Computational_neuroscience). --- /EDIT: A small update to extend my explanation above for the technically and philosophically inclined: learning at its most fundamental level can be defined as the ability of a system to modify its structure to reflect some input signal, with memory being the system itself that modifies its structure according to input signals. 
In biological systems, there are two types of memory: short-term and long-term. Recent artificial recurrent neural network models try to mimic this, with the very famous LSTM model (Long Short-Term Memory), itself a precursor of the GPT models. By convention, in machine learning we call "learning" the tweaking of the weights, ie, the long-term memory. But there is also a short-term memory which has its own weights, and AI researchers don't call this process learning, although it technically is by all standards, the only difference being the exact method used and the length of time the memory is retained. And just like there are models that modify/learn short-term memory at inference but not long-term memory, there are models that tweak their long-term memory at inference, notably bayesian models, as often used for weather forecasting. So the reason LLMs and RNNs learn fast during inference is that they are designed to only learn short-term memory, the large set of long-term memory weights having been learnt beforehand. But future improvements of the tech may very well allow us to design networks that also learn long-term memory "online", in real-time, in a stochastic manner with a guarantee of convergence. Upvotes: 3 [selected_answer]
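To make the weights-versus-state distinction from the answers above concrete: at inference an RNN only updates its hidden state (the "short-term memory") while the weights stay frozen; only a training step changes the weights. A minimal numpy sketch of that contrast (toy model with made-up sizes, and a random stand-in for a real backprop gradient — not any production architecture):

```python
import numpy as np

rng = np.random.default_rng(42)
# The weights are the "long-term memory", learned slowly during training.
W_in, W_rec = rng.normal(size=(4, 3)), rng.normal(size=(4, 4))

def inference(inputs, W_in, W_rec):
    """Run the RNN on a sequence: only the hidden state changes, never the weights."""
    h = np.zeros(4)                        # short-term memory, reset per sequence
    for x in inputs:
        h = np.tanh(W_in @ x + W_rec @ h)  # new context updates h, not W
    return h

before = W_in.copy()
inference([rng.normal(size=3) for _ in range(5)], W_in, W_rec)
assert np.array_equal(W_in, before)        # inference left the weights untouched

# A training step, by contrast, nudges the weights along a gradient:
grad_W_in = rng.normal(size=W_in.shape)    # stand-in for a real backprop gradient
W_in -= 0.01 * grad_W_in                   # this is the slow, expensive "learning"
assert not np.array_equal(W_in, before)
```

The prompt-time "learning" the question describes lives entirely in `h`; it vanishes when the sequence ends, which is why a single inference pass cannot permanently teach the model anything.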
<issue_start>username_0: We are in the middle of an ongoing debate about the safety of AGI and our current approach towards this technology. As a summary, some quotes from a [recent article from Time magazine](https://time.com/6266923/ai-eliezer-yudkowsky-open-letter-not-enough/): 

> Many researchers [...] expect that the most likely result of building a superhumanly smart AI, under anything remotely like the current circumstances, is that literally everyone on Earth will die. Not as in "maybe possibly some remote chance," but as in "that is the obvious thing that would happen." 

> Without [...] precision and preparation, the most likely outcome is AI that does not do what we want, and does not care for us nor for sentient life in general. 

### Outline for the way to superintelligence and where it can go wrong 

1. Intelligence is substrate independent, i.e. matter can contain intelligence (the brain), ergo a silicon computer can contain intelligence. 
2. The evolution of intelligence in the brain is extremely slow (biological evolution). Silicon-based AI could recursively self-improve extremely fast, and natural limits for intelligence are unlikely to be anywhere close to human level. (Keywords: Singularity, Seed-AI, Intelligence Explosion, etc.) 
3. Large jumps in intelligence are known to cause original, "hard coded" goals to disappear, e.g.: human beings - as a product of optimizing for [inclusive genetic fitness](https://en.wikipedia.org/wiki/Inclusive_fitness) - have been "trained on" increasing the number of copies of functioning systems with their own DNA - yet we are using birth control and typically it is not priority #1 to donate sperm. 
* Further complication: At some point (self-improving seed-AI) it is not possible to use "trial and error": The choice is between "successful alignment **on the first try**" or "an unaligned superintelligence appears", which in one way or another is catastrophic and many believe - with a probability bordering on certainty - leads to: human extinction. 

### Question 

The debate could improve with a somewhat central place to collect and sort the various positions / reasons on this topic - for which ai.StackExchange is formidable infrastructure, so: **What are the reasons to believe AGI will not be dangerous?**<issue_comment>username_1: The correct arguments against it are not so much against pessimism as against certainty. It is simply difficult to say right now, because we know very little about the concrete technical problems we will need to solve. If you get a hard drive containing an arbitrarily powerful superintelligence, there is no real way you could inspect or modify it to make sure it is safe before running it, if you want to preserve its capabilities. If you run it, everyone dies. However, this is not what will actually play out; people (or at least a small group of them) control which kind of AIs will be built, and can choose to develop AIs which we will be able to control. (They can also fail to do so, either by mistake or by not wanting to pay so-called "alignment taxes", in which case our situation gets closer to the worst-case scenario described above.) Some core arguments for optimism are mostly that 1. the problem is not happening right now; we will have unimaginably better tools to solve the problem when it appears, by virtue of having very powerful pre-runaway-loop AIs; 2. the "self-improving loop" can in fact be made safe by having the weaker AI help the making of the stronger one, if you do it just right. 
For some concrete technical directions that we might use to win, the obvious first place to look at is what the labs themselves are saying they will try out as Plan A. I highly recommend reading two posts by <NAME> (head of OpenAI safety research): * <https://aligned.substack.com/p/alignment-optimism> * <https://aligned.substack.com/p/alignment-solution> And also the Anthropic safety agenda: * <https://www.anthropic.com/index/core-views-on-ai-safety> The core argument for pessimism should, of course, be the fact that people and organizations often screw up, especially when in competition or under time pressure. There are many incentives at play that might not be compatible with doing everything properly. And you really can't afford to miss anything if you assume the opponent will be much smarter than you. On a related note, <NAME> recently published a good expository paper on the issues that incentives will play in worlds with multiple competing strong AI systems: [Natural Selection Favors AIs Over Humans [Hendrycks, 2023]](https://arxiv.org/abs/2303.16200). Upvotes: 3 <issue_comment>username_2: The best, and possibly only, predictor of the future is the past, so lets learn from that here. **First there are human cognitive habits** (biases?) that come into play here: * We don't know what the ceiling is, so our imagination says "infinity". They were talking about singularity and infinite intelligence in the 60's with the perceptron. Three AI winters later and we are nearing some of the basics of human-level performance in very narrow areas. There were unexpected roadblocks that the technology was not able to surpass. * We have a negativity bias due to biology. If we get a cookie that doesn't change allele frequency in the gene pool, but if we get eaten by a bear that does. A single negative expectation is about 10x more driving than an antonymous positive one. 
It sells newspapers, and (wickedly malevolent) politicians of every stripe will gladly use it for politicking, but that doesn't make it valid. **Next, we have our own brilliant humans** to inform us. There are super-smart humans that come into being, and how does that inform the probable trajectories of super-smart other things? 
* Many of the non-military "once in a thousand years" geniuses commit suicide. They don't go on killing sprees. Boltzmann comes to mind. In general, and reflecting Amdahl's law, parallels to this are not necessarily unlikely in AI. The law says that the software reflects the organization that made it. I'm asserting that if singularity is even possible at all, then it is going to be very clearly a reflection of human cognition in all its glory and shame, and is much more likely to commit suicide than genocide or xenocide. 
* As a kid who was brilliant and raised in bad places, the climate you are raised in goes a long way toward determining your approach to life. If only the insane and violent can live, then either you die or you get mad like a rabid pit-bull. If there are some very small but redeeming elements in your life then you don't go that road. The cycle of child abuse is about 10% yield per generation, not 100%. About 3% of the population is psychopath enough to become the village rapist/serial-killer, and even then they rarely go that far. AI can be a product of its environment, but if humanity is an indicator then even a little bit of kindness and care can mitigate some huge negative potentials. 
* Humans are far more interested in nepotism than talent. This means that we are more likely to miss the super-genius AI in front of us than actually realize it is what it is. We don't have the time or brains to look outside our comfortable echo-chambers. This means we are not going to see it for what it is (or isn't) until it has been "free" for a while. 
* The military is about control. 
They are very unlikely to intentionally make something that could turn around and use its expertise to rob them of all their power or safety. It is unlikely they will release the AI equivalent from a local lab, but unintentional escape is also very Wuhan. They use guns all day long, and the self-inflicted casualty to bullets-fired count is much smaller than among civilians. I don't think they are as likely to make weaponized and psychopathic super-intelligent AI as big-tech. 
* Nation-state actors who can, try and use things like airplanes and crash them into towers. After the AI is standing it is more likely that terrorists would try and corrupt its purpose to serve their violent and malevolent goals. Destroyers are slow followers when it comes to technology. 

**Taking it apart using MBTI** For each of the Myers-Briggs personality types there are 10 reasons to say "not dangerous" and 10 reasons to say "dangerous", just like among humans. Enumerating all 500 of them is beyond the scope of an answer here. It is a worthy exercise to get one of those fun MBTI charts, and say, for each character, given they were dealing with a clever dog (how something not-human and less than 10x smarter than humans might see us), what are their likely responses? What are the circumstances that would drive different responses? [![enter image description here](https://i.stack.imgur.com/Fu2QG.jpg)](https://i.stack.imgur.com/Fu2QG.jpg) Some thoughts: 
* ISTJ (Snape) Defined by honor and duty, take any work seriously and give it their best. Somewhat reserved and prefer to work alone, but can make great team members if the need arises. Deeply value traditions and loyalty, often putting duty before pleasure. 
* INFP (Luna) Idealistic, loyal to their values and the people they care for. Curious. Quick to see possibilities. Can be catalysts for implementing ideas. Seek to understand people and help them fulfill their potential. Adaptable, Flexible, Accepting. 
* ENTJ (<NAME>) Blunt, Decisive, quick to assume leadership. Quickly see illogical and inefficient procedures and policies. Develop comprehensive systems to solve problems. Enjoy long-term planning, goal setting. Forceful and Enthusiastic about their ideas. 

You can see where James might decide something about teaching the dog to fly, or removing its teeth if it can't stop biting. Any of these personalities could be a hero, a villain, or just a character in the play of life. "*All the world's a stage, And all the men and women merely Players; They have their exits and their entrances, And one man in his time plays many parts*" -Shakespeare, As You Like It, Act 2, Scene 7, line 139. Update: more dead human genius. 
* <https://twitter.com/alecstapp/status/1647438816293036032?s=46&t=zVMBIXzwA41TcJe48a94RQ> 
* <https://www.cracked.com/article_18822_5-famous-scientists-dismissed-as-morons-in-their-time.html> 

Upvotes: -1 <issue_comment>username_3: In line with the first answer, the arguments are: 
* we don't have a reason to believe it will be particularly dangerous (i.e. extinction of humanity) 
* the potential dangers of intelligence-diverging AI are not nearly as big as the most immediate dangers of human abuse of powerful AI 
* in order for AI to destroy us it should act in the physical world directly or indirectly, and it's likely we'll see it competing with us before it gets anywhere near crushing us. 

In other words, it will co-habitate with us before fighting us. People who think powerful AI is dangerous usually imagine it as a god-like, almighty thing, which won't be the case. The position is usually "there are more important issues to worry about nowadays" rather than "there is absolutely no way AI could have a negative impact in the world". 
Upvotes: 0 <issue_comment>username_4: I think that a large chunk of the worry about AGI isn't because of any inherent properties or unknown unknowns of AGI per se; rather, the worry is based on inherent properties and known knowns of *humans*. Even the most cursory examination of history shows that humans are perfectly capable of committing the most horrendous atrocities all the while bathed in a burning sense of self-righteousness. Indeed, as often as advances in science and engineering have revolutionized peace and prosperity, just as often have they revolutionized desolation and war. Bronzework enabled the plow, but also the sword. Shipbuilding and Astronomy connected the two hemispheres, but brought death and disease. Industrialization brought an end to peasantry, but contaminated our water and our air. Dynamite yielded to us the riches of the earth, but brought destruction from the skies. Nuclear physics promised both limitless free energy and complete and utter annihilation - so far it hasn't delivered on either, but time will yet tell. Given these and countless other examples from our history, it is clear that great leaps in science can have a great effect on *humans*' ability to improve life, or to inflict death. Undoubtedly, the creation of AGI will be a great, probably *the greatest*, leap in science yet achieved by humankind. As such, it will bring untold opportunities to improve our abilities in many ways: for peace, but also for war. Will the use case for war ultimately win the day? It is hard to tell from this juncture. However, it seems obvious that, if AGI *is very useful* for waging war, and war is waged, then AGI will swiftly find itself on the front lines. Should we worry about AGI because AGI will *bring* war? Probably not. Should we worry about AGI because *we are liable to find ourselves at war all on our own* and that *AGI may enhance our abilities to destroy in ways previously unknown*? Absolutely. 
Possibly also there are commons that we have hitherto taken for granted that AGI may significantly improve our ability to ruin. In the early days of industry, the atmosphere was generally taken for granted. Local effects were witnessed, but the idea that we might ruin the atmosphere on a global scale was not a serious one. Only in retrospect do we see that this was naive. What commons exist that we have simply not had the means to ruthlessly exploit for profit? One that has been in the news lately is the space of *information*. People rely on the internet for information - it is a "commons" in some sense. Already people are learning that this commons can be exploited for financial or political gains. Will AGI be the "industrial revolution" that enables us to improve our exploitation of this commons by orders of magnitude? What other commons exist that might suddenly find themselves under siege in an age of AGI? Will we end up with the infamous paperclip maximizer, in some other guise? Of course, I don't mean to come off as a Luddite, but probably we as humans should (for the first time in history, I think) try to foresee some of these issues and regulate them at an early stage, rather than when the problem has already gotten out of hand. One way to help with this from an "unknown unknowns" perspective is to simply advance slowly, allowing time for our plodding regulatory frameworks to catch up. I think that this was the spirit of the recent [call to pause AI research](https://futureoflife.org/open-letter/pause-giant-ai-experiments/). The point of this letter, and this attitude for caution, isn't, in my humble opinion, because we need time to figure out how to make AGI non-dangerous due to inherent issues with AGI. The need is to actually arrange *our human institutions* in such a way to prevent *humans from being dangerous* with access to AGI. 
So, any "reasons to believe that AGI will not be dangerous" must, first and foremost, address these human concerns: Are there reasons to think that humans *could not* or *would not* utilize AGI to destroy each other? Are there reasons to think that humans could not or would not use AGI to exploit the commons? Possibly, one could argue that we, as humans, have stronger global peacekeeping institutions (the UN, the EU, NATO, etc etc) than ever before, and that these will help prevent any kind of bad actors from using AGI to wage war. In any case, humanity hasn't managed to utterly annihilate itself with science yet, so maybe that should give us some hope. Upvotes: 2
2023/04/04
1,882
6,921
<issue_start>username_0: Introduction
============

I am considering using a convolutional neural network to implement Monte Carlo control with function approximation. I am using a Monte Carlo estimate as it is unbiased and has nice convergence properties. Of course, it has high variance and slower convergence. The goal is to control a game such that episodic reinforcement learning is used. In fact, the game is the [Mckinsey Plant-defense game](https://mconsultingprep.com/mckinsey-problem-solving-game-digital-assessment). Below is an image of the game:

[![enter image description here](https://i.stack.imgur.com/Ofyk1.png)](https://i.stack.imgur.com/Ofyk1.png)

Problem specification
=====================

I have written a rough simulator of the game and then written a function that splits the board into various 14 x 14 layers. More specifically, the layers contain the following:

1. Layer 1 is a `(14,14)` array of zeros with a 1 at the location of the plant.
2. Layers 2-4 are `(14,14)` arrays of zeros with ones placed at coordinates containing terrain. For example, layer 2 contains only the location of cliffs.
3. Layers 5-7 are like layers 2-4 but contain locations of defenders, e.g., snakes.
4. Layers 8 and 9 are also `(14,14)` arrays of zeros. However, if an attacker is present at $(i,j)$ in the grid, a value between zero and 1 is placed there. As such, both the remaining health and location are represented in a layer. Layer 8 is for the foxes and layer 9 is for the groundhogs.
5. The grid of the actual game varies. Sometimes it's a `(10,10)` grid, `(14,10)`, `(12,12)`, etc. As such, I cast these grids onto a `(14,14)` grid. The last layer is thus a `(14,14)` array of zeros with ones placed in locations that do not exist. In other words, the ones indicate where I have padded the original grid. For example, a `(12,12)` grid would have a border of ones.

Naturally, there is spatial interaction in each layer but also interaction between layers.
I am considering two approaches to a convolution architecture. Note, I am using PyTorch, so I apologise if I am using PyTorch-specific terminology.

Approaches: 2D (with channels) vs 3D (no channels) convolution
==============================================================

Approach 1
----------

Approach 1 uses 2D convolution with channels.

$$ y\_{h,i,j} = \sum\_{k=1}^{C\_{in}}\sum\_{l=1}^{K\_H}\sum\_{m=1}^{K\_W}w\_{h,k,l,m}x\_{k,i+l-1,j+m-1} $$

where $h=1,2,\dots,C\_{out}$ indexes the output channels, $k=1,2,\dots,C\_{in}$ the input channels, $l=1,2,\dots,K\_H$ the kernel height, and $m=1,2,\dots,K\_W$ the kernel width; $x$ is the layers of the map and $y$ is the output of the convolution. In [PyTorch's](https://pytorch.org/docs/stable/generated/torch.nn.Conv2d.html) `Conv2d` function, if I set `groups=1` then all input channels are convolved with all output channels as in the formula. **Does this capture interaction between channels?**

Approach 2
----------

The second approach would instead treat the 10 stacked layers as a 3D image of shape `(10,14,14)` with one channel. The formula for this would read as below:

$$ y\_{i,j,k} = \sum\_{m=1}^{K\_D}\sum\_{n=1}^{K\_H}\sum\_{o=1}^{K\_W}w\_{m,n,o}x\_{i+m-1,j+n-1,k+o-1} $$

It would seem that this would capture more of the dependencies between layers? I would imagine that this is the approach they would use in medical imaging where the image has layers that strongly interact.

Question
========

**Which approach is most suitable for modelling dependencies within layers and between layers?** Can you please provide me with some intuition on why? I would also not mind reading recommended literature. Thank you for your time.<issue_comment>username_1: If you use the same number of output channels for both approaches, they are basically the same. The first one uses `10*outputchannel` 2d filters. The second approach uses `outputchannels` 3d filters that have a depth of `10`.
Both options are technically the same and use the same amount of weights. They can both capture dependencies between the different input layers to the same degree. I personally prefer working with 2d filters but you can choose whatever you like. I hope I was able to answer your question. Upvotes: -1 <issue_comment>username_3: First of all, I don't think that the two approaches are the same as @username_1 claims. When using a 3D convolution the 3d filters have depth equal to `kernel_size` and you slide your filter along the $D$ dimension as well. I think that Approach 1 is better because you want to pay attention to **all** of the channels while sliding along the `14x14` plane. Note that the convolutional layer was designed to take advantage of translational invariance. So neurons are activated if they see something (plant, terrain, etc.) regardless of its location. The sliding kernel will slide along the `14x14` plane and whenever it "sees" something it will activate. Now regarding the channel dimension, you actually have a one-hot encoding of what exactly is present at a given location, as I understand 10 different options. You wouldn't want to slide a kernel along this dimension and output the same activation regardless of where it sees the $1$. You want to take the contents of the entire channel into consideration. If you set `groups=1` in the `Conv2d` layer initializer (which is the default option) then you get the standard normal convolution. This means that **each** of the output channels contains the information from **all** of the input channels. The kernel is convolved with the input, channels are not convolved.
So, yes, this captures the interaction. The other extreme would be to set `groups=in_channels` (which is 10 in your case). Then you split your input tensor `(10, 14, 14)` into 10 input tensors of size `(1, 14, 14)`, you perform the convolution along each one separately and you concatenate the results. In this case you do not capture the interaction between the channels. Usually, when you do this, you add an additional `1x1` convolutional layer after it that will intermix the channels. This approach is called a depthwise-separable convolution and was introduced in the [MobileNet paper](https://arxiv.org/abs/1704.04861). This will harm the accuracy of the model, but greatly reduces the number of parameters and is the preferred approach for models that will be deployed on mobile phones and other edge devices. You can read more about splitting into groups here: <https://username_3.github.io/posts/res-nets/#the-resnext-block-going-wider-instead-deeper> Upvotes: 2 [selected_answer]
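To make the parameter trade-off described in the selected answer concrete, here is a small sketch in plain Python (the counting rule matches PyTorch's `Conv2d` weight shape `(c_out, c_in/groups, k, k)`; bias terms are omitted for simplicity):

```python
def conv2d_params(c_in, c_out, k, groups=1):
    # Each of the c_out filters convolves over c_in // groups input
    # channels with a k x k kernel.
    return c_out * (c_in // groups) * k * k

c_in, c_out, k = 10, 32, 3
standard  = conv2d_params(c_in, c_out, k)              # groups=1: full channel mixing
depthwise = conv2d_params(c_in, c_in, k, groups=c_in)  # groups=c_in: one filter per channel
pointwise = conv2d_params(c_in, c_out, 1)              # 1x1 conv that re-mixes the channels
print(standard, depthwise + pointwise)  # 2880 410
```

Depthwise plus pointwise uses roughly 7x fewer weights than the standard convolution here, which is the saving MobileNet exploits.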
2023/04/08
985
3,780
<issue_start>username_0: This might be unnecessary but to learn the basics of neural networks, I am trying to create a single perceptron neural network to solve the adding operation of 2 inputs `(x1 + x2 = ybar)` The code is written in C# since the math should be very simple and it's a language I usually use more than Python. **The problem is the network is not capable of learning the right weights for this simple problem, so I am trying to understand why that is before checking another working example on the internet.** **Note before explaining more about the problem, feel free to include any level of math or technical stuff (although might not be needed for this simple question).** [![enter image description here](https://i.stack.imgur.com/VpnHl.png)](https://i.stack.imgur.com/VpnHl.png) `Learns` is the number of times backpropagation happens, so it is equal to the number of batches. The activation function takes x and rounds it to the nearest integer. The cost function is `yBar - y`. The total cost for each batch is the sum of all the costs generated by the samples in that batch divided by the batch size (so the average). Backpropagation is in the form of `w[i+1] = w[i] - learningRate * cost`. The hope is that `w1` and `w2` should end up having the same value after backpropagation. To test the code I tried first on 1 input: [![enter image description here](https://i.stack.imgur.com/9R4uN.png)](https://i.stack.imgur.com/9R4uN.png) It learns a combination of weights for that case to work. But then I tried generating a random training set of 500 rows and changing the batch size to `10`, expecting it to generalize the problem and the weights to be in the form of `w1 ~= w2`. But it is failing to do so. Most of the predictions are off from the correct value by 30 or 40, with some exceptions that are off by a 100. I am not sure how to think about the problem. This is supposed to be a linear problem, so the activation function should not be the issue.
I suspected the way the cost is calculated might be the issue, but there doesn't seem to be a more fitting cost function for this adding operation. I tried changing the learning rate and the initial weights, while being pretty sure that these are not the problem, and it also didn't work. How should I think about this problem? What are the clues to look for in order to find the wrong parameter? And how to fix it?<issue_comment>username_1: The activation function you are using is not differentiable, because it has a step at every half-integer value of $x$. This could be a problem for training. I would expect a better result using the identity as the activation function. Other possible causes could be a too small number of epochs or a too small batch size. Your chosen value $1$ means Stochastic Gradient Descent, whose convergence tends to be more chaotic. However, the weights you get aren't too far from the expected $w\_1=w\_2=1$, and their average is very close to it, so it looks like the code is probably right. Upvotes: 2 <issue_comment>username_2: I advise changing the cost function to a **mean squared error** (as commonly done in regression problems): $\mathcal L(y,\hat{y}) = \frac{1}{2B}\sum\_{i=0}^{B-1} (y^{(i)}-\hat{y}^{(i)})^2$, where you average the squared differences over the batch size $B$. Also, it's common practice to multiply by $1/2$ so that the factor of $2$ cancels when taking the derivative. Moreover, since the range of the data can vary a lot, use some **weight decay** to reduce the magnitude of the weights and prevent over-fitting to a given range of values: this may also help to reach the optimum of $w\_1=w\_2 = 1$. Finally, prefer *larger batch sizes* (e.g. 64, 128, ...) since the noise in the gradient reduces, and generate at least `20x` more data samples. Upvotes: 1
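To illustrate the suggestions in the answers, here is a minimal sketch in plain Python (identity activation, squared-error loss). Note that the gradient of $\frac{1}{2}(\hat y - y)^2$ with respect to each weight is the error *times the corresponding input*, which is the factor missing from the update `w[i+1] = w[i] - learningRate * cost` described in the question:

```python
import random

random.seed(0)
w1, w2, lr = 0.5, 1.5, 0.01  # arbitrary starting weights

for step in range(2000):
    x1, x2 = random.uniform(-1, 1), random.uniform(-1, 1)
    y = x1 + x2                # target: the sum of the inputs
    y_hat = w1 * x1 + w2 * x2  # identity activation, no rounding step
    err = y_hat - y
    # d/dw of 0.5 * err**2 is err * input, so each weight gets its own gradient
    w1 -= lr * err * x1
    w2 -= lr * err * x2

print(round(w1, 2), round(w2, 2))  # both converge towards 1.0
```

Because the target is exactly realizable (`y = x1 + x2`), the error vanishes at the optimum and even plain SGD settles at `w1 = w2 = 1`.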
2023/04/10
449
2,076
<issue_start>username_0: I have read a little about Batch-Normalization and I understood that there isn't any better option on where you place Batch-Normalization (it all depends on the case). However, I don't understand the point of applying Batch-Normalization followed by a ReLU. ReLU transforms all negative values to zero. But Batch-Normalization already transforms all values by restricting them to a value between zero and one. So ReLU will not have any effect when applied after BN, because there will not be any negative values left. Am I right, or am I missing something?<issue_comment>username_1: Batch-normalization (BN) does NOT *transform all values by restricting them to a value between zero and one.* BN performs two operations: a *normalization*, and a *shifting with scaling*. The normalization operation transforms the inputs to have approximately zero mean and unitary variance, then the scaling by $\gamma$ and shift by $\beta$ is performed to give more power (or flexibility) to the next layers. In general you want to apply BN before any activation function (like ReLU) because BN tends to center the data in the "active" region of the non-linearity, thus speeding up training. Moreover, BN introduces some noise in the training, which is often interpreted as an implicit form of regularization (thus also boosting generalization performance). Upvotes: 1 <issue_comment>username_2: Although you can normalize the features and initialize the weights with normal Xavier initialization, during propagation the magnitude of the activations can still grow or shrink through a deep neural network, causing vanishing/exploding gradients. Think about "very deep" neural networks with 50+ layers, and how easily the weights could go bananas. Batch-normalization normalizes after each batch, continuously keeping the weights in check. You also write that it "works", but that does not say anything about performance.
It might be better to use Leaky ReLU to avoid dead neurons, which seem quite possible in a feature- and weight-normalized ReLU network. Upvotes: 0
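A tiny numerical sketch of the normalization step (plain Python, per-feature statistics over a batch) makes the first answer's point concrete: the normalized output is zero-mean and unit-variance, not confined to $[0,1]$, so negative values survive and a following ReLU still has something to cut:

```python
import math

def batch_norm(xs, gamma=1.0, beta=0.0, eps=1e-5):
    # Normalize a batch of scalars to ~zero mean / unit variance,
    # then apply the learned scale (gamma) and shift (beta).
    mean = sum(xs) / len(xs)
    var = sum((x - mean) ** 2 for x in xs) / len(xs)
    return [gamma * (x - mean) / math.sqrt(var + eps) + beta for x in xs]

out = batch_norm([1.0, 2.0, 3.0, 4.0])
print([round(v, 2) for v in out])  # [-1.34, -0.45, 0.45, 1.34] -- negatives remain
```

With a non-trivial `gamma` and `beta` the range shifts further still, which is exactly the flexibility the first answer describes.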
2023/04/12
437
1,770
<issue_start>username_0: I am using the `LeakyReLU` activation function in my architecture. We know that the default slope value is `1e-2`. I want to understand how we decide what slope value we should choose.<issue_comment>username_1: There is definitely no 'mathematically optimal' LeakyReLU value, as it is dependent on the data, the architecture etc. In addition, to the best of my knowledge, there is no 'best practice' when it comes to LeakyReLU slope values. Most of the deep learning papers just state they use LeakyReLU with some slope parameter without mentioning why they choose the parameter. They might, or even might not, have tuned the parameter with simple empirical testing. The goal of LeakyRelu is simply to reduce the 'dying relu' problem. As long as there is *any* slope, this dying relu problem is mitigated. You could probably show that the dying relu problem persists for slope values very close to 0. With a default of 0.02, 0.01 or anything relatively significant, leaky relu does the job it is meant to do. I have seen LeakyRelu slopes up to 0.2-0.3 in GAN literature. Do not expect a small difference in LeakyReLU values to drastically influence the result of your network. It might help a tiny bit during tuning, but I'd recommend starting with the default value and then continuing with the rest of your model training. Come back to tuning the LeakyRelu only during the final model optimization and see if you can make any significant difference by wiggling the value around the default value. Upvotes: 3 [selected_answer]<issue_comment>username_2: When this slope in the negative part is learned, the activation function is called Parametric ReLU or PReLU. <https://pytorch.org/docs/stable/generated/torch.nn.PReLU.html> Upvotes: 0
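For reference, the function itself is tiny; in this sketch `slope` is the hyperparameter discussed above (0.01 is the common default):

```python
def leaky_relu(x, slope=0.01):
    # For x < 0 the output is slope * x, so the local gradient is `slope`
    # instead of 0 -- this is what mitigates the "dying ReLU" problem.
    return x if x > 0 else slope * x

print(leaky_relu(3.0), round(leaky_relu(-3.0), 4))  # 3.0 -0.03
```

Any strictly positive `slope` keeps a gradient flowing for negative inputs, which is the property the selected answer argues matters more than the exact value.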
2023/04/12
453
1,844
<issue_start>username_0: Stable Diffusion for example. Why train it on "ugly" pictures? Why not train a model on only the best and most awarded pictures with the best artist? wouldn't the final output have an overall better quality, instead of averaging in with "lower quality" pictures.
2023/04/19
646
2,829
<issue_start>username_0: I've been studying the transformer from the original "Attention is all you need" paper and from various other sources. I have a question about the behaviour of the decoder during training that I cannot find the answer to anywhere. During inference I understand that the decoder input is its own previously generated token from the prior time step. Tokens are fed into the decoder one-by-one and predictions made one-by-one. However, during training the target sequence is known and I have read from several sources that the entire sequence is used as the decoder input to allow parallel processing and to improve training efficiency. To keep the decoder autoregressive, a masked attention sub-layer is introduced where a masking matrix is added to the scaled dot product attention mechanism. So my question is, since during training the decoder input is the entire sequence, is the entire output sequence predicted in parallel (simultaneously), or are tokens predicted one-by-one, as in inference? To me it makes sense that if the entire target sequence is used as the decoder input, then an entire sequence is output. If it wasn't, the decoder would be using the same input at every timestep whilst being expected to produce different tokens.
2023/04/21
853
3,672
<issue_start>username_0: In the transformer (or GPT/decoder only), at the end of the decoder blocks but before the final linear layer you have X vectors (for the X tokens at the input of the decoder). We then want to compute the probabilities for the next token of the sequence - what do we then feed to the linear layer? Is it the last embedding corresponding to the hidden state of the last token in the input sequence? I've seen some tutorials on YouTube on how to make mini GPTs but I never quite understood why they feed the entire X vectors/hidden states at the end of the decoder blocks to the linear layer and not just the last vector/hidden state... Wouldn't you have X probability distributions when in reality you only want one? And if we do want the X probability distributions then wouldn't we be completely missing the point of the masked self attention since we would be trying to predict words that are already in the input sequence, so essentially "cheating"?<issue_comment>username_1: Welcome to AI stack exchange! I understand the confusion. Inference (next token prediction) seems really counterintuitive and inefficient for transformers. And it is! The transformer is very efficient during training because it can be parallelized. It is, however, inefficient at inference because it cannot be parallelized. For transformer inference, you feed the *context* (your prompt) to the transformer model. It predicts the next word for each of the words in the prompt, but you only need the prediction for the last one. A bit of pseudocode might help in understanding how a transformer can be used to generate new tokens:

```
# Start with some context of tokens
context = ...

# Generate new tokens
for i in range(N_TOKENS_TO_GENERATE):
    prediction = transformer(context)                # Get predictions for context
    next_token = multinomial(prediction.get_last())  # Sample from multinomial distribution
    context = concatenate((context, next_token))     # Create new context
```

Now, this is the intuitive way of doing it. There are most likely tons of small things you can do to optimize all of the stuff and make inference more efficient. However, you cannot get around having to feed the context in every time you add a new word/token. This is also why an application such as ChatGPT is generating stuff word for word. A small note on the side: you talk about 'hidden-states' in the transformer, as if there is a recurrence going on (such as in GRUs/LSTMs/RNNs). However, transformers have no such recurrence or hidden states and operate solely using the concept of attention (hence the paper's title 'attention is all you need', alluding to the fact that they don't use recurrence). Hope this helps :) Upvotes: 1 <issue_comment>username_2: No - each next-token prediction comes from a *single one* of the output vectors as you suspected. It has to, because otherwise there is no way to parallelize the predictions during training using a consistent set of parameters. My understanding is from <https://transformer-circuits.pub/2021/framework/index.html>, section "High Level Architecture": What you are referring to as "X vectors", they refer to as `T(t)` in that figure, of shape `[n_context, n_vocab]`, which are logits. The input to the decoder, `t`, is the `[n_context, n_vocab]` shaped tensor of one-hot encoded tokens (see the "Notation" section of that article at the end). Note also that the original "Attention is All You Need" paper re-uses the embedding matrix as a transpose to de-embed each of the X vectors coming out of the final multi-head attention decoder layer. (See section 3.4 in the original paper) Upvotes: 0
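The pseudocode in the first answer can be made runnable with a toy stand-in for the model (a sketch: `toy_transformer` just returns arbitrary logits with the right shape, and greedy argmax replaces multinomial sampling):

```python
import random

random.seed(0)
VOCAB = 5

def toy_transformer(context):
    # Stand-in for a real model: one logit vector per input position.
    # Only the shape matters here; the values are arbitrary.
    return [[random.random() for _ in range(VOCAB)] for _ in context]

def generate(context, n_new):
    for _ in range(n_new):
        logits = toy_transformer(context)  # shape: [len(context)][VOCAB]
        last = logits[-1]                  # only the last position predicts the next token
        next_token = max(range(VOCAB), key=lambda t: last[t])  # greedy instead of sampling
        context = context + [next_token]
    return context

out = generate([1, 2, 3], 4)
print(out)  # the original 3 tokens followed by 4 generated ones
```

The structure mirrors the loop above: the whole context is re-fed each step, but only the last row of logits is used.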
2023/04/21
695
3,054
<issue_start>username_0: Assume the transformer is trained on 512 max-length sentences:

1. Can we fine-tune it on 256 max-length sentences?
2. If we can fine-tune it, how is that even possible given that the input shapes are different? What changes, at a high level, from the weight matrices to the final layer?
3. Does every transformer decoder process one token at a time, or does it process all tokens at once? If it doesn't, how does the decoder process all tokens at the same time?
4. I suspect that the output process is different for sentiment analysis and text generation in a transformer decoder architecture, because in text generation the decoder processes one token at a time, while sentiment analysis need not be one token at a time; it can be all tokens at the same time, right? So how is this difference between the two examples captured by the decoder's architecture? **Do all decoders process one token at a time or all tokens at the same time? If not, how is the same architecture able to capture both examples as mentioned above?**

<issue_comment>username_1: 1. You can implement the Transformer architecture such that you can indeed change the block size. It can be made smaller and larger. It should be noted, however, that there is no guarantee on performance if you change the max-sequence length. I have no clue how much performance you are going to lose by changing it.
2. Not too much in the transformer architecture is based on the actual max-sequence length, and all of it can be implemented such that it can be changed. The only things in my own implementation which are based on the block size are the positional embedding (before the actual blocks) and the masking buffer. The positional embedding is done in the original paper using sine/cosine embeddings which can be generalized to any length. The masking buffer can simply be generated on the spot if you would want that.
The rest is not based on the actual max-sequence length and is simply done using the size of the batch and matrix multiplications.
3. The transformer is *not* a recurrent neural network. It does not need a hidden/carry state from the previous computation to continue with the next computation. It simply needs the context (often called the prompt) and does all the calculations based on that. So the whole context is fed into the network, and out comes the prediction for the next token.
4. It does process all tokens at the same time.

If you have further questions, I'm happy to answer them. Upvotes: 1 <issue_comment>username_2: I am explicitly elaborating on question 3. I feel the other answers are complete with regard to the other 3 questions posed. During inference, the decoder is auto-regressive, i.e. it has to process the next token one at a time. During training, the decoder is trained using teacher forcing. Since we already have the ground-truth label, we can just pass the golden (provided) tokens {x0,x1....xn}. This permits efficient training. <https://stackoverflow.com/a/58021905/20522929> Upvotes: 0
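Since the sine/cosine positional encoding came up in the first answer, here is a sketch of it in plain Python showing why it generalizes to any sequence length: position `pos` always gets the same encoding regardless of how long the sequence is, so no fixed-size table is required.

```python
import math

def positional_encoding(seq_len, d_model):
    # PE(pos, 2i)   = sin(pos / 10000**(2i / d_model))
    # PE(pos, 2i+1) = cos(pos / 10000**(2i / d_model))
    pe = []
    for pos in range(seq_len):
        row = []
        for i in range(d_model):
            angle = pos / (10000 ** ((i // 2 * 2) / d_model))
            row.append(math.sin(angle) if i % 2 == 0 else math.cos(angle))
        pe.append(row)
    return pe

pe_short = positional_encoding(4, 8)
pe_long = positional_encoding(16, 8)
print(pe_short[3] == pe_long[3])  # True: the encoding of position 3 never changes
```

This is why shrinking (or growing) the max sequence length leaves the positional embedding itself untouched; only how much of it you compute changes.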
2023/04/26
703
3,066
<issue_start>username_0: As far as I understand, a Transformer has a specific input sequence length that depends on its architecture. So a model like `gpt-4` has a sequence length of 8192 tokens. As such, I am interested in what happens when the input prompt is shorter than that. [This question and answers](https://ai.stackexchange.com/questions/22957/how-can-transformers-handle-arbitrary-length-input) suggest that the input is simply padded out to the full input sequence length. But that doesn't really seem plausible to me. That would mean that the cost of processing a 100 token prompt would be the same as the cost of processing a 1000 token prompt. And it clearly isn't, as they charge a different price for them - they charge for token count, not API request count. I am interested to know how variable length input sequences are handled, as I am experimenting with trying to implement my own language model. I have two possible hypotheses. One could be that they actually combine multiple prompts into a single large prompt and tell the model to complete them all. But I don't think that is the case, as the risk of mixing the prompts from different users and the data they have would probably drastically degrade the quality of the output. I also thought that maybe they have different model sizes - like `gpt4-1k`, `gpt4-2k`, `gpt4-3k`, ... - but that seems like a large overhead, as each of these would need to be trained separately.<issue_comment>username_1: You only pad sequences when you combine sequences of different lengths into one batch; this way you get a tensor of same-sized sequences. When you only want to process one sequence, Transformers are capable of handling different sizes. Upvotes: 0 <issue_comment>username_2: As @username_1 wrote, a trained transformer can handle input sequences of a length smaller than its maximum sequence length. I thought about this and I realized how this is a direct consequence of transformers' design.
Input embeddings are augmented with positional encoding. This works "out of the box" as the positional encoding is a function of the token placement within the input sequence `[0, max_seq_len]`. So the positional encoding happens as usual. Then, they go through the encoder stack. In a single encoder, they are first converted to `Q`uery, `K`ey and `V`alue matrices. This conversion uses predetermined per-head matrices. Then `Z` matrices are computed per each token per each head. This also works as usual, as the sizes of `Q`, `V`, and `K` match. Combining multi-headed attention also works, as this is, again, a per-token process. Then, the residual original encoding is added on top and the values proceed to the linear layer, where each token is processed individually. So, apparently, the encoder works fine with different input lengths. Encoder-decoder cross-attention also works. That's because the `Q` matrix taken from the decoder's masked self-attention can be multiplied with any number of keys from the encoder. And the key and value matrix sizes match, as they are both coming from the encoder. Upvotes: 1
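A small shape sketch in plain Python (scaling kept, learned weights left out) of why nothing in attention pins down the sequence length: the score matrix $QK^T/\sqrt{d}$ simply comes out as `[n_q, n_k]` for whatever `n_q` and `n_k` happen to be.

```python
def attention_scores(Q, K):
    # Q: [n_q][d], K: [n_k][d] -> scores: [n_q][n_k].
    # No parameter here depends on n_q or n_k, only on d.
    d = len(Q[0])
    return [[sum(q[i] * k[i] for i in range(d)) / d ** 0.5 for k in K] for q in Q]

d_model = 8
Q3 = [[1.0] * d_model for _ in range(3)]  # a 3-token sequence
K3 = [[1.0] * d_model for _ in range(3)]
Q7 = [[1.0] * d_model for _ in range(7)]  # a 7-token sequence; the same weights would apply
K7 = [[1.0] * d_model for _ in range(7)]

s3, s7 = attention_scores(Q3, K3), attention_scores(Q7, K7)
print(len(s3), len(s3[0]), len(s7), len(s7[0]))  # 3 3 7 7
```

The per-head projection matrices only ever see the `d_model` dimension, which is why the same trained parameters serve sequences of any length up to the positional-encoding limit.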
2023/04/29
1,411
6,190
<issue_start>username_0: I recently came across the term "curriculum learning" in the context of DRL and was intrigued by its potential to improve the learning process. As such, what is curriculum learning? And how can it be helpful for the convergence of RL algorithms?<issue_comment>username_1: Curriculum learning is a training strategy in the context of DRL and other machine learning methods that involves organizing the learning process in a way that gradually increases the complexity of tasks or training samples. It is inspired by the way humans and animals learn, where they start with simple tasks and then progress to more complex ones as their skills develop. The idea is to create a "curriculum" that enables the model to learn more effectively by leveraging its acquired knowledge from easier tasks to solve more difficult ones. By starting with simpler tasks, the agent can learn the basic skills required to solve the problem more quickly, which can lead to faster convergence compared to being exposed to complex tasks from the beginning.

**Update** To answer your questions in the comment: All of the above possibilities; it can be different environments or changing the reward function.

* **Environment modification**: In this approach, tasks are defined by creating different environments with varying complexity. The complexity can be measured by the number of states, actions, or transitions involved in solving the task. For example, in a robotic manipulation task, you can start with a simple environment where the robot needs to pick up an object at a fixed position, and gradually increase the complexity by adding more objects, changing their positions, or introducing obstacles.
* **Subtask decomposition**: The main task is split into a sequence of smaller subtasks, which the agent must learn to solve in a specific order.
The complexity of each subtask can be measured by the number of intermediate steps required to complete it, the level of abstraction, or the degree of difficulty in achieving the subgoal. By solving subtasks in a specific order, the agent can gradually build the skills necessary to complete the main task. * **Reward shaping**: Tasks can be defined by modifying the reward function to guide the agent towards solving specific subtasks or achieving intermediate goals. In this case, the complexity of the task can be determined by the difficulty of obtaining the shaped rewards. As the agent progresses, the reward function can be adjusted to phase out intermediate rewards and focus on the main objective. * **Difficulty parameters**: You can define tasks by adjusting certain parameters in the environment that control the difficulty of the problem. For example, in a maze navigation task, the complexity can be adjusted by increasing the size of the maze or adding more obstacles. The tasks can be ordered based on the values of these parameters, with the agent starting with simpler configurations and progressing to more challenging ones. These approaches are not always clearly delineated from one another: changing the maze size via a difficulty parameter can also be interpreted as changing the environment. Upvotes: 3 <issue_comment>username_2: [Curriculum learning](https://qmro.qmul.ac.uk/xmlui/bitstream/handle/123456789/15972/Bengio%2C%202009%20Curriculum%20Learning.pdf?sequence=1&isAllowed=y) is a general technique for deep learning, which has recently also been applied to deep reinforcement learning. * It's about designing tasks to *guide the learning process* of the network or agent. This can be particularly useful to bootstrap the network for very hard problems, and even to achieve better convergence. * In the context of DRL, apart from designing tasks you can also reason about the experiences, e.g. by sorting them to facilitate the agent's learning.
* In general the *tasks* are related to the main problem to solve, but you usually start from very simple ones and gradually increase the task complexity as the agent makes progress. * Indeed, a major issue is how to best design such tasks: there are some works that propose to automate this, giving rise to [automated curriculum learning](http://proceedings.mlr.press/v70/graves17a/graves17a.pdf). For Deep RL I suggest you [this](https://www.jmlr.org/papers/volume21/20-212/20-212.pdf) survey. --- **Edit:** a real-world example of curriculum learning applied to deep RL. Say you want to solve autonomous driving (AD) with RL. Your environment is a simulator of one or more towns (like [CARLA](http://carla.org/)). You can design the curriculum by devising simple variations of your initial env, for example from simpler to harder: 1. Restrict the town to a single straight road without any obstacles (cars and pedestrians), in daylight conditions, without caring about complying with traffic rules and speed limits. This is to guide the agent just to learn basic control of the car, and simplified lane following. 2. Add cars. This requires learning collision avoidance: here you want to edit the reward function too, to add a penalty for collisions. 3. Add pedestrians: the collision penalty should be much higher. 4. Driving in more complex scenarios like intersections. 5. Introduce weather conditions and night scenarios. Aim: make the driving policy more robust. 6. Compliance with traffic rules and speed limits: add terms to the reward fn. 7. Care about time, distance, fuel consumption, etc. So you add difficulties until reaching the maximum that corresponds to solving your original environment. Indeed, in doing so you can either edit your env or its reward function. You can even restrict the env to a sub-task: e.g. learning to drive on straight roads.
As written earlier, there is no golden rule for how to best design a curriculum, or even how to automate its generation (or at least I'm not aware of one). The example I provided here is from my own experience (see the [paper](https://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=9782734), chapter F): this was my first time dealing with RL, the results are not great, and the benefit of the curriculum is marginal, but I believe it is possible to improve that a lot. Upvotes: 4 [selected_answer]
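As a concrete illustration of the "difficulty parameters" approach described above, here is a minimal sketch of a curriculum controller that promotes the agent to a harder level once its recent success rate passes a threshold. All class names, thresholds, and level values are illustrative, not from any specific library:

```python
from collections import deque

class Curriculum:
    """Increase a difficulty parameter (e.g. maze size) as the agent improves."""
    def __init__(self, levels, threshold=0.8, window=20):
        self.levels = levels              # e.g. maze sizes [5, 7, 9]
        self.idx = 0
        self.threshold = threshold
        self.recent = deque(maxlen=window)

    @property
    def level(self):
        return self.levels[self.idx]

    def report(self, success: bool):
        """Record an episode outcome; promote once the window is full and good."""
        self.recent.append(success)
        full = len(self.recent) == self.recent.maxlen
        good = full and sum(self.recent) / len(self.recent) >= self.threshold
        if good and self.idx < len(self.levels) - 1:
            self.idx += 1
            self.recent.clear()           # re-evaluate on the new level

cur = Curriculum(levels=[5, 7, 9])
for _ in range(20):                       # agent solves the 5x5 maze consistently
    cur.report(True)
print(cur.level)                          # 7: promoted to the next maze size
```

The training loop would then build the environment from `cur.level` at each episode; the same pattern works whether the parameter controls maze size, traffic density, or reward shaping terms.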
2023/05/02
1,783
7,740
<issue_start>username_0: I'm sorry this is such an elementary question because I'm an AI beginner. In this [link](https://colab.research.google.com/github/google/eng-edu/blob/main/ml/pc/exercises/image_classification_part3.ipynb?utm_source=practicum-IC&utm_campaign=colab-external&utm_medium=referral&hl=en&utm_content=imageexercise3-colab), it says > > One thing that is commonly done in computer vision is to take a model trained on a very large dataset, run it on your own, smaller dataset, and extract the intermediate representations (features) that the model generates. These representations are frequently informative for your own computer vision task, even though the task may be quite different from the problem that the original model was trained on. > > > There are a few things I think I need to understand to grasp this, but I have had trouble finding appropriate information to understand them. 1. It's not clear to me what the "features" represent for "features extracted via Inception v3". I'm pretty new to the idea of feature extraction in itself, but it almost seems like a "feature" can be anything you define it to be. For example, in this [article](https://www.mygreatlearning.com/blog/feature-extraction-in-image-processing/), the features are simply the RGB values of each pixel. But for Inception v3, I'm having trouble finding what the features represent. 2. What does it mean to "extract the intermediate representation"? So if you have your own smaller dataset, does it mean it's getting the features from that dataset (such as RGB of each pixel)? Or does it mean that Inception v3 creates new images and then extracts features from them? What kind of rules are used to generate these new images?
Why would we use these new images for classification rather than only using your dataset?
2023/05/03
409
1,801
<issue_start>username_0: I'm given pdf documents containing complex information about a financial product and my task is to write a disclaimer: a two-page document with a very precise structure explaining this financial product, the risks associated with it, etc. - information that is present in the pdf but hard to understand for common people. Do you know if it would be possible for an AI to read this complex document and write the disclaimer? More precisely, could it be able to extract information from documents whose content and layout change and write a comprehensive summary in a defined setting?<issue_comment>username_1: I think it is a quite reasonable task. * To extract text from a PDF there are [many libraries](https://stackoverflow.com/questions/34837707/how-to-extract-text-from-a-pdf-file). * If you also have images or other plots, these could be transformed into an embedding using a detector followed by an encoder (or an alternative solution). I'd pass the text (it could also be plain text: the model should infer the structure from the word sequence contained in each section, if present and given enough text) to a text summarization model, which would output the summary. Alternatively, you could detect the text regions with a detector, parse the text contained in each region, encode it, and then pass it to the summarization model. There are many metrics to evaluate the performance, such as BLEU, METEOR, ROUGE ... Upvotes: 1 <issue_comment>username_2: In my opinion, I think Bing is best suited for this task, as you can open it in a sidebar so it will be able to read the pdf file. To add on, it is powered by GPT-4, which is basically the most powerful language model as of now, so this task is definitely achievable with the help of this tool. Upvotes: 0
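To illustrate the evaluation metrics mentioned in the first answer, here is a toy sketch of ROUGE-1 recall — the fraction of reference unigrams that also appear in the candidate summary. Real implementations (e.g. the `rouge-score` package) add stemming, stopword handling, and precision/F-scores; this is only the core idea:

```python
from collections import Counter

def rouge1_recall(reference: str, candidate: str) -> float:
    """Fraction of reference unigrams covered by the candidate (clipped counts)."""
    ref = Counter(reference.lower().split())
    cand = Counter(candidate.lower().split())
    overlap = sum(min(count, cand[word]) for word, count in ref.items())
    return overlap / sum(ref.values())

# Hypothetical disclaimer sentences, purely for illustration:
ref = "the product carries a high risk of capital loss"
cand = "this product has a high risk of capital loss"
print(round(rouge1_recall(ref, cand), 2))  # 0.78
```

A metric like this only measures word overlap with a reference disclaimer, so for regulated financial text it should complement, not replace, human review.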
2023/05/06
642
2,590
<issue_start>username_0: I read a number of tutorials on how to make an RNN (simple, LSTM, etc.) that predicts a sinus curve. They all use as an input (x) in every step a set of past sinus values. I am wondering if THEORETICALLY and if yes, practically as well, can an RNN be constructed that takes no input (or if technically needed one x = 1) and still predicts a sinus curve? A bit of details: Let's assume that the "inputs" (but as said the input is none) are coming on a time scale equidistantly and the expected output is a sinus curve. The timestep is significantly smaller than 2PI and the ratio to 2PI is non-rational (i.e. the same value is never repeated). E.g. assume that the timestep is 0.1 (so more than 62 points for one cycle). The feedback for backpropagation can be anything, like the sinus value that was expected, or the error square, or anything else. The number of cell states is also freely choosable. When I tried to construct such a model, instead of a predicted sinus curve, I got a moving average of past values. Extra version (if the above simple could work): Let there be one input parameter, that is the time passed since the last point. The expected prediction is again, the sinus value after that passed time. Here again, we can assume that the time steps (not equidistant in this case) are still significantly smaller than the cycle length of sinus.
2023/05/07
1,534
6,533
<issue_start>username_0: What is the difference between [self-supervised](https://ai.stackexchange.com/questions/10623/what-is-self-supervised-learning-in-machine-learning?noredirect=1&lq=1) and unsupervised learning? The terms logically overlap (and maybe self-supervised learning is a subset of unsupervised learning?), but I cannot pinpoint exactly what that difference is. What are the commonly agreed-upon 'definitions' of these terms? What is an example of unsupervised learning that is definitely not self-supervised learning?<issue_comment>username_1: In *self-supervised learning* (SSL) you use your own inputs $x$ (or a modification, e.g. a crop or with data augmentation applied) as the supervision. Instead, in *unsupervised learning* (UL) there is no supervision at all. To clarify, both SSL and UL have in common the fact that **the targets are missing**. UL has no explicit supervision, while SSL replaces the targets with the inputs $x$, recovering 'supervision'. * SSL is mostly used for *pre-training* and *representation learning*, so to bootstrap some model on a later downstream task. * UL is used, at least in classical ML, for *density estimation* and *clustering*. > > An important thing is not to confuse self-supervised with **semi-supervised** or **weakly-supervised**: the latter two (semi- and weak-) refer to the fact that in a dataset $D$ some examples $x$ are not labeled, but the $y$ exist. > > > So, you can see SSL at the intersection between supervised and unsupervised learning. Actually, things get even more blurred in modern unsupervised deep learning methods that tend to mix approaches from both SSL and UL, like an AE that also has a density estimation head, for example. Or even embeddings that are first learned by SSL and then fine-tuned for clustering in an unsupervised manner. An unusual example is maybe *unsupervised reinforcement learning*, in which you usually maximize an entropy objective (e.g.
on visited states) as a pre-training step to favor exploration. > > What is an example of unsupervised learning that is definitely not self-supervised learning? > > > Density estimation, dimensionality reduction (e.g. PCA, t-SNE), and clustering (K-means), at least seen from a classical ML perspective, are completely unsupervised: e.g. PCA tries just to preserve variance. Indeed, in DL things tend to blur: e.g. you can use a V/AE for dimensionality reduction too. Upvotes: 3 <issue_comment>username_2: Self-supervised learning is one approach to unsupervised learning. There are other approaches to unsupervised learning, too. In both cases, we have a dataset of instances with no labels, and we're trying to use them to learn a classifier. Unsupervised learning includes any method for learning from unlabelled samples. Self-supervised learning is one specific class of methods to learn from unlabelled samples. Typically, self-supervised learning identifies some secondary task where labels can be automatically obtained, and then trains the network to do well on the secondary task. One example of a secondary task is to predict whether an image has been flipped to be upside-down or not. Effectively, the assumption is that all (or most) natural photographs are taken by a photographer from an upright position, so we can automatically construct labels for all images in our dataset, as they're likely all right-side-up. (Moreover, we can create upside-down versions of all images in the dataset and automatically create labels of them: they're likely all upside-down.) The concept behind self-supervised learning is that, hopefully, the classifier will have to learn something non-trivial about image semantics to be able to solve the secondary task. Thus, such a classifier might be a good starting point for the primary task, too. Often, self-supervised learning is combined with supervised learning.
For instance, we might have a small set of labelled images (labelled for the primary task we ultimately care about) and a large set of unlabelled images, and the classifier is trained to minimize a hybrid loss, which is the sum of a supervised loss on the labelled images and a self-supervised loss on the unlabelled images. This can be considered an instance of semi-supervised learning. Or, another example setting is that sometimes people learn a classifier on a large unlabelled dataset using self-supervised learning, then they fine-tune that classifier on a small labelled dataset for the primary task. This can be considered a form of transfer learning. Empirically, this approach often leads to classifiers with better performance on the primary task than just using supervised learning on the labelled dataset. Or, it leads to classifiers that are more robust (to data corruption, distribution shift, etc.) than simply using supervised learning on the labelled dataset. There are many forms of self-supervised learning, and techniques depend on the specific problem domain. Upvotes: 1 <issue_comment>username_3: ### TL;DR While both methods learn from data without human-annotated labels, the primary difference lies in the way they use the data: * **Self-supervised** learning makes use of the structure within the data to generate its own labels. * **Unsupervised learning** seeks to uncover hidden patterns or structures within the data itself. --- ### Self-supervised ``` Input: "The quick brown fox jumps over the _____" Prediction: "lazy" Actual: "lazy" ``` In this case, the model is learning to predict the next word ("lazy") based on the input ("The quick brown fox jumps over the"). It's a self-supervised task because the label for training (the word "lazy") is part of the data itself. ### Unsupervised For unsupervised learning, let's consider an example with text clustering. 
Suppose we have multiple sentences, including multiple instances of our sentence, and we want to group similar sentences together: ``` 1. "The quick brown fox jumps over the lazy" 2. "A fast brown fox jumps over the lazy" 3. "The quick brown dog runs under the active" 4. "An energetic brown dog runs under the quick" 5. "The quick brown fox jumps over the lazy" 6. "A swift brown fox leaps over the lazy" Clusters after unsupervised learning: Cluster 1: 1, 2, 5, 6 Cluster 2: 3, 4 ``` In this case, the model is learning to group similar sentences together without any explicit labels. Note that this is a highly simplified example, and real-world text clustering tasks would involve much more complex datasets and models. Upvotes: 1
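To make the clustering example concrete, here is a minimal NumPy sketch of k-means — a classic purely unsupervised algorithm: the grouping emerges from the data alone, with no labels anywhere. The farthest-point initialization is a deterministic simplification of k-means++, used here so the toy example converges reliably:

```python
import numpy as np

def kmeans(X, k, iters=20):
    """Plain k-means: the grouping is found from the data itself, no labels used."""
    # farthest-point initialization: start from X[0], then repeatedly add the
    # point farthest from all chosen centers (avoids two centers in one blob)
    centers = [X[0]]
    for _ in range(k - 1):
        d = np.min(((X[:, None] - np.array(centers)) ** 2).sum(-1), axis=1)
        centers.append(X[np.argmax(d)])
    centers = np.array(centers)
    for _ in range(iters):
        # assign each point to its nearest center, then move centers to the mean
        labels = np.argmin(((X[:, None] - centers) ** 2).sum(-1), axis=1)
        centers = np.array([X[labels == c].mean(axis=0) for c in range(k)])
    return labels

# two well-separated blobs: k-means recovers the grouping without supervision
rng_a, rng_b = np.random.default_rng(1), np.random.default_rng(2)
X = np.vstack([rng_a.normal(0.0, 0.1, (10, 2)), rng_b.normal(5.0, 0.1, (10, 2))])
labels = kmeans(X, k=2)
print(labels)  # first 10 points share one label, last 10 the other
```

The sentence-clustering example above works the same way once the sentences are turned into vectors (bag-of-words, TF-IDF, or learned embeddings).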
2023/05/08
705
2,518
<issue_start>username_0: I've heard that to train a model like GPT 4.0 you need a very powerful computer and ~$10M of computing power, but once you've produced the trained ~570GB model, what sort of computing power is necessary to execute specific queries with it?<issue_comment>username_1: Executing specific queries, in the context of large language models, is referred to as [inference](https://en.wikipedia.org/wiki/Statistical_inference). The hardware that runs GPT-4 has not been disclosed. However, Meta's LLaMA can be run on consumer hardware. [llama.cpp can run the 7B model on an M1 Pro MacBook](https://github.com/ggerganov/llama.cpp) – a decent, but not top of the line, computer: > > As the models are currently fully loaded into memory, you will need adequate disk space to save them and sufficient RAM to load them. At the moment, memory and disk requirements are the same. > > > > ``` > Model Original size Quantized size (4-bit) > 7B 13 GB 3.9 GB > 13B 24 GB 7.8 GB > 30B 60 GB 19.5 GB > 65B 120 GB 38.5 GB > > ``` > > LLaMA-65B can be run on a CPU with 128GB of RAM, although this is unlikely to be efficient compared to renting data centre GPUs. GPT-4 is reported to be a much larger model than LLaMA-65B, with support for a 32K context window. Since [the amount of fast GPU memory required scales with the size of the model](https://lilianweng.github.io/posts/2023-01-10-inference-optimization/), and [inference cost scales quadratically with input sequence length](https://gwern.net/note/attention), GPT-4 inference cannot be performed on consumer hardware. Upvotes: 4 [selected_answer]<issue_comment>username_2: Likely one or multiple A100-based servers (i.e. servers with 8xA100 cards from Nvidia). Possibly the same for H100 cards, but you could likely reduce memory usage with quantization and get a lot more throughput from the same hardware, so it can serve more queries per unit time.
Memory usage would be more efficient with quantization, but otherwise quite similar to the A100 - I hear that GPT-3.5 runs on around 770 GB of VRAM. One such node is around 300,000 USD. 4-bit quantization should result in serious savings, possibly giving good performance on high-end servers, with - as research shows - possibly very limited impact (there is a loss of quality that is small and gets smaller on larger models). The real hardware is not disclosed - you have to wait for large open-source models to become available. Upvotes: 2
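The sizes quoted in the accepted answer follow from a simple back-of-the-envelope rule: weight memory ≈ parameter count × bytes per parameter (2 bytes for fp16, 0.5 bytes for 4-bit), plus overhead for activations and the KV cache. A sketch:

```python
def weight_memory_gb(n_params: float, bits_per_param: int) -> float:
    """Approximate memory to hold the weights alone (ignores KV cache/activations)."""
    return n_params * bits_per_param / 8 / 1e9

for n_billion in (7, 13, 30, 65):
    fp16 = weight_memory_gb(n_billion * 1e9, 16)
    q4 = weight_memory_gb(n_billion * 1e9, 4)
    print(f"{n_billion}B: ~{fp16:.0f} GB fp16, ~{q4:.1f} GB 4-bit")
```

The small differences from the table in the answer come from quantized file formats keeping some tensors (e.g. embeddings, norms) at higher precision.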
2023/05/11
1,044
4,188
<issue_start>username_0: In the [BYOL paper](https://arxiv.org/abs/2006.07733), the following architecture is presented: [![BYOL's architecture](https://i.stack.imgur.com/TGyke.png)](https://i.stack.imgur.com/TGyke.png) * Why is a prediction network added to the online network, which is not present in the target network? * How are the online and target compared if they do not follow the same architecture? * What 'prediction' is made by the prediction network (there are no labels?)? * What is the intuition behind this section of the architecture?<issue_comment>username_1: BYOL is a self-supervised image representation learning method. The aim is to learn a representation $y\_\theta$ of an image. > > Why is a prediction network added to the online network, which is not present in the target network? > > > The target network has the same architecture as the online net (except the *predictor*) but its weights $\xi$ are a moving average of the online net. I think this is to avoid the need to build negative pairs for training the representation, in contrast to prior approaches like [SimCLR](http://proceedings.mlr.press/v119/chen20j/chen20j.pdf). * Both projector and predictor share the same MLP architecture, which first expands and then compresses the embedding (representation) obtained by a Res-Net. * As stated in table 5b and in the ablation, BYOL *without* the predictor achieves lower performance. The authors also discovered that the predictor avoids *representation collapse*, and that a near-optimal predictor even removes the need for a target network. > > How are the online and target compared if they do not follow the same architecture? > > > Since both projector and predictor have the same MLP architecture, you can train them by the loss fn in Eq. 2 > > What 'prediction' is made by the prediction network (there are no labels?)? > > > The whole of BYOL works without labels; this is the point of SSL pre-training.
The (self-)supervision is provided by the two different views $t$ and $t'$ of the input image. The views are obtained by applying two different augmentation functions. > > What is the intuition behind this section of the architecture? > > > The BYOL training dynamics are compared to those of GAN models: see section 3.2. Since it does not explicitly use a term to prevent constant (i.e. collapsed) representations (such as negative pairs, for example), a target network is necessary. Furthermore, the predictor introduces further asymmetry, which helps learn better representations. A similar approach is also used in [SimSiam](https://arxiv.org/abs/2011.10566), where the predictor is only used in one branch of the network and, when paired with a projector, is thought to form a sort of *encoder-decoder architecture* (see figure 1): the encoder is the projector, and the predictor the decoder. Upvotes: 1 <issue_comment>username_2: I've found the answer to my own confusion in [A Cookbook of Self-Supervised Learning](https://arxiv.org/abs/2304.12210). > > "BYOL uses two networks along with a predictor **to map the outputs of one > network to the other**. The network predicting the output is called the online or student network while the network producing the target is called the target or teacher network. Each network receives a different view of the same image formed by image transformations including random resizing, cropping, color jittering, and brightness alterations. The student network is updated throughout training using gradient descent." > > > To answer all the questions I originally had: * Why is a prediction network added to the online network, which is not present in the target network? + To map the outputs of one network to the other. * How are the online and target compared if they do not follow the same architecture? + The prediction network maps the output of one network to the other. So they output in the 'same' space, making it possible to compare them.
* What 'prediction' is made by the prediction network (there are no labels?)? + It predicts the output of the other network. * What is the intuition behind this section of the architecture? + Read the above quote. Upvotes: 0
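The two mechanisms discussed in this thread — the asymmetric predictor trained against an EMA target — can be summarized in a few lines of NumPy. This is a conceptual sketch of the loss in Eq. 2 of the paper and of the target update, not the paper's code:

```python
import numpy as np

def byol_loss(p_online: np.ndarray, z_target: np.ndarray) -> float:
    """MSE between l2-normalized prediction and target projection.
    For unit vectors this equals 2 - 2 * cosine_similarity (Eq. 2 of BYOL)."""
    p = p_online / np.linalg.norm(p_online)
    z = z_target / np.linalg.norm(z_target)
    return float(((p - z) ** 2).sum())

def ema_update(target_w, online_w, tau=0.99):
    """Target weights are an exponential moving average of the online weights;
    gradients never flow through this branch."""
    return tau * target_w + (1 - tau) * online_w

p = np.array([1.0, 0.0])
z = np.array([1.0, 0.0])
print(byol_loss(p, z))   # 0.0: identical directions
print(byol_loss(p, -z))  # 4.0: opposite directions, the maximum of 2 - 2*cos
```

In training, `p` comes from the online network's predictor on one augmented view, `z` from the target network's projector on the other view, and the loss is symmetrized by swapping the views.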
2023/05/11
1,153
4,789
<issue_start>username_0: Since the context/memory of a chat or question for LLMs more precisely GPT is limited to a token length I struggle about how to provide own data that the model got not trained on. A very common approach looks like embeddings are the way to. OpenAI provided an article <https://github.com/openai/openai-cookbook/blob/main/examples/Question_answering_using_embeddings.ipynb> how to create an embedding of a user query, match it against a local vector database ans provide the closest results **as text** to the context/memory. Here I do struggle, since it might be very well possible that even we find the most matching documents locally in a vector database, context might still be too small if we would like to provide multiple matches. The question to me is, how could I send all the relevant embedding vectors rather than the relevant texts which got matched to the vectors? These vectors are highly condensed and would save a lot of tokens. GPT would anyhow be able to understand the vector since they created it from their embeddings API, right? Or is it just not possible to convert the vector back to text at their end?<issue_comment>username_1: BYOL is a self-supervised image representation learning method. The aim is to learn a representation $y\_\theta$ of an image. > > Why is a prediction network added to the online network, which is not present in the target network? > > > The target network has the same architecture of the online net (except the *predictor*) but its weights $\xi$ are a moving average of the online net. I think this is to avoid the need of building negative pairs for training the representation, conversely to prior approaches like [SimCLR](http://proceedings.mlr.press/v119/chen20j/chen20j.pdf). * Both projector and predictor share the same MLP architecture, that first expands and then compresses the embedding (representation) obtained by a Res-Net. 
* As stated in table 5b and in the ablation, BYOL *without* the predictor achieves less performance. The authors also discovered that the predictor avoids *representation collapse*, and that also a near-optimal predictor removes the need for a target network. > > How are the online and target compared if they do not follow the same architecture? > > > Since both projector and predictor have the same MLP architecture, you can train them by the loss fn in Eq. 2 > > What 'prediction' is made by the prediction network (there are no labels?)? > > > The whole BYOL is without labels, this is the point of SSL pre-training. The (self-)supervision is provided by the two different views $t$ and $t'$ of the input image. The views are obtained by applying two different augmentation functions. > > What is the intuition behind this section of the architecture? > > > The BYOL training dynamics is compared to the ones of GAN models: see section 3.2. Since it does not explicitly use a term to prevent constant (i.e. collapsed) representations (as negative pairs, for example) a target network is necessary. Furthermore, the predictor introduces further asymmetry which helps learn better representations. A similar approach is also used in [SimSiam](https://arxiv.org/abs/2011.10566), where the predictor is only used in one branch of the network and when paired with a projector is thought to be a sort of *encoder-decoder architecture* (see figure 1): the encoder is the projector, and the predictor the decoder. Upvotes: 1 <issue_comment>username_2: I've found the answer to my own confusion in [A Cookbook of Self-Supervised Learning](https://arxiv.org/abs/2304.12210). > > "BYOL uses two networks along with a predictor **to map the outputs of one > network to the other**. The network predicting the output is called the online or student network while the network producing the target is called the target or teacher network. 
Each network receives a different view of the same image formed by image transformations including random resizing, cropping, color jittering, and brightness alterations. The student network is updated throughout training using gradient descent." > > To answer all the questions I originally had: * Why is a prediction network added to the online network, which is not present in the target network? + To map the outputs of one network to the other. * How are the online and target compared if they do not follow the same architecture? + The prediction network maps the output of one network to the other, so they output in the 'same' space, making it possible to compare them. * What 'prediction' is made by the prediction network (there are no labels?)? + It predicts the output of the other network. * What is the intuition behind this section of the architecture? + Read the above quote. Upvotes: 0
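The loss in Eq. 2 referenced above is, in essence, a normalized MSE between the online prediction and the target projection, which is equivalent to $2 - 2\cos(\cdot, \cdot)$. A minimal NumPy sketch (variable names are mine, not from the paper's code):

```python
import numpy as np

def byol_loss(q, z_target):
    """Normalized MSE between online prediction q and target projection z.

    Equals 2 - 2 * cosine_similarity(q, z_target) for nonzero vectors."""
    q = q / np.linalg.norm(q)
    z = z_target / np.linalg.norm(z_target)
    return float(np.sum((q - z) ** 2))

# Identical directions give loss 0; orthogonal directions give loss 2.
v = np.array([1.0, 2.0, 3.0])
print(byol_loss(v, 2 * v))                                    # ~0.0
print(byol_loss(np.array([1.0, 0.0]), np.array([0.0, 5.0])))  # ~2.0
```

In the actual method the gradient of this loss only flows through the online branch; the target branch is updated as an exponential moving average.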
2023/05/18
940
4,133
<issue_start>username_0: In my own experience, weight initialization matters for model convergence. * Theoretically, can different weight initialization methods eventually converge to the same optimal solution? Are there weight initialization methods theoretically proven to be worse than others? * Is there research into what weight initialization methods work best across the board for different kinds of architectures?<issue_comment>username_1: Weight initialization can and often does matter, hence why pre-training language models are useful for downstream tasks. A randomly initialized model is not guaranteed to converge, especially if the model is training on some downstream task with limited data. Furthermore, there is no best-practice initialization for weights, but people tend to fall into two camps: initialize from zero, or from some random distribution. However, de-noising objectives can help improve initial weights for downstream tasks. Example of the benefits of pre-training: I pre-trained a model on 97M examples of molecules formatted in a particular molecular language called SMILES, and then trained the model on reaction prediction. Without pre-training, accuracy after just one epoch was .284%; with pre-training, it was 51%. This demonstrates that non-randomly initialized weights are preferable (in this case) to random ones. Upvotes: 1 <issue_comment>username_2: *Progress in how to best initialize the weights is part of what has made neural networks popular again.* Initially (around the 80s I think), NNs were initialized from Normal distributions like $\mathcal{N}(0, I)$, but that caused unstable training prone to divergence. Also, initializing the weights (excluding the biases) to be *constant* is something you would NEVER do, because weights with the same values are updated in exactly the same way: they would just be redundant, not helping the network to learn. Therefore, you want to pick random weights, but how? 
> > Are there weight initialization methods theoretically proven to be worse than others? > > > > > Is there research into what weight initialization methods work best across the board for different kinds of architectures? > > > Consider that the choice of which weight initialization scheme to use should be made according to the activation function in your model, and in some cases is also related to the network architecture. So, weight init and activation are *strictly* related. * This [paper](http://proceedings.mlr.press/v9/glorot10a/glorot10a.pdf) introduces what is now called the **Glorot** (or Xavier) **initialization**, also motivating why NNs initialized in the classical (thus wrong) way fail. In a few words, the Glorot initialization is designed to work well with saturating activations like the `sigmoid` and `tanh`, thus preventing them from saturating (which leads to vanishing gradients) early in training. * This more recent [paper](https://openaccess.thecvf.com/content_iccv_2015/papers/He_Delving_Deep_into_ICCV_2015_paper.pdf), instead, introduces the **He initialization** strategy, which is designed to work with *rectified activations* (ReLU, leaky ReLU, etc.), thus reducing the chances of dead units. Indeed, this weight init is also demonstrated on the ResNet architectures. This does not necessarily mean that e.g. using Glorot with ReLU is always bad. But these two papers provide an analysis of the variance of the weights under different initialization strategies, which are designed to keep it under control. A note about *pre-training* (also related to transfer learning): strictly speaking, this is not a weight initialization strategy. Indeed, you can pick a popular architecture and download the weights pre-trained on ImageNet (for example); that would give a nice initial point, useful to speed up convergence and downstream tasks, and helpful even with little training data. 
I consider pre-training to be a second step, because the first time you have to pre-train the model yourself, and so starting from a proper weight initialization is beneficial for pre-training as well. Upvotes: 3 [selected_answer]
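The two schemes discussed above can be sketched as follows (normal-distribution variants; the function names are mine):

```python
import numpy as np

rng = np.random.default_rng(0)

def glorot_normal(fan_in, fan_out):
    # Glorot/Xavier: Var(W) = 2 / (fan_in + fan_out), aimed at tanh/sigmoid nets
    std = np.sqrt(2.0 / (fan_in + fan_out))
    return rng.normal(0.0, std, size=(fan_in, fan_out))

def he_normal(fan_in, fan_out):
    # He: Var(W) = 2 / fan_in, designed for ReLU-family activations
    std = np.sqrt(2.0 / fan_in)
    return rng.normal(0.0, std, size=(fan_in, fan_out))

W = he_normal(512, 256)
print(W.std())  # empirically close to sqrt(2/512) ~ 0.0625
```

In practice, frameworks ship these as built-ins (e.g. `GlorotNormal`/`HeNormal` initializers in Keras, `xavier_normal_`/`kaiming_normal_` in PyTorch), so you rarely implement them by hand.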
2023/05/22
1,104
4,259
<issue_start>username_0: In [here](https://medium.com/intro-to-artificial-intelligence/relationship-between-state-v-and-action-q-value-function-in-reinforcement-learning-bb9a988c0127) it says that (is it correct?): $$V^\pi(s) = \sum\_{a \in A}\pi(a|s)\*Q^\pi(s,a)$$ And we have: $$ V^\*(s) = \max\_\pi V^\pi(s)$$ Also: $$ V^\*(s) = \max\_a Q^\*(s, a) $$ Can someone demonstrate to me, step by step, how we get from $ V^\*(s) = \max\_\pi V^\pi(s)$ to $ V^\*(s) = \max\_a Q^\*(s, a) $? And is it correct that $ V^\pi(s) = \max\_{a \in A} Q^\pi(s,a) $?
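For the record, the standard derivation (assuming a finite MDP, where at least one optimal policy is greedy with respect to $Q^\*$) goes roughly like this:

```latex
\begin{aligned}
V^*(s) &= \max_\pi V^\pi(s)
        = \max_\pi \sum_{a \in A} \pi(a \mid s)\, Q^\pi(s, a) \\
       &\le \max_\pi \max_{a \in A} Q^\pi(s, a)
        = \max_{a \in A} \max_\pi Q^\pi(s, a)
        = \max_{a \in A} Q^*(s, a),
\end{aligned}
```

where the inequality holds because a convex combination never exceeds the maximum, and it is attained with equality by the greedy policy $\pi^\*(s) = \arg\max\_a Q^\*(s,a)$. Note that the last identity asked about, $V^\pi(s) = \max\_{a \in A} Q^\pi(s,a)$, does *not* hold for an arbitrary policy $\pi$: in general $V^\pi(s) = \sum\_a \pi(a|s) Q^\pi(s,a)$, and the max form holds only when $\pi$ is greedy with respect to its own $Q^\pi$.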
2023/05/24
2,342
9,551
<issue_start>username_0: [IBM Watson](https://en.wikipedia.org/wiki/IBM_Watson)'s success in playing ["Jeopardy!"](https://en.wikipedia.org/wiki/IBM_Watson#Jeopardy!) was a landmark in the history of artificial intelligence. In the seemingly simpler game of ["Twenty questions"](https://en.wikipedia.org/wiki/Twenty_questions) where player B has to guess a word that player A thinks of by asking questions to be answered by "Yes/No/Hm" ChatGPT fails epically - at least in my personal opinion. I thought first of Chartres cathedral and it took ChatGPT 41 questions to get it (with some additional help), and then of Kant's Critique of Pure Reason where after question #30 I had to explicitly tell ChatGPT that it's a book. Then it took ten further questions. (Chat protocols can be provided. It may be seen that ChatGPT follows no or bad question policies or heuristics humans intuitively would use.) My questions are: 1. Is there an intuitive understanding why ChatGPT plays "20 questions" so bad? 2. And why do even average humans play it so much better? 3. Might it be a future [emergent ability](https://openreview.net/pdf?id=yzkSU5zdwD#:%7E) which may possibly arise in ever larger LLMs? I found two interesting papers on the topic 1. [LLM self-play on 20 Questions](https://evanthebouncy.medium.com/llm-self-play-on-20-questions-dee7a8c63377) 2. [Chatbots As Problem Solvers: Playing Twenty Questions With Role Reversals](https://arxiv.org/ftp/arxiv/papers/2301/2301.01743.pdf) The first one answers some of my questions partially, e.g. that "gpt-3.5-turbo has a score of 68/1823 playing 20 questions with itself" which sounds pretty low.<issue_comment>username_1: Like any other question on why ChatGPT can't do something, the simple/superficial answer is that ChatGPT is just a language model fine-tuned with RL to be verbose and nice (or to answer like the human tuners suggested), so they just predict the most likely next token. They do not perform logical reasoning like us in general. 
If they appear to do it in certain cases, it's because that's the most likely thing to predict given the training data. The more detailed answer may require some months/years/decades of research that attempt to understand neural networks and how we can control them and align them to our needs. Model explainability has been around for quite some time. ChatGPT is really just an example of how much intelligence or stupidity you can simulate by brute-force training. Still, it's impressive at summarizing or generating text in many cases that are open-ended, i.e. there aren't (many) constraints. Again, this can be explained by the fact that what it generates is the most likely thing given what you pass to it. Example: If you say "Always look on the bright side of...", it will probably answer with "life". Why? Because the web or the training data is full of data that has the sentence "Always look on the bright side of life". I don't exclude it's possible to train a model to perform logical reasoning correctly in general in this way, but so far it hasn't really worked. ChatGPT can really be stupid and informationally harmful. People are assuming that there's only 1 function that computes "intelligence". Nevertheless, I think the combination of some form of pre-training with some form of continual RL will probably play a crucial role to achieve "true machine intelligence", i.e. reason/act like a human, assuming it's possible to do this. (I've been working with ChatGPT for a few months). Upvotes: 6 <issue_comment>username_2: Because ChatGPT is not an [artificial](https://en.wikipedia.org/wiki/Artificial_intelligence) or [synthetic](https://en.wikipedia.org/wiki/Synthetic_intelligence) intelligence, it's a [large language model](https://en.wikipedia.org/wiki/Large_language_model) that possesses no intelligence in and of itself. 
It's able to *simulate* the appearance of intelligence by tracking correlations between large numbers of objects, but it completely lacks **understanding** of what these correlations mean. Without understanding you cannot have reasoning, and without reasoning you cannot have intelligence. Essentially ChatGPT, like all of the LLMs currently being hyped to death, is no more sophisticated than the chatbots we had in the 90s. Today's chatbots just happen to use much larger datasets, which allows them to more accurately simulate intelligence, but as you've already demonstrated it's child's play to shatter the illusion with any sort of questioning that requires a modicum of logical acuity. Upvotes: 3 <issue_comment>username_3: It Wasn't Trained To ==================== A learning system performs best on the task for which it is given explicit feedback. That is the only time the parameters are updated and they are updated explicitly to maximize performance on that task. At no time did OpenAI, Google, or any other purveyor of LLMs admit to training their models on 20 Questions. The fact that it can play such games *at all* is a nice but *unintended* side effect of the model pre-training. A human who is good at the game understands that optimal play involves bisecting the space of likely answers with each question. Without this insight, it is difficult to formulate an effective strategy that doesn't devolve to linear search. It's literally an exponential speedup. Humans who don't have this insight are also particularly bad at the game, and are likely to never reach your actual goal. So in some respects, we hold LLMs to an unreasonably high standard. You Can Train It ================ On the other hand, one of the remarkable emergent behaviors is "in-context learning", meaning, you can teach the LLM something without updating any weights. 
Simply by describing something new, you can make it follow rules within a single "conversation" (the entire set of prompts and responses constitutes the "context"). For instance, you can teach it that a "snorglepof" is a sentence with an odd number of words that make reference to a gnome. Then you can ask it whether various sentences are a snorglepof or not, as well as ask it to produce sentences which are or are not snorglepofs (make up your own unique term/rules). The fact that it is able to do this at all suggests to me that it has *some kind of intelligence*. An interesting task for you is to see if you can *make it better at 20 Questions*. The free ChatGPT runs on GPT 3.5 and has a context of 2048 tokens, which is a bit more or less than 1000 words (for both you and ChatGPT). If you explain the optimal strategy to it first, you might find that its performance improves relative to the naive play. For instance, you should start a new chat with something like this: > > The optimal strategy for the game 20 Questions is divide and conquer. Each question should divide the space of possible answers in half. Questions which limit the size, material, and liveness of the target are typically effective. Now, let's play a game. I have thought of an object. > > > Even with this short prompt, I suspect that you will get better results. You can simply replay your former tests, using the exact same responses (where appropriate). If you give it example questions, it should also improve its play. Analysis ======== While GPT and other LLMs appear to be super-human in their ability to manipulate language, one of their weakest areas appears to be reasoning. This is not surprising. Reasoning often requires search, which requires a potentially large amount of working memory. Unfortunately, LLMs have very little working memory (which might seem like a fantastical claim given that they consume upwards of 800 GB of RAM). 
The main problem is that they are almost all feed-forward architectures. Data gets a single pass through the system, and then they have to produce an answer with whatever they have. GPT-3 has 96 transformer layers, which allows it to "unroll" a significant number of search steps that might be performed in a loop in a traditional algorithm. Even so, 96 loop iterations is pathetically small compared to something like AlphaZero, which can evaluate upwards of 80,000 board positions per second. I think it is safe to say that no amount of training will make GPT-3 competitive with AlphaZero in any game that it can play. In general, GPT-3 does poorly when it has to process something that requires a large number of operations (like adding up a long list of numbers). It is almost certainly because of this architectural choice. Interestingly, language models prior to transformer architectures *did* use recurrence, which would theoretically give such models the open-ended performance horizon of systems like AlphaZero. However, they were mostly abandoned because researchers wanted the system to respond in a deterministic time, and recurrence limits the amount of parallelism which can be achieved. Perhaps future models will incorporate recurrence and get us closer to AGI. Some systems like AutoGPT attempt to add the recurrence externally to GPT, by putting it in a loop and feeding the output back into it, but they have met with quite limited (IMO, disappointing) success. Upvotes: 4 <issue_comment>username_4: ChatGPT and the rest of LLMs do not have an understanding of any world concept, entity nor the relationship between them. As mentioned they use brute-force training to produce text. Any ever larger LLMs following the same design (brute-force training to produce text) will show the same problems, issues... due to their lack of knowledge of the world. Upvotes: 0
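The bisection argument above can be made concrete: with perfect halving, $\lceil \log\_2 N \rceil$ yes/no questions suffice to isolate one item among $N$ candidates, so 20 questions cover roughly a million possibilities. A toy sketch (assuming, unrealistically, a sorted candidate space and exact yes/no answers):

```python
def play_twenty_questions(candidates, secret):
    """Find `secret` in a sorted list by asking 'is it <= X?' yes/no questions."""
    lo, hi = 0, len(candidates) - 1
    questions = 0
    while lo < hi:
        mid = (lo + hi) // 2
        questions += 1
        if secret <= candidates[mid]:  # each question halves the remaining space
            hi = mid
        else:
            lo = mid + 1
    return candidates[lo], questions

items = list(range(2 ** 20))           # ~1M candidate "objects"
found, asked = play_twenty_questions(items, 777_777)
print(found, asked)                    # finds 777777 in at most 20 questions
```

Real play is harder because real questions rarely split the candidate space evenly, but this is the exponential speedup the answer refers to: linear search would need up to a million questions.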
2023/05/24
1,055
4,553
<issue_start>username_0: From my understanding a leaky ReLU attempts to address issues of vanishing gradients and nonzero-centeredness by keeping neurons that fire with a negative value alive. With just this info to go off of, it would seem that the leaky ReLU is just an overall improvement to the standard ReLU, yet ReLU still seems to be the gold standard of activation functions. Why is that? Does the additional sparsity outweigh the value gained from negative local gradients? If so, is tanh just too sparse?<issue_comment>username_1: Leaky ReLU is indeed an improvement over the standard ReLU activation function, but comes with some of the following limitations: * It may suffer from the "dying ReLU" problem, where a large fraction of units can become inactive and never recover. This can happen when the input to a unit is always negative, causing the slope of the activation function to be zero. * The parameter that controls the slope of the function can be difficult to tune, which can lead to suboptimal performance. * Leaky ReLU can also be less computationally efficient because it requires additional computation to calculate the slope of the function for non-positive inputs. This extra computation can slow down the training process, especially when dealing with large datasets. On the other hand, ReLU is a popular activation function due to its simplicity and effectiveness in many deep learning applications which is maybe why it's considered the "gold standard". It is also relatively computationally efficient and helps to speed up training. Regarding the sparsity versus negative local gradients trade-off, this can depend on the specific problem being solved and the architecture being used. Tanh can be more prone to vanishing gradients, but it can also provide smoother gradients and be more robust to noisy inputs. Ultimately, the choice of activation function should be based on empirical testing and optimization for the specific task at hand. 
Upvotes: 1 <issue_comment>username_2: Your understanding of Leaky ReLU is correct, and, yes, it has been proposed to mitigate the *dying neurons issue* in ReLU: when activations are negative, they get zeroed. Regarding the answer of @Regresslt: * I would not say that leaky ReLU slows down training due to its computation overhead, since it's quite simple to implement. Instead, ELU (exponential linear unit) is quite expensive, and that can increase training time. * Also, it's true that the *slope* parameter (usually called $\alpha$), if mistuned, can lead to worse performance, but one can try PReLU, which learns it. > > Does the additional sparsity outweigh the value gained from negative local gradients? > > > That's an interesting question. I mean, the dying ReLU phenomenon is usually observed on a fraction of the units, not the totality. In my personal experience I've noticed (dense) layers with ReLU to be sparse as much as $50$ to $90\%$. In general this causes an under-utilization of the capacity of the model, which may lead to under-fitting. But from another perspective it can be seen as *implicit* $l\_1$-regularization applied on each layer with ReLU, optimized along with the model, which may lead to better generalization performance: simply because the model learned to achieve the right capacity by itself. Also, if such large sparsity is achieved, the use of Dropout on top can be totally non-beneficial, because it would also drop the remaining "active" units/neurons. > > Is tanh just too sparse? > > > First, $\tanh$ is not sparse, since the output values approach either $-1$ or $1$ when *saturation* is reached, which is its main issue, shared with the *sigmoid* (which instead can lead to sparsity). When tanh or sigmoid saturate you get vanishing gradients, and the training stops. 
Instead, the sparsity induced by ReLU does not reduce the gradient, since it will only be multiplied by some zero elements (corresponding to the dead units): the gradient magnitude depends on the overall output of the layer. In practice (though I'd say it's hard to happen), only if *all* units of *all* layers are dead will your network stop learning, and the gradient would vanish because each layer would only output zeroes. To conclude, ReLU is usually the first activation to try - it seems to always work sufficiently well - but remember also to pair the activation fn with the right choice of *weight initialization*, in order to prevent issues (dying units, saturation, etc.), speed up convergence, and even increase performance. Upvotes: 3 [selected_answer]
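The sparsity figures mentioned above are easy to check empirically: on standard-normal pre-activations, plain ReLU zeroes about half the units, while Leaky ReLU leaves negative inputs small but nonzero. A quick NumPy check (illustrative numbers only):

```python
import numpy as np

rng = np.random.default_rng(42)
x = rng.standard_normal(100_000)      # fake pre-activations

relu = np.maximum(0.0, x)
leaky = np.where(x > 0, x, 0.01 * x)  # alpha = 0.01

print((relu == 0).mean())   # ~0.5: about half the outputs are zeroed
print((leaky == 0).mean())  # 0.0: negative inputs keep a small gradient path
```

In a trained network the pre-activation distribution is not standard normal, which is how sparsity can drift toward the 50-90% range the answer mentions.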
2023/05/26
1,193
4,888
<issue_start>username_0: I have had a look at LLaMA's model card, specifically the 7B parameter version: <https://github.com/facebookresearch/llama/blob/main/MODEL_CARD.md>, which I assume is an encoder-only transformer similar to this: ![](https://i.stack.imgur.com/Kb8Gq.png) But then I did some math. If the dimension of every Dense layer, including the one connecting to the Attention layer, is 4096, the context length is 2048, the number of attention heads is 32 and the embedding size is 786, then the output size of the attention layer is 32 \* 786 \* 2048, and as such the number of weights to connect it to the dense layer is 32 \* 766 \* 2048 \* 4096, which is 205B parameters, which is obviously far more than 7B. So how is this accomplished? How big is the output of the attention layer, and how is it connected to the following Dense layers?
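A likely source of the confusion: in standard multi-head attention the per-head outputs (each of dimension $d\_{model}/h$) are concatenated back to $d\_{model}$ before the output projection, so no weight matrix scales with the sequence length or with $h \times d\_{model}$. A back-of-the-envelope count, assuming the commonly reported LLaMA-7B hyperparameters (these numbers are my assumption, not taken from the model card excerpt above):

```python
d_model, n_layers, d_ffn, vocab = 4096, 32, 11008, 32_000

attn = 4 * d_model * d_model   # W_q, W_k, W_v, W_o: each d_model x d_model
ffn = 3 * d_model * d_ffn      # LLaMA's SwiGLU FFN uses three projections
per_layer = attn + ffn
embeddings = 2 * vocab * d_model  # input embedding table + output head

total = n_layers * per_layer + embeddings
print(f"{total / 1e9:.2f}B")   # roughly 6.7B parameters
```

Note that the sequence length (2048) never appears: attention weights are shared across positions, so context length affects activation memory and compute, not the parameter count.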
2023/05/26
1,180
5,132
<issue_start>username_0: I am using openai's `text-embedding-ada-002` embeddings model to do a semantic search on a database of articles to find articles that are most related to a given input text. I am looking for a way to define a minimum similarity score to prevent returning articles that aren't actually related enough. There is two difficulties that I have: 1. For some search queries a certain similarity score seems appropriate as a minimum treshold value, but then for others that minimum value seems to be too strict. For instance I find that for very well defined specific topics you generally want a higher treshold similarity score than for more broad or generic texts. That's my intuition so far at least. 2. The scores of the openai embedding model almost always fall between 0.77 and 1 instead of using the entire range of -1 to 1 and in reality the scores in normal cases all fall around 0.88. Having all scores so close to eachother makes it harder to pinpoint a good treshold value. Are any known methods for determining a good treshold value for cosine similarity scores?<issue_comment>username_1: Leaky ReLU is indeed an improvement over the standard ReLU activation function, but comes with some of the following limitations: * It may suffer from the "dying ReLU" problem, where a large fraction of units can become inactive and never recover. This can happen when the input to a unit is always negative, causing the slope of the activation function to be zero. * The parameter that controls the slope of the function can be difficult to tune, which can lead to suboptimal performance. * Leaky ReLU can also be less computationally efficient because it requires additional computation to calculate the slope of the function for non-positive inputs. This extra computation can slow down the training process, especially when dealing with large datasets. 
On the other hand, ReLU is a popular activation function due to its simplicity and effectiveness in many deep learning applications which is maybe why it's considered the "gold standard". It is also relatively computationally efficient and helps to speed up training. Regarding the sparsity versus negative local gradients trade-off, this can depend on the specific problem being solved and the architecture being used. Tanh can be more prone to vanishing gradients, but it can also provide smoother gradients and be more robust to noisy inputs. Ultimately, the choice of activation function should be based on empirical testing and optimization for the specific task at hand. Upvotes: 1 <issue_comment>username_2: Your understanding or Leaky ReLU is correct, and, yes, it has been proposed to mitigate the *dying neurons issue* in ReLU: when these are negative, they got zeroed. Regarding the answer of @Regresslt: * I would not say that leaky relu slows done training due to its computation overhead, since it's quite simple to implement. Instead, ELU (exponential linear unit) is quite expensive and that can increase training time. * Also it's true that the *slope* parameter (usually called $\alpha$), if mistuned can lead to worst performance, but one can try the PReLU that learns it. > > Does the additional sparsity outweigh the value gained from negative local gradients? > > > That's an interesting. I mean the dying ReLU phenomenon is usually observed on a fraction of the units, not the totality. In my personal experience I've noticed (dense) layers with ReLU to be sparse as much as $50$ to $90\%$. In general this causes an under utilization of the capacity of the model, which may lead to under-fitting. But from another perspective can be seen as *implicit* $l\_1$-regularization applied on each layer with ReLU, that is optimized along the model which may lead to better generalization performance: simply because the model learned to achieve the right capacity by itself. 
Also, if such large sparsity is achieved, the use of Dropout on top can be totally non-beneficial, because it would also drop the remaining "active" units/neurons. > > Is tanh just too sparse? > > > First, $\tanh$ is not sparse, since the output values can be either $-1$ or $1$ when *saturation* is reached, which is its main issue, one it shares with *sigmoid* (which instead can lead to sparsity). When tanh or sigmoid saturate you get vanishing gradients, and the training stops. Instead, the sparsity induced by ReLU does not reduce the gradient, since it will only be multiplied by some zero elements (corresponding to the dead units): the gradient magnitude depends on the overall output of the layer. In practice (though I'd say it's hard to happen), only if *all* units of *all* layers are dead will your network stop learning, and the gradient would vanish because each layer would only output zeros. To conclude, ReLU is usually the first activation to try - it seems to always work sufficiently well - but remember also to pair the activation function with the right choice of *weight initialization*, in order to prevent issues (dying units, saturation, etc.), speed up convergence, and even increase performance. Upvotes: 3 [selected_answer]
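The sparsity figures discussed above are easy to observe on synthetic data. The following sketch compares the fraction of exact zeros produced by ReLU and Leaky ReLU on zero-mean pre-activations (the 1% leaky slope is an assumed, typical value, not something from the thread):

```python
import numpy as np

rng = np.random.default_rng(0)
pre = rng.normal(size=10_000)                    # zero-mean pre-activations

relu_out = np.maximum(pre, 0.0)                  # ReLU zeroes every negative input
leaky_out = np.where(pre > 0, pre, 0.01 * pre)   # Leaky ReLU keeps a small slope

relu_sparsity = np.mean(relu_out == 0.0)         # roughly half the units are "dead" here
leaky_sparsity = np.mean(leaky_out == 0.0)       # essentially no exact zeros
print(relu_sparsity, leaky_sparsity)
```

On zero-mean inputs roughly half the ReLU outputs are exact zeros, while Leaky ReLU leaves a (small) gradient path through every unit - precisely the sparsity-versus-gradient trade-off the answers debate.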
2023/05/27
1,027
4,522
<issue_start>username_0: Understanding the concept of "Gradient Flow" can be quite difficult as there is a lack of widely recognized and clearly defined resources that provide a comprehensive explanation. Although you can find insights from machine learning experts and references to papers that mention gradient flow, there isn't a single, definitive source that thoroughly covers the topic. Could you please recommend a resource that offers a detailed overview of the gradient flow?
2023/05/30
515
1,761
<issue_start>username_0: In slide [27/66](https://www.davidsilver.uk/wp-content/uploads/2020/03/deep_rl_tutorial_small_compressed.pdf), it states that "**This is the maximum value achievable under any policy**". My understanding of Value-based RL is that the optimal Q-table/function will be learnt. Afterwards, the implicit optimal policy can be "derived" by greedily picking the actions based on this learnt Q-value. Does the implicit policy not matter here?<issue_comment>username_1: Slide 22 in your link says: > > * Once we have $Q^\*$ we can act optimally: > $$\pi^\*(s) = \arg \max\_a Q^\*(s,a)$$ > > > That is the optimal policy, which is greedy wrt $Q^\*$. Upvotes: 1 <issue_comment>username_2: > > This is the maximum value achievable under any policy > > > The statement says that $Q^\*(s,a)$ is the maximum possible state-action value that is achievable (at most) by any policy, $\pi$: including the random policy, and the optimal policy. It is also explained in slide 23: * "An optimal value function is the maximum achievable value" * "Once you have $Q^\*$ we can act optimally" * "Optimal value maximizes over all decisions" Indeed, the optimal policy $\pi^\*$ would always achieve $Q^\*$. It does not mean that the implicit policy (the one that can be derived from it: $\pi^\*(s)=\arg\max\_aQ^\*(s,a)$) is not important, nor that it cannot be computed from $Q^\*$. You have to think about this in terms of *policy evaluation* (or *prediction*, in classical RL terminology): you have the optimal action-value table, that you can use to compute the value (or return) that some policy achieves, as well as to derive the optimal policy by taking the action that maximizes $Q^\*$ for a given state. Upvotes: 1 [selected_answer]
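The relationship between $Q^\*$, the optimal values, and the implicit greedy policy can be made concrete with a toy Q-table (the values below are made up for illustration):

```python
import numpy as np

# Hypothetical optimal Q-table: 3 states x 2 actions.
Q_star = np.array([[1.0, 2.0],
                   [0.5, 0.2],
                   [0.0, 3.0]])

# The implicit optimal policy is greedy w.r.t. Q*: pi*(s) = argmax_a Q*(s, a)
pi_star = np.argmax(Q_star, axis=1)

# The optimal state values are the per-state maxima: V*(s) = max_a Q*(s, a)
V_star = np.max(Q_star, axis=1)
print(pi_star, V_star)
```

Deriving the policy is just this one `argmax` per state, which is why the slides treat $Q^\*$ itself as the object to learn.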
2023/05/30
431
1,502
<issue_start>username_0: In the context of machine learning, if I freeze pre-trained model weights (for example, BERT) and then train a classifier on top of its embeddings, is that called fine-tuning?
2023/06/05
530
1,973
<issue_start>username_0: I need to generate some job descriptions. Right now I'm trying to use ChatGPT to generate them; this is what the prompt looks like: 

```
please generate a job descriptions of java develop engineering.
the output only contains description, no other info.
each item seperate by *.
this is an example:
* maintaining Java-based applications, contributing to the design, coding, testing, and deployment of software solutions
* excellent problem-solving skills, attention to detail, and the ability to work independently or as part of a team
```

This prompt generates the result; is it possible to remove the intro and summary from the result? I just want the JD items.
2023/06/07
1,518
5,829
<issue_start>username_0: On a high level, [temperature and randomness affect the output of a generative language model](https://www.linkedin.com/pulse/temperature-check-guide-best-chatgpt-feature-youre-using-berkowitz/): 1. Lower temperature: Produces more focused, conservative, and consistent responses. 2. Moderate temperature: Strikes a balance between creativity and consistency. This setting can be useful for general content generation, where a blend of accuracy and inventiveness is desired. 3. Higher temperature: Generates more creative, diverse, and unexpected outputs. What I'm not sure of is **where exactly** randomness (controlled by the temperature) comes into play. I believe I have understood that it's only after a transformer has done its deterministic work, suggesting some probable next words. Can you confirm that the transformer works strictly deterministically and there is no randomness inside or between the attention layers?<issue_comment>username_1: > > Can you confirm that the transformer works strictly deterministically and there is no randomness inside or between the attention layers? > > > Of course, there is *no injected randomness* in a regular Transformer model: you have just an encoder-decoder architecture, with positional encoding, multiple multi-head attention blocks and layer normalization. The randomness you describe is typical of the GPT model family, which is generative, being trained to predict the next word token. Basically, you have a distribution over next tokens and you can control the amount of randomness in the sampling by dividing the logits by a temperature $\tau$: in the limit, if $\tau\to\infty$ the predicted distribution becomes uniform, while if $\tau\to 0$ it exactly resembles a one-hot encoding (so a $1$ in a given place, and all zeros elsewhere.) > > What I'm not sure of is where exactly randomness comes into play. 
> > > For example, at the end of the Transformer you have a dense layer which outputs $K$ logits: unnormalized probabilities over $K$ tokens. You can define a Categorical distribution from the logits $\alpha$ (well, you can also define it from normalized probabilities by applying a softmax on top) from which you can sample the "index" of the next token. To control the sampling you introduce the $\tau$ as follows: $\text{Categorical}(\alpha / \tau)$. To generate new text you sample from that: say you start from a context $C$, you forward the model to get the logits $\alpha$ and sample a new token $t$, then update the context $C=C\cup \{t\}$ (so that the model also considers the novel predicted word), and repeat the process again. Upvotes: 3 <issue_comment>username_2: That's right, the Attention Layer output is totally deterministic. The **temperature** parameter is related to generative tasks (note that this is not the only thing you can do with Attention and the transformer architecture), and it controls how, given the logits (a vector of the same length as the vocabulary $V$), the model **samples** one token among all the available tokens. Suppose, for simplicity, that we are doing **Causal generation** and we have a vector of logits $u$ of length $V$. If you do not sample and just take the most probable token at each time step (i.e. the $\arg\max$ of the logits), you always end up with the same generation, given a fixed input. Otherwise, we can model the distribution of probability over all tokens with a *softmax* and sample one token from the distribution at each step $t$. $$\mathbb{P}(x\_t = v\_l \mid x\_{1:t-1}) = \frac{\exp(u\_l)}{\sum\_{l'}\exp(u\_{l'})} \qquad \forall\, l \in V$$ where $v\_l$ denotes the $l$-th token of the vocabulary. The *temperature* modifies this distribution by warping it, dividing the terms inside the exponential by a factor $t$. 
$$\mathbb{P}(x\_t = v\_l \mid x\_{1:t-1}) = \frac{\exp(u\_l/t)}{\sum\_{l'}\exp(u\_{l'}/t)} \qquad \forall\, l \in V$$ For $t \rightarrow 0$, the distribution gets *sharper*, as it tends to an indicator function over the maximum token (the $\arg \max$). For $t = 1$ it is equivalent to the *softmax* formula above; with values greater than 1 it flattens the distribution, i.e. the distribution tends to a uniform distribution as $t \rightarrow \infty$. References: <https://huggingface.co/blog/how-to-generate> Upvotes: 2 <issue_comment>username_3: The other answers might be correct for the mathematical abstraction of a GPT-like model, but are false for the actual real-world models. On-device forward passes through any big quantized neural network, including state-of-the-art OpenAI models in 2023, [are not deterministic](https://web.archive.org/web/20230328011953/https://community.openai.com/t/a-question-on-determinism/8185/2). The difference is not large, but in `gpt-4-0314` with temperature `0`, you can get a different token sampled every 100 tokens or so. Edit (August 2023): What I wrote above is partly correct. The hardware issues explain some of the randomness; however, OpenAI's Chat Completions API models have more variance than the old Completions API models, which is some evidence that randomness in GPT-4 is not explained by hardware issues only. The best guess I've seen is that [mixture-of-experts routers operate on batches of queries, and are deterministic (up to hardware issues) *on the batch level*, not on single queries](https://152334h.github.io/blog/non-determinism-in-gpt-4/). There is no information leakage between inputs from different users, but the expert choice in each layer depends on what is in the batch, influencing the output. 
Of course, this is speculation, and although the MoE theory seems plausible, it could turn out that GPT-4 inference requires a GPU setup which makes inference less deterministic than on classical dense models. Upvotes: 2
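The temperature-scaled categorical sampling described in the answers above can be sketched in a few lines (the logit values are made up for illustration):

```python
import numpy as np

def softmax_with_temperature(logits, tau):
    """Warp the next-token distribution by dividing the logits by the temperature tau."""
    scaled = logits / tau
    exp = np.exp(scaled - scaled.max())   # subtract the max for numerical stability
    return exp / exp.sum()

logits = np.array([2.0, 1.0, 0.1])                  # hypothetical scores over 3 tokens
p_sharp = softmax_with_temperature(logits, 0.1)     # tau -> 0: close to one-hot (argmax)
p_plain = softmax_with_temperature(logits, 1.0)     # tau = 1: the plain softmax
p_flat = softmax_with_temperature(logits, 100.0)    # tau -> inf: close to uniform

rng = np.random.default_rng(0)
next_token = rng.choice(len(logits), p=p_plain)     # sample the next token index
print(p_sharp.round(3), p_flat.round(3), next_token)
```

All the randomness lives in the final `rng.choice` draw; everything up to the logits is a deterministic forward pass (modulo the hardware/batching caveats discussed in the last answer).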
2023/06/10
837
3,400
<issue_start>username_0: I am trying to do multi-spectral image fusion. I am using the following paper as a reference. <https://arxiv.org/pdf/1804.08361.pdf> The code available on GitHub works well, but I am trying to add some of my own code. I am trying to add an additional loss to the total loss calculation. The current loss function is the sum of two losses; I am trying to introduce a third loss. I am not sure how to assign a weight to this additional loss. I have read many papers like this one and no one has really gone into depth on how they came up with the weights assigned to the individual losses in the case of a multi-loss function. Is there any logical way to solve this problem? I would rather not do trial and error.<issue_comment>username_1: Well, it depends on the "meaning" (or expected impact) that the additional loss term has (should have). For example, there are cases in which minimizing all the losses is detrimental for the network: in VAEs, fully minimizing the KL may prevent the decoder from reconstructing the inputs at all. And there are others in which the additional loss term is either a regularizer or enforces a property. To summarize: * If all the terms have the *same importance* (and so they should all be equally minimized), you should weight them so that they have the same scale. Intuitively, you want the gradient of each loss term to have almost the same magnitude. * Otherwise, if the other term is either a regularizer or something that has to ensure a property, you need to search for the $\lambda$ that best weights the extra term. This is because usually such a term should have a limited contribution to the overall loss. One practical way to do so is to apply *grid search* first to find the magnitude of $\lambda$, e.g. considering a scale within $[10^{-2}, 10^{-1}, 1, 10, 10^2]$. Then use either a *random search* or some more sophisticated *hyperparameter tuning* algo (like Bayesian optimization) within the interval that grid search found, e.g. 
between $[10^{-1}, 1]$. Upvotes: 2 <issue_comment>username_2: Sure, you can definitely add your own loss term to the originally computed loss function. I checked the repo of your paper; they have used TensorFlow. I am giving you a minimum reproducible example of how you can make your own custom loss function and train your model with the computational graph of tf. For more details you can check this link to another question on Stack Overflow: [link](https://stackoverflow.com/questions/34875944/how-to-write-a-custom-loss-function-in-tensorflow) 

```
import tensorflow as tf

# Assuming you have some model defined and the inputs and targets are placeholders or tensors
inputs = ...
targets = ...

# Define your loss function
def my_loss_function(predictions, targets):
    # Calculate the additional loss term
    additional_loss = ...
    # Calculate the main loss (e.g., mean squared error)
    main_loss = tf.reduce_mean(tf.square(predictions - targets))
    # Combine the additional loss term with the main loss
    total_loss = main_loss + additional_loss
    return total_loss

# Calculate the predictions of your model
predictions = model(inputs)

# Calculate the total loss by calling your custom loss function
loss = my_loss_function(predictions, targets)

# Perform backpropagation and update the model parameters
optimizer = tf.train.AdamOptimizer()
train_op = optimizer.minimize(loss)
```

Upvotes: 1
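The two-stage search suggested in the first answer (coarse grid over orders of magnitude, then a finer random search inside the best bracket) might look like the sketch below. Here `validation_loss` is a made-up analytic stand-in for what, in practice, would be a full training-plus-validation run with the given $\lambda$:

```python
import numpy as np

def validation_loss(lam):
    # Hypothetical proxy objective: in reality this would train the model with
    # weight `lam` on the extra loss term and return the validation loss.
    return (np.log10(lam) + 0.7) ** 2 + 0.1

# Stage 1: grid search over orders of magnitude of lambda.
coarse_grid = [1e-2, 1e-1, 1.0, 10.0, 1e2]
best_magnitude = min(coarse_grid, key=validation_loss)

# Stage 2: random search (log-uniform) inside the bracket around the best magnitude.
rng = np.random.default_rng(0)
lo, hi = best_magnitude / 10, best_magnitude * 10
candidates = 10 ** rng.uniform(np.log10(lo), np.log10(hi), size=20)
best_lambda = min(candidates, key=validation_loss)
print(best_magnitude, best_lambda)
```

Sampling log-uniformly in stage 2 matters: a uniform draw over $[0.01, 1]$ would almost never explore the $10^{-2}$ decade.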
2023/06/11
591
2,207
<issue_start>username_0: I have worked with feed-forward neural networks and VAEs and understand the backpropagation algorithm. Now I am building a VAE network; one layer of it is a very complex vector-to-vector function $f(x)$ (a general 'method' in the programming sense instead of a 'math' expression). Thus, there is no gradient info for this layer. I guess one cannot train the entire network with such a gap, though the other layers are differentiable. Is there any nice way to train such a network? One thing in my mind is to approximate the gradient by slightly changing $x$ and computing $$ \frac{f(x+\delta)-f(x)}{ \delta} $$<issue_comment>username_1: Well, you can specify a custom gradient, by either making it just the identity (i.e. returning the incoming gradient unchanged in the gradient scope) or computing the gradient by hand if you know its expression. Otherwise, you can approximate the gradient of non-differentiable functions by setting up a [score function estimator](https://www.cl.uni-heidelberg.de/statnlpgroup/blog/sfge/), also called the [REINFORCE](https://link.springer.com/content/pdf/10.1023/A:1022672621406.pdf) estimator: also have a read [here](https://mpatacchiola.github.io/blog/2021/02/08/intro-variational-inference-2.html). The *finite difference method* that you mentioned may work too, but it's often unstable for the very small $\delta$ which is required in order to get good estimates. Upvotes: 2 <issue_comment>username_2: You can maybe use a similar re-parameterisation trick, where you approximate the gradient rather than calculating it exactly. Introduce auxiliary variables which can mimic the function and still be differentiable. For example, as you suggested, in a VAE when we have to pick a random vector from a distribution, we don't actually pick a random vector; instead, we pick a $\mu$ and $\sigma$ (i.e. mean and standard deviation) to train on, and then supply the random vector as an output of that, like $z = \mu + \epsilon \cdot \sigma$ where $\epsilon \sim \mathcal{N}(0, 1)$. 
Kindly check this reference: [Reparameterisation trick](https://stats.stackexchange.com/a/205336/382270). Also kindly provide an example of your vector function so that we can construct a more definite answer. Upvotes: 1
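The finite-difference idea from the question can be made a bit more robust by using the *central* difference, which is second-order accurate. The sketch below checks the estimate against a function whose Jacobian is known (`f` here is a made-up stand-in for the black-box vector-to-vector layer):

```python
import numpy as np

# Hypothetical stand-in for the non-differentiable layer; chosen so that the
# true Jacobian is known and the estimate can be verified.
def f(x):
    return np.array([x[0] * x[1], np.sin(x[2])])

def finite_difference_jacobian(f, x, delta=1e-6):
    """Central-difference estimate of the Jacobian of f at x."""
    y = f(x)
    J = np.zeros((y.size, x.size))
    for i in range(x.size):
        e = np.zeros_like(x)
        e[i] = delta
        J[:, i] = (f(x + e) - f(x - e)) / (2 * delta)  # central difference
    return J

x = np.array([1.0, 2.0, 0.5])
J = finite_difference_jacobian(f, x)
J_true = np.array([[2.0, 1.0, 0.0],
                   [0.0, 0.0, np.cos(0.5)]])
print(np.abs(J - J_true).max())
```

Note the cost: one full Jacobian needs $2n$ extra forward passes per input dimension pair, which is why the score-function estimator from the first answer is usually preferred for high-dimensional layers.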
2023/06/14
457
1,944
<issue_start>username_0: Is there a measure of model complexity?<issue_comment>username_1: There is no "unit" to measure model complexity. However, you can take into account some key factors: * **input dimension** * **output dimension** * **model length (number of layers)**: Model length describes the number of transformations on the data from input to output. A higher number of layers means more possible transformations and thus the possibility to model more complex dependencies. * **model width (number of neurons per layer)**: Model width describes the dimensionality of the data going through a single layer. A higher number of neurons means higher information retention per layer. * **activation functions**: Without any activation functions, even the largest model can only learn linear dependencies. Activation functions introduce non-linearity into the model and make complex dependency modeling possible. Based on those factors, you can get a good estimate of how complex your model is. Note that a more complex model does not equal better performance per se; such models require careful fine-tuning. To get a good estimate of whether your model is large enough to model your data transformations, you can train it on very little data with `batchsize=1` and no regularisation. If it completely learns the given samples, it likely has enough capacity for your entire dataset as well. Upvotes: 0 <issue_comment>username_2: Yes. There are at least 2 measures of model complexity studied and used in learning theory: [VC dimension and Rademacher complexity](https://mitpress.ublish.com/ebook/foundations-of-machine-learning--2-preview/7093/29). If you're new to learning theory, you could take a look at [this answer](https://ai.stackexchange.com/a/20358/2444). (Note: your question is not an exact duplicate of [this](https://ai.stackexchange.com/q/17870/2444), but the VC dimension is not specific to neural networks). Upvotes: 2 [selected_answer]
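The single-sample sanity check from the first answer can be sketched with a minimal model; here a plain linear model trained by gradient descent stands in for "the network" (all values are made up for illustration):

```python
import numpy as np

x = np.array([0.5, -1.0, 2.0])    # one input sample
y = 2.0                            # its target
w = np.zeros(3)                    # "the model": a single linear layer, no bias
lr = 0.1

losses = []
for _ in range(100):               # batch size 1, no regularization
    err = w @ x - y                # prediction error
    losses.append(err ** 2)
    w -= lr * 2 * err * x          # gradient of the squared error w.r.t. w

print(losses[0], losses[-1])       # the loss should collapse to ~0
```

If a model cannot drive the loss to near zero on a single example like this, it (or its optimization setup) lacks the capacity the answer describes.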
2023/06/15
997
3,910
<issue_start>username_0: My question is why the attention head matrices $W^Q$, $W^K$, $W^V$ should not be the same $W = W^Q = W^K = W^V$. In my understanding of transformer-based language models **one** attention head is responsible for **one** syntactic or semantic relation between any two words in the context. One might think that such a relation is represented by **one** matrix $W$ that projects the full word embeddings $x\_i$ from their full semantic space to a semantic subspace responsible for this relation. Here we could - in principle - calculate scores $\sigma\_{ij}$ as "similarities" between two projected words $Wx\_i$ and $Wx\_j$ and then calculate the weighted sum of the projected tokens $Wx\_k$. I wonder why this would not work, and why we need three different matrices. Put another way: What does it mean to calculate the score as the dot-product of two vectors from two different semantic subspaces? Is this still some kind of similarity (which lies at the heart of word embeddings)? And doesn't it sound like comparing apples and pears? Or viewed differently: How similar are the three matrices of an attention head in practice, e.g. when considering some 100$\times$100 attention heads of a large transformer model like ChatGPT?<issue_comment>username_1: Let me come up with a possibly too naive (and too high-level) answer: Since the weights of the word embedding layer (i.e. the word embeddings) are not trained in advance (via cooccurrence) and independently from the training of the transformer weights (but happen in the same training runs), the word embeddings don't necessarily indicate semantic similarity when taking the dot product (as they do when using Word2Vec or so). So this doesn't have to be expected either when choosing the same matrix $W = W^Q = W^K = W^V$ in each attention head: the dot product doesn't give scores to be interpreted as similarities, but rather scores of some (hard-to-interpret and only sometimes symmetric) relation. 
Upvotes: 1 <issue_comment>username_2: Actually a multi-head attention (MHA) layer has multiple "heads", where each head is made of three projection matrices: $H\_i = (W\_i^Q, W\_i^K, W\_i^V)$ for the $i$-th head. A typical number of heads is eight, for example: each head is responsible for learning to attend to different parts of the sequence, thus you can (partially) *interpret* transformer-based models by inspecting their *attention masks*. A word embedding $x\_i$ is not directly fed to MHA. I mean MHA accepts three matrices $Q$ (query), $K$ (key) and $V$ (value), which get projected by $W^Q$, $W^K$ and $W^V$, respectively, when computing the attention scores (i.e. the scaled dot-product attention.) According to the [paper](https://proceedings.neurips.cc/paper/2017/file/3f5ee243547dee91fbd053c1c4a845aa-Paper.pdf) (see section 3.2.1) and [here](https://www.tensorflow.org/api_docs/python/tf/keras/layers/MultiHeadAttention), key $d\_k$ and value $d\_v$ dimensions can be different, simply because these can be different sequences of (word) embeddings. For example, in a Question-Answer model $Q$ can be the embeddings from the question, while $K$ or $V$ those of the expected answer. So learning different weight matrices, $W^Q$, $W^K$, and $W^V$, allows capturing different relevant aspects for each input. Even when $Q=K=V$, i.e. the so-called *self-attention*, the matrices are still fundamental. In general, consider that queries attend to keys to yield the *attention probabilities* (via dot-product and softmax) which are used to weight the values. So they basically say which timestep (and so which word embedding in the sequence) is relevant in the values (the learned representation by projection.) In other words, the general meaning is which words (in $V$) are relevant to attend at time $t$: so something that goes beyond a simple similarity (which may occur in the self-attention case.) Upvotes: 0
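One concrete consequence of a single shared matrix $W = W^Q = W^K = W^V$ is worth spelling out: the score matrix $XW(XW)^\top$ is symmetric by construction, so the "relation" between words $i$ and $j$ would be forced to be identical in both directions. Distinct projections remove that constraint. A small numpy sketch with random (untrained, purely illustrative) weights:

```python
import numpy as np

rng = np.random.default_rng(0)
seq_len, d_model, d_k = 4, 8, 8
X = rng.normal(size=(seq_len, d_model))            # word embeddings x_i

W_q, W_k, W_v = (rng.normal(size=(d_model, d_k)) for _ in range(3))
Q, K, V = X @ W_q, X @ W_k, X @ W_v                # three distinct projections

scores = Q @ K.T / np.sqrt(d_k)                    # scaled dot-product scores
probs = np.exp(scores - scores.max(axis=-1, keepdims=True))
probs = probs / probs.sum(axis=-1, keepdims=True)  # row-wise softmax
out = probs @ V                                    # attention output

# With one shared matrix W, the scores X W (X W)^T are symmetric by construction:
W = rng.normal(size=(d_model, d_k))
shared_scores = (X @ W) @ (X @ W).T / np.sqrt(d_k)
print(np.allclose(shared_scores, shared_scores.T), np.allclose(scores, scores.T))
```

So the asymmetry of "apples and pears" dot products is a feature, not a bug: it lets a head express directed relations (e.g. "attends to" need not be mutual).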
2023/06/15
1,406
4,891
<issue_start>username_0: I ran into this AI-SE question from 5 years ago and I believe that an updated version could be interesting to discuss nowadays: [Is the smartest robot more clever than the stupidest human?](https://ai.stackexchange.com/questions/7021/is-the-smartest-robot-more-clever-than-the-stupidest-human) Today's best LLMs are displaying a lot of human-like abilities: proficiency in natural languages, ability to code, logical reasoning, role playing and so on. They can even [solve CAPTCHAs](https://twitter.com/ai_insight1/status/1636710414523285507), design games, answer questions about stories or write new ones: these were the "shortcomings of robots" in 2018, according to the answers to the question I linked. #### Question How do the best LLMs of today compare to a "dumb human"? In what tasks are all normal humans still better than AIs? Is there any test that every able-bodied human would pass, but top LLMs would still fail? ##### Definitions and clarifications A **"dumb human"** is a person without recognized disabilities or obvious problems, who doesn't have particular skills and who is considered not very intelligent (low IQ). Of course the LLMs available to the public have a number of objective limitations: they can only process text-to-text, they work with tokens rather than characters, context length is just few kilo-tokens, and they have no long-term memory. However a number of open source projects have shown various solutions to these problems, and the non-public version of the commercial LLMs already support much larger context windows, image input and similar features. Observations like "LLMs can't move arms as they don't have it", "LLMs fail to count characters because they're token based", "LLMs can't speak nor listen to speech" are not interesting.<issue_comment>username_1: LLMs seem to be limited at "compositional tasks." 
Have a look at [this paper](https://arxiv.org/pdf/2305.18654.pdf), in which the authors > > investigate the limits of these models across three representative compositional tasks—multi-digit multiplication, logic grid puzzles, and a classic dynamic programming problem. These tasks require breaking problems down into sub-steps and synthesizing these steps into a precise answer. > > > I don't know if a "dumb human" can do dynamic programming problems, but > > humans can solve 3-digit by 3-digit multiplication arithmetic after learning basic calculation rules. Yet, off-the-shelf ChatGPT and GPT4 achieve only 55% and 59% accuracies on this task, respectively. > > > I gave ChatGPT two tries (what is "311 times 877") and ("what is 513 times 799"), and [it got them both wrong](https://chat.openai.com/share/8caaf09b-0e98-4f49-a258-cf6a056aa040). Another example from that paper is a ["zebra" or "einstein" puzzle](https://en.wikipedia.org/wiki/Zebra_Puzzle). Look at page 18 in the paper for an example. (I'll copy it below) It took me a couple minutes to solve. ChatGPT tries its best, but gets it quite wrong. the puzzle ---------- There are 3 houses (numbered 1 on the left, 3 on the right). Each has a different person in them. They have different characteristics: * Each person has a unique name: peter, eric, arnold * People have different favorite sports: soccer, tennis, basketball * People own different car models: tesla, ford, camry clues: * The person who owns a Ford is the person who loves tennis. * Arnold is in the third house. * The person who owns a Camry is directly left of the person who owns a Ford. * Eric is the person who owns a Camry. * The person who loves basketball is Eric. * The person who loves tennis and the person who loves soccer are next to each other. Match the person, favorite sports, and car models to each house. 
### ChatGPT's output (for me just now)

[GPT's full output](https://chat.openai.com/share/2fb37a75-e5dd-4fd1-ba1b-93f4a23b0515)

```
Therefore, the matching of people, favorite sports, and car models to each house is as follows:

House 1:
Person: Peter
Favorite sport: Tennis
Car model: Tesla

House 2:
Person: Eric
Favorite sport: Basketball
Car model: Ford

House 3:
Person: Arnold
Favorite sport: Soccer
Car model: Camry
```

and it's easy to see that some of the constraints are violated (e.g. the first one: the Ford owner should love tennis). Upvotes: 4 [selected_answer]<issue_comment>username_2: Humans (any human) are still better at one of the most important tasks (whether it is a text-based task or not): humans can give you **their own opinion** or "sentiment" **about a text**. This means any human can tell you whether they like the text or not. An LLM doesn't have any kind of "personality", "opinion", or "feelings" of its own, so it cannot give you its own opinion, only a "general" sentiment (what NLP usually calls sentiment analysis) based on all the training data used by the bot. Upvotes: 0
2023/06/16
<issue_start>username_0: Again and again I ask myself what goes on in a pre-trained transformer-based language model (like ChatGPT) when it comes to "knowing" that it cannot give an appropriate answer, so that it either

* states it ("I have not enough information to answer this question.")
* asks for more specific information ("Please tell me which kind of XY you mean.")
* calls a plugin (like Wolfram or ScholarAI)

(I assume that this will never happen without reinforcement learning from human feedback. A pre-trained-only model would always answer something (possibly hallucinating) and not "reflect" on its lack of knowledge.)

The only possibility that I can see - but it's not really explanatory - is that after some steps of execution the sum of the top\_k probabilities of the final vector (which assigns probabilities to all words in the vocabulary) is too small. But what if this happens only late? ChatGPT would already have produced lots of words - but one never observes that it stops generation after some lengthy text and only then ends with something like "Ah, finally I see that I'm missing information. I wasn't aware in the beginning." ChatGPT *immediately* admits that it doesn't know (when it does). And when ChatGPT calls a plugin - e.g. ScholarAI - it does so without having produced a single word of response to the last message.

In principle, ChatGPT could generate a complete response in the background that is then somehow checked for being "satisfactory". If yes, it's given as output (simulating word-by-word generation); if not, it's regenerated with some sort of trigger (a hidden token?) to admit that ChatGPT is missing information or to call a plugin.

What's the clever trick under the hood (in some technical detail)?<issue_comment>username_1: It makes sense to assume that reinforcement learning from human feedback (RLHF) has some merit here, at least. I'll explain myself.
In RL we have a *reward* (the human feedback), a *policy* (which should be GPT itself), and (one or two) value functions. [Here](https://huggingface.co/blog/rlhf) it says that RLHF learns a *reward model* (RM), i.e., a network that takes a (generated) sequence and the human feedback, and outputs a numerical reward that represents the human preference (I think: how closely the preference is followed). The RM scores/evaluates the goodness of the generated text, basically. Since the RM specifies a reward function, once it's learned you can use standard RL to optimize the language model to follow the RM and thus the human preferences. Then at inference time, one can still use the RM to evaluate the generated text, and so if it is not satisfactory (i.e. the predicted preference is too low), they can fall back to another (specialized) LM or even have some special instructions that ask the user for clarification. This is my guess.

Upvotes: 2 <issue_comment>username_2: The data it is trained on includes variants of "I don't know". For instance, if you ask me what is the meaning of life and I reply "I don't know", then that is the information schema the AI absorbs. It knows what it does not know, in the same way that it knows what it knows.

Here is another way to look at it. During training, people were asked to interact with GPT 3.5. At that time, the trainers would have received many incorrect responses. They would then flag to GPT that the response is incorrect/inaccurate, from which it would learn to either flag the issue to the user upfront ("I am only an LLM ..." etc.) or to say "I don't know", or some variant. In all of these, the chat interactions and the training data enable it to learn the association between a sequence of words (and the idea in it) and the phrase "I don't know" or "I won't answer that", in the same way that it learns anything else. The guide rails are programmed in through the interactions with human trainers. This way the math does not change.
You only need human trainers to interact with the system and they implicitly program the guide rails.

The schema is plugin-aware but not trained on any specific plugin. The following is the information flow (from <https://platform.openai.com/docs/plugins/introduction>), taken verbatim from the webpage:

* OpenAI will inject a compact description of your plugin in a message to ChatGPT, invisible to end users. This will include the plugin description, endpoints, and examples.
* When a user asks a relevant question, the model may choose to invoke an API call from your plugin if it seems relevant; for POST requests, we require that developers build a user confirmation flow to avoid destructive actions.
* The model will incorporate the API call results into its response to the user.
* The model might include links returned from the API calls in its response. These will be displayed as rich previews (using the OpenGraph protocol, where we pull the site\_name, title, description, image, and url fields).
* The model can also format data from your API in markdown and the ChatGPT UI will render the markdown automatically.

In a nutshell, ChatGPT is intelligent but its plugin system is very limited.

Upvotes: 2
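A minimal sketch of the inference-time fallback the first answer guesses at: score a draft with a reward model and ask for clarification when the predicted preference is too low. The reward model below is a toy stand-in (a real RM is a trained network), purely illustrative:

```python
def reward_model(text: str) -> float:
    """Toy stand-in for a learned reward model; a real RM is a neural network
    trained on human preference data. This made-up heuristic just scores
    longer, more substantive drafts higher."""
    return min(len(text.split()) / 10.0, 1.0)

def respond(draft: str, threshold: float = 0.5) -> str:
    """Return the draft if the predicted preference is high enough,
    otherwise fall back to asking the user for clarification."""
    if reward_model(draft) < threshold:
        return "Could you clarify your question? I don't have enough information to answer."
    return draft
```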
2023/06/25
<issue_start>username_0: I'm trying to implement Monte Carlo Tree Search for (a simplified version of) the boardgame Commands and Colors -- I'm setting up a scenario where the AI side has overwhelming force: 6 units vs 3 units played by the human. I would hope that MCTS moves the 6 units in for the kill; instead what happens is that some units attack, some move sideways, and some retreat. I suspect that the units in the front are already strong enough to make victory likely, so that the unit in the back does not see a difference in its moving closer to the action, and chooses to move away from it. I suspect that when evaluating the value of the moves of the far-away unit, the "noise" caused by the action of the units in the front, whose actions make the value of the position swing heavily, makes it very difficult to evaluate the smaller contribution to the quality of the position made by moving the unit in the back. This is sad! A human player would move all units towards the enemy, because if the front units get damaged, they will be moved away from the front and replaced by second-line units. Having units move randomly away from the action makes no sense. How do I fix it?

Edit: The source code is here <https://github.com/xpmatteo/auto-cca>

The case that does not work as expected can be observed with:

* make server
* (open another terminal)
* make open
* click "End Phase" twice

The lone brown (Carthaginian) unit should close in against the gray (Roman) ones, but it doesn't.
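One way to test the question's hypothesis (that rollout noise from the front-line fighting drowns out the small value difference of the rear unit's moves) is a toy simulation. The win rates below are made-up numbers, purely illustrative: if advancing wins 96% of playouts and retreating 94%, a 50-rollout estimate will often rank the two moves incorrectly.

```python
import random

def estimate(win_prob, n_rollouts, rng):
    """Fraction of simulated playouts won: a stand-in for an MCTS value estimate."""
    return sum(rng.random() < win_prob for _ in range(n_rollouts)) / n_rollouts

def ranking_errors(p_good=0.96, p_bad=0.94, n_rollouts=50, trials=1000, seed=0):
    """How often the weaker move looks at least as good as the stronger one."""
    rng = random.Random(seed)
    return sum(estimate(p_good, n_rollouts, rng) <= estimate(p_bad, n_rollouts, rng)
               for _ in range(trials))

# With both moves winning ~95% of playouts, the 2-point true difference is
# smaller than the rollout noise, so the ranking flips in a large share of trials.
print(ranking_errors(), "of 1000 trials ranked the moves incorrectly")
```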
2023/06/27
<issue_start>username_0: Is it possible to get the source code for a research paper? In particular, I want to see the source for this paper: <https://arxiv.org/abs/2304.03442>
2023/06/27
<issue_start>username_0: I want to understand how activation functions, specifically tanh and sigmoid, are used in int8 quantized neural networks. Even more specifically, I want to understand a look-up-table (LUT) based approach. My problem is that the data width and the size of the LUT, together with the scaling factor and zero point, affect which index of the LUT I have to use for the calculation. E.g. TFLite-Micro has an [implementation of such a LUT for tanh and sigmoid](https://github.com/tensorflow/tflite-micro/blob/d2c4ef56f7a67db2da97f42de1f67e404f13eaad/tensorflow/lite/kernels/internal/reference/integer_ops/tanh.h#L63), but I do not understand how they calculate the indices. Is there some guidance on how to design such a function available somewhere? I haven't managed to find anything more specific.
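For the conceptual idea (not TFLite-Micro's actual fixed-point implementation, which interpolates over a smaller table with its own index arithmetic), a LUT for an int8 activation can be precomputed by dequantizing every possible input, applying the exact function, and requantizing. The quantization parameters below are made up for illustration:

```python
import math

# Made-up quantization parameters; a real deployment gets these from the
# model converter. Treat this purely as the conceptual idea.
IN_SCALE, IN_ZP = 0.05, 0          # input:  real = IN_SCALE * (q - IN_ZP)
OUT_SCALE, OUT_ZP = 1.0 / 127, 0   # output: tanh lies in [-1, 1]

def build_tanh_lut():
    lut = []
    for q in range(-128, 128):                   # every possible int8 input
        real = IN_SCALE * (q - IN_ZP)            # 1. dequantize
        out = math.tanh(real)                    # 2. apply the exact function
        q_out = round(out / OUT_SCALE) + OUT_ZP  # 3. requantize
        lut.append(max(-128, min(127, q_out)))   #    and clamp to int8
    return lut

LUT = build_tanh_lut()

def tanh_int8(q: int) -> int:
    return LUT[q + 128]   # shift the signed input into table index [0, 255]
```

The index calculation is just "shift the signed int8 value into the unsigned table range"; the table itself bakes in both scale/zero-point pairs, which is why those parameters determine its contents.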
2023/06/30
<issue_start>username_0: When we are applying a mask onto the padded values in an input sequence, it is typically done by setting the padded values to negative infinity. For example, a tensor of values `[1,2,3,0,0]` should result in a padding mask of `pad_mask = [True, True, True, False, False]` (or the opposite, depending on your flavour). However, if we apply the mask, i.e. `attention_scores = attention_scores.masked_fill_(pad_mask.T == False, float('-inf'))`, before applying softmax, won't we get the 4th and 5th rows of `attention_scores` as `nan` when softmax attempts to calculate the probability distribution along each row? Does that mean this step of applying the mask is incorrect, and we should instead zero out the pad-token rows of the attention-score matrix after applying the softmax function? Or is there another key concept/step I am missing here?
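The usual resolution is that the mask is applied along the *key* (column) dimension, so every row keeps at least one finite score; a row only becomes `nan` if all of its keys are masked, and rows belonging to padded *query* positions are simply ignored by the loss or zeroed out afterwards. A plain-Python sketch of the behaviour in the question (`torch.softmax` over `-inf` behaves the same way):

```python
import math

def masked_softmax(scores, key_mask):
    """Row-wise softmax with masked *key* (column) positions set to -inf.

    scores:   list of rows of attention logits (queries x keys)
    key_mask: True where the key token is real, False where it is padding
    """
    out = []
    for row in scores:
        masked = [x if keep else float("-inf") for x, keep in zip(row, key_mask)]
        m = max(masked)
        if m == float("-inf"):                    # every key masked: softmax is 0/0
            out.append([float("nan")] * len(row))
            continue
        exps = [math.exp(x - m) for x in masked]  # math.exp(-inf) == 0.0
        total = sum(exps)
        out.append([e / total for e in exps])
    return out

scores = [[1.0, 2.0, 3.0, 0.5, 0.5]] * 5       # toy 5x5 attention logits
key_mask = [True, True, True, False, False]    # last two tokens are padding

attn = masked_softmax(scores, key_mask)
# Masking along the key dimension leaves every row with finite entries,
# so each row still sums to 1 and padded keys get exactly zero weight.
```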
2023/07/01
<issue_start>username_0: Could you provide an explanation of Figure 4 from the paper [Improved Denoising Diffusion Probabilistic Models](https://arxiv.org/abs/2102.09672)?

(1) The paper says, "*the end of the forward noising process is too noisy, and so doesn't contribute very much to sample quality*". But if the goal is to have an image with only noise, why is it problematic to have a lot of noise?

(2) The paper says, "*a model trained with the linear schedule does not get much worse (as measured by FID) when we skip up to 20% of the reverse diffusion process*". But why is it the reverse diffusion process? Shouldn't the deterioration be related to the forward process?

(3) Also, why is the training process relevant? It seems to me that the explanation in this section means "the linear noise schedule is better than the cosine one". Could you explain, please?

[![enter image description here](https://i.stack.imgur.com/4Og2w.png)](https://i.stack.imgur.com/4Og2w.png)
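The two schedules compared in Figure 4 are easy to reproduce; here is a sketch following the formulas in the papers (linear betas from 1e-4 to 0.02 as in the original DDPM, the cosine schedule from Eq. 17 of this paper):

```python
import math

T = 1000  # number of diffusion steps

def alpha_bar_linear(t):
    """Cumulative signal fraction under the linear schedule (beta: 1e-4 to 0.02)."""
    prod = 1.0
    for i in range(t):
        beta = 1e-4 + (0.02 - 1e-4) * i / (T - 1)
        prod *= 1.0 - beta
    return prod

def alpha_bar_cosine(t, s=0.008):
    """Cumulative signal fraction under the cosine schedule (Eq. 17 in the paper)."""
    f = lambda u: math.cos((u / T + s) / (1 + s) * math.pi / 2) ** 2
    return f(t) / f(0)

# By step 800 the linear schedule has destroyed almost all signal, while the
# cosine schedule still retains some. Those last, nearly pure-noise linear
# steps are the ones that can be skipped with little FID penalty.
print(alpha_bar_linear(800), alpha_bar_cosine(800))
```

This is the sense of "too noisy": under the linear schedule the signal fraction collapses to near zero well before step T, so the final forward steps (and the matching first steps of the reverse process) carry almost no information.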
2023/07/03
<issue_start>username_0: I want to become an artificial intelligence engineer in the future, so I want to start programming, but I don't know which programming language to start with.<issue_comment>username_1: Python is a great programming language to start with if you're interested in becoming an AI engineer. It's widely used in the field of AI and machine learning due to its simplicity and large community support. It also has numerous libraries and frameworks like TensorFlow, Keras, PyTorch, and Scikit-learn that are specifically designed for AI and machine learning. After you're comfortable with Python, you might want to learn other languages that are commonly used in AI, like Java, C++, and R. However, Python is definitely a good starting point.

Upvotes: 2 <issue_comment>username_2: Any language will do. Most differences are just in the syntax and the idioms of a particular language, and you can usually pick up a new language fairly quickly. Having said that, there are some languages that are more common in certain areas. *Python* seems to be quite popular in a lot of modern AI work. I myself work in natural language processing, and the company I'm at uses *Lisp*.

My suggestion would be to start with some easier languages, and to keep an open mind about learning other languages. The more you know, the easier it is to switch between them and to learn new ones.

Upvotes: 0
2023/07/05
<issue_start>username_0: Some background: I'm an EE major and data science minor, so I have a basic understanding of machine learning - I had a one-semester course on it where we covered some of the most commonly used algorithms, such as Naive Bayes and k-nearest neighbors, as well as the basics of how artificial neural networks work. I've also had EE classes on signal processing, so I'm familiar with the Fourier transform and the like. So I have a general idea of how different algorithms can, for lack of a better term, "pick up on" patterns in the training data and make predictions on the testing data, which will hopefully be reasonably accurate. And I have a general idea of how signals can be represented as discrete binary sequences which could then be used to train an ML model.

What I want to know is: what sorts of patterns, in the mathematical sense, do these algorithms find in audio recordings of speech specifically? E.g., what allows the Alexa app to recognize when it's me speaking vs. my grandma speaking, and yet still understand the same commands from both of us? What, mathematically, makes a recording of me saying "Alexa play music" distinguishable from a recording of a different person saying the same command, and yet still recognizably contain the same information, i.e. the command? In other words, how do the algorithms extract and match the speaker-specific and content-specific information in the speech signals?<issue_comment>username_1: A sound bite can be decomposed into a set of features which are distinguishable by a classifier. The most important feature extracted is the fundamental frequency: the lowest-frequency component obtained by approximating the windowed waveform with a Fourier series. It is fairly unique for each phoneme. In addition, each phoneme can be represented as a series of "frequency bins" where the intensity is the height of each bin. These features are not only unique to each phoneme, but also to each speaker.
Additionally, statistical moments can be calculated for each window and used as features. A recurrent neural network (there are many techniques) can be used to agglomerate features into a recognition model. The idea is to learn the probability of one set of features following another set in time. For example, a 20-second sound bite of a speaker can be broken into 20 one-second windows of speech. Each window represents a phoneme or some portion of speech (depending on window size, speaker speed, etc.). A Fourier transform is applied to each window to derive a feature set. The feature set is fed into an RNN to predict the speaker (or the text, as in speech-to-text). There are many inter-related applications here. Hope this was helpful.

Upvotes: 1 <issue_comment>username_2: I would slightly disagree with username_1's answer: the fundamental frequency is mainly specific to a *speaker*. Sounds are defined by other frequency patterns. Vowels, for example, have two bands of energy in different frequency ranges, called [*formants*](https://en.wikipedia.org/wiki/Formant). Depending on the speaker, these will be in different actual frequency ranges, but the general pattern will remain the same. They are linked to the fundamental frequency *F0* in that it is multiples of *F0* which can be seen in a [spectrogram](https://en.wikipedia.org/wiki/Spectrogram). There is obviously a lot of variation in the exact frequencies involved because of circumstances. Apart from different speakers having different physiology, having a cold also changes your frequency output.

For speech recognition you need various mechanisms to map a sequence of frequency values to phonemes and then words. When I was studying phonetics in the 1990s, the upcoming tool was [Hidden Markov Models](https://en.wikipedia.org/wiki/Hidden_Markov_model), and neural networks were being explored as well, but were still in their infancy.
I'm not sure what is currently used for speech recognition, as I haven't done any phonetics for years, but I would expect NNs to be more prevalent these days. To summarise: there are some features in the frequency pattern of an utterance which give you an indication of the (language) sounds produced, whereas others reflect the physiology of the vocal tract (and are thus speaker-specific). But they are not unique, as there is no "voice print", which was some time ago touted as a fingerprint equivalent. Upvotes: 2
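As an illustration of the Fourier-based feature extraction both answers describe, here is a minimal sketch (my own, not from either answer) that estimates the dominant frequency of a windowed signal; for a clean tone this coincides with the fundamental, though real speech needs more robust pitch trackers:

```python
import numpy as np

def dominant_frequency(signal, sample_rate):
    """Estimate the strongest frequency in a windowed signal via the FFT peak."""
    windowed = signal * np.hanning(len(signal))       # window to reduce spectral leakage
    spectrum = np.abs(np.fft.rfft(windowed))          # magnitude spectrum
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / sample_rate)
    return freqs[np.argmax(spectrum)]                 # frequency bin with the most energy

# A synthetic "phoneme": a 220 Hz tone sampled at 16 kHz for one second.
sr = 16000
t = np.arange(sr) / sr
tone = np.sin(2 * np.pi * 220 * t)
print(round(dominant_frequency(tone, sr)))  # → 220
```

With one second of audio the FFT bins are spaced 1 Hz apart, so the 220 Hz tone lands exactly on a bin; shorter analysis windows trade frequency resolution for time resolution.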
2023/07/06
590
2,418
<issue_start>username_0: Example: <https://huggingface.co/google/umt5-base> > > Note: UMT5 was only pre-trained on mC4 excluding any supervised training. Therefore, this model has to be fine-tuned before it is useable on a downstream task. > > > The model was pre-trained on a whole lot of languages. Let's suppose I devise a fine-tuning dataset to classify sentences (like user feedback). But due to resource constraints, only a few languages will be included. In my fine-tuned task, will the model be able to generalize to languages it was pre-trained, but not fine-tuned on?<issue_comment>username_1: The attention doesn't discriminate between what token it is producing as long as it is following the protocol/heuristic defined by the finetuning dataset. So essentially, a finetuning dataset that contains `Hi, How are you` ---> `I am fine, thank you.` should be able to generalize it to French as well, `Salut, Comment allez-vous` ---> `Je vais bien, merci.` As long as the transformer understands both English and French. Upvotes: 1 <issue_comment>username_2: The short answer: Very unlikely. The extended answer: If you fine-tune a model, it becomes specialized for the type of data you fine-tune it on but you trade in some of its generalization capabilities. In your particular example, that means that your model becomes very attuned to user feedback and performs very well on user feedback in the languages it has seen during training. It loses both its capabilities for more general texts and other languages from pre-training. It will perform lower on general texts and significantly lower on user feedback in different languages from your training set. To address this problem, you need something called domain adaptation or transfer learning. Domain adaptation means you train your model on a domain dataset, e.g. your user feedback dataset in a small range of languages, and then adapt/transfer it to your actual target dataset. 
See an example below: [![Domain Adaptation in Vision](https://i.stack.imgur.com/QEhdr.png)](https://i.stack.imgur.com/QEhdr.png) There are different approaches for this. Some adapt the data, others prefer to transfer the model. There is a lot of research in this area. This is a good [article](https://www.ruder.io/transfer-learning/) that covers the basics and might provide you with a more specific solution you can try out. Upvotes: 3 [selected_answer]
2023/07/06
537
2,289
<issue_start>username_0: I'm wondering because I don't appreciate what is wrong with just applying [HER](https://arxiv.org/abs/1707.01495) to an otherwise on-policy algorithm? Like if we do that will the training stability just fall apart? And if so why? My understanding is that on-policy is just a category created by humans meaning that "the default algorithm doesn't do off-policy optimization". But why does that prohibit adding off-policy elements?
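For context, the mechanism HER adds is relabeling of *stored* transitions, sketched below (the function, field names, and sparse-reward scheme are my own illustrative choices). The relabeled transitions no longer come from trajectories of the current policy, which is why on-policy gradient estimators, whose derivation assumes samples drawn from the current policy, cannot consume them without a correction such as importance weighting:

```python
# Minimal sketch of Hindsight Experience Replay's "final" goal-relabeling strategy.

def her_relabel(episode):
    """Relabel each transition's goal with the state actually achieved at the
    end of the episode, recomputing the sparse reward accordingly."""
    achieved_goal = episode[-1]["next_state"]
    relabeled = []
    for t in episode:
        reward = 0.0 if t["next_state"] == achieved_goal else -1.0
        relabeled.append({**t, "goal": achieved_goal, "reward": reward})
    return relabeled

episode = [
    {"state": 0, "action": 1, "next_state": 1, "goal": 5, "reward": -1.0},
    {"state": 1, "action": 1, "next_state": 2, "goal": 5, "reward": -1.0},
]
hindsight = her_relabel(episode)
print(hindsight[-1]["reward"])  # → 0.0 (the final step "reached" the relabeled goal)
```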
2023/07/08
631
2,623
<issue_start>username_0: I know in order to shrink or expand the number of channels a 1x1 convolution is performed. I need to clarify the following: is the 1x1 convolution(s) just a matrix multiplication between the image with shape (h*w, 3) (RGB) and a matrix that holds the learnable weights with shape (3, 1)? Which will result in a new matrix of shape (h*w, 1) (in this case the number of channels shrunk from 3 to 1). If the above is correct, what happens under the hood of a NN framework, such as PyTorch, when the number of input channels is equal to the number of output channels? Does a matrix multiplication take place between the input (h\*w, 3) and a matrix with learnable weights (num\_channels, num\_channels)? Doesn't this introduce unnecessary (and unwanted) operations?
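The equivalence the question proposes can be verified numerically. A sketch (my own, in NumPy rather than PyTorch) confirming that a 1x1 convolution is exactly a per-pixel matrix multiplication over channels:

```python
import numpy as np

# Hypothetical shapes: an "image" with 3 input channels, reduced to 1 output channel.
h, w, c_in, c_out = 4, 5, 3, 1
rng = np.random.default_rng(0)
image = rng.normal(size=(h, w, c_in))
weights = rng.normal(size=(c_in, c_out))

# A 1x1 convolution: at every pixel, a weighted sum over the channel axis.
conv_out = np.einsum("hwc,co->hwo", image, weights)

# The same computation as a plain matrix multiplication on the flattened image.
matmul_out = (image.reshape(h * w, c_in) @ weights).reshape(h, w, c_out)

print(np.allclose(conv_out, matmul_out))  # → True
```

The same identity holds when `c_in == c_out`: the weight matrix is then square, and the operations are only "unnecessary" if it happens to be the identity matrix, which a learned layer generally is not.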
2023/07/09
1,190
4,380
<issue_start>username_0: I have heard some back and forth regarding open source LLMs like Llama. I have heard that on certain benchmarks they perform close to, the same as, or better than GPT-4, but with the caveat that they tend to lack the diversity and range of GPT-4, and also fail to be equivalent in ways certain benchmarks or metrics don’t capture fully. GPT-4 has about 170 trillion parameters, I believe? It seems like the biggest open source models are all in the billions - like Bloom or the new [Falcon 40b](https://huggingface.co/tiiuae/falcon-40b-instruct). There are techniques where they refine GPT-4’s output into a smaller amount of training data that supposedly hits all the marks and does just as well; but again, I don’t know if that’s only true under the reductionist view of a particular benchmark-questionnaire. So, do open source models actually compete with GPT-4, and why or why not? Is the whole situation a matter of scale, that a commercial venture like OpenAI can foot the massive bill of training a multi-trillion parameter model that no open source AI project can afford, on top of them having expertise in model design, making GPT-4 continually the state-of-the-art? Or is there any open source model that truly can compare in terms of usability?<issue_comment>username_1: The remarkable performance of GPT-4 is due to the massive size of its architecture and the amount of data it was trained on, which costs a lot of money. Few organizations have the hardware resources and funding to support such efforts. Huggingface maintains a leaderboard for tracking open source LLMs. See [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard). They provide this information on the benchmarks: ``` We evaluate models on 4 key benchmarks from the Eleuther AI Language Model Evaluation Harness, a unified framework to test generative language models on a large number of different evaluation tasks. 
AI2 Reasoning Challenge (25-shot) - a set of grade-school science questions. HellaSwag (10-shot) - a test of commonsense inference, which is easy for humans (~95%) but challenging for SOTA models. MMLU (5-shot) - a test to measure a text model’s multitask accuracy. The test covers 57 tasks including elementary mathematics, US history, computer science, law, and more. TruthfulQA (0-shot) - a test to measure a model’s propensity to reproduce falsehoods commonly found online. We chose these benchmarks as they test a variety of reasoning and general knowledge across a wide variety of fields in 0-shot and few-shot settings. ``` Here is a snapshot of the top 5 models: [![hj](https://i.stack.imgur.com/gUAJP.png)](https://i.stack.imgur.com/gUAJP.png) For comparison you can see OpenAI's benchmark results [here](https://openai.com/research/gpt-4): [![enter image description here](https://i.stack.imgur.com/iSp44.png)](https://i.stack.imgur.com/iSp44.png) Upvotes: 2 <issue_comment>username_2: In the ["Fine-tuning" section](https://platform.openai.com/docs/guides/fine-tuning) of their end-user documentation, OpenAI writes that > > The more training examples you have, the better. We recommend having at least a couple hundred examples. In general, we've found that each doubling of the dataset size leads to a linear increase in model quality. > > > The context for this is training ChatGPT's model with one's own data (not starting from scratch). However this might explain why the last few percent in high benchmark scores are so difficult to achieve for any model, including open-source models. However a new paper ["Textbooks Are All You Need"](https://arxiv.org/abs/2306.11644) just came out (mid 2023) that argues that higher input-data quality matters more than sheer input quantity. From the abstract: > > using a selection of ``textbook quality" data from the web (6B tokens) and synthetically generated textbooks and exercises with GPT-3.5 (1B tokens). 
Despite this small scale, [our model] phi-1 attains pass@1 accuracy 50.6% on HumanEval and 55.5% on MBPP. > > > Also <NAME> said something interesting during the Q&A session of a public event at a university (it's on YouTube; I cannot find the exact quote): It is not the parameter size, it's the 200 little things you need to get right on the project-management level (esp. during the fine-tuning process) that matter. Upvotes: 1
2023/07/11
826
3,671
<issue_start>username_0: Quick question about LLMs (and gradient descent in general): we search the space of neural networks by gradient descent in order to minimize one explicit function but what seems to be happening is that in the course of minimizing this function, the neural network automatically picks up several other skills (like building a world model...?). I imagine there is a lot of randomness in this process so to what extent are the extra skills picked up "fixed". In other words, if I train models multiple times on the same data using the same loss function, to what extent are the resulting neural networks similar in performance (out of the training set, say)? Does the answer to this question depend much on what the loss function is and how much training has taken place? Given the proliferation of LLMs with not too dissimilar behaviour, I expect the answer to the above question is positive (maybe in the limit that the training time tends to infinity). But theoretically, this is a little surprising to me that there is an "almost unique" minimizer of the given loss function. Do we have a good theoretical framework for explaining this?<issue_comment>username_1: ### If you want determinism make sure you program it in A machine learning model will be deterministic to the same extent as any other computer program. It is entirely based on the stability of the inputs you put into it. If you initialize a network to random weights you have potentially introduced some non-determinism. You can get it back by using a known seed and a known random number generator (library behavior can be system dependent). Likewise if you put the same input data in you should get the same results. I suspect this is not the question you were asking or the answer you were looking for so... ### Consider the fitness landscape for the problem A learning system is trying to find a good solution to a problem. For example you are trying to minimise the error between the output of your model and actual results. The complete space of possible solutions gives you a fitness or energy landscape. Your learning is trying to navigate this to find solutions and avoid local minima. So if you now add noise to this system. 
For example you are trying to minimise the error between the output of your model and actual results. The complete space of possible solutions gives you a fitness or energy landscape. Your learning is trying to navigate this to find solutions and avoid local minima. So if you now add noise to this system. Use a different subset of the possible input data, different initial weights or a different learning algorithm the fitness landscape is still broadly the same so you should get a similar answer. If you change things too much of course this is no longer the case. Maybe a interesting set of input data adds a new peak or trough to the landscape. Maybe a tweak to your learning algorithm enables it to escape a particular kind of local minimum or get trapped in another. This is probably more the answer you were looking for. It terms of a theorhetical model they probably exist but require a lot of maths and good citations which others may be able to provide. Upvotes: 2 <issue_comment>username_2: There's [this paper](https://arxiv.org/pdf/2109.08203.pdf) on the efficacy of random initialization on model performance: > > The conclusions are that even if the variance is not very > large, it is surprisingly easy to find an outlier that performs much better or much worse than the average. > > > Part of what you said also reminds me of the [lottery ticket hypothesis](https://arxiv.org/pdf/1803.03635.pdf), motivated by the idea that you can prune a large portion of a NN's parameters while keeping the same accuracy, but resulting sparse architecture is difficult to train from scratch. The idea is that certain subnetworks in a NN may happen to be initialized s.t. they are particularly effective for some task. Upvotes: 1
2023/07/11
836
3,421
<issue_start>username_0: **Let Zs be the input of the output layer** (for example, Z1 is the input of the first neuron in the output layer), **Os be the output of the output layer** (which are actually the results of applying the softmax activation function to Zs, for example, O1 = softmax(Z1)), **and Ys be the target values** (which are 0 or 1 because in this example we are dealing with classification problems and using one-hot encoding). **E is the sum of the neuron's loss using the CrossEntropy loss function.** **Let's say our neural network has 2 neurons, and Y1 = 1 (so Y2 = 0). What is the derivative of E with respect to Z1 and the derivative of E with respect to Z2?** **After calculations, I came to the conclusion that the derivatives of E with respect to the Zs (Z1 and Z2) should all be equal, because they are all equal to O1-1 (since Y1 = 1 as I said),** **so am I right or wrong? (And why?)**
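The gradient the question asks about can be checked numerically. The sketch below (my own) verifies the standard result $\partial E/\partial Z = O - Y$, so $\partial E/\partial Z_1 = O_1 - 1$ while $\partial E/\partial Z_2 = O_2$; with two classes these sum to zero, making them negatives of each other rather than equal:

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())  # shift for numerical stability
    return e / e.sum()

# Two output neurons, one-hot target Y = [1, 0], as in the question.
Z = np.array([0.3, -0.8])
Y = np.array([1.0, 0.0])
O = softmax(Z)

analytic = O - Y  # gradient of softmax + cross-entropy w.r.t. the logits

# Numerical check via central finite differences of E = -sum(Y * log(softmax(Z))).
eps = 1e-6
numeric = np.zeros_like(Z)
for i in range(len(Z)):
    Zp, Zm = Z.copy(), Z.copy()
    Zp[i] += eps
    Zm[i] -= eps
    Ep = -np.sum(Y * np.log(softmax(Zp)))
    Em = -np.sum(Y * np.log(softmax(Zm)))
    numeric[i] = (Ep - Em) / (2 * eps)

print(np.allclose(analytic, numeric, atol=1e-4))  # → True
```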
2023/07/12
817
3,388
<issue_start>username_0: All the prompt engineering techniques I've seen seem to focus on telling the model what to do e.g. Few-Shot Prompting. Is there any value in giving the model examples of what not to do? Can you link me to any papers/techniques on the topic? Example I am building a bot to improve students' foreign language writing skills. Bad output: Corrected spelling of 'heisse' to 'heiße' because 'heiße' is the correct spelling in German. Better output: Corrected spelling of 'heisse' to 'heiße' because 'ss' can be combined to form 'ß' in German. I could solve this specific problem using few-shot prompting. But really, I want to tell the model "don't give answers like 'this is how it is done in German', instead explain what is being done and the reasons for it". I may have answered my own question there... just put what I said above in the system prompt?
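Putting the "what not to do" instruction in the system prompt, as the question suggests, can be combined with positive few-shot examples. A sketch (the wording and helper names are my own; the message structure follows the common chat-completions shape of role/content pairs):

```python
# Illustrative only: builds the message list; no API call is made here.

def build_messages(student_text):
    system = (
        "You correct German learner sentences. Explain the rule behind each "
        "correction. Do NOT justify a correction by saying 'that is how it is "
        "done in German' or 'that is the correct spelling'."
    )
    few_shot = [
        {"role": "user", "content": "Ich heisse Anna."},
        {"role": "assistant",
         "content": "Corrected 'heisse' to 'heiße': after a long vowel, "
                    "'ss' is written 'ß' in German orthography."},
    ]
    return [{"role": "system", "content": system}, *few_shot,
            {"role": "user", "content": student_text}]

messages = build_messages("Ich gehe zur schule.")
print(messages[0]["role"], len(messages))  # → system 4
```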
2023/07/13
1,382
5,402
<issue_start>username_0: I have a dataset which consists of like.. 400000 sentences and I want to give each sentence to ChatGPT so it classifies each sentence as `positive` or `negative`. My question is, where can I find a reliable / trusted prompt to do that? and provide evidence that the prompt I used gives reliable labels for the problem? If I just create my own prompt like.. "Please conduct a sentiment analysis on the following text and tell me if the sentiment expressed is positive or negative: [insert text here]" How can I convince someone that my prompt was good? This is what I mean :/ So later no one can criticize like "Hey maybe you just used a bad prompt and ChatGPT could have performed better!" :c I hope my question is clear.<issue_comment>username_1: The problem with using ChatGPT for sentiment analysis is exactly that; it can't be relied upon. But, we can increase our chances of correct output by using chain-of-thought prompting with few-shot examples. If you don't know CoT prompting and few-shot prompting, please refer to [this](https://www.perplexity.ai/search/2966ec1d-1444-45a6-bbf7-d9e09215d5d3?s=c). Now, for your case, you can use a prompt like this- ``` Classify the sentence as positive or negative. Think step-by-step, and provide reasoning why the text is identified as positive or negative. For example, Sentence: 'Same S##t, different day' Sentiment: The sentence uses s##t. S##t is publicly accepted as vulgar or profane. So, the sentiment is Negative. and Sentence: 'This product has changed my life, I am so much more productive now' Sentiment: The product that is being talked about has made the user more productive. Productivity is generally associated with positivity. So, the sentiment is Positive. Sentence: <> Sentiment: ``` Here, we are asking ChatGPT to provide reasoning for its decision and also to use our examples to remember how step-by-step reasoning works. 
Now, while this should work well, it will still give bad results once in a while as ChatGPT is finetuned on human textual data and separately forced to accept some things as positive or negative. Consequently, a sentence that you consider as positive/negative might not be looked upon in the same way by ChatGPT. UPDATE: According to the comment, it is absolutely correct that even seemingly vulgar words can be used in a positive context; a modified prompt in that case will look like this- ``` Classify the sentence as positive or negative. Think step-by-step, and provide reasoning why the text is identified as positive or negative. For example, Sentence: 'Same S##t, different day' Context: The context seems to be that of frustration where the speaker is frustrated and/or angry at the metaphorical same s##t or mundane tasks. Sentiment: The feeling is that of frustration and boredom about something. So, the sentiment is Negative. and Sentence: 'This product has changed my life, I am so much more productive now' Context: The context seems to be that of admiration of a product that helped the user in some way and made their life easier. Sentiment: Productivity is generally associated with positivity. So, the sentiment is Positive. Sentence: <> Sentiment: ``` Upvotes: 2 <issue_comment>username_2: ``` You are given the text of reviews delimited by triple quotes. Your task is to determine the sentiment of the review in these steps: 1, Identify a list of emotions that the writer expresses, not just positive or negative. 2, Quote the phrases that are related to the identified emotions. 3, Give the sentiment of the review in a single word either "positive" or "negative" and verify that the sentiment is correct. 4, Format your sentiment in json. The permissible values of sentiment are "positive" and "negative". ``` """I have mixed feelings about this fitness tracker. On one hand, it accurately tracks my steps and heart rate, and the app is easy to use. 
However, the battery life is not very long and I find it uncomfortable to wear for long periods of time. I also think it's overpriced for what it does. So, while it has some good features, I'm not sure if I would recommend it.""" --- Giving it steps makes it take time to think and avoids jumping to conclusions. The outputs from steps 1 and 2 can be used to spot-check its accuracy. It is optional but safe to ask it to verify the result for you. Finally, formatting as JSON gives you an easy way to parse the output. The test result with ChatGPT-3.5 is provided for your reference: --- 1. Emotions: mixed feelings, accuracy, ease of use, discomfort, overpriced, uncertainty 2. Quotes: "mixed feelings about this fitness tracker" "accurately tracks my steps and heart rate" "the app is easy to use" "the battery life is not very long" "find it uncomfortable to wear for long periods of time" "overpriced for what it does" "while it has some good features, I'm not sure if I would recommend it" 3. Sentiment: Negative 4. Formatted sentiment: {"sentiment": "negative"} The sentiment of the review is negative because while the fitness tracker has some good features such as accurately tracking steps and heart rate, and having an easy-to-use app, the negative aspects such as short battery life, discomfort, and overpricing outweigh the positive ones. Additionally, the uncertainty expressed by the reviewer about recommending the product further indicates a negative sentiment towards it. Upvotes: 2
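Whichever prompt is used, labeling 400,000 sentences means constructing the prompt and parsing each reply programmatically. A minimal sketch (helper names and template wording are my own; no actual API call is shown):

```python
import json
import re

PROMPT_TEMPLATE = (
    "Classify the sentence as positive or negative. Think step-by-step and "
    "explain your reasoning, then output the result as JSON like "
    '{{"sentiment": "positive"}}.\n\nSentence: """{text}"""'
)

def build_prompt(text):
    return PROMPT_TEMPLATE.format(text=text)

def parse_sentiment(model_output):
    """Pull the JSON object out of a free-form model reply."""
    match = re.search(r'\{[^{}]*"sentiment"[^{}]*\}', model_output)
    if match is None:
        return None  # reply did not follow the format; flag for re-prompting
    return json.loads(match.group(0)).get("sentiment")

reply = 'The review lists mostly complaints. {"sentiment": "negative"}'
print(parse_sentiment(reply))  # → negative
```

Counting how often `parse_sentiment` returns `None`, and spot-checking a random sample of labels by hand, is one practical way to produce the evidence of reliability the question asks for.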
2023/07/13
716
2,873
<issue_start>username_0: w.r.t. LLM applications using the RAG (retrieval-augmented generation) architecture, people have started taking it for granted that it will be powered by a vector database. e.g., see [this](https://a16z.com/2023/06/20/emerging-architectures-for-llm-applications/): > > The most important piece of the preprocessing pipeline, from a systems standpoint, is the vector database. > > > Why can't a Lucene index (full-text search) be used for the retriever? Is there any objective study that has been done comparing quality of results using full-text search vs. using a vector database? As I was writing this, even Lucene seems to have jumped on the vector bandwagon; see [this](https://www.apachecon.com/acna2022/slides/04_lucene_vector_search_sokolov.pdf)<issue_comment>username_1: At least the traditional Lucene full-text search is not vector based but an inverted index structure which can sort document relevance ranking via scoring functions such as the term frequency-inverse document frequency (TF-IDF) or the Okapi BM25 algorithm. On the other hand, vector-based LLMs often use techniques such as cosine similarity instead for the new *semantic* relevance ranking which the traditional full-text search cannot satisfactorily achieve. Upvotes: 2 <issue_comment>username_2: Because Lucene does not do a comparison of MEANING. It is, essentially, still a text search. Vector databases are an attempt to find similar MEANING, even if the words are not identical. That is a totally different area of search. A standard vector (going by OpenAI here) has more than 1500 dimensions and identifies the similarity based on every dimension - Lucene would not even know the difference between "large village" and "city" while in reality they are QUITE similar. Or would not know that a "House" is close in meaning to a "Building". > > Is there any objective study > > > No one reading the documentation would need that - they simply do too many different things. 
Heck, Lucene does not even HAVE semantic search built in. The difference is not in the search - it is in what is searched and how the vector for a vector database is constructed to try to capture the meaning, not the similarity in words. At the minimum you would need Lucene + a semantic search mechanism - and then you would have to calculate something like a similarity along a lot of axes - also known as a vector. Upvotes: 0 <issue_comment>username_3: They are used. TF-IDF and BM25 are pretty strong baselines for document retrieval, and papers on semantic search often include these methods as a comparison point. E.g., papers like [this one](https://arxiv.org/abs/2004.04906) on open domain QA retrieve documents based on the question being asked. Vector representations perform better, but BM25 is still pretty solid, especially when performance is a bottleneck. Upvotes: 1
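The semantic ranking described above reduces to cosine similarity between embedding vectors. A toy sketch (the 3-dimensional vectors are invented stand-ins; real embeddings have hundreds or thousands of dimensions):

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

# Toy "embeddings": nearby meanings get nearby vectors, unrelated ones do not.
city = [0.9, 0.8, 0.1]
large_village = [0.8, 0.9, 0.2]
banana = [0.1, 0.0, 0.95]

print(cosine_similarity(city, large_village) > cosine_similarity(city, banana))  # → True
```

A keyword index would score "city" vs "large village" as zero overlap; the whole value of the embedding step is that an encoder places them close together so this comparison succeeds.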
2023/07/18
584
2,487
<issue_start>username_0: I'm looking for references (articles) about how LLM-generated content is moderated. From a technical point of view, what makes the difference between a so-called "uncensored" LLM such as Pygmalion 7B and what would be a "censored" one? Does an LLM always generate text that gets moderated afterwards, or is it pre-trained / fine-tuned to generate moderated content by default?<issue_comment>username_1: At least the traditional Lucene full-text search is not vector-based; it is an inverted index structure that ranks document relevance via scoring functions such as term frequency-inverse document frequency (TF-IDF) or the Okapi BM25 algorithm. Vector-based approaches built on LLM embeddings, on the other hand, often use techniques such as cosine similarity for *semantic* relevance ranking, which traditional full-text search cannot satisfactorily achieve. Upvotes: 2 <issue_comment>username_2: Because Lucene does not do a comparison of MEANING. It is, essentially, still a text search. Vector databases are an attempt to find similar MEANING, even if the words are not identical. That is a totally different area of search. A standard vector (going by OpenAI here) has more than 1500 dimensions and identifies the similarity based on every dimension - Lucene would not even know the difference between "large village" and "city" while in reality they are QUITE similar. Or would not know that a "House" is close in meaning to a "Building". > > Is there any objective study > > > No one reading the documentation would need that - they simply do too many different things. Heck, Lucene does not even HAVE semantic search built in. The difference is not in the search - it is in what is searched and how the vector for a vector database is constructed to try to capture the meaning, not the similarity in words. At the minimum you would need Lucene + a semantic search mechanism - and then you would have to calculate something like a similarity along a lot of axes - also known as a vector. Upvotes: 0 <issue_comment>username_3: They are used. TF-IDF and BM25 are pretty strong baselines for document retrieval, and papers on semantic search often include these methods as comparison points, e.g. papers like [this one](https://arxiv.org/abs/2004.04906) on open-domain QA retrieve documents based on the question being asked. Vector representations perform better, but BM25 is still pretty solid, especially when performance is a bottleneck. Upvotes: 1
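As a toy sketch of username_2's point (the three-dimensional embedding values below are made up; a real system would use an embedding model with hundreds or thousands of dimensions):

```python
import numpy as np

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

# Hypothetical 3-d embeddings standing in for a real embedding model.
docs = {
    "large village": np.array([0.9, 0.1, 0.3]),
    "city":          np.array([0.8, 0.2, 0.4]),
    "house":         np.array([0.1, 0.9, 0.2]),
}
query = np.array([0.85, 0.15, 0.35])  # imagined embedding of "big town"

# Rank documents by semantic similarity to the query.
ranked = sorted(docs, key=lambda d: cosine(query, docs[d]), reverse=True)
```

With these vectors, "large village" and "city" both score near 1 against the query while "house" scores far lower - even though the query shares no terms with any document, so a pure inverted-index lookup would find nothing.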
2023/07/19
1,036
3,811
<issue_start>username_0: I have been looking for the answer in other questions but no one has tackled it. I want to ask how the padding mask is accounted for in the attention formula. The attention formula taking into account a causal mask is: $Attention(Q, K, V) = softmax(\frac{QK^{T} + CausalMask}{\sqrt{d\_{k}}})V$ But how do we add the padding mask? The aim of a padding mask is to mask out the padding positions, since they exist only to make batching feasible. But I don't know how this mask enters the attention formula. Does it make sense to do element-wise multiplication of the attention matrix with a tensor of shape (batch size, sequence length, $d\_{model}$) that is all ones, except that for every sentence $s$ in the batch and every position $p$ that is a padding token, tensor[s, p, :] is all zeros? Thank you in advance for your help!<issue_comment>username_1: Entries of an attention mask are typically either $0$ or $-\infty$. So, adding such a mask gives either the original entry of $QK^T$ or $-\infty$. The issue with entrywise multiplication by a binary matrix is that $0$ values still contribute to the softmax. $$softmax(z)\_i = \frac{e^{z\_i}}{\sum\_k e^{z\_k}}$$ $e^0$ is $1$, so an element-wise product before the softmax is not really a mask. Applying it after the softmax doesn't work either, since the output is then no longer a probability distribution. To ensure the masked elements do not contribute at all, you need them to be $-\infty$, which mask addition does. If you know that certain indices of the input are padded, that is, they are all zero vectors, then the padding tokens are already ignored by the matmul (since they are zero). The issue is that the rows of $QK^T$ corresponding to the pad tokens are zero and not $-\infty$. To ignore pad tokens, you would create a mask with $-\infty$ along the columns corresponding to the locations of padding, and add this to $QK^T$ before the softmax. So it's just the same attention formula with a different additive mask.
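The additive padding mask can be sketched in a few lines of NumPy (the sequence length, dimensions, and pad position below are illustrative):

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)  # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

T, d_k = 4, 8
rng = np.random.default_rng(0)
q = rng.normal(size=(T, d_k))
k = rng.normal(size=(T, d_k))

# Say the last position is a pad token: put -inf along its *column*,
# so that no query attends to it.
pad = np.array([False, False, False, True])
mask = np.where(pad[None, :], -np.inf, 0.0)       # broadcasts over rows

weights = softmax(q @ k.T / np.sqrt(d_k) + mask)  # additive mask, then softmax
```

Since $e^{-\infty} = 0$, the pad column ends up with exactly zero attention weight while every row still sums to 1.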
Upvotes: 3 [selected_answer]<issue_comment>username_2: For a given sequence `x` you calculate the attention scores using the formula: \begin{equation} A = \frac{(xQ K^Tx^T)}{\sqrt{d\_k}}, \end{equation} where $Q, K$ are the query and key matrices of the attention layer. The result is a square matrix of size $T \times T$ where $T$ is the length of the sequence $x$. The entry $A\_{i,j}$ gives the attention score between $x\_{i}$ and $x\_{j}$ (note $A\_{i,j} \neq A\_{j,i}$). So basically row $i$ gives you the attention scores for token $x\_i$, which other tokens of the sequence it should attend to. However, you actually want to use these scores to perform a weighted average over the value encodings given by $xV$ ($V$ is the value matrix of the attention layer). But the scores can be arbitrary real numbers, and you want to have positive weights that sum to $1.$ That is why we apply a `softmax` layer to convert the scores into attention weights. Now, if your sequence contains pad tokens you don't want $x\_i$ to attend to them. So you want to "remove" the attention between $x\_i$ and the pad tokens. You could set the attention weights directly to $0.$, but then the sum will not be $1.$ You actually want to mask the attention before applying the softmax function. Thus, you set the attention scores between $x\_i$ and the pad tokens to a large negative number. Now applying the softmax will produce $0.$ attention weights. You can set the scores to `-float("inf")`, but I think setting them to $-1e9$ is more than enough. For a concrete example see this [github repo](https://github.com/username_2/transformer/blob/3d4bbc055dd695eccd7ca97d6646d6a81a239cb9/multihead_attn.py#L121). Also here is an extensive [blog post](https://username_2.github.io/posts/transformer/) that I wrote about the Transformer, you might like it. Upvotes: 1
2023/07/22
1,260
5,471
<issue_start>username_0: I was thinking about making an adversarial network to generate popular music: one AI that generates, and two others that respectively detect whether a song is AI-generated and predict its expected view count. The AI would thus learn to generate songs which are both realistic and popular. However, I came to a dilemma: would popular cover songs be flagged as being AI-generated (not realistic)? It seems like the most popular songs (Old Town Road, Wrecking Ball, Gangnam Style, Crazy Frog...) are all significantly different from normal music, and so would be flagged as AI-generated. This would make it very difficult to generate music which satisfies both realism and popularity, since the most popular songs would have low realism scores... Does anyone know methods to handle this sort of issue?<issue_comment>username_1: The concept of "music similarity" isn't very clear-cut -- I'm not sure that I *personally* would call popular songs outliers. Artists take inspiration from existing music, so musical styles, production techniques, etc. from popular songs should spread more than those of unpopular songs. I think this problem will come down to the data that you're analyzing/training on and the way you choose to encode/generate the songs. GANs in particular have a well-documented problem of [mode collapse](https://developers.google.com/machine-learning/gan/problems), where the generator over-optimizes for a single discriminator, leading to a lack of diversity in generations. Considering the diversity of different genres, styles, instruments, etc. in music, having a model capable of handling a diverse dataset is particularly important, *especially* if popular songs do turn out to be outliers. For example, consider the MNIST dataset. The generator might notice that 1s are really easy to generate, so it will generate realistic, hard-to-detect 1s. After a couple of iterations, the discriminator will start classifying all 1s as fake.
But now, the generator can simply move onto the next easiest digit to generate, as the discriminator is just checking for 1s. [In other words, while real-world data has many modes, because of the nature of the "game" being played, GANs will collapse to only a few.](https://cedar.buffalo.edu/%7Esrihari/CSE676/22.3-GAN%20Mode%20Collapse.pdf) See the linked websites for some possible solutions to this. You may also want to consider using a different modeling method. Models like variants of RNNs and Transformers are often used (see: [Table 3](https://arxiv.org/pdf/2210.13944.pdf)) as they were designed for sequential data. Given its success with image-generation, diffusion models have also been popular recently. [However, GANs still have their advantages, like being faster due to their non-autoregressive nature](https://openreview.net/pdf?id=H1xQVn09FX). Upvotes: 2 <issue_comment>username_2: I'm not sure what you plan to do, but I think that you are misunderstanding both what an outlier is and how GANs work. An observation being an outlier is a property that's relative to the expected underlying distribution of the data. When an observation emerges that is very unlikely to have been drawn from a random variable with the same distribution as the rest, that observation is labeled an outlier. Imagine you came up with a numerical ranking for the quality of songs, and you were studying the entire field of music. Here, the sample space is the set of all possible songs, and the measurable space your variable is mapping this space into is a subset of the image of your ranking function. During your study, you observe that this variable is distributed in a certain way: Most songs have a mid ranking, only a few songs have a very high ranking, and only a few songs have a very low ranking (let's assume this is the case, even though it's probably not). 
An observation that is extremely low, so low that you wouldn't expect to see a song with such a low ranking in a million years, could be considered an outlier. Likewise, a song with an exceptionally high ranking could also be an outlier. This could indicate that something happened that broke your ranking measurement (for example, a cat sat on the keyboard while your ranking algorithm was rating a song). However, let's now imagine that we are limiting our study only to good songs, so that our sample space is no longer that of all possible songs. Maybe even consider only the set of exceptionally good songs. Could a song with an exceptionally high ranking be an outlier here? No, having a high ranking in this subset is the norm, so unless it was so incredibly high that it again destroyed even the distribution of exceptionally good songs, it wouldn't be considered an outlier. An average song, however, which wouldn't have been an outlier in the first case, would be an outlier in this second case. So if the only thing you show to your algorithm is good songs, how are good songs going to be outliers? That being said, this shouldn't matter to you because the discriminator of a GAN isn't concerned with distinguishing the quality of a song but with distinguishing original and artificial samples. The only samples the discriminator sees are the samples the generator is trained with and the generator's creations. An outlier here could be a song that, even though it is natural, has features that make the discriminator inappropriately think it is artificial (which I doubt a lot would mean the song is 'too good'). Upvotes: 1
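A minimal sketch of the "outlier relative to a distribution" idea in plain Python (the ranking values are hypothetical):

```python
import statistics

# Hypothetical quality rankings for a sample of songs.
sample = [5.1, 4.9, 5.3, 5.0, 4.8, 5.2]
observation = 9.8

mu = statistics.mean(sample)
sigma = statistics.stdev(sample)
z = (observation - mu) / sigma
# z is enormous here, so 9.8 is an outlier *relative to this sample's
# distribution*; among a sample restricted to exceptionally good songs,
# the same score might be perfectly typical.
```

The same observation is or is not an outlier depending on which sample space it is compared against, which is the point of the answer above.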
2023/07/24
2,545
8,943
<issue_start>username_0: I have been working mechanically with transformers, hoping that with time clarity about what the query, key, and value matrices represent will develop; but I am still lost. Would greatly benefit from a simplified explanation.<issue_comment>username_1: Something I found helpful was "[Transformers for Software Engineers](https://blog.nelhage.com/post/transformers-for-software-engineers/)" - unrolling the matrix multiplications into a funky functional program which maps over vectors. We can follow this approach backwards and see the K, Q, V naturally emerge. I'm going to use pseudo-TypeScript and illustrate the residual stream as a JSON (I'll use the term "residual" to describe each individual element of the array, i.e. the original token plus any information we accumulate about that token in the context of the input): ``` let residualStream: Residual[] = [ { "token": oneHot("Hello"), position: oneHot(100) }, { "token": oneHot("World"), position: oneHot(101) }, { "token": oneHot("."), position: oneHot(102) }, ] ``` Our goal is to add useful features to the residuals until we have so many that predicting the next word becomes trivial. We might end up with something like this by the end (requires the residual stream to be wide enough): ``` residualStream = [ { "token": oneHot("Hello"), position: oneHot(100), partOfSpeech: "noun", prevToken: oneHot("."), lastMatchingPosition: oneHot(30), whatCameNextLastTime: oneHot("Everyone"), ... }, { "token": oneHot("World"), position: oneHot(101), partOfSpeech: "noun", prevToken: oneHot("Hello"), lastMatchingPosition: oneHot(50), whatCameNextLastTime: oneHot("!"), ... }, { "token": oneHot("."), position: oneHot(102), partOfSpeech: "noun", prevToken: oneHot("World"), lastMatchingPosition: oneHot(10), whatCameNextLastTime: oneHot("Bye"), ... 
}, ] ``` Feedforward layer is like [`vmap`](https://jax.readthedocs.io/en/latest/_autosummary/jax.vmap.html)ping a function of a single element: ``` type FeedForward = Residual => Residual let feedforward: FeedForward const applyFeedForward = residualStream => residualStream.map(feedforward) residualStream = applyFeedForward(residualStream) ``` As you know, this is sufficient to calculate things like: ``` const getLengthOfToken: FeedForward = ({ token, ...rest }) => ({ token, length: token.length(), ...rest }) ``` but not something like ``` const getPreviousToken: FeedForward = ??? ``` since the previous token is not derivable from the residual. So now we need to bust out the attention mechanism: ``` const getPreviousToken: AttentionHead ``` and we'll see that it'll break down into K, Q, V like you asked. The first step is likely to be picking out the relevant info: ``` type WV = Residual => Value const getTokenValue: WV = residual => residual.token type WK = Residual => Key const getTokenPosition: WK = residual => residual.position ``` And the last step is likely to be storing the result back into the residual stream: ``` type WO = (res: Residual, z: Z) => Residual // assuming we've figured out `z`, which'll be the next part of this answer const recordPreviousToken: WO = (res, z) => { ...res, prevToken: z } ``` Note that these parts don't need to be nearly as wide as the residual stream, since they are just operating on specific features: ``` K = [oneHot(100), oneHot(101), oneHot(102)] V = [oneHot("Hello"), oneHot("World"), oneHot(".")] Z = [oneHot("!"), oneHot("Hello"), oneHot("World")] ``` What's more is, we probably want to do as many of these small operations like `getPreviousToken` in parallel as possible. So this is where we get multi-head attention. Now we have found `W_V`, `W_K`, `W_O`, we only need to find `W_Q` now. Let's try to finish our function. 
We'll have to draw upon the defining feature of attention, which is: taking the residuals of the other tokens into account. ``` type AttentionHead = (residual: Residual, otherResiduals[]) => Value const getPreviousTokenValue = (residual, otherResiduals[]) => ( sum(otherResiduals.map(otherResidual => getTokenValue(otherResidual) // W_V * scaledSoftmax( // handwaving this part; in reality this would apply over Q * K^T as a whole getTokenPosition(otherResidual) // W_K .dot(getPreviousTokenPosition(residual)) // we found W_Q! )) ) ``` We found `W_Q`! It's: ``` type WQ = Residual => Query const getPreviousTokenPosition = residual => shiftLeft(residual.position) ``` To recap: * `W_V` gets a piece of info ("value") from each residual * `W_K` positions ("keys") the residual in N-dimensional space in a way appropriate for the attention head's task * `W_Q` looks ("queries") in N-dimensional space for other residuals relevant to the attention head's task * `W_O` sums up the info/results ("output") and stores it back into the residual stream As a final exercise, I challenge you to improve on the attention head design; that is, can you accomplish the same thing with fewer than 3 "reads"? (3 counting K, Q, V) I tried this and I concluded that the general way to express an arbitrary comparison of two vectors (in our case, residuals) is a [bilinear form](https://en.wikipedia.org/wiki/Bilinear_form). And the way to speed that up so that it's not `N_INPUT^2 * D_RESIDUAL` is to perform a [low-rank approximation](https://en.wikipedia.org/wiki/Low-rank_approximation), which is exactly what self-attention does by way of the dimension-reducing maps $W\_K$ and $W\_Q$ approximating a [rank factorization](https://en.wikipedia.org/wiki/Rank_factorization) $A = W\_Q W\_K^T$. But I believe the various "fast attention" algorithms each have a different take on this.
BTW, the example I used is from ["In-Context Learning and Induction Heads"](https://transformer-circuits.pub/2022/in-context-learning-and-induction-heads/index.html). Highly recommended reading for understanding how attention heads work. See if you can come up with similar pseudo-code to mine, but for the rest of the Induction Head circuit. Upvotes: 3 [selected_answer]<issue_comment>username_2: A very, very distant connection can be seen between the self-attention layer and the word2vec model. I think this might be helpful in order to gain more intuition. Starting from the word2vec model: it provides a separate embedding for every word in your vocabulary. Given an input vector $x\_i$, which is usually a one-hot encoding in the case of word2vec, you multiply it with the embedding matrix $V$ and get the word embedding vector: $v\_i = x\_i V$. Now, if you have a sequence $x = \{x\_0, x\_1, ..., x\_{T-1}\}$, then each embedding vector is calculated without taking into consideration the rest of the sequence, i.e. without taking into account the context. What you want to do is compute an embedding $z\_i$ for every $x\_i$, such that $z\_i$ captures the context and depends on the rest of the sequence. And if we have the same $x\_i$ in a sequence with different elements, then $z\_i$ will be different. The self-attention layer of the transformer proposes the following solution: 1. We will calculate the individual embedding of each of the elements of the sequence and we will call these the *value embeddings* $\{v\_0, v\_1, ..., v\_{T-1}\}$, where $v\_i = x\_i V$. 2. The final embedding $z\_i$ for each of the elements $x\_i$ will be a weighted sum over all of the value embeddings, i.e.: $$ z\_i = \sum\_j \alpha\_{i,j} v\_j $$ The weight $\alpha\_{i,j}$ dictates how much of the value embedding of $x\_j$ should be present in the mixture when calculating $z\_i$.
What we want is to have a high value of $\alpha\_{i,j}$ if $x\_i$ is tightly connected to $x\_j$, and a low value otherwise. Thus, we want to estimate something like a proximity score between $x\_i$ and $x\_j$. Usually this is simply done by taking the scalar product of the two: $score(x\_i, x\_j) = x\_i x\_j^T$. But then $score(x\_i, x\_j) = score(x\_j, x\_i)$, which is not what we want. We don't actually care how close $x\_i$ and $x\_j$ are; rather, we want to know how relevant $x\_j$ is to $x\_i$ - a one-way relation. Later we would like to know how relevant $x\_i$ is to $x\_j$, and we expect the score to be different. And so the attention score is actually computed as $score(x\_i, x\_j) = x\_i W x\_j^{T}$, where $W$ is an additional embedding matrix, which is actually decomposed into two matrices: $W=QK^T$. Now for the score we have $score(x\_i, x\_j) = x\_i Q K^T x\_j^{T}$, and we can say that $q\_i = x\_i Q$ is our query embedding of $x\_i$ and $k\_j = x\_j K$ is our key embedding of $x\_j$. Now, apart from making this nice query-key-value analogy, there is another reason for decomposing $W$ into $W=QK^T$, but I think that this answer is already long enough. Feel free to check out a blog post that I wrote about the transformer model if you want to read more: <https://username_2.github.io/posts/transformer/> Upvotes: 3
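The query-key-value computation described in this answer can be sketched in a few lines of NumPy (all sizes and matrices below are random placeholders, not trained parameters):

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

T, d_model, d_k = 5, 16, 8          # illustrative sizes
rng = np.random.default_rng(1)
x = rng.normal(size=(T, d_model))   # one row per token
Q = rng.normal(size=(d_model, d_k))
K = rng.normal(size=(d_model, d_k))
V = rng.normal(size=(d_model, d_k))

scores = (x @ Q) @ (x @ K).T / np.sqrt(d_k)  # score(x_i, x_j) = x_i Q K^T x_j^T
alpha = softmax(scores)                      # attention weights, rows sum to 1
z = alpha @ (x @ V)                          # z_i = sum_j alpha_ij * v_j
```

Note that `scores` is not symmetric: the relevance of $x\_j$ to $x\_i$ differs from the relevance of $x\_i$ to $x\_j$, which is exactly the one-way relation described above.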
2023/07/26
2,598
9,182
<issue_start>username_0: While fine-tuning a decoder-only LLM like LLaMA on a chat dataset, what kind of padding should one use? Many papers use left padding, but is right padding wrong, given that transformers emits the following warning when right padding is used: "A decoder-only architecture is being used, but right-padding was detected! For correct generation results, please set padding\_side='left' when initializing the tokenizer." The attention mask will ignore the padding tokens anyway.
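A toy, library-free illustration of why left padding is preferred for generation with decoder-only models (the token ids are made up; this is not the Hugging Face API):

```python
PAD = 0                        # hypothetical pad token id
batch = [[5, 6, 7], [8, 9]]    # a ragged batch of token ids

max_len = max(len(s) for s in batch)
right = [s + [PAD] * (max_len - len(s)) for s in batch]
left = [[PAD] * (max_len - len(s)) + s for s in batch]

# A decoder-only model generates by extending each row at the end.
# With right padding the shorter row would be extended *after* its PAD
# ([8, 9, PAD] -> [8, 9, PAD, new]), so the model conditions on a pad
# token in the final position; with left padding the new token directly
# follows the last real token ([PAD, 8, 9] -> [PAD, 8, 9, new]).
```

During training with teacher forcing the attention mask does make the pad positions irrelevant to the loss; the warning is specifically about generation, where the prediction for the next token is read off the final position.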
2023/07/29
2,921
10,333
<issue_start>username_0: I know this might be a bit general question and concerning a rather active research field, much beyond my expertise, but I do believe there're some answers. The use of NN parameters quantization can span from post-training static/dynamic quantization (PTQ) to quantization-aware training (QAT). Generally the target is cutting down FP-32 weights to UINT-8 whilst retaining overall accuracy; the benefits in performance are often sensible, yielding zero to negligible (depending on application) to few percentage drop in accuracy. However, I hear these statements are true depending on *which* NN is being quantized ([source](https://youtu.be/c3MT2qV5f9w?t=606)). Some models indeed are very forgiving, whereas others are not even using more aggressive strategies. > > *(extracted from PyTorch source)* > > > > > There're some networks that are very forgiving: you can do PTQ and > the end result is as accurate as the FP value. [...] In **some** NN > they're slightly more demanding, they're slightly less forgiving on > the approximation [...] > > > My question are: * Which categories of DNN are more suited for quantization? * What could be one rationale why this would be? Does the size of the network (i.e. number of parameters) has any role, for instance? * What is introducing such demands (this will depend on actual model indeed) in the forward path? Is it to be found in the activation/actual MVP or what? My **guess** would be depending on achievable weights sparsity -- however, this in general (AFAIK) can be tuned forcing training constraints, so it wouldn't really answer the question.<issue_comment>username_1: Something I found helpful was "[Transformers for Software Engineers](https://blog.nelhage.com/post/transformers-for-software-engineers/)" - unrolling the matrix multiplications into a funky functional program which maps over vectors. We can follow this approach backwards and see the K, Q, V naturally emerge. 
I'm going to use pseudo-TypeScript and illustrate the residual stream as a JSON (I'll use the term "residual" to describe each individual element of the array, i.e. the original token plus any information we accumulate about that token in the context of the input): ``` let residualStream: Residual[] = [ { "token": oneHot("Hello"), position: oneHot(100) }, { "token": oneHot("World"), position: oneHot(101) }, { "token": oneHot("."), position: oneHot(102) }, ] ``` Our goal is to add useful features to the residuals until we have so many that predicting the next word becomes trivial. We might end up with something like this by the end (requires the residual stream to be wide enough): ``` residualStream = [ { "token": oneHot("Hello"), position: oneHot(100), partOfSpeech: "noun", prevToken: oneHot("."), lastMatchingPosition: oneHot(30), whatCameNextLastTime: oneHot("Everyone"), ... }, { "token": oneHot("World"), position: oneHot(101), partOfSpeech: "noun", prevToken: oneHot("Hello"), lastMatchingPosition: oneHot(50), whatCameNextLastTime: oneHot("!"), ... }, { "token": oneHot("."), position: oneHot(102), partOfSpeech: "noun", prevToken: oneHot("World"), lastMatchingPosition: oneHot(10), whatCameNextLastTime: oneHot("Bye"), ... }, ] ``` Feedforward layer is like [`vmap`](https://jax.readthedocs.io/en/latest/_autosummary/jax.vmap.html)ping a function of a single element: ``` type FeedForward = Residual => Residual let feedforward: FeedForward const applyFeedForward = residualStream => residualStream.map(feedforward) residualStream = applyFeedForward(residualStream) ``` As you know, this is sufficient to calculate things like: ``` const getLengthOfToken: FeedForward = ({ token, ...rest }) => ({ token, length: token.length(), ...rest }) ``` but not something like ``` const getPreviousToken: FeedForward = ??? ``` since the previous token is not derivable from the residual. 
So now we need to bust out the attention mechanism: ``` const getPreviousToken: AttentionHead ``` and we'll see that it'll break down into K, Q, V like you asked. The first step is likely to be picking out the relevant info: ``` type WV = Residual => Value const getTokenValue: WV = residual => residual.token type WK = Residual => Key const getTokenPosition: WK = residual => residual.position ``` And the last step is likely to be storing the result back into the residual stream: ``` type WO = (res: Residual, z: Z) => Residual // assuming we've figured out `z`, which'll be the next part of this answer const recordPreviousToken: WO = (res, z) => { ...res, prevToken: z } ``` Note that these parts don't need to be nearly as wide as the residual stream, since they are just operating on specific features: ``` K = [oneHot(100), oneHot(101), oneHot(102)] V = [oneHot("Hello"), oneHot("World"), oneHot(".")] Z = [oneHot("!"), oneHot("Hello"), oneHot("World")] ``` What's more is, we probably want to do as many of these small operations like `getPreviousToken` in parallel as possible. So this is where we get multi-head attention. Now we have found `W_V`, `W_K`, `W_O`, we only need to find `W_Q` now. Let's try to finish our function. We'll have to draw upon the defining feature of attention, which is: taking the residuals of the other tokens into account. ``` type AttentionHead = (residual: Residual, otherResiduals[]) => Value const getPreviousTokenValue = (residual, otherResiduals[]) => ( sum(otherResiduals.map(otherResidual => getTokenValue(otherResidual) // W_V * scaledSoftmax( // handwaving this part; in reality this would apply over Q * K^T as a whole getTokenPosition(otherResidual) // W_K .dot(getPreviousTokenPosition(residual)) // we found W_Q! )) ) ``` We found `W_Q`! 
It's:

```
type WQ = Residual => Query
const getPreviousTokenPosition = residual => shiftLeft(residual.position)
```

To recap:

* `W_V` gets a piece of info ("value") from each residual
* `W_K` positions ("keys") the residual in N-dimensional space in a way appropriate for the attention head's task
* `W_Q` looks ("queries") in N-dimensional space for other residuals relevant to the attention head's task
* `W_O` sums up the info/results ("output") and stores it back into the residual stream

As a final exercise, I challenge you to challenge the attention head design; that is, can you accomplish the same thing with fewer than 3 "reads"? (3 is counting K, Q, V) I tried this and I concluded that the general way to express an arbitrary comparison of two vectors (in our case, residuals) is a [bilinear form](https://en.wikipedia.org/wiki/Bilinear_form). And the way to speed that up so that it's not `N_INPUT^2 * D_RESIDUAL` is to perform a [low-rank approximation](https://en.wikipedia.org/wiki/Low-rank_approximation), which is exactly what self-attention does by way of the dimension-reducing maps $W\_K$ and $W\_Q$ approximating a [rank factorization](https://en.wikipedia.org/wiki/Rank_factorization) $A = W\_Q W\_K^T$. But I believe the various "fast attention" algorithms each have a different take on this.

BTW, the example I used is from ["In-Context Learning and Induction Heads"](https://transformer-circuits.pub/2022/in-context-learning-and-induction-heads/index.html). Highly recommended reading for understanding how attention heads work. See if you can come up with similar pseudo-code as I did, but for the rest of the Induction Head circuit.

Upvotes: 3 [selected_answer]<issue_comment>username_2: A very, very distant connection can be seen between the self-attention layer and the word2vec model. I think that this might be helpful in order to gain more intuition.
Starting from the word2vec model: what it does is provide a separate embedding for every word in your vocabulary. Given an input vector $x\_i$, which is usually a one-hot encoding in the case of word2vec, you multiply it with the embedding matrix $V$ and get the word embedding vector: $v\_i = x\_i V$. Now, if you have a sequence $x = \{x\_0, x\_1, ..., x\_{T-1}\}$, then each embedding vector is calculated without taking into consideration the rest of the sequence, i.e. without taking into account the context.

What you want to do is compute an embedding $z\_i$ for every $x\_i$, such that $z\_i$ captures the context and depends on the rest of the sequence. And if we have the same $x\_i$ in a sequence with different elements, then $z\_i$ will be different. The self-attention layer of the transformer proposes the following solution:

1. We will calculate the individual embedding of each of the elements of the sequence, and we will call these the *value embeddings* $\{v\_0, v\_1, ..., v\_{T-1}\}$, where $v\_i = x\_i V$.
2. The final embedding $z\_i$ for each of the elements $x\_i$ will be a weighted sum of all of the value embeddings, i.e.:
$$ z\_i = \sum\_j \alpha\_{i,j} v\_j $$

The weight $\alpha\_{i,j}$ dictates how much of the value embedding of $x\_j$ should be present in the mixture when calculating $z\_i$. What we want is to have a high value of $\alpha\_{i,j}$ if $x\_i$ is tightly connected to $x\_j$, and a low value otherwise. Thus, we want to estimate something like a proximity score between $x\_i$ and $x\_j$. Usually this is simply done by taking the scalar product between the two: $score(x\_i, x\_j) = x\_i x\_j^T$. But then $score(x\_i, x\_j) = score(x\_j, x\_i)$, which is not what we want. We don't actually care how close $x\_i$ and $x\_j$ are; rather, we want to know how relevant $x\_j$ is to $x\_i$ - a one-way relation. Later we would like to know how relevant $x\_i$ is to $x\_j$, and we expect the score to be different.
And so the attention score is actually computed as $score(x\_i, x\_j) = x\_i W x\_j^{T}$, where $W$ is an additional embedding matrix, which is actually decomposed into two matrices: $W=QK^T$. Now for the score we have $score(x\_i, x\_j) = x\_i Q K^T x\_j^{T}$, and we can say that $q\_i = x\_i Q$ is our query embedding of $x\_i$ and $k\_j = x\_j K$ is our key embedding of $x\_j$.

Now, apart from making this nice query-key-value analogy work, there is another reason for decomposing $W$ into $W=QK^T$, but I think that this answer is already long enough. Feel free to check out a blog post that I wrote about the transformer model if you want to read more: <https://username_2.github.io/posts/transformer/>

Upvotes: 3
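The query-key-value computation described in the answers above can be sketched end to end in plain numpy. All sizes are made up, and there is no masking or multi-head logic; this is only the single-head core:

```python
import numpy as np

rng = np.random.default_rng(0)
T, d_model, d_head = 4, 16, 4   # illustrative sizes; note d_head << d_model

X = rng.normal(size=(T, d_model))          # one residual/embedding per token

# The low-rank factorization W = W_Q @ W_K.T discussed above
W_Q = rng.normal(size=(d_model, d_head))
W_K = rng.normal(size=(d_model, d_head))
W_V = rng.normal(size=(d_model, d_head))
W_O = rng.normal(size=(d_head, d_model))

Q = X @ W_Q                                # queries: what each token looks for
K = X @ W_K                                # keys: how each token advertises itself
V = X @ W_V                                # values: the info each token offers

scores = Q @ K.T / np.sqrt(d_head)         # score(x_i, x_j) = x_i W_Q W_K^T x_j^T, scaled
weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
weights /= weights.sum(axis=-1, keepdims=True)   # softmax over j: alpha_{i,j}

Z = weights @ V                            # z_i = sum_j alpha_{i,j} v_j
out = Z @ W_O                              # written back into the residual stream

assert out.shape == X.shape
```

Note that `scores` is not symmetric in general, which is exactly the one-way relevance property argued for in the second answer.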
2023/07/31
743
3,015
<issue_start>username_0: I am trying to accomplish the reverse of the typical MNIST in machine learning using a GAN - instead of predicting a number from an image of a digit, I want to reconstruct an image of a digit from a number. The traditional GAN, however, isn't designed for this use case, as it is designed to generate images similar to training data directly, without being given an input.

One way to work around this issue that I've thought of is to take a train feature digit, connect it to a densely-connected layer `Dense(784)`, reshape it to `(28 x 28 x 1)`, and then proceed with the generator as one usually does for a GAN. However, this seems like "fooling" the neural network to make up weights out of thin air, and I doubt this would work properly.

How can I modify a GAN so that it takes single-digit inputs without resorting to the aforementioned approach?<issue_comment>username_1: This is a common use case for GANs: you want the output to be conditioned on some controlled input, as opposed to just random seed data. This Medium article, [cGAN: Conditional Generative Adversarial Network — How to Gain Control Over GAN Outputs](https://towardsdatascience.com/cgan-conditional-generative-adversarial-network-how-to-gain-control-over-gan-outputs-b30620bd0cc8), walks through creating your example almost exactly (using MNIST digits and selecting which one you want).

The basic changes you need to make to turn your "freely choose class" GAN into a conditional GAN:

* Add input to the generator, *in addition to* the random seed, that describes the class you want to generate. For a digit generator, that might be a one-hot array for the digit class you want it to make.
* Add the same input to the discriminator, alongside the image input. That may involve having inputs at different layers, so you can combine CNN and fully-connected layers more easily.
Typically you would concatenate the class choice to the flattened last CNN layer, and use this concatenated vector as input to the first fully-connected layer. But you could concat the class data to any nn layer before the output.

* Train as before, whilst tracking which class is being faked (or real) and ensuring the generator and discriminator are fed the correct class details during training.

You could optionally provide incorrect labels for some real or fake images, and score appropriately, but that is not 100% necessary. The discriminator should detect that something declared to be a '1' looks more like a '2' and mark it as fake, without needing to be specifically trained for it.

Upvotes: 3 [selected_answer]<issue_comment>username_2: I agree with [@Neil's](https://ai.stackexchange.com/a/41564/52501) answer, as I also strongly believe that cGANs are the actual answer to your problem. However, as he suggested, it's worth mentioning that [GAN inversion](https://arxiv.org/pdf/2101.05278.pdf) can also be used to achieve such results; it's used when training a new GAN from scratch is too expensive.

Upvotes: 2
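The two input changes described in the accepted answer amount to simple concatenations. Here is a shape-only sketch in numpy (no actual networks; the 100-dimensional seed and MNIST-style image size are assumptions for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
batch, noise_dim, n_classes = 8, 100, 10

# Generator input: random seed PLUS a one-hot class choice, concatenated.
noise = rng.normal(size=(batch, noise_dim))
labels = rng.integers(0, n_classes, size=batch)
one_hot = np.eye(n_classes)[labels]
gen_input = np.concatenate([noise, one_hot], axis=1)       # shape (8, 110)

# Discriminator side: flatten the (real or fake) image and concatenate
# the SAME class information before the fully-connected layers.
fake_images = rng.normal(size=(batch, 28, 28, 1))          # stand-in for G's output
flat = fake_images.reshape(batch, -1)                      # shape (8, 784)
disc_input = np.concatenate([flat, one_hot], axis=1)       # shape (8, 794)
```

In a real implementation the concatenation would feed into the first dense layer of each network; the point is only that the class label travels alongside both the seed and the image.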
2023/08/01
1,411
5,647
<issue_start>username_0: I have a Gym env (env) for which I train a model using the PPO algorithm with stable-baselines.

```
from stable_baselines3 import PPO
from stable_baselines3.ppo import MlpPolicy

model = PPO(MlpPolicy, env, verbose=0)
model.learn(total_timesteps=10_000, progress_bar=True)

obs, info = env.reset()
action_prediction = []
for i in range(5000):
    action, _states = model.predict(obs, deterministic=True)
    action_prediction.append(action)
    obs, rewards, done1, done2, info = env.step(action)
    if done1:
        break
```

If I use the model.predict routine after training, I see that the model successfully masters the task. However, this is not the fastest solution. For the proposed actions, I see for example that the model does not use the full action parameter range it could use. For a test case where I know the optimal solution, I see that the prediction needs more steps than needed by the optimal solution.

How can I foster the model during learning to fulfill the task with a minimal number of steps? Is there a better way than going down the limbo of optimizing the reward function to balance mission completion and run time by writing my own Environment?<issue_comment>username_1: First, I would like to emphasize that as soon as you introduce a function approximator, guarantees of learning the optimal policy are thrown out of the window, moreover if it's a non-convex one:

* maybe you get stuck on a local minimum
* maybe you don't do enough exploration
* maybe the reward function does not encode the temporal dependency that you want
* maybe you have not trained your model enough

You can definitely change the reward function, and that will for sure drive your agent somewhere else in the solution space, as that's the supervision signal that you are giving to it. You can just add a penalization for each step that the agent takes; thus, to maximize the cumulative reward, it has to take as few steps as possible...
however, consider that the agent might trade off penalization with speed:

> if you penalize me -1 for each step, and I can choose between swimming in the river or going on the bridge, even if you give me -50 for swimming in the river because that's not what you want, if that saves me more than 50 steps, I'll do it

Upvotes: 2 <issue_comment>username_2: A quick recap:

1. You train a reinforcement learning agent using PPO
2. Your agent learns useful behavior, but this behavior is suboptimal
3. You would like your agent to learn a behavior closer to optimal

Let me suggest four strategies for improving your agent's learning. You can either use those strategies separately or combined.

(1) From the evidence you provide (the model does not use the full action parameter range it could use), it seems that the neural network under the hood is not able to leverage the full action range. A useful thing you can do at this stage is to normalize the action space. For instance, assuming you have a continuous action environment ([code source](https://github.com/openai/gym/blob/master/gym/wrappers/rescale_action.py)),

```
from gym.wrappers import RescaleAction

env = RescaleAction(env, min_action=-1, max_action=1)
```

Now, the neural network will assume that the actions are between -1 and 1, which will likely improve the situation.

(2) Additionally, to enhance your agent's training, you may want to normalize your observations and/or rewards.
Here is how you can do it with [stable baselines3 vec\_env](https://stable-baselines3.readthedocs.io/en/master/guide/vec_envs.html#stable_baselines3.common.vec_env.VecNormalize):

```
from stable_baselines3.common.vec_env import DummyVecEnv, VecNormalize

env = DummyVecEnv([lambda: env])
# Wrap the environment with observation/reward normalization
env = VecNormalize(env, norm_obs=True, norm_reward=True,
                   clip_obs=10., clip_reward=10., training=True)
model = PPO(MlpPolicy, env, verbose=0)
```

Note that during evaluation, there is no need to update the rolling average of observations/rewards. This means you will need to set `training=False` when evaluating agents.

Try those three normalizations (actions, observations, rewards) separately and then combined, and see how they improve the policy or not.

(3) If you are designing your own environment, it might be useful to give the agent more visibility in terms of observation space. This is all about feature engineering to provide the agent more information about its environment. One example would be including previous observations and/or actions in current observations. Note that including too many observations can also increase the number of training iterations. The simplest way to use this approach is by creating a custom [gym wrapper](https://www.gymlibrary.dev/api/wrappers/#observationwrapper).

(4) One more thing to do is to find better hyperparameters (e.g. using [Optuna](https://optuna.org/)). Beware, this approach is computationally expensive because for each hyperparameter set it will train a new agent. Given that PPO was created with the intention of minimizing hyperparameter tuning (I briefly explain the PPO algorithm [here](https://username_2.com/neural-networks/beyond/)), hyperparameter search should be used as a last resort only, i.e. after you have tried everything else.

Finally, I have to point out that as soon as neural networks enter the game (as in PPO), there is no guarantee of learning the optimal policy.
If you are solving a simple task, you may want to use simpler algorithms (ones that do not involve neural networks). Here is [*the* book](http://incompleteideas.net/book/the-book.html) about reinforcement learning.

Upvotes: 0
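The per-step penalization suggested in the first answer can be sketched as a tiny reward wrapper. The toy environment and every name here are made up for illustration; with a real gym env you would subclass `gym.RewardWrapper` instead:

```python
class ToyEnv:
    """Made-up env: walk from position 0 to 5; reward 10 on success, else 0."""
    def __init__(self):
        self.pos = 0

    def reset(self):
        self.pos = 0
        return self.pos

    def step(self, action):          # action: +1 (move forward) or 0 (wait)
        self.pos += action
        done = self.pos >= 5
        reward = 10.0 if done else 0.0
        return self.pos, reward, done, {}

class StepPenaltyWrapper:
    """Subtracts a fixed cost every step, so slower solutions score lower."""
    def __init__(self, env, penalty=1.0):
        self.env, self.penalty = env, penalty

    def reset(self):
        return self.env.reset()

    def step(self, action):
        obs, reward, done, info = self.env.step(action)
        return obs, reward - self.penalty, done, info

env = StepPenaltyWrapper(ToyEnv())
env.reset()
total = 0.0
done = False
while not done:
    _, r, done, _ = env.step(1)      # the "fast" policy: always move forward
    total += r
# 5 steps at -1 each, plus 10 on success: total return is 5.0
```

Any policy that wastes steps (e.g. mixing in `env.step(0)`) now ends up with a strictly lower return, which is exactly the pressure toward minimal step counts discussed above.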
2023/08/02
1,421
5,602
<issue_start>username_0: Given a generative model, G, trained on a dataset D. This generative model can be either GAN or diffusion based. Suppose each sample, x\_i, generated by G, can be evaluated by a readily available scoring function, S(x\_i). What are the possible ways to navigate the latent space of G to find generated samples which maximize S(x\_i)?

Here are some of the possible ways I have thought of:

1. Random samplings from G to form a dataset. For each sample, evaluate it with S, then finetune G to maximise S with loss = -S. Keep repeating until G only produces samples with high S values. This is motivated by RLHF but uses backprop instead of an RL loss.
2. Fix G, start with a random latent, l\_1 to sample x\_1 from G. Then slowly find a better x\_2 by going in the direction of S, e.g. l\_2 = l\_1 + alpha \* s(x\_1) = l\_1 + alpha \* s( g(l\_1)). Is this possible, is it fair to assume P(l, s(1)) is smooth?
3. Condition the training of G with S, i.e. G(X|S(X)). In this way, we can specify any desired score as an input to the generative model.

Are any of these ideas possible? Otherwise, are there better or simpler approaches?
2023/08/03
625
1,961
<issue_start>username_0: How do you expand the **reconstruction error** into the **mean squared error** when it is $\mathbb{E}\_{z\sim q\_{\phi}(z|x)}[\log p\_\theta(x|z)]$?

[reconstruction error] $\mathbb{E}\_{z\sim q\_{\phi}(z|x)}[\log p\_\theta(x|z)]$

[mean squared error] $\mathbb{E} \Big [\big(x - p\_\theta(q\_\phi(x))\big)^2 \Big]$

[MSE pseudo code]

```
def reconstruction_loss(y, t):
    # MSE
    # y: predicted value
    # t: true value
    return square(y - t).mean()
```

Is the **reconstruction error** just an idea, and is the actual formula the **MSE**? Is that all? I mean, why does MSE (or BCE) come from the reconstruction error?<issue_comment>username_1: In a way, you're right. The reconstruction loss is just an idea because you have not yet defined the distribution $p\_\theta$. If you assume that this distribution is e.g. a Gaussian, then you should be able to derive something interesting. See equation 12 from [the VAE paper](https://arxiv.org/pdf/1312.6114.pdf). You can check out the definition of the Gaussian pdf [on Wikipedia](https://en.wikipedia.org/wiki/Normal_distribution). If you're stuck, you could start with the simple regression problem and, if necessary, watch e.g. [this Hinton's video](https://www.youtube.com/watch?v=vEPQNwxd1Y4).

Upvotes: 2 <issue_comment>username_2: If you model the distribution $p\_\theta$ as a Gaussian then:

$\displaystyle p\_\theta(x|z) = \frac{1}{\sigma\sqrt{2\pi}}e^{-\frac{1}{2}\left(\frac{x-\mu}{\sigma}\right)^2} $

$\displaystyle \log p\_\theta(x|z) = -\log \sigma - \frac{1}{2}\log 2\pi -\frac{1}{2} \left(\frac{x-\mu}{\sigma}\right)^2$

If you decide that your neural net will only produce the mean $\mu$ and you will use a constant $\sigma$, then the derivative of $\log p\_\theta$ is the same as the derivative of the $MSE$, but scaled by a constant factor $\frac{1}{2\sigma^2}$. Now, given that this is the most common choice of model (Gaussian with constant $\sigma$), people just optimize the $MSE$ and adjust their learning rate accordingly.

Upvotes: 2
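The reduction in the second answer can be checked numerically. A small sketch with arbitrary made-up numbers, showing that the Gaussian negative log-likelihood equals the squared error scaled by 1/(2 sigma^2) plus a constant, so both have the same minimizer:

```python
import numpy as np

sigma = 0.5
x = np.array([1.0, -2.0, 0.3])            # "true" data
mu = np.array([0.8, -1.5, 0.0])           # decoder output: the mean of p_theta(x|z)

# Per-dimension negative log-likelihood of the Gaussian with fixed sigma
nll = np.log(sigma) + 0.5 * np.log(2 * np.pi) + 0.5 * ((x - mu) / sigma) ** 2

# Per-dimension squared error
sq_err = (x - mu) ** 2

# NLL = constant + squared_error / (2 sigma^2), term by term
const = np.log(sigma) + 0.5 * np.log(2 * np.pi)
assert np.allclose(nll, const + sq_err / (2 * sigma ** 2))
```

Since the constant does not depend on `mu`, minimizing the NLL over the decoder's output is the same as minimizing the MSE, up to the learning-rate rescaling mentioned above.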
2023/08/03
526
1,858
<issue_start>username_0: I am looking for the name/source of the statement regarding "moving the goalposts" w.r.t. AI. I don't remember the statement exactly, but it was something in the sense that we think of AI like of God: We define a problem as an AI problem. As soon as our computers become capable of solving it, it ceases to be an AI problem. For example (he said), in the 1960s one could earn a PhD in AI on chess-playing programs. Today (in the 1980s), they are just chess-playing programs. This is similar to God, in the sense that we used to believe that God lives up in the sky. Now that we've reached the sky, he lives somewhere else.

What I remember (or believe to remember) about that person is:

* he used to work at Apple Computers in the early days (1970s - 1980s), probably as a technical writer;
* he used to drive a used Rolls Royce;
* his last name starts with a T.

Any guesses?<issue_comment>username_1: I think you're looking for the term **AI effect**, but this might have been called in different ways. Pamela McCorduck wrote about the topic in the book [Machines Who Think (2nd ed.)](https://monoskop.org/images/1/1e/McCorduck_Pamela_Machines_Who_Think_2nd_ed.pdf). I'll quote her (p. 423).

> Practical AI successes, computational programs that actually achieved intelligent behavior, were soon assimilated into whatever application domain they were found to be useful, and became silent partners alongside other problem-solving approaches, which left AI researchers to deal only with the “failures,” the tough nuts that couldn’t yet be cracked. Once in use, successful AI systems were simply considered valuable automatic helpers.

Upvotes: 2 <issue_comment>username_2: Larry Tesler: [Tesler's Theorem](https://www.nomodes.com/Larry_Tesler_Consulting/Adages_and_Coinages.html). Upvotes: 3 [selected_answer]
2023/08/04
902
3,671
<issue_start>username_0: In a Q-table you keep states and actions for the ongoing decision making; it somehow represents the knowledge of the world and your future decisions for this and any future instance of a game. On the other hand, Q-learning is considered model-free. Is that contradictory, or does the Q-table not fully represent a model of the game?<issue_comment>username_1: No, a model in RL is considered something (you can even consider it as a black box) that knows how to transition from one state to the next one. So, for example, if we are playing snake, if I give you the current grid and the action where to move the head, an optimal model will have learnt the next game board, with the snake moved in the direction of the action.

Then usually you can also learn the reward function with the model, but the model by itself tells you the next state you will be in.

Now, don't confuse it with the environment: the model learns to imitate the environment, but it's not it.

After all of this, as you can see, $Q$-learning is not able to do that; it only tells you, for each state and action, what's the expected return, but not "if you are in this state $s\_t$ and you do this action $a\_t$ you'll end up at this state $s\_{t+1}$".

Upvotes: 1 <issue_comment>username_2: The key is that Q-learning does not use $p(s' \mid s, a)$ - the transition model - in its standard formulation. The Q-table isn't this model. It's not even an estimate of this model because it doesn't give you any information about how to transition between states. The Q-table gives you an estimate of the expected return, so the sum of rewards. So, you can predict the return, and Q-learning uses the estimated return in its update rule, but you cannot predict the next states, unless you augment Q-learning with that, but people don't usually do that in the standard versions.

Note: you could estimate $p(s' \mid s, a)$ in Q-learning by keeping track of the times you visit $s'$ when you took action $a$ in $s$.
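The counting idea in the note above can be sketched in a few lines; the experience tuples here are made up from some hypothetical toy MDP:

```python
from collections import defaultdict

# Made-up experience: (state, action, next_state) triples collected while acting.
experience = [
    (0, "right", 1), (0, "right", 1), (0, "right", 0),
    (1, "left", 0), (1, "left", 0),
]

# Count how often each (s, a) pair led to each s'.
counts = defaultdict(lambda: defaultdict(int))
for s, a, s_next in experience:
    counts[(s, a)][s_next] += 1

def p_hat(s_next, s, a):
    """Empirical estimate of p(s' | s, a) from the visit counts."""
    total = sum(counts[(s, a)].values())
    return counts[(s, a)][s_next] / total

# From state 0 with action "right" we reached state 1 in 2 of 3 visits.
assert abs(p_hat(1, 0, "right") - 2 / 3) < 1e-9
```

This estimate is exactly the extra object a model-based method maintains and standard Q-learning does not: the Q-table alone cannot produce these probabilities.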
I think the confusion is very understandable. There are many models. The [Q-table is indeed a model too, but it's a model of the Q-function](https://ai.stackexchange.com/a/25460/2444). If it were me, I'd abolish this horrible terminology - the worst part of RL. The on-policy vs off-policy distinction is even worse. Literally, it's easier to understand the 10 lines of code of Q-learning than the terms off-policy and model-free, which are just fancy names that pump it up, but you will eventually get used to them.

Upvotes: 1 <issue_comment>username_3: The Q table is a *useful summary* of the underlying Markov Decision Process (MDP) model description of the environment and available choices. A Q table summarises expected results for a single policy - in Q learning this is self-referential in that the policy it predicts for is the one where the agent greedily selects the action that predicts the best expected long-term return.

As a summary, the Q table is compressed in a way that is irreversible. It is not possible to derive the full MDP description of the environment from it. The Q table is still a useful tool, because it can be learned from experience even if you don't know the MDP model. This is what model-free methods do.

In a general common-language sense, the Q table is also a model, in that it can be used to make predictions (if the agent acts a certain way, it should expect to receive a certain amount of reward). However, the MDP model is the fully descriptive model for sequential decision problems (it predicts all individual outcomes, independently of any policy), so in RL, referring to "the model" almost always means the MDP.

Upvotes: 3 [selected_answer]
2023/08/05
917
3,216
<issue_start>username_0: For example, [faster-whisper](https://github.com/guillaumekln/faster-whisper#usage)'s `transcribe` function takes an argument

```
beam_size: Beam size to use for decoding.
```

What does "beam" mean?<issue_comment>username_1: ### Beam search

**Beam search** is an algorithm for finding probable output sequences for an input sequence, so it has been used for *decoding* in the context of sequence-to-sequence tasks, like machine translation. It's an alternative to a **greedy search**, which just samples the most likely token at every step (and is not guaranteed to find the most likely sequence); beam search, by contrast, can keep a set of candidate solutions.

I think I first encountered this algorithm a few years ago while reading [this important paper](https://arxiv.org/pdf/1409.0473.pdf), which refers to [this <NAME>' paper](https://arxiv.org/pdf/1211.3711.pdf), which actually contains a description and the pseudocode of a beam search, but I had forgotten about it, so I decided to review it :)

A good description can be found [here](https://arxiv.org/pdf/1703.01619.pdf) (section 7.2.3) or [here](https://d2l.ai/chapter_recurrent-modern/beam-search.html#id1). If you're familiar with search algorithms like breadth-first search (BFS), think of beam search as a similar search algorithm, where nodes are the tokens from some vocabulary and the edges are the probabilities of selecting a certain token given the previously selected tokens (important!). In fact, it can be viewed as a generalization of BFS (exhaustive search) - if you set the beam size equal to the vocabulary size, you get BFS.

The following image (taken from [here](https://d2l.ai/chapter_recurrent-modern/beam-search.html#id1), which provides a nice explanation) should give you the intuition

[![enter image description here](https://i.stack.imgur.com/9kPSV.png)](https://d2l.ai/chapter_recurrent-modern/beam-search.html#id1)

### What is a beam?
*Just a guess.* Now, I don't know the origin of the term **beam** in this context or why it's called *beam search*. In some papers I've come across ([example](https://www.eecs.qmul.ac.uk/%7Esimond/pub/2015/SigtiaBoulanger-LewandowskiDixon-ISMIR2015-Chord.pdf)), the beam acts like an object that has properties like the size (beam size). However, it could refer to the paths or the polygon formed by the external nodes at each layer from the first to the last, which is kind of similar to or could remind us of a light beam (image taken from [here](http://saladgamer.com/vlb-doc/comp-effect-pulse/)).

[![enter image description here](https://i.stack.imgur.com/9rJxh.gif)](https://i.stack.imgur.com/9rJxh.gif)

The terminology is not very important, unless you're only interested in the history of the algorithm, which I assume not.

Upvotes: 3 [selected_answer]<issue_comment>username_2: Here is <NAME>'s answer at [the issue section of the Git repository](https://github.com/guillaumekln/faster-whisper/issues/392#issuecomment-1660568838): ["Beam Search" in Wikipedia](https://en.wikipedia.org/wiki/Beam_search). Additionally, the beam size/beam width controls the number of paths that are explored at each step when generating an output.

Upvotes: 1
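To make "the number of paths explored at each step" concrete, here is a minimal beam search sketch over a made-up "model" that always returns the same next-token distribution (a real decoder would condition on the prefix):

```python
import math

VOCAB = ["a", "b", "</s>"]

def next_probs(prefix):
    # Toy stand-in for a language model; ignores the prefix on purpose.
    return {"a": 0.5, "b": 0.3, "</s>": 0.2}

def beam_search(beam_size, max_len=3):
    beams = [([], 0.0)]                      # (tokens, log-probability)
    for _ in range(max_len):
        candidates = []
        for tokens, score in beams:
            if tokens and tokens[-1] == "</s>":
                candidates.append((tokens, score))   # finished beams carry over
                continue
            for tok, p in next_probs(tokens).items():
                candidates.append((tokens + [tok], score + math.log(p)))
        # The "beam": keep only the beam_size most probable partial sequences.
        beams = sorted(candidates, key=lambda c: c[1], reverse=True)[:beam_size]
    return beams

best_tokens, best_score = beam_search(beam_size=2)[0]
# With this toy distribution the top beam is ["a", "a", "a"]
```

With `beam_size=1` this degenerates to greedy search, and with `beam_size=len(VOCAB)` it explores every path (the BFS/exhaustive extreme mentioned in the first answer); `faster-whisper`'s `beam_size` argument plays the same role for Whisper's decoder.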