State of GPT | BRK216HFS

Please welcome AI researcher and founding member of OpenAI, Andrej Karpathy. [applause] Hi everyone. I'm happy to be here to tell you about the state of GPT and, more generally, about the rapidly growing ecosystem of large language models. I would like to partition the talk into two parts. In the first part, I would like to tell you about how we train GPT assistants, and then in the second part, we are going to take a look at how we can use these assistants effectively for your applications. So first let's take a look at the emerging recipe for how to train these assistants. Keep in mind that this is all very new and still rapidly evolving, but so far the recipe looks something like this. Now, this is kind of a complicated slide, so I'm going to go through it piece by piece. But, roughly speaking, we have four major stages: pre-training, supervised fine-tuning, reward modeling, and reinforcement learning, and they follow each other serially.

Now, in each stage, we have a dataset that powers that stage. We have an algorithm that, for our purposes, will be an objective for training the neural network, and then we have a resulting model, and then there are some notes at the bottom. The first stage we're going to start with is the pre-training stage. Now, this stage is kind of special in this diagram, and this diagram is not to scale, because this stage is where basically all of the computational work happens.

This is 99% of the training compute time and also FLOPs. This is where we are dealing with internet-scale datasets, with thousands of GPUs in a supercomputer, and potentially a month of training. The other three stages are fine-tuning stages that are much more along the lines of a small number of GPUs and hours or days. So let's take a look at the pre-training stage to achieve a base model. First, we're going to gather a large amount of data. Here's an example of what we call a data mixture.

This comes from the paper that was released by Meta, where they released the LLaMA base model. You can see roughly the kinds of datasets that enter into these collections. We have Common Crawl, which is just a web scrape, C4, which is also Common Crawl, and then some high-quality datasets as well: for example, GitHub, Wikipedia, books, arXiv, Stack Exchange, and so on. These are all mixed up together and then sampled according to some given proportions, and that forms the training set for the GPT neural net. Now, before we can actually train on this data, we need to go through one more pre-processing step, and that is tokenization. This is basically a translation of the raw text that we scrape from the internet into sequences of integers, because that is the native representation over which GPTs function.

Now, this is a lossless kind of translation between pieces of text and tokens and integers, and there are a number of algorithms for this stage. Typically, for example, you could use something like byte pair encoding, which iteratively merges little text chunks and groups them into tokens. Here I'm showing some example chunks of these tokens, and then this is the raw integer sequence that will actually feed into a Transformer. Now, here I'm showing two sorts of examples for the hyperparameters that govern this stage. For GPT-4, we did not release too much information about how it was trained and so on, so I'm using GPT-3's numbers.
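
To make the tokenization step concrete, here is a minimal sketch using the open-source tiktoken library; the choice of the GPT-2 encoding and the example token IDs are just for illustration, not something specified in the talk.

```python
# A minimal tokenization sketch (illustrative; real models may use different vocabularies).
import tiktoken

enc = tiktoken.get_encoding("gpt2")   # a byte pair encoding tokenizer
text = "Hello, world!"
tokens = enc.encode(text)             # raw text -> sequence of integers
print(tokens)                         # something like [15496, 11, 995, 0]
assert enc.decode(tokens) == text     # the translation is lossless
```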

But GPT-3 is of course a little bit old by now, about three years old, while LLaMA is a fairly recent model from Meta. So these are roughly the orders of magnitude that we're dealing with when we're doing pre-training. The vocabulary size is usually a couple of tens of thousands of tokens. The context length is usually something like 2,000, 4,000, or nowadays even 100,000, and this governs the maximum number of integers that the GPT will look at when it's trying to predict the next integer in a sequence. You can see that the number of parameters is, say, 65 billion for LLaMA. Now, even though LLaMA has only 65B parameters compared to GPT-3's 175 billion parameters, LLaMA is a significantly more powerful model, and intuitively that's because the model is trained for significantly longer, in this case 1.4 trillion tokens instead of just 300 billion tokens. So you shouldn't judge the power of a model just by the number of parameters that it contains. Below, I'm showing some tables of the rough hyperparameters that typically go into specifying the Transformer neural network: the number of heads, the dimension size, the number of layers, and so on. And on the bottom I'm showing some training hyperparameters.

So, for example, to train the 65B model, Meta used 2,000 GPUs, roughly 21 days of training, and roughly several million dollars. That's the rough order of magnitude that you should have in mind for the pre-training stage. Now, when we're actually pre-training, what happens? Roughly speaking, we are going to take our tokens and lay them out into data batches. We have these arrays that will feed into the Transformer, and these arrays are B by T: B is the batch size, with independent examples stacked up in rows, and T is the maximum context length.

In my picture I only have T = 10, but in practice the context length could be 2,000, 4,000, etc., so these are extremely long rows. What we do is we take these documents and pack them into rows, and we delimit them with these special end-of-text tokens, basically telling the Transformer where a new document begins. So here I have a few examples of documents, and then I've stretched them out into this input. Now we're going to feed all of these numbers into the Transformer. Let me just focus on a single particular cell, but the same thing will happen at every cell in this diagram. Let's look at the green cell. The green cell is going to take a look at all of the tokens before it, so all of the tokens in yellow, and we're going to feed that entire context into the Transformer neural network, and the Transformer is going to try to predict the next token in the sequence, in this case in red.
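
As a rough sketch of the batching and packing just described (the end-of-text token ID and the tiny documents below are made up for illustration; the real pipeline operates on internet-scale data):

```python
# A toy sketch of laying tokens out into a B x T batch, with documents
# delimited by a special end-of-text token (IDs here are illustrative).
import numpy as np

EOT = 50256                      # <|endoftext|> in the GPT-2 vocabulary
B, T = 4, 10                     # batch size and context length, tiny for illustration

docs = [[5, 23, 87], [9, 41], [7, 7, 12, 30, 2, 64]]   # pretend tokenized documents
stream = []
for d in docs:
    stream.extend(d + [EOT])     # concatenate documents, separated by EOT
stream = (stream * 10)[: B * T + 1]                    # repeat until we can fill the batch

inputs = np.array(stream[: B * T]).reshape(B, T)        # what the Transformer sees
targets = np.array(stream[1 : B * T + 1]).reshape(B, T) # the next token at every position
```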

Now, the Transformer: unfortunately I don't have too much time to go into the full details of this neural network architecture. For our purposes it is just a large blob of neural net stuff, and it's got several tens of billions of parameters, typically, or something like that. And of course, as you tune these parameters, you're getting slightly different predicted distributions for every single one of these cells. So, for example, if our vocabulary size is 50,257 tokens, then we're going to have that many numbers, because we need to specify a probability distribution for what comes next. So basically, we have a probability for whatever may follow. Now, in this specific example, for this specific cell, 513 will come next, and so we can use this as a source of supervision to update our Transformer's weights. We're applying this basically on every single cell in parallel, and we keep swapping batches, and we're trying to get the Transformer to make the correct predictions over what token comes next in a sequence. Let me show you more concretely what this looks like when you train one of these models. This is actually coming from The New York Times, where they trained a small GPT on Shakespeare. So here's a small snippet of Shakespeare, and they trained their GPT on it. Now, in the beginning, at initialization, the GPT starts with completely random weights, so you're just getting completely random outputs as well.
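
Going back to the supervision at a single cell, here is a minimal sketch of that loss computation; the random logits are a stand-in for the Transformer's actual output.

```python
# A one-cell sketch of next-token supervision: the model emits one logit per
# vocabulary entry and we apply cross-entropy against the token that actually came next.
import torch
import torch.nn.functional as F

vocab_size = 50257
logits = torch.randn(1, vocab_size, requires_grad=True)  # stand-in for the model's output at one cell
next_token = torch.tensor([513])                          # the token that actually follows
loss = F.cross_entropy(logits, next_token)                # low loss = high probability on token 513
loss.backward()                                           # gradients update the Transformer's weights
```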

But over time, as you train the GPT longer and longer, you get more and more coherent and consistent samples from the model. The way you sample from it, of course, is you predict what comes next, you sample from that distribution, and you keep feeding that back into the process, so you can basically sample large sequences. By the end, you see that the Transformer has learned about words and where to put spaces and where to put commas and so on, and so we're making more and more consistent predictions over time. These are the kinds of plots that you're looking at when you're doing model pre-training. Effectively, we're looking at the loss function over time as you train, and low loss means that our Transformer is giving a higher probability to the correct next integer in the sequence. Now, what are we going to do with this model once we've trained it after our month?

Well, the first thing that we, the field, noticed is that these models, basically in the process of language modeling, learn very powerful general representations, and it's possible to very efficiently fine-tune them for any arbitrary downstream task you might be interested in. As an example, if you're interested in sentiment classification, the approach used to be that you collect a bunch of positives and negatives and then you train some kind of an NLP model for that.

But the new approach is: ignore sentiment classification, go off and do large language model pre-training, train a large Transformer, and then you may only have a few examples and you can very efficiently fine-tune your model for that task. This works very well in practice. The reason for this is that, basically, the Transformer is forced to multitask across a huge number of tasks in the language modeling task, because just in terms of predicting the next token, it's forced to understand a lot about the structure of the text and all the different concepts therein.

So that was GPT-1. Now, around the time of GPT-2, people noticed that, actually, even better than fine-tuning, you can prompt these models very effectively. These are language models and they want to complete documents, so you can actually trick them into performing tasks just by arranging these fake documents. In this example, for instance, we have some passage and then we do Q: A, Q: A, Q: A; this is called a few-shot prompt. Then we pose a Q, and as the Transformer is trying to complete the document, it's actually answering our question.

And so this is an example of prompt engineering: making a base model believe that it's imitating a document, and getting it to perform a task that way. This kicked off, I think, the era of prompting over fine-tuning, and people saw that this can actually work extremely well on a lot of problems, even without training any neural networks, fine-tuning, or so on. Now, since then, we've seen an entire evolutionary tree of base models that everyone has trained. Not all of these models are available. For example, the GPT-4 base model was never released. The GPT-4 model that you might be interacting with over the API is not a base model, it's an assistant model, and we're going to cover how to get those in a bit. The GPT-3 base model is available via the API under the name davinci, and the GPT-2 base model is available even as weights on our GitHub repo.

But currently the best available base model is probably the LLaMA series from Meta, although it is not commercially licensed. Now, one thing to point out is: base models are not assistants. They don't want to answer your questions, they just want to complete documents. So if you tell them "write a poem about bread and cheese", it will just, you know, answer a question with more questions; it's just completing what it thinks is a document. However, you can prompt them in a specific way that is more likely to work for base models.

So as an example, here's a poem about bread and cheese and in that case it will autocomplete correctly. You can even trick base models into being assistance, and the way you would do this is you would create like a specific few shot prompt that makes it look like there's some kind of a document between a human and assistant and they're exchanging sort of information and then at the bottom you sort of put your query at the end and the base model will sort of like condition itself into being like a helpful assistant and kind of answer. Show more
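
Here is a hedged sketch of what such a few-shot "fake conversation" prompt might look like; the wording is mine, not taken from the slide.

```python
# An illustrative few-shot prompt that conditions a base (completion) model
# into acting like an assistant; the exact format is an assumption.
prompt = """The following is a conversation between a human and a helpful AI assistant.

Human: What is the capital of France?
Assistant: The capital of France is Paris.

Human: How many legs does a spider have?
Assistant: A spider has eight legs.

Human: Write a short poem about bread and cheese.
Assistant:"""
# A base model completing this document will often continue in the assistant role,
# though this is less reliable than a real SFT/RLHF assistant.
```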

But this is not very reliable and doesn't work super well in practice, although it can be done. So instead, we have a different path to make actual GPT assistants, not just base-model document completers, and that takes us into supervised fine-tuning. In the supervised fine-tuning stage, we are going to collect small but high-quality datasets. In this case, we're going to ask human contractors to gather data of the form: prompt and ideal response. We're going to collect lots of these, typically tens of thousands or something like that, and then we're going to still do language modeling on this data. So nothing changed algorithmically; we're just swapping out the training set. It used to be internet documents, which is high quantity but low quality, and we swap it for QA prompt-response kind of data, which is low quantity but high quality.

So we still do language modeling, and then after training we get an SFT model. You can actually deploy these models, and they are actual assistants, and they work to some extent. Let me show you what an example demonstration might look like. Here's something that a human contractor might come up with. Here's some random prompt: "Can you write a short introduction about the relevance of the term monopsony?" or something like that. And then the contractor also writes out an ideal response. When they write out these responses, they are following extensive labeling documentation, and they're being asked to be helpful, truthful, and harmless.
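
As a hypothetical sketch, one record in such an SFT dataset might look something like this (the field names and response text are illustrative; the actual labeling format was not shown):

```python
# A hypothetical (prompt, ideal response) record for supervised fine-tuning.
sft_example = {
    "prompt": "Can you write a short introduction about the relevance of the term 'monopsony' in economics?",
    "ideal_response": "Monopsony describes a market in which there is only one buyer of a good or service ...",
}
# Tens of thousands of such pairs are collected, and the model is trained on them
# with exactly the same language modeling objective as in pre-training.
```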

And those labeling instructions, you probably can't read them here and neither can I, but they're long, and this is just people following instructions and trying to complete these prompts. So that's what the dataset looks like, and you can train these models, and this works to some extent. Now, you can continue the pipeline from here and go into RLHF, reinforcement learning from human feedback, which consists of both reward modeling and reinforcement learning. Let me cover that, and then I'll come back to why you may want to go through the extra steps and how that compares to just SFT models. So in the reward modeling step, what we're going to do is shift our data collection to be of the form of comparisons. Here's an example of what our dataset will look like. We have the same, identical prompt on the top, which is asking the assistant to write a program or a function that checks if a given string is a palindrome. Then what we do is we take the SFT model, which we've already trained, and we create multiple completions. So in this case we have three completions that the model has created, and then we ask people to rank these completions.

If you stare at this for a while, and, by the way, comparing some of these completions is a very difficult thing to do, this can take people even hours for a single prompt-completion pair.

But let's say we decided that one of these is much better than the others, and so on, so we rank them. Then we can follow that with something that looks very much like a binary classification on all the possible pairs between these completions. What we do now is we lay out our prompt in rows, and the prompt is identical across all three rows here. It's all the same prompt, but the completions vary, and the yellow tokens are coming from the SFT model.

Then what we do is we append another special reward readout token at the end, and we basically only supervise the Transformer at this single green token. The Transformer will predict some reward for how good that completion is for that prompt, so basically it makes a guess about the quality of each completion. Then, once it makes a guess for every one of them, we also have the ground truth telling us the ranking of them, and so we can actually enforce that some of these numbers should be much higher than others, and so on. We formulate this into a loss function and we train our model to make reward predictions that are consistent with the ground truth coming from the comparisons from all these contractors. That's how we train our reward model, and that allows us to score how good a completion is for a prompt.
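
One common way to express that loss is a pairwise ranking objective over the scalar rewards read out at the special token; here is a minimal sketch (the numbers are stand-ins, and this is one possible formulation rather than necessarily the exact one used).

```python
# A minimal pairwise ranking loss sketch for reward modeling: push the reward of
# the human-preferred completion above the reward of the less-preferred one.
import torch
import torch.nn.functional as F

reward_chosen = torch.tensor([0.8], requires_grad=True)     # stand-in readout for the preferred completion
reward_rejected = torch.tensor([-0.3], requires_grad=True)  # stand-in readout for the other completion

loss = -F.logsigmoid(reward_chosen - reward_rejected).mean()
loss.backward()   # in the real model, gradients flow into the Transformer producing these readouts
```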

Once we have a reward model, we can't deploy it by itself, because it's not very useful as an assistant on its own, but it's very useful for the reinforcement learning stage that follows. Because we have a reward model, we can score the quality of any arbitrary completion for any given prompt. So what we do during reinforcement learning is we again get a large collection of prompts, and now we do reinforcement learning with respect to the reward model. Here's what that looks like: we take a single prompt, we lay it out in rows, and now we use basically the model we'd like to train, which is initialized at the SFT model, to create some completions in yellow. Then we append the reward token again, and we read off the reward according to the reward model, which is now kept fixed; it doesn't change anymore. The reward model tells us the quality of every single completion for these prompts, and so what we can do is basically apply the same language modeling loss function, but now we're training on the yellow tokens, and we are weighing the language modeling objective by the rewards indicated by the reward model. As an example, in the first row, the reward model said that this is a fairly high-scoring completion, and so all the tokens that we happened to sample on the first row are going to get reinforced and will get higher probabilities in the future.
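
As a very rough sketch of that reward-weighting idea (this is only the intuition; production RLHF uses PPO with extra terms such as a KL penalty to the SFT model, which are not shown here):

```python
# Intuition-level sketch: scale the log-probabilities of the sampled completion
# tokens by the reward model's score for that completion.
import torch

log_probs = torch.randn(12, requires_grad=True)   # stand-in log-probs of 12 sampled tokens
reward = torch.tensor(1.0)                        # reward model's score for this completion

loss = -(reward * log_probs).sum()   # positive reward -> sampled tokens get pushed up
loss.backward()                      # a negative reward would push them down instead
```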

Conversely, on the second row, the reward model really did not like this completion, negative 1.2, and so therefore every single token that we sampled in that second row is going to get a slightly lower probability in the future. We do this over and over on many prompts, on many batches, and basically we get a policy that creates the yellow tokens here, such that all of the completions will score high according to the reward model that we trained in the previous stage. So that's how we train; that's what the RLHF pipeline is. Then at the end, you get a model that you could deploy. As an example, ChatGPT is an RLHF model, but some other models that you might come across, like, for example, Vicuna-13B and so on, are SFT models. So we have base models, SFT models, and RLHF models, and that's kind of the state of things there. Now, why would you want to do RLHF? One answer that is not that exciting is that it just works better. This comes from the InstructGPT paper. According to these experiments from a while ago now, these PPO models are RLHF, and we see that they are basically just preferred in a lot of comparisons when we give them to humans. So humans basically prefer tokens that come from RLHF models compared to SFT models, compared to a base model that is prompted to be an assistant. It just works better. But you might ask: why does it work better? I don't think there's a single amazing answer that the community has really agreed on, but I will just offer one potential reason: it has to do with the asymmetry between how easy it is computationally to compare versus to generate. Let's take the example of generating a haiku. Suppose I ask a model to write a haiku about paper clips. If you're a contractor trying to provide training data, then imagine being a contractor collecting data for the SFT stage.

How are you supposed to create a nice haiku for a paperclip? You might just not be very good at that. But if I give you a few examples of haikus, you might be able to appreciate some of them a lot more than others, and so judging which one is good is a much easier task. Basically, this asymmetry makes it so that comparisons are a better way to potentially leverage yourself as a human, and your judgment, to create a slightly better model. Now, RLHF models are not strictly an improvement on the base models in some cases. In particular, we've noticed, for example, that they lose some entropy, which means they give peakier results: they can output samples with lower variation than the base model. The base model has lots of entropy and will give lots of diverse outputs. So, for example, one kind of place where I still prefer to use a base model is a setup where you basically have n things and you want to generate more things like them. Here is an example that I just cooked up: I want to generate cool Pokemon names.

I gave it seven Pokemon names and I asked the base model to complete the document, and it gave me a lot more Pokemon names. These are fictitious; I tried to look them up and I don't believe they're actual Pokemon. This is the kind of task that I think the base model would be good at, because it still has lots of entropy: it will give you lots of diverse, cool kinds of more things that look like whatever you gave it before. Having said all that, these are kind of the assistant models that are probably available to you at this point. There's a team at Berkeley that ranked a lot of the available assistant models and gave them basically Elo ratings. Currently, some of the best models, of course, are GPT-4, by far I would say, followed by Claude, GPT-3.5, and then a number of models, some of which might be available as weights, like Vicuna, Koala, etc. The first three rows here are all RLHF models, and all of the other models, to my knowledge, are SFT models, I believe. Okay, so that's how we train these models at a high level. Now I'm going to switch gears, and let's look at how we can best apply the GPT assistant model to your problems. Now, I would like to work in the setting of a concrete example.

So let's work with a concrete example here. Let's say that you are working on an article or a blog post, and you're going to write this sentence at the end: "California's population is 53 times that of Alaska." So for some reason, you want to compare the populations of these two states. Think about the rich internal monologue and tool use, and how much work actually goes on computationally in your brain to generate this one final sentence. Here's maybe what that could look like in your brain. Okay, for this next step of my blog, let me compare these two populations.

Okay, first, I'm obviously going to need to get both of these populations. Now, I know that I probably don't know these populations off the top of my head, so I'm kind of aware of what I do or don't know, of my self-knowledge, right? So I do some tool use and I go to Wikipedia, and I look up California's population and Alaska's population.

Now I know that I should divide the two, but again, I know that dividing 39.2 by 0.74 is very unlikely to succeed; that's not the kind of thing that I can do in my head, and so therefore I'm going to rely on a calculator. I use a calculator, punch it in, and see that the output is roughly 53. Then maybe I do some reflection and sanity checks in my brain: does 53 make sense? Well, that's quite a large fraction, but then California is the most populous state, so maybe that looks okay. So then I have all the information I might need, and now I get to the sort of creative portion of writing. I might start to write something like "California has 53x times greater", and then I think to myself, that's actually really awkward phrasing, so let me delete that and try again. As I'm writing, I have this separate process, almost inspecting what I'm writing and judging whether it looks good or not, and then maybe I delete and maybe I reframe it, and then maybe I'm happy with what comes out.

So basically, long story short, a ton happens under the hood in terms of your internal monologue when you create sentences like this. But what does a sentence like this look like when we are training a GPT on it? From GPT's perspective, this is just a sequence of tokens. So GPT, when it's reading or generating these tokens, just goes chunk by chunk, and each chunk is roughly the same amount of computational work per token. These Transformers are not very shallow networks; they have about 80 layers of reasoning, but 80 is still not too much. And so this Transformer is going to do its best to imitate, but of course the process here looks very, very different from the process that you took. In particular, in our final artifacts, in the datasets that we create and then eventually feed to LLMs, all of that internal dialogue is completely stripped. And unlike you, the GPT will look at every single token and spend the same amount of compute on every one of them, so you can't expect it to do too much work per token.

Also, in particular, these Transformers are just token simulators, so they don't know what they don't know; they just imitate the next token. They don't know what they're good at or not good at; they just try their best to imitate the next token. They don't reflect in a loop, they don't sanity check anything, they don't correct their mistakes along the way. By default, they just sample token sequences. They don't have separate inner monologue streams in their head that are evaluating what's happening. Now, they do have some cognitive advantages, I would say, and that is that they do actually have very large fact-based knowledge across a vast number of areas, because they have, say, several tens of billions of parameters, so that's a lot of storage for a lot of facts.

They also, I think, have a relatively large and perfect working memory. Whatever fits into the context window is immediately available to the Transformer through its internal self-attention mechanism, so it's kind of like perfect memory. It has a finite size, but the Transformer has very direct access to it, and so it can losslessly remember anything that is inside its context window. That's kind of how I would compare those two, and the reason I bring all of this up is because I think, to a large extent, prompting is just making up for this cognitive difference between these two kinds of architectures: our brains and LLM "brains".

You can almost look at it that way. So here's one thing that people found, for example, works pretty well in practice. Especially if your tasks require reasoning, you can't expect the Transformer to do too much reasoning per token, so you have to really spread out the reasoning across more and more tokens. For example, you can't give a Transformer a very complicated question and expect it to get the answer in a single token; there's just not enough time for it. "These Transformers need tokens to think," I like to say sometimes. So these are some of the things that work well. You may, for example, have a few-shot prompt that shows the Transformer that it should show its work when it's answering a question, and if you give a few examples, the Transformer will imitate that template, and it will just end up working out better in terms of its evaluation. Additionally, you can elicit this kind of behavior from the Transformer by saying "let's think step by step", because this conditions the Transformer into showing its work. Because it snaps into a mode of showing its work, it's going to do less computational work per token, and so it's more likely to succeed as a result, because it's doing slower reasoning over time.
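
A tiny illustration of the idea; the question below is made up, and the only important part is the added instruction at the end.

```python
# Spreading reasoning over tokens: append an instruction that makes the model show its work.
question = ("A juggler has 16 balls. Half are golf balls, and half of the golf balls are blue. "
            "How many blue golf balls are there?")

prompt = question + "\n\nLet's think step by step."
# Sending `prompt` instead of the bare question lets the model spend many tokens
# on intermediate reasoning before committing to a final answer.
```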

Here's another example. This one is called self-consistency. We saw that we had the ability to start writing and then, if it didn't work out, try again, try multiple times, and maybe select the one that worked best. In these kinds of approaches, you may sample not just once but multiple times, and then have some process for finding the ones that are good, keeping just those samples or doing a majority vote or something like that. Basically, these Transformers, in the process of predicting the next token, can, just like you, get unlucky.
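
Here is a hedged sketch of that idea; ask_model is a hypothetical helper that returns one sampled final answer, and any chat or completion API could play that role.

```python
# Self-consistency sketch: sample several answers at some temperature and take a majority vote.
from collections import Counter

def self_consistent_answer(ask_model, prompt, n=5):
    answers = [ask_model(prompt, temperature=0.8) for _ in range(n)]
    return Counter(answers).most_common(1)[0][0]   # keep the most common answer
```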

They could sample a not-very-good token, and they can go down sort of a blind alley in terms of reasoning. Unlike you, they cannot recover from that. They are stuck with every single token they sample, and so they will continue the sequence even if they know that this sequence is not going to work out. So give them the ability to look back, inspect, or basically try to sample around it. Here's one more technique.

It turns out that LLMs actually know when they've screwed up. As an example, say you ask the model to generate a poem that does not rhyme, and it gives you a poem, but it actually rhymes. It turns out that, especially for the bigger models like GPT-4, you can just ask it: did you meet the assignment? And GPT-4 knows very well that it did not meet the assignment. It just kind of got unlucky in its sampling, and so it will tell you: no, I didn't actually meet the assignment, let me try again. But without you prompting it, it doesn't know to revisit, and so on. You have to make up for that in your prompt.
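
In practice that just means adding an explicit follow-up turn; a minimal sketch of the message layout (the chat-style role/content format here is an assumption used only for illustration):

```python
# Asking the model to check its own work with an explicit follow-up message.
messages = [
    {"role": "user", "content": "Write a four-line poem that does not rhyme."},
    {"role": "assistant", "content": "<the model's first attempt>"},
    {"role": "user", "content": "Did your poem meet the assignment? If not, please revise it."},
]
# Without that last message, the model will not revisit or correct its first attempt.
```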

You have to get it to check. If you don't ask it to check, it's not going to check by itself; it's just a token simulator. I think, more generally, a lot of these techniques fall into the bucket of what I would call recreating our System 2. You might be familiar with System 1 / System 2 thinking for humans.

System 1 is a fast, automatic process, and I think it kind of corresponds to an LLM just sampling tokens. System 2 is the slower, deliberate planning part of your brain. This is a paper actually from just last week, because this space is evolving pretty quickly. It's called Tree of Thought, and in Tree of Thought, the authors propose maintaining multiple completions for any given prompt, and then they are also scoring them along the way and keeping the ones that are going well, if that makes sense. And so a lot of people are really playing around with prompt engineering to basically bring back for LLMs some of these abilities that we have in our brains. Now, one thing I would like to note here is that this is not just a prompt.

This is actually prompts used together with some Python glue code, because you actually have to maintain multiple prompts, and you also have to do some tree search algorithm here to figure out which prompts to expand, etc. So it's a symbiosis of Python glue code and individual prompts that are called in a while loop or in a bigger algorithm. I also think there's a really cool parallel here to AlphaGo. AlphaGo has a policy for placing the next stone when it plays Go, and its policy was trained originally by imitating humans. But in addition to this policy, it also does Monte Carlo Tree Search: basically, it will play out a number of possibilities in its head, evaluate all of them, and only keep the ones that work well. So I think this is kind of an equivalent of AlphaGo, but for text, if that makes sense. Just like Tree of Thought, I think more generally people are starting to really explore more general techniques of not just a simple question-answer prompt, but something that looks a lot more like Python glue code stringing together many prompts. On the right, I have an example from this paper called ReAct, where they structure the answer to a prompt as a sequence of thought, action, observation, thought, action, observation. It's a full rollout, a kind of thinking process to answer the query, and in these actions the model is also allowed to use tools.
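
A very rough sketch of this "Python glue code around prompts" pattern; generate and score are hypothetical helpers wrapping LLM calls, and this is not the actual Tree of Thought implementation, just the general shape of it.

```python
# Keep several candidate continuations, score them, and expand only the promising ones.
def guided_search(generate, score, prompt, width=3, depth=2):
    frontier = [prompt]
    for _ in range(depth):
        candidates = []
        for p in frontier:
            candidates += [p + generate(p) for _ in range(width)]   # branch each candidate
        candidates.sort(key=score, reverse=True)                    # score with an LLM or heuristic
        frontier = candidates[:width]                               # prune to the best few
    return frontier[0]
```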

On the left, I have an example of AutoGPT. AutoGPT, by the way, is a project that I think got a lot of hype recently, but I still find it kind of inspirationally interesting. It's a project that allows an LLM to keep a task list and continue to recursively break down tasks. I don't think this currently works very well, and I would not advise people to use it in practical applications. I just think it's something to generally take inspiration from, in terms of where this is going over time, I think.

So that's kind of like giving our model System 2 thinking. The next thing that I find kind of interesting is this following, I would say, almost psychological quirk of LLMs: LLMs don't want to succeed, they want to imitate. You want to succeed, and you should ask for it. What I mean by that is, when Transformers are trained, they have training sets, and there can be an entire spectrum of performance qualities in their training data. For example, there could be some kind of a prompt for some physics question or something like that, and there could be a student's solution that is completely wrong, but there can also be an expert answer that is extremely right. And Transformers can't tell the difference between them. Well, I mean, they know about low-quality solutions and high-quality solutions, but by default they want to imitate all of it, because they're just trained on language modeling. So at test time, you actually have to ask for good performance. In this example, in this paper, they tried various prompts, and "let's think step by step" was very powerful, because it sort of spread out the reasoning over many tokens.

But what worked even better is: "let's work this out in a step-by-step way to be sure we have the right answer". It's kind of like conditioning on getting the right answer, and this actually makes the Transformer work better, because the Transformer doesn't have to hedge its probability mass on low-quality solutions, as ridiculous as that sounds. So basically, feel free to ask for a strong solution. Say something like "you are a leading expert on this topic", "pretend you have an IQ of 120", etc. But don't try to ask for too much IQ, because if you ask for an IQ of, like, 400, you might be out of the data distribution, or, even worse, you could be in the data distribution for some sci-fi stuff, and it will start to take on some sci-fi or role-playing behavior or something like that.

So you have to find the right amount of IQ; I think there's some U-shaped curve there. Next up, as we saw, when we are trying to solve problems, we know what we are good at and what we're not good at, and we lean on tools computationally. You potentially want to do the same with your LLMs. In particular, we may want to give them calculators, code interpreters, the ability to do search, and so on, and there's a lot of techniques for doing that. One thing to keep in mind, again, is that these Transformers by default may not know what they don't know. You may even want to tell the Transformer in the prompt: "you are not very good at mental arithmetic; whenever you need to do very large number addition, multiplication or whatever, instead use this calculator. Here's how you use the calculator."

"Use this token combination", etc., etc. So you actually have to spell it out, because the model by default doesn't know what it's good at or not good at, necessarily, just like you and I might not. Next up, I think something that is very interesting is: we went from a world that was retrieval only, and then the pendulum swung all the way to the other extreme, where it's memory only in LLMs. But actually, there's this entire space in between of retrieval-augmented models, and this works extremely well in practice. As I mentioned, the context window of a Transformer is its working memory. If you can load the working memory with any information that is relevant to the task, the model will work extremely well, because it can immediately access all that memory. And so I think a lot of people are really interested in basically retrieval-augmented generation.

On the bottom, I have an example of LlamaIndex, which is one sort of data connector to lots of different types of data, and you can index all of that data and make it accessible to LLMs. The emerging recipe there is: you take relevant documents, you split them up into chunks, you embed all of them, and you basically get embedding vectors that represent that data.
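
Here is a hedged sketch of that recipe, including the test-time retrieval step described just below; embed is a hypothetical function returning a vector for a piece of text, and the "vector store" is just a Python list.

```python
# Retrieval-augmented generation sketch: embed chunks, retrieve the most similar
# ones for a query, and stuff them into the prompt before generating.
import numpy as np

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def build_index(chunks, embed):
    return [(chunk, embed(chunk)) for chunk in chunks]      # store (text, vector) pairs

def retrieve(index, query, embed, k=3):
    q = embed(query)
    ranked = sorted(index, key=lambda item: cosine(q, item[1]), reverse=True)
    return [chunk for chunk, _ in ranked[:k]]               # top-k most relevant chunks
```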

You store that in a vector store, and then at test time you make some kind of a query to your vector store, you fetch chunks that might be relevant to your task, you stuff them into the prompt, and then you generate. This can work quite well in practice. This is, I think, similar to when you and I solve problems: you can do everything from your memory, and Transformers have very large and extensive memory, but it also really helps to reference some primary documents. Whenever you find yourself going back to a textbook to find something, or going back to the documentation of a library to look something up, Transformers definitely want to do that too. You have some memory of how the documentation of a library works, but it's much better to look it up. The same applies here. Next, I wanted to briefly talk about constrained prompting. I also find this very interesting. This is basically techniques for forcing a certain template in the outputs of LLMs. Guidance is one example from Microsoft, actually. Here, we are enforcing that the output from the LLM will be JSON, and this will actually guarantee that the output will take on this form.

That's because they go in and mess with the probabilities of all the different tokens that come out of the Transformer; they clamp those tokens, and then the Transformer is only filling in the blanks, and you can enforce additional restrictions on what could go into those blanks. So this might be really helpful, and I think this kind of constrained sampling is also extremely interesting. I also wanted to say a few words about fine-tuning. It is the case that you can get really far with prompt engineering, but it's also possible to think about fine-tuning your models. Now, fine-tuning models means that you are actually going to change the weights of the model. It is becoming a lot more accessible to do this in practice, and that's because of a number of techniques that have been developed and have libraries very recently. So, for example, parameter-efficient fine-tuning techniques like LoRA make sure that you're only training small, sparse pieces of your model. Most of the model is kept clamped at the base model, and some pieces of it are allowed to change. This still works pretty well empirically and makes it much cheaper to tune only small pieces of your model.
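
A hedged sketch of what parameter-efficient fine-tuning with LoRA can look like using the Hugging Face peft library; the base model and hyperparameters below are illustrative choices, not recommendations from the talk.

```python
# LoRA sketch: freeze the base model and train only small low-rank adapter matrices.
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

model = AutoModelForCausalLM.from_pretrained("gpt2")       # stand-in base model
config = LoraConfig(r=8, lora_alpha=16, lora_dropout=0.05,
                    target_modules=["c_attn"])             # GPT-2's attention projection
model = get_peft_model(model, config)                      # wraps the model, freezing base weights
model.print_trainable_parameters()                         # only the small adapters are trainable
```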

It also means that, because most of your model is clamped, you can use very low precision inference for computing those parts, because they are not going to be updated by gradient descent, and so that makes everything a lot more efficient as well. In addition, we have a number of open-source, high-quality base models. Currently, as I mentioned, I think LLaMA is quite nice, although it is not commercially licensed, I believe, right now. Some things to keep in mind are that, basically, fine-tuning is a lot more technically involved. It requires a lot more, I think, technical expertise to do right.

It requires human data contractors for datasets and/or synthetic data pipelines that can be pretty complicated. This will definitely slow down your iteration cycle by a lot. I would say, on a high level, SFT is achievable, because it is just continuing the language modeling task; it's relatively straightforward. But RLHF, I would say, is very much research territory and is even much harder to get to work, and so I would probably not advise that someone just tries to roll their own RLHF implementation. These things are pretty unstable, very difficult to train, not something that is, I think, very beginner-friendly right now, and it's also potentially likely to still change pretty rapidly. So I think these are my default recommendations right now.

I would break up your task into two major parts. Number one: achieve your top performance. Number two: optimize your performance, in that order. Number one: the best performance will currently come from the GPT-4 model. It is the most capable model by far. Use prompts that are very detailed.

They should have lots of task context, relevant information, and instructions. Think along the lines of: what would you tell a task contractor if they couldn't email you back? But then also keep in mind that a task contractor is a human; they have an inner monologue, they're very clever, etc. LLMs do not possess those qualities, so make sure to think through the psychology of the LLM, almost, and cater your prompts to that. Retrieve and add any relevant context and information to these prompts. Basically, refer to a lot of the prompt engineering techniques; some of them are highlighted in the slides above, but this is also a very large space, and I would just advise you to look for prompt engineering techniques online; there's a lot to cover there. Experiment with few-shot examples. What this refers to is: you don't just want to tell, you want to show whenever possible, so give it examples of everything that helps it really understand what you mean, if you can. Experiment with tools and plugins to offload tasks that are difficult for LLMs natively. And then think about not just a single prompt and answer.

Think about potential chains and reflection, how you glue them together, and how you could potentially make multiple samples, and so on. Finally, if you think you've squeezed out prompt engineering, which I think you should stick with for a while, look at potentially fine-tuning a model to your application, but expect this to be a lot slower and more involved. Then there's an expert, fragile research zone here, and I would say that is RLHF, which currently does work a bit better than SFT if you can get it to work. But again, this is pretty involved, I would say. And to optimize your costs, try to explore lower-capacity models or shorter prompts, and so on.

I also wanted to say a few words about the use cases for which I think LLMs are currently well suited. In particular, note that there's a large number of limitations to LLMs today, and I would definitely keep that in mind for all your applications. This, by the way, could be an entire talk, so I don't have time to cover it in full detail. Models may be biased. They may fabricate, hallucinate information. They may have reasoning errors. They may struggle in entire classes of applications. They have knowledge cutoffs, so they might not know any information after, say, September 2021. They are susceptible to a large range of attacks, which are sort of coming out on Twitter daily, including prompt injection, jailbreak attacks, data poisoning attacks, and so on. So my recommendation right now is: use LLMs in low-stakes applications, always combine them with human oversight, use them as a source of inspiration and suggestions, and think copilots instead of completely autonomous agents that are just performing a task somewhere.

It's just not clear that the models are there right now. So I wanted to close by saying that GPT-4 is an amazing artifact. I'm very thankful that it exists, and it's beautiful. It has a ton of knowledge across so many areas. It can do math, code, and so on. In addition, there's this thriving ecosystem of everything else that is being built and incorporated into the ecosystem, some of which I've talked about. And all of this power is accessible at your fingertips. So here's everything that's needed in terms of code to ask GPT a question, to prompt it, and get a response.
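
For reference, the call on the slide amounts to something like the following sketch, using the openai Python package's chat interface as it existed around the time of this talk.

```python
# Prompting GPT-4 through the chat completions interface (API shape circa mid-2023).
import openai

response = openai.ChatCompletion.create(
    model="gpt-4",
    messages=[{"role": "user",
               "content": "Can you say something to inspire the audience of Microsoft Build 2023?"}],
)
print(response["choices"][0]["message"]["content"])
```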

In this case, I said: can you say something to inspire the audience of Microsoft Build 2023? I just punched this into Python, and verbatim, GPT-4 said the following. And, by the way, I did not know that they used this trick in the keynote, so I thought I was being clever, but it is really good at this. It says: "Ladies and gentlemen, innovators and trailblazers of Microsoft Build 2023, welcome to the gathering of brilliant minds like no other. You are the architects of the future, the visionaries molding the digital realm in which humanity thrives. Embrace the limitless possibilities of technology and let your ideas soar as high as your imagination. Together, let's create a more connected, remarkable and inclusive world for generations to come. Get ready to unleash your creativity, canvas the unknown, and turn dreams into reality. Your journey begins today." Okay.
