00:00 Alright, welcome to this video. We're upgrading our app, a script that makes OpenAI prompt calls. Currently we're using the text-davinci-003 model, and in this video we're ultimately going to upgrade to GPT-3.5.
00:15 So I have the OpenAI documentation here, and this is an example. We're going to look at a couple of simple API responses and what they'll look like once we make the change.
00:25 Why do we have to make this change? Here's a couple of reasons. If we take our OpenAI prompt and just change this model to, what was it?
00:37 We need to change it to gpt-3.5-turbo. It's just not going to work. The GET request will probably stay the same, but down here where we return choices[0].text, that will probably change.
00:55 But there's also one extra thing I realized while reading through this documentation: GPT-3.5, and also GPT-4, are chat models, right?
01:04 And so two things change. One, they have messages. You can include distinctly different messages within the API call, and that's how chat works: every message you send includes the past messages, and it adds those together.
01:21 And as the conversation gets longer, the earliest messages get dropped just to stay under the token limit.
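That rolling-context idea can be sketched in plain JavaScript. This is just an illustration, not OpenAI's actual logic: the 4-characters-per-token estimate and the budget number are assumptions made up for the sketch.

```javascript
// Chat-style calls send the whole running transcript each time. When the
// transcript gets too long, the oldest non-system messages are dropped to
// stay under a token budget. Token counting here is a rough character-based
// estimate, NOT OpenAI's real tokenizer.
function estimateTokens(message) {
  return Math.ceil(message.content.length / 4);
}

function trimToBudget(messages, budget) {
  const trimmed = messages.slice();
  // Keep a leading system message if present; drop the oldest messages after it.
  const start = trimmed.length > 0 && trimmed[0].role === "system" ? 1 : 0;
  while (
    trimmed.reduce((sum, m) => sum + estimateTokens(m), 0) > budget &&
    trimmed.length > start + 1
  ) {
    trimmed.splice(start, 1);
  }
  return trimmed;
}
```

So an older, long user message falls out of the transcript first, while the system message and the newest messages survive.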
01:33 But that is a difference. That is a big difference. Another difference is that we include a system message in our prompt.
01:43 This is new compared to text-davinci-003, where it was literally just text, and the result was completing that text.
01:52 That's essentially what that AI was doing: here's some text, now complete this text. We were sort of using it as question and answer, and there were examples of how to use it that way.
02:01 And one of the things I recommended earlier to get the best prompts is to include examples. Well, this is interesting, because we can include distinct examples in those messages.
02:15 So we'll need to change that. So let's get going. We're going to get an error right now. Actually, oh, wait.
02:24 I need to get the API key first. I'm going to delete this API key eventually. But as we had before, we're just going to name a new sheet API Key.
02:32 In A1, we're going to paste the key. And again, I'm going to delete this before you see this video. Let me actually go through what this was before,
02:40 so you can see how it works, if you don't want to go back and watch that other video. We have this function openAI.
02:48 And we're just going to go here, type =openAI(, and write a prompt: what is the French translation of.
02:59 And we can include some word here, so we'll do + B1, and in B1 we'll put cat.
03:11 That might be wrong. We have an error. We get some response back about B1. Let's see if that's going to change.
03:23 I realize what I did wrong: it's not a plus, in Sheets concatenation uses an ampersand. There we go. Yep, that works. And now let's see what literally happens when we change this model to gpt-3.5-turbo.
03:42 I think we're going to get some error. Let's see what the response ends up being. We literally get: Request failed, the API returned 404, with a (truncated) message saying
03:53 this is a chat model and is not supported in the v1/completions endpoint. So here's what we need to do, actually, first off.
04:00 So we have our model, gpt-3.5-turbo, and we need to change the endpoint. Let's open this. Can we open it in the Playground? It looks like here, from the examples for chat
04:15 in the API reference, that we need to do a POST to this URL. So we were getting that error, and it actually did give us the hint:
04:23 did you mean to use v1/chat/completions? So let's go over here and make sure we copy-paste that URL with correct syntax.
04:34 And let's see what happens now. We're going to get a different error. We probably need to also add a new message.
04:43 So now we get a code 400: "messages" is required. Perfect. So we also need to change this method from GET to POST.
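Putting those two fixes together, here's a sketch of what the request looks like now: a POST with a JSON payload carrying model and messages instead of prompt. The apiKey value is a placeholder (in the sheet it's read from the API Key tab), and the empty system message is just a stand-in.

```javascript
// Sketch of the chat completions request. In Apps Script, the returned
// options object is what you pass to:
//   UrlFetchApp.fetch("https://api.openai.com/v1/chat/completions", options)
const apiKey = "YOUR_API_KEY"; // placeholder; read from the API Key sheet in practice

function buildChatRequest(prompt) {
  const payload = {
    model: "gpt-3.5-turbo",
    messages: [
      { role: "system", content: "" }, // blank for now, filled in later if wanted
      { role: "user", content: prompt },
    ],
  };
  return {
    method: "post", // the old integration used GET
    contentType: "application/json",
    headers: { Authorization: "Bearer " + apiKey },
    payload: JSON.stringify(payload),
  };
}
```

The key structural change is that the prompt text now lives inside a messages array rather than in a top-level prompt field.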
04:54 And then, where's our prompt? Our prompt is here. The way they do it in the API reference is
05:05 they have the model and then this messages array. So let's look at messages in our script. Instead of a prompt, we're going to have messages.
05:15 Messages. I'm just going to leave prompt where it is and change this key to messages. And I'm wondering if this content field is where we add the prompt between two pluses.
05:32 Oops, let's get rid of those quotes, the two pluses, and what's inside the plus. Oh, do we need to wrap it? No, we don't need to wrap anything.
05:42 Yeah, don't wrap anything, just plus prompt. Let's see if that's all we need. We get: Request failed,
05:57 None is not of type 'string': messages.0.content, an invalid request error (truncated) from the server. So we're getting somewhere. This is interesting, right?
06:11 We have the correct URL, we have chat/completions, and now we're just figuring out how to change this messages part, with a role of system.
06:28 I added a role here, which we can always change later to something like openAI(system, prompt) if we want to.
06:36 But this role is going to be the system message, and then under the user role we're going to have our prompt, I think.
06:44 This might change later, but what ended up happening is we're sending this, and we had been getting errors before, but now we are getting a blank.
06:53 This is actually very good because that means that we are not getting an error, we're getting some response back, and now we go to our return and we're like, okay, we're not getting anything.
07:02 So let's look at a bare response. And let's just change this return to response and see if we get any text here.
07:10 Our hope is that we get some text here. Perfect. Let's wrap it. Okay, so now we're getting this JSON. Let's go to a JSON beautifier.
07:25 And it's not valid JSON because it has some quotes around it. Let's just take those quotes off. I get an empty object.
07:34 We have an id here. It's because we have these... let's try to copy-paste. Ah, we have some extra quotes. Okay, and because we already know our answer is le chat, we can see it right here.
07:55 Let me try to make it a little bit bigger. We have role: assistant, content: Le chat. So this is what we want to get out of all of this.
08:03 We want the content inside choices. So, let's try to parse this. Instead of response, we will return JSON. I think we need choices.
08:18 Let's do just choices and see what we get out of that. We'll change it in a moment. Okay, we get nothing.
08:29 Let's see, choices[0]. If we do that again, that'll be the first thing in the array of choices.
08:40 And it's looking like we still have nothing. Maybe we need to do choices[1]. Or let's go back and see what that looked like.
08:51 There was probably something else to it. Okay, I just pasted it into a Google Doc first. Now we have actual JSON.
08:58 Now we can see id, object, created, model, usage, choices. Good. Let's see: there's a choices key, and then a zero inside it, choices[0].
09:12 Maybe that's what we need to do. Let's see: JSON.choices[0] dot... let's go back here to message, content. Let's find it.
09:38 See if that gets us something. We have an error: Cannot read properties of undefined (reading '0'). Great. Let's do this:
09:49 just delete that zero. Now it's reading 'message'. Great. Alright, we're getting errors, and that's good, because these errors tell us something.
10:04 Okay, we got nothing. No... now we're back. I think it's because I wrote messages and it's actually message. Let's see: .message.
10:15 Alright, so JSON.choices[0].message.content gets us exactly what we need.
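To recap the parsing we just arrived at, here's a minimal sketch. The sample response below is hand-written to match the shape we saw in the beautifier; the id and token numbers are made up.

```javascript
// The reply text lives at choices[0].message.content in the chat API,
// not at choices[0].text as it did with text-davinci-003.
const sampleResponse = JSON.stringify({
  id: "chatcmpl-123", // illustrative id
  object: "chat.completion",
  choices: [
    {
      index: 0,
      message: { role: "assistant", content: "Le chat" },
      finish_reason: "stop",
    },
  ],
  usage: { prompt_tokens: 14, completion_tokens: 3, total_tokens: 17 },
});

function extractReply(responseText) {
  const json = JSON.parse(responseText);
  return json.choices[0].message.content;
}
```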
10:25 Le chat. Wow. Now we are able to put in some prompt. We also have this system message here that we do need to change from time to time.
10:37 Obviously at some point we'll create a system message; I'm not putting any message there right now. We'll change this; this will ultimately change.
10:53 Hopefully, maybe. So this is interesting: it gives us not le chat, but just chat. And this is without the system message saying you're a translator.
11:04 It's just including the prompt, which is here: what is the French translation of. We can actually even take this away and just use B1 and C1.
11:18 Let's do this: what is the French translation of cat? Let's see if that does anything. And now it's Le chat. So now this is all we're adding.
11:35 The text that you see here is the only text that we have. We are using the prompt. We are using model GPT 3.5.
11:44 We're adding a system role here if we want it, but right now it is blank. And then we just do role user content prompt.
11:53 And now we're running this prompt through this model and getting out an answer. We can also fetch a lot more information about the call; as I was reading through the reference, we can get things like how many tokens it used,
12:08 and a lot more if we want. But right now in this video we're just getting this prompt done.
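For example, token usage comes back in the same response under a usage key. A small sketch, with made-up numbers:

```javascript
// Pull the token counts out of a chat completions response. The counts in
// the sample below are illustrative, not from a real API call.
function getUsage(responseText) {
  const json = JSON.parse(responseText);
  return json.usage; // { prompt_tokens, completion_tokens, total_tokens }
}

const sampleUsageResponse = JSON.stringify({
  choices: [{ message: { role: "assistant", content: "Le chat" } }],
  usage: { prompt_tokens: 14, completion_tokens: 3, total_tokens: 17 },
});
```

Logging these counts per call would be one way to keep an eye on spend from inside the sheet.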
12:13 And what we did is we changed this openAI prompt from text-davinci-003 to gpt-3.5-turbo, and we're using this chat sort of interface, these messages.
12:24 We should be able to just use GPT-4 as well with exactly this setup. And before we get to GPT-4, if you do want to use GPT-3.5, the docs even say that, in general, GPT-3.5 does not pay strong attention to the system message,
12:42 and therefore important instructions are often better placed in the user message. So that's even better; we were already playing around with no system message.
12:51 It's actually really good that we can just put what we want in the user message. You are a French detective...
13:02 Like, this doesn't matter, but: a detective who is solving crimes while translating English to French. I don't know if this is going to matter at all.
13:19 I don't think it matters, but it'll just give us some extra words here, right? Hotel, motel, Holiday Inn. Let's see.
13:32 Well, it even tells us that the translation stays the same in French, so we're getting a lot more information, but we can also just ask for the French translation.
13:41 We can just ask a question, right? What is the French translation of hotel, motel, Holiday Inn? Okay, so what I did is I copied this function, openAI, pasted it right here, named it openAI4, and all I did between these two Apps Script functions is I just
14:04 changed this model from gpt-3.5-turbo to gpt-4, just to see if there was any difference, whether we got an error message or anything. And we didn't get anything different: we got the same exact answer, a couple of characters' difference, for the cat ate the cat food.
14:31 And we can see how these two perform next to each other. We can have much more different prompts, right?
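Since the only difference between openAI and openAI4 is the model string, one alternative (just a sketch, assuming the request shape really is identical for both models, as it was here) is to make the model a parameter with a default instead of duplicating the whole function:

```javascript
// One payload builder for both models; the caller picks the model,
// defaulting to gpt-3.5-turbo when none is given.
function buildPayload(prompt, model) {
  return {
    model: model || "gpt-3.5-turbo",
    messages: [{ role: "user", content: prompt }],
  };
}
```

In the sheet that could look like =openAI(A1, "gpt-4"), keeping one copy of the fetch-and-parse logic.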
14:39 This is literally asking: what is the French translation of this? And the answers are a little different between GPT-3.5 and GPT-4, but this is great, you know, right?
14:51 Write a three-sentence summary of the benefits and features of this company, and then we tell it it's a cat food company. And we can see if this is different at all.
15:19 So we got the first one back, from 3.5. GPT-4 is a little tiny bit slower, but we did start it second.
15:26 We'll see if that comes out. This cat food company offers all-natural, high-quality ingredients. Yeah, just a little different: eco-friendly.
15:35 Yeah, just a different thing. Three bullet points. See if that changes anything. Obviously it will change something, but we'll see what it does.
15:49 It seems like gpt-3.5-turbo is much faster than GPT-4. We got our bullet points back, which are actually numbers: one, two, three; one, two, three, and not bullet points.
15:58 Three bullet points. It really wants us to know three bullet points are one, two, three. Let's double-check... yeah.
16:19 See, 3.5-turbo was way faster; this GPT-4 one is going to take a while. And now we have actual bullet points.
16:26 We limited it to 30 words. Let's count... actually, let's do characters; we can count that really fast. 100 characters total.
16:50 And in D1 we can do =LEN(A1). Ah, it's 140. That's so funny. But let's see if GPT-4 actually does a better job of staying within that limit of 100 characters.
17:02 Oh, it's not under 100 characters either. Let's see if we change this to 50 characters; that makes a big difference. This GPT-4 is actually getting faster.
17:13 I don't know why. But yeah, okay, it is not staying within these character counts at all, in any way whatsoever.
17:20 It is far off, but it's generally in the area, right? Like, that could be promote cat life, I don't know, cat long-term.
17:31 We could edit this down to 50 characters ourselves if we wanted to. But this is really cool: we're now able to use OpenAI's GPT-3.5 and also GPT-4 with the exact same Apps Script.
17:44 We literally just need to change this model. We can also add a system role if we want to get a little more creative or interesting.
17:52 We can also add more prompts; we can add more messages in here. I think the possibilities for other videos, or other things you might be able to do, are to build much more of a chat interface, right?
18:05 Say, hey, include these past messages that we have, user and answer. We have our examples in there. There was also the possibility of adding assistant messages.
18:17 So there's the system message here, user messages from us, and assistant messages from the AI itself, like: here's an example assistant message.
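A sketch of that few-shot idea: seed the messages array with an example user/assistant pair before the real question, so the model sees a worked example. The translator instruction and the example translation here are made up for illustration.

```javascript
// Build a messages array that includes one worked example (user question
// plus the assistant's "answer") ahead of the actual question.
function buildFewShotMessages(question) {
  return [
    { role: "system", content: "You translate English to French." },
    { role: "user", content: "What is the French translation of dog?" },
    { role: "assistant", content: "Le chien" }, // example answer we supply
    { role: "user", content: question },
  ];
}
```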
18:26 You can add that here, but this video is probably too long already, and you got what we needed. All Better Sheets members can grab this
18:37 AI Apps Script right here, down below in the sheet. Bye.