I remember we had an early meeting and I was talking about this vision and Clay said, yes, but Steven, the thing you have to remember is that we are not normal. Like, we can't design the product too much for that use case. It turned out that building a tool that in the very early days was really centered on the needs of one idiosyncratic user, me, ended up being kind of the fastest way to get to a tool that was actually useful for millions of people. He came up with a great slogan for Notebook, which is: make anything interesting.
Second episode of AI to the DNA, and we have the hottest tool on the AI planet at the moment, which is NotebookLM. It's number 13 on the top 50 list at the moment we record this video. We met the co-founder of NotebookLM, Steven B. Johnson. And the interesting part: he is not a scientist.
He is also not a machine learning expert. He is an author originally. And the story of how he came to Google and what he designed, with the tool and with a team from Google in mind, is really, really interesting. And if you're new to this podcast, my name is Christoph Magnussen. This is originally made for the YouTube channel.
You'll find the YouTube channel under my name, Christoph Magnussen, with all the episodes. But if you're here on the podcast channel, subscribe and let us know if there are special episodes that you want to listen to. Great. Sitting here with Steven Johnson, the co-founder of NotebookLM. To me, at the moment, the hottest tool when it comes to using generative AI, and I say that in brackets and very carefully, because there is more to this tool.
Well, that's very nice of you to say. I love to hear that. Thanks. It's great to be here.
And, Steven, we already had a nice chat when you gave a little bit of insight and a presentation of how you use NotebookLM. We will share some of that. Mhmm. This is a podcast and a video, obviously, both. What was, in the beginning of this tool, a moment where you realized: okay, everything I tried didn't work out?
I'm stuck here.
Yeah. I mean, there were really a lot of moments like that. You know, when you're trying to do something that's genuinely new, you always kind of run into those points, and part of the, I don't know, the essential toolkit as a founder is to just keep powering through those moments. But we made a very conscious decision in the early days of Notebook to, in a sense, design the software to be ahead of where the models were in terms of their capabilities. And so we built, you know, there's this core idea at the center of Notebook, which is source grounding.
You're giving the model the documents for the project you're working on, and it becomes effectively an expert in those documents and answers questions faithfully based on those documents, and you should be able to put in 50 documents or 100 documents, sources as we call them. And the truth is, while we had that vision when we started working on it in 2022 and early 2023, the models just weren't capable of doing that at any level. You know, Google's models or anybody else's models, really. And so we kind of deliberately assumed that the underlying AI technology would progress to enable us to do this magical stuff that we had in our minds, rather than trying to design the app for where the models were at that particular moment in time. And that meant that, in some sense, the product was kind of broken for the first year of internal testing.
And then even in some of the period where we were kind of a public experiment, it was good, but it wasn't nearly as good as we wanted it to be. And I remember distinctly this moment in 2024 when we had just switched over to Gemini 1.5 Pro. And that was the point at which it really started to sing a little bit. And I remember one of my colleagues, Simon, who now is one of the product leads at Notebook but at the time was working on other things. And I saw him in the hallways at Mountain View and he was like, Steven, Notebook finally works.
I was like, well, I mean, it kind of worked before. But yeah, I get your point. You know, it definitely was a point of, like, okay, we've been waiting for this. And so I think that's generally a good principle: when you have an underlying foundational technology that's improving at such rapid rates, really think in your early-stage designs about where it's going to be in a year and design for that. Because otherwise, you know, you won't be ambitious enough, I think.
And here comes the part of the story that excites me the most. Do you need to be a physicist, a deep research expert, a developer for that, or would you say it helps to come from a different profession? Maybe you give some insight about your
next time.
It is kind of interesting. Yeah. The New York Times article, I remember especially.
Yeah. So I had this very unusual background, I think, for new products being developed at Google, and it came directly out of the philosophy of Labs, which is the kind of new division that Notebook still belongs to. It was founded a couple of years ago, three or four years ago, to do more cutting-edge work with new technologies, largely AI, that were focused on actual applications rather than open-ended research. Like, Google has an amazing research division, but this was a bit more: let's do edgy, experimental things with an eye to turning them into shipping products. And so it complements a lot of what research does, but does it in a slightly different way. And one of the early ideas for Labs was that they would bring in people from other fields who were not necessarily software developers or designers and have them in the room where it happens, as it were, helping to build products.
And in a sense, I was the first guinea pig for that agenda. You know, I spent most of my career as a writer and journalist. I've written more than a dozen books on innovation and science and technology. But I had always been very interested in using technology, using software, to help me write those books and to research those books. So, for instance, Google Search. I'm old enough that when Google Search came along, it was a huge breakthrough for me as a researcher.
Google Scholar was a huge breakthrough for me as a researcher. But I also used a bunch of other tools like DEVONthink, which I believe is actually a German kind of knowledge management software, and Scrivener for writing, and things like that. So I'd always been an early adopter and kind of an evangelist for using software to help me think and come up with ideas and manage my research and stuff like that. And so the folks who had founded Labs, this guy Clay Bavor, who has since left, and Josh Woodward, who now runs Labs and also runs the Gemini app, they had been reading my books and my articles over the years, and they read this article that I wrote in the New York Times Magazine about language models in 2022. This was before ChatGPT, and the article was very controversial.
All I was saying was that language models are a really big deal and we should take them seriously and they have a lot of opportunity, and people hated on it for weeks after that article came out. But Clay and Josh read the article, and apparently Josh turned to Clay one day and said, hey, what if we brought Steven into Labs, maybe part time? He's been dreaming of this ideal research software his whole career. He's clearly obsessed with language models; maybe he could help us build something new. And so they cold-called me out of the blue, basically. Clay did, and said, hey, we've got a crazy idea.
What do you think about coming and taking a part-time position at Google? We'll give you a couple of engineers and a designer and maybe we'll make something. And so I said, yes, that sounds great. And, like, three years later, here we are.
On the top 20 list, actually, of the hottest AI tools.
I think, the last list I saw, we were the thirteenth biggest AI tool. Yeah. Thirteenth. Which is pretty crazy. And that was before back to school. We have a lot of students, and so we've been growing a ton in the last month and a half with students coming back from vacation.
So I wouldn't be surprised if we're higher on that list now, but anyway, we'll see.
What was the original starting point, if you take us back? Because, like... Yeah. To me, it's all about turning, as I said, from a tool tourist into a Champions League user, being really at the cutting edge. And you mentioned that you have that from way back, where you said, hey, I was always an early adopter. So you always understood. Yeah.
Intuitively to leverage a tool. But it's a much different thing to build a tool than to just use it.
Yeah. Well, I would say one thing that I brought with me to Google, that I had been maintaining for twenty years as a serious knowledge management nerd, is that I've been collecting quotes from books that I read as part of my research.
Mhmm.
And I started doing this literally in the late nineties, where I would highlight a passage in a print book and then I would type it up, or I would sometimes have a research assistant type them up. Because just having the quotes searchable, just command-F searchable, was valuable. And then, you know, some tools came along to let you do slightly more advanced versions. But basically, by the time I got to Google, I had a collection of 8,000 quotes from books that I'd read over the last twenty years. And that quote collection is an incredible snapshot of my intellectual influences.
Like, the ideas from other people that shaped who I am today as a writer and a thinker and stuff like that. And so in the very early days, I kind of had this thought: could we build a tool where the AI would have effectively read all 8,000 of those quotes, and I could query it and say, hey, I'm writing something new about the human memory system, have I read anything that's relevant to that? And it would give me a distillation of my reading, because I've forgotten, you know, 95% of what I've read. Right?
Like all of us. Right? And so it was really this idea: could I somehow get a tool that was an extension of my memory and could help me make new connections to things that I've long since forgotten but that at some point I thought were important. And add to that all the things that I've written. Right?
Like, I had this idea: what if I could give the AI every single thing that I've published, which is, you know, millions of words, because I also forget things that I've written. Human memory is very fallible. And so that was kind of in the back of my mind from day one at Google: could we build a tool that would do that? And it turned out that, while most people don't... I remember this great conversation I had with Clay, the guy who hired me. Both Clay and Josh and I, all three of us, are quote collectors, as it turns out.
And, you know, it got a lot easier to collect quotes because you could just read in an e-reader and highlight the passage. And so Clay actually had more quotes than I did. He had, like, 13,000 quotes. And I remember we had an early meeting and I was talking about this vision and Clay said, yes, but Steven, the thing you have to remember is that, you know, we are not normal.
Like, most people are not walking around with 8,000 quotes, you know. So we just can't design the product too much for that use case. And I think I said at the time, yes, I know that's technically true. But those 8,000 quotes, I think, are something like 2,000,000 words of information. And there are a lot of people out there.
Anybody who works as a knowledge worker of some kind or another has a body of work that is probably about as high a word count. It's probably millions of words of stuff that their work is based on. Like, if you're a documentary filmmaker, all the transcripts of all the interviews you've done. If you're a lawyer, all the briefings that you've written and all the legal precedent that you need to do your job, that's a million words, 2,000,000 words, whatever. If you're a student, just think about everything you have to read over the course of a semester and all the notes that you've taken; all that stuff might be that size as well.
So there's actually a huge audience of people who are carrying around all this information that is hard to organize, hard to search, hard to get access to, this kind of personal knowledge base. And yeah, you don't have to be a quote collector for that to be relevant. And so it turned out that building a tool that in the very early days was really centered on the needs of one idiosyncratic user, me, ended up being kind of the fastest way to get to a tool that was actually useful for millions of people, and tens of millions of people so far.
How did you design the team to build such a tool? Because this is something I always find interesting: to look at a tool and think, what did they have in mind while designing this tool, in order to understand the nature of the technology?
It's a great question. So I came into this kind of naive. I had done a couple of startups in the past, so I had some software history, and I think that was helpful. But I had never worked inside a large company before, and so I have learned a lot about how Google works and how Google organizes its products, things that I just had to learn on the fly. But Labs deliberately keeps its teams very small and very flexible.
Not by design, but just by lucky accident, we had a lot of people on the team who were actually humanities majors, for a software product. Like Adam Vignel, who was one of the early engineers, is also a science fiction author and very literary. He's a brilliant coder, but he's also a big fan of postmodern fiction. And so I think there's a kind of literary quality and scholarly quality to the way that Notebook was designed that reflects the team that built it in the early days.
But another big thing is that we had a structure at Labs that was incredibly helpful, in that there were lots of other projects happening at Labs that were literally, you know, 10 feet from us, or 50 feet from us. So we're hanging out with these people who are working on other really brilliant ideas. And at several critical moments, we've basically absorbed some of those ideas from other teams. And the best example of this is the story of audio overviews, which is really how we first broke out.

That was my next question.
Yeah, yeah, yeah. So that story is just an interesting one in terms of the history of NotebookLM, and I also think it's an instructive one in terms of how innovation works, in a way. It's a very funny story too.
And it's not your voice, is it?
It's not my voice. It was not my voice. So, interestingly, in the early days of Notebook, one of the early things that I did is I created a style guide. Mhmm. For what the AI should sound like. It's almost like I was starting a magazine and wanted a style guide for what the magazine should sound like.
Well, we've got to make some editorial decisions about what we want this model to sound like. And I had this thing that Notebook actually still largely adheres to in text chat, which is: the model should not pretend to be your friend. I even kind of discouraged it from having a subjective first-person voice. So it shouldn't say, I'd be happy to help you, Steven. It should just give you the answer.
Yeah. Right?
Or, great question, Steven, I'm so happy to be your servant. You know, I just didn't want any of that stuff. And so we had a very austere... I mean, I think it was alienating for some people who wanted to have a friend, but whatever. That was our house style.
So we had kind of stuck to that. And at some point in early 2024, there was another project at Labs that was called GemFM. And the idea was you could give the AI some source material and it would generate this amazingly realistic AI podcast between two people. And it was relying on an underlying technological breakthrough that Google DeepMind had created, which is basically a conversational audio model, which, crucially, is trained on two people in conversation. It's not two different voices that are trained separately and put together with a script.
It's actually two people who sat in a studio talking to each other for x hours, being recorded and modeled. And so it had this unbelievably lifelike sense of two English speakers in conversation with each other. It's one of the reasons why it took so long to bring it to other languages: because every language on Earth interrupts itself in conversation in a slightly different way.
Right.

German people banter differently from English people, and you can't just translate it from one to the other, and so we had to have conversational models in all the languages to make it realistic. But anyway, I'm getting ahead of myself. So we had this underlying technology from GDM. And then this brilliant team had built this tool to turn it into podcasts. And when I first heard it, they had very extreme personas for the hosts.
So the sample I first heard was an education-focused podcast about physics targeting high schoolers, like young teenagers. And so each host had a fake physics name, and the tone of the podcast was: hi there, learners, I'm Captain Kinetic, and I'm here to teach you about physics. And so I heard it and I was like, oh, that's amazing. That's really cool.
How fun. So lifelike. That's incredible. I didn't think about it for Notebook at all.
It never occurred to me there would be something to put into Notebook. But I was impressed by it, and I could see that there was a lot of creativity in how they were doing it, but I didn't think about Notebook at all. And then we were about to go to I/O, our big annual conference, and there had been this question about whether we were going to show something from Notebook. Notebook was still kind of an experiment, but it was public at that point. And they wanted to show this demo of audio overviews, gem.fm as it was called then, but it wasn't attached to a product in any way.
And with a week to go, somehow Josh Woodward, the head of Labs, and Sundar, I believe, had this brainstorm where they're like, what if we put this gem.fm feature into NotebookLM? Then at least it's not just a free-floating experiment. It actually belongs to a product and it seems more real. And so there was an emergency call of the team on a Sunday. It literally, I think, was eight days before I/O. And they said, hey
Exactly the same.
We think we want to put audio overviews into Notebook. Can we make it a real demo that shows it working? And my first thought was: Captain Kinetic. The whole point of Notebook is that you don't have personality in the AI. This is the exact opposite of what I was thinking.
Like, that doesn't make any sense. And then I took a step back and I thought, wait, Notebook is a tool for understanding things. Mhmm. That's our mission. We want to help you understand the material that you need to understand to get whatever job you're doing done.
And there's a reason why people like listening to podcasts, and there's a reason why people learn from listening to podcasts: because people have been learning through listening to people having conversations for, you know, hundreds of thousands of years, whereas people have been reading articles for four hundred years. So listening to conversations and learning through conversations is a very deep part of what it means to be human. And so if we have an ability to take people's source material and turn it into an interesting, engaging, stimulating conversation that people can listen to at the gym or while they're driving to work, that totally fits Notebook's mission.
We just need to dial the personalities down a little bit. They shouldn't have fake names, and in fact they shouldn't have names at all. So I was like, as long as I can convince everybody of that, then it's the most brilliant idea ever. And so then basically I was like, I'm sold, and I just want to get out of the way and let these wizards build this tool. And, you know, when it launched, like, four months later, it was the most viral thing I've ever been involved with in my life, and it just took off in this amazing way. So it was a great example of my instincts being absolutely wrong when I first heard the idea, and yet, hopefully, thankfully, we ignored my instincts. Yeah.
And at the same time, I mean, that's the interesting part of that story, and I didn't know that story. It's a very interesting backstory to understand the nature of the tool, because you mentioned it now, how important it is to not have a personality, but still using that and turning content into a new format. And I always use Notebook as that example. You showed one feature earlier with the new video options. And many people, like in our company, use it heavily in order to take the source material and then work with it, and that's what we call that mode.
You work with the AI. Yeah. To generate something new. And we have three work modes that we call catch up, work with, grow beyond. And to me, NotebookLM is about work with and grow beyond. Especially video, mind maps, and also the cards, these are features.
And tell me about the video feature. You were really like a little boy who stumbled upon something new. You were like
Still seeing new things. So, yeah, I'll say two things. I want to say something about mind maps too, because there's an interesting thing about mind maps. So with video overviews, we rolled that out this summer, and it kind of converts your sources into almost like a mini lecture, almost like a mini TED Talk with slides. And it will take images from your sources and things like that. But a lot of times people don't have images in their sources. And so the slides were sometimes visually just a little less interesting.
And as you may have heard, we have an incredible new image-generating model. And basically, the same team that did audio overviews has been doing video overviews. They're just wonderful, creative people. It's amazing. And so they basically said, what if you could have a state-of-the-art illustrator for your slide presentation that would do handcrafted illustrations for each slide? And we're like, we can do this now.
And so, yeah, there's an amazing one, which we can probably show some images from for folks watching this on video. There is a setting that does kind of cut-paper illustrations. And so I've been researching a book idea on the Gold Rush, and I have this video overview that I generated with it, and I just flip through the slides and it's this magical world where every single one is just so beautifully crafted. It's incredible. And it's a great example, again, of, as you say, kind of working with: I'm curating the ideas.
I'm curating. I'm directing, in a sense, what the slideshow should be, what the lecture should be. It's all shaped by my vision. I decide what the style of the illustration should be. So the things that I'm good at, like writing about history, organizing ideas about history, researching history, turning that into an interesting, compelling narrative that has a message, I'm still doing that.
The thing that I can't do is create beautifully crafted cut-paper illustrations for every slide. And so, great: I now have an AI collaborator that will do that part of my job for me.
And that's just so exciting, creatively and intellectually. What really struck me: I do a lot of speeches and keynotes and at the same time work on projects. And we have Notebook in the company because you guys released it very fast as a Workspace
Yeah. Yeah.
Tool, which is very important. So then you can use it with the company data, with my data. And what struck me was, in areas where I wasn't too excited to get an overview, I used it, put the material in, and generated it in a different content format. Yeah. And made it more entertaining for myself in order to become more curious about the topics. How big is that field of turning boring office work into a magical research experience?
Yeah. Well, Simon, who I mentioned earlier, was the one who said, Notebook finally works. When he joined the team and kind of became the head of product for us, he came up with a great slogan for Notebook, which is: make anything interesting. And it's exactly what you say: the hosts of the podcast are specifically instructed that whatever you give them, their job is to pull out the most interesting bits and try to make it interesting. And, you know, it's one of the queries that I often suggest people do, particularly in student or researcher mode, and it works beautifully just in text.
In fact, the models have been very good at doing this for two years now, I would say: upload some new source material and say, what are the most surprising things in this document? Yeah. And the models, and Gemini in particular, I think, have a really amazingly sophisticated sense of surprise and interestingness. Mhmm. And on some level, I was initially surprised by that.
But the more I thought about it, it kind of makes sense, because the underlying fundamental math of the models is all about prediction. That's how they learn: by predicting the next token and the next string of tokens. And surprise is, by definition, when your prediction gets foiled, when you think it's going to go this way but it turns out to go that way. And so much of the human brain is designed to learn through failed predictions. Like, you pay attention.
This is how the dopamine system works. Right? When you're predicting the world is going one way and actually the world goes another way, that triggers a flush of dopamine that causes your brain to remember that deviation from your predictions. That is a huge essence of learning. And so because the models are prediction engines, they are actually quite good at sensing where things deviated in a surprising way and sharing that with you.
And that way of getting into material, as a scholar, as a researcher, was just never possible before. You could never command-F search for surprising. Like, you could search for that word, but that's not what you wanted. You want the concept of surprisingness. And now you can do it.
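The "surprise as a failed prediction" idea has a standard information-theoretic form: a token the model assigned low probability carries high surprisal. A minimal sketch, purely illustrative and not anything from NotebookLM's codebase:

```python
import math

# Surprisal: the information content of an event with probability p.
# The lower the probability the model predicted, the more "surprising"
# the token that actually occurred. Probabilities here are made up.

def surprisal_bits(p: float) -> float:
    """Information content, in bits, of an event with probability p."""
    return -math.log2(p)

# A token the model expected half the time carries 1 bit of surprisal;
# a 1-in-1024 token carries 10 bits, a candidate for "the most
# surprising thing in this document".
even_odds = surprisal_bits(0.5)
rare = surprisal_bits(1 / 1024)
```

In this framing, "what are the most surprising things in this document?" amounts to asking where the text most strongly deviated from the model's predictions.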
What can you share about, and I'm very curious about that, and you have to tell me if you can share, because, again, to me this is at the core: how important is the model design, or fine-tuning the model, for a certain case? And I mean, the Google models are very good at retrieval. Surprise.
Yeah. Is there a specialized model? Is there a team developing the model underneath the product? How does that work together?
We mostly just benefit from the underlying Gemini models as they come out. In the early days, we were really, I think, the first public product that did any kind of source grounding. I think we were the first, you know, chat-with-your-documents product to be announced or released, because we announced it in mid-2023.

Which is quite difficult to do with a technology that is made for generating stuff.
Yeah. And really the biggest limitation was that the context window was so small. Fundamentally, what is going on behind the scenes is that Notebook is taking your sources and putting them into the model's context window, which is effectively its short-term memory
Mhmm.
And saying, hey, based on this information that's in the context window, answer this question or respond to this query from the user. So it has its general knowledge and its training data from, you know, months or years before. And then you have the short-term: focus on this and answer this question. And when I arrived at Google, the context window was basically, like, 2,000 words long. So there was no point in using AI, because you could just read those 2,000 words.
That was not really very helpful. You know, now we have contexts that are more than a million words, so you can get a lot of information into the context window of the model. So in the early days, we helped a lot with the development of Gemini in terms of giving them examples of source grounding.
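The source-grounding pattern described above, placing the user's documents directly in the prompt and instructing the model to answer only from them, can be sketched in a few lines. This is an illustrative toy, with invented names; it is not NotebookLM's implementation:

```python
# A minimal sketch of source grounding: the sources go into the model's
# context window (its "short-term memory") ahead of the question, with
# an instruction to answer strictly from that material.

def build_grounded_prompt(sources: list[str], question: str) -> str:
    """Assemble a prompt that restricts the model to the given sources."""
    numbered = "\n\n".join(
        f"[Source {i + 1}]\n{text}" for i, text in enumerate(sources)
    )
    return (
        "Answer using ONLY the sources below. If they do not contain "
        "the answer, say you don't know.\n\n"
        f"{numbered}\n\nQuestion: {question}"
    )

prompt = build_grounded_prompt(
    ["The California Gold Rush began in 1848 at Sutter's Mill."],
    "When did the Gold Rush begin?",
)
```

The assembled string would then be sent to whatever model sits underneath; the grounding comes entirely from the instruction plus the in-context sources.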
Mhmm.
And we would send them the evals that we would do, and we were kind of helping them a little bit. And so I think there was some nice two-way feedback there that helped Gemini get to be, I think, really the best model for source grounding. It's always been very good at sticking to the facts and the sources you give it. And then ultimately they just went off to the races, and they're amazing at what they do, and we rarely give them advice on how to build Gemini. What happens is, you know, they'll release a new model and then we will do a lot of testing, because these models are unpredictable.
Right? It's truly, like, emergent technology, and you come out with a new model and you think it's good at a lot of things, but you never really know. And so every time we shift to a new version of the underlying models, we have to figure out what has changed in terms of the house style. So, like, one of the things we're always battling is that some of these models really like to put things in bullet points.
Oh yeah.
And so we have...

You solved it?
We have a... I mean, we like bullet points. We think bullet points can be good. And so there are the kind of default instructions that you don't see, but that happen every time you interact with NotebookLM.
Like the system instructions.

The system instructions. We have our own custom system instructions. In the old days I used to write all of those; now we have more experienced people, but it was kind of nice to have a writer doing those in the early days. That was one of my favorite parts of the job: figuring out how to talk to the model. But we give the model instructions to say, if it's complicated and if it seems appropriate, use bullet points to explain the material to the user. And generally that would generate good results. But every now and then we would switch to a model and, whatever you did, you'd say write a poem and it would be like, sure, bullet point number one.
I'm like, no, I said poem. There should be no bullet points in a poem. So there are things like that. Or the model will get very terse, or it will write much longer. So you're always trying to adjust the system instructions on the fly to get the same output, what we think of as the NotebookLM house style, as the model changes a little bit.
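The house-style juggling Johnson describes, a base instruction set plus per-model patches when a new model drifts, could be sketched roughly as below. All names here (`HOUSE_STYLE`, `MODEL_OVERRIDES`, `build_system_instructions`) are hypothetical illustrations, not NotebookLM's actual implementation:

```python
# Hypothetical sketch: a base house-style instruction plus per-model
# overrides, patched in when a new model's default behavior drifts.

HOUSE_STYLE = (
    "If the material is complicated and it seems appropriate, "
    "use bullet points to explain it to the user."
)

# Overrides for specific model versions, e.g. one that bullet-points
# everything, even poems. The model names are made up.
MODEL_OVERRIDES = {
    "model-v2": "Never use bullet points in creative formats such as poems.",
}

def build_system_instructions(model_name: str) -> str:
    """Combine the base house style with any override for this model."""
    parts = [HOUSE_STYLE]
    override = MODEL_OVERRIDES.get(model_name)
    if override:
        parts.append(override)
    return " ".join(parts)
```

The point is that the product's "voice" lives in these instructions, so each model swap means re-tuning the overrides rather than the product.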
But the other key interaction there is that they come up with some new wizardry like Nano Banana
Mhmm.
Or like Veo. And one of the great things about this job is that we get an advance peek at what they're working on, and then we're always like, oh, how would that new capability work inside of the NotebookLM product experience? And, you know, the integration of those images into video overviews is a great version of that. Like, we're like, oh, wait. Hold on.
I think it could actually illustrate every single slide in a really creative way now with these new image tools. Like, why don't we add that to video overviews? And here we are.
How is it with the context window length of the models? I mean, Gemini models have the biggest context window. There's also research that indicates the intelligence of the model gets weaker when you have more context. So how do you balance it? I mean, you have a lot of sources.
Yeah. And I guess for many people it's hard to understand that you cannot just drop the whole company into the sources and go with it. I mean, it's not useful to do that, I guess. But at the same time, that's the temptation. Do you have a team?
Do you manage that? Is it like with the instructions that you work on? How do you do that?
Yeah. It's amazing, we now have a whole team of quality people who are constantly testing the new models and the new options for context. And interestingly, we've chosen not to expose what is actually happening in terms of the context, how much of it you're using. So, as you said, you can have a notebook with 30 million words in it, and we will use tools like RAG to pick the most relevant passages and present them in a way that hopefully expresses the overall meaning of the documents.
So that it fits in the context we're using at any given time. And so we have an amazing internal team that's constantly trying to figure out what's the best model to use, what's the best context size, and how we can best pack that context if the user goes over. But we don't ever show that. There's no slider that says you are exceeding the context with this additional source, or you're at 10x the context, you might want to make it smaller. And we've had a lot of debate about whether we should show that. And honestly, I would say I've historically been on the side of wanting to expose more of that to the user.
And I think maybe we're actually coming around to that.
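The packing step Johnson outlines, picking the most relevant passages so they fit the active context, could be sketched as below. The word-overlap scorer and every name here are hypothetical stand-ins for whatever retriever NotebookLM actually uses:

```python
# Hypothetical sketch of RAG-style context packing: score passages
# against the question, then greedily fill a fixed context budget.
# Word overlap stands in for a real embedding-based retriever.

def relevance(question: str, passage: str) -> int:
    """Toy score: count passage words that also appear in the question."""
    question_words = set(question.lower().split())
    return sum(1 for word in passage.lower().split() if word in question_words)

def pack_context(question: str, passages: list, budget_words: int) -> list:
    """Pick the most relevant passages that fit within the word budget."""
    ranked = sorted(passages, key=lambda p: relevance(question, p), reverse=True)
    picked, used = [], 0
    for passage in ranked:
        length = len(passage.split())
        if used + length <= budget_words:
            picked.append(passage)
            used += length
    return picked
```

A real system would measure the budget in tokens rather than words and use semantic similarity, but the shape is the same: rank, then fill until the context is full.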
We'll see what happens in the next couple of months. We might expose a little bit more of that, because we feel like people can now understand it, and it's relevant to the quality. But the one key thing, a really important feature... talk about turning from a tool tourist into a... what was the pro term?
In German, I call it Champions League.
Champions League. Yeah, Champions League. Okay. So one Champions League tip: you'll see in the source panel, where you have all your sources loaded...
Mhmm.
There are checkboxes
Yeah.
next to every source. And when you uncheck a source there, it's as if the model can no longer see that source.
Mhmm.
And in reverse, if you check a source, select it, the model will now see that source and answer your questions based on it. So one thing I do a lot, and we see users doing this a lot, it's kind of an obscure feature but our power users use it all the time: when I've got a notebook with a lot of sources and I'm asking a question that I know is ultimately about, say, these two sources, I deselect everything and select just those two. And I ask my question based on those two sources.
You'll often get the right answer even if you don't do that, but if you really want to make sure the model is focused, well, you should think about it as focus. Imagine you're sitting there with your research assistant and you really need to understand the information in these two documents. Would you hand your research assistant 50 documents and say, read all of these, but I really want you to just focus on these two? No.
You just hand them the two documents you want to focus on. So dynamically using that focus feature is always a way to feel confident that you're getting the absolute best results.
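The checkbox behavior, where unchecked sources become invisible to the model, amounts to a simple filter. The function name and data shapes here are made up for illustration:

```python
# Hypothetical sketch of the source checkboxes: only checked sources
# remain visible to the model when it answers a question.

def visible_sources(sources: dict, checked: set) -> dict:
    """Keep only the sources whose titles are currently checked."""
    return {title: text for title, text in sources.items() if title in checked}
```

So with 50 sources loaded but only two boxes ticked, the model would answer from those two alone, which is exactly the "focus" behavior described above.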
You shared one other thing in the presentation that I liked a lot, and many people overlook it. You have the three pillars: the sources, the chat, and the creation. And you can turn what you created, maybe explain that a little bit too, into a source, and why it's important
Yeah.
to play with these two sides. To me, this is really pro-user stuff.
Yeah, I use this all the time. It is an undernourished part of the application right now. I want to invest more in it in the next year, particularly for students, who take a lot of notes. So you can write your own notes inside of NotebookLM.
It is a notebook. There's a button down at the bottom right-hand corner that says Add note. If you click that, you get a little mini text editor, and you can write whatever you want. You can copy stuff in there if you want, whatever it is. Those notes are generally stored in the studio panel.
And when you create a note, the model does not know anything about the note you've written. So you have to click on the little three dots to the right of the note, and there you will see an option to convert this note into a source, or convert all of your notes into a source. And I use this feature all the time.
Yeah.
So I'm researching a new book. I've got all my notes in here: ideas, characters, structure, quotes from chapters that I read, you know, things that I put in there. And every now and then I just do convert all notes to source, and that basically creates a copy of all those notes and brings it over to the source panel. And at that point, the AI effectively knows what I've been thinking as well as what I've been reading. Right?
It can follow my own thinking. This, and I can say it very bluntly, is an incredibly stupid way for the software to work. It's just one of those things left over from the random choices we made in the early days of the architecture, and we just haven't gotten around to fixing it yet. Someday your notes will just be automatically grounded if you want them to be, and probably they'll live over on the left-hand side with your sources.
But until that time, you have to use this little hack. Once you do that... I was showing you earlier, I have this research notebook for a book about the gold rush that has all my notes in it. And I can say, okay, create a chronology of the main events of the book that I'm thinking about writing. And there are many, many events in the sources that are not related to the book I'm thinking of writing. But because Notebook can see all those sources, and it can see my notes and knows what I found interesting, it can generate this chronology of all the events, which is incredibly useful to me as a writer.
And that would take a week if I tried to do it on my own. Notebook will now do it in thirty seconds. But that's because I've been, one, taking my notes internally and, two, converting them to a source.
Awesome. This is really turning into Champions League for sure. Last sentence from you as the co-founder: what is the future of NotebookLM?
Yeah. I mean...
Take us somewhere where you say, like, do you have a...
We didn't even get... this shows you how much versatility there is in this product. We didn't even get to the thing that I'm really passionate about now, which is Notebook as a publishing platform for sharing knowledge. Right? What if, for my next book, in addition to a hardcover and an audiobook, I publish it as a notebook that people could buy and have conversations with, and turn an interesting chapter into an audio overview or whatever?
With your book. With my book. I think, as a way of publishing knowledge, there's something extremely powerful there that we've just started to scratch the surface of with Notebook. So I think I'm going to be here at Google working on this thing for a long time. I have so much to do.
Very happy if we have a second episode.
Yes, please. Let's do that deep dive.
Steven, thank you so much for the behind the scenes.
Thanks for having me.
Yeah, thank you very much. It's such a treat.