Edtech Insiders

New Year, New Ideas with Google Part 3: Shantanu Sinha, Jennie Magiera, and Steven Johnson on AI in Education

• Alex Sarlin and Ben Kornell • Season 10


Shantanu Sinha, VP & GM of Google for Education, leads the development of tools like Google Classroom and Read Along, serving over 150 million educators and students globally. Previously, as founding President and COO of Khan Academy, he championed free, personalized learning on a global scale. Shantanu combines deep expertise in computer science, math, and cognitive sciences from MIT with strategic consulting experience from McKinsey.

Jennie Magiera, Global Head of Education Impact at Google, is a bestselling author, TEDx speaker, and advocate for equity in education. At Google, she focuses on elevating marginalized voices and creating empowering tools for teachers and learners. A White House Champion for Change and ISTE Impact Award Winner, Jennie brings extensive experience as a teacher, district leader, and digital learning innovator.

Steven Johnson, Editorial Director of NotebookLM and Google Labs, is a bestselling author of 14 books on innovation and technology, including Where Good Ideas Come From. An Emmy-winning television host and tech entrepreneur, Steven shapes tools that redefine learning and research while advocating for the power of collaboration in driving transformative ideas.

💡 5 Things You’ll Learn in This Episode:

  1. How AI enhances tools like Google Classroom and Read Along.
  2. The impact of NotebookLM on personalized research and learning.
  3. Creative ways educators worldwide are integrating AI into classrooms.
  4. How AI supports teachers by saving time for deeper human connections.
  5. The future of assessment and personalization with AI.

✨ Episode Highlights:

[00:02:19] Shantanu on AI in Google Classroom and teacher superpowers.
[00:09:46] Jennie shares inspiring AI use cases from global educators.
[00:19:24] Steven explains how NotebookLM is transforming research.
[00:29:22] Discussion on LearnLM and personalized pedagogy.
[00:41:50] Insights into NotebookLM’s podcast-style audio feature.
[00:55:36] How AI enables creativity and emotional engagement in learning.

😎 Stay updated with Edtech Insiders! 

🎉 Presenting Sponsor:

This season of Edtech Insiders is once again brought to you by Tuck Advisors, the M&A firm for EdTech companies. Run by serial entrepreneurs with over 25 years of experience founding, investing in, and selling companies, Tuck believes you deserve M&A advisors who work as hard as you do.

[00:00:00] Jennie Magiera: That 'respect the user' value at Google is so prevalent. It's not just a thing that's on the website; it's the way people breathe and live here. Yeah. And in respecting the user and practicing that humility: AI is both new and not new. You hear all these things about it. It's like suddenly here, and it's been here all along.

And while it has been around for decades, the sudden explosion in the rapid growth and expansion and use of it in the classrooms is something that we're being really respectful of. 

[00:00:30] Shantanu Sinha: I think ultimately the importance of having high quality content, having high quality grounded content, remains with AI, right?

And I think that's still going to be true, right? When you're working with an AI chatbot, you will be far more confident if you were doing physics and you knew it was grounded in the physics OpenStax book than if you're not. And I think that role is going to really remain, but I think it's also going to make this stuff much more accessible for a lot of different people to consume in a lot of different ways.

[00:01:00] Steven Johnson: We should put quotes around training because it is not actually training the model. Training is a long process that takes time and that would then expose whatever information you share to other potential users of the AI. So all we are doing is taking your information and putting it in basically the short term memory of the model, the context window of a model. 

And we kind of show it to the model and say: answer this question or follow this instruction from the user based on this information we're showing you here. And we've built a system so that in just one notebook, you can have up to 25 million words of information. Oh, wow.

[00:01:40] Alex Sarlin: Welcome to EdTech Insiders, the top podcast covering the education technology industry, from funding rounds to impact to AI developments across early childhood, K-12, higher ed, and work. You'll find it

[00:01:52] Ben Kornell: all here at EdTech Insiders. Remember to subscribe to the pod, check out our newsletter, and also our event calendar. And to go deeper, check out EdTech Insiders Plus, where you can get premium content, access to our WhatsApp channel, early access to events, and back channel insights from Alex and Ben.

Hope you enjoyed today's pod. 

[00:02:19] Alex Sarlin: We at EdTech Insiders had the great privilege recently to be behind the scenes at Google's AI Summit in Mountain View, California. We got a chance to sit down and interview all eight of Google's learning leads across the organization, and these episodes are the full interviews with two leads at a time.

Enjoy these incredibly interesting interviews. Honestly, these are some of the most interesting and compelling interviews I feel like I've gotten a chance to do in my entire time at EdTech Insiders. They're just so full of rich and interesting dialogue. And so many of these Google leads have been at Google for a long time.

They've seen the space from a lot of different angles, so these are really interesting, and I think they're a fantastic glimpse into the future of AI and learning. Enjoy. In our next conversation, we talk to Shantanu Sinha and Jennie Magiera. Shantanu is the VP and GM of Google for Education. His organization develops products like Google Classroom and Read Along for over 150 million teachers and learners around the world.

Before Google, Shantanu was founding president and chief operating officer at Khan Academy, where he helped make personalized learning globally accessible and free. Shantanu is also a founding board member of Khan Lab School, where his three children attend school. Shantanu has leveraged his consulting experience at McKinsey, along with his computer science, math, and cognitive science background from MIT, to make an impact on education. 

Jennie Magiera is the Global Head of Education Impact at Google. She's also the bestselling author of Courageous Edventures and the founder of the nonprofit Our Voice Alliance, whose mission is to elevate marginalized voices and perspectives to improve equity and empathy in education. She also serves on the board of Saga Education.

Previously, she was the Chief Innovation Officer for the Des Plaines school district, the digital learning coordinator at AUSL, a Chicago Public Schools teacher, and a research assistant in Carol Dweck's Columbia University lab. She is a White House Champion for Change, an ISTE Impact Award winner, a She Runs It Working Mother of the Year, and a TEDx speaker.

Jennie works to improve education globally. She served on the technical working group for the U.S. Department of Education's National Ed Tech Plan, the NAEP Delivery and Technology Panel, and the TeachAI Advisory Committee, and has been featured on NBC's Education Nation, C-SPAN's Reimagining Education, TEDx, and NPR. Her proudest achievement is her daughters, Lucy and Nora.

We're here with Jennie Magiera, the Global Head of Education Impact, and Shantanu Sinha, the VP of Google for Education. Welcome to the podcast.

[00:05:09] Jennie Magiera: Thanks for having us. [00:05:10] Shantanu Sinha: Great to be here.

[00:05:11] Ben Kornell: So let's just start a little bit with your journey. Shantanu, you've been on the pod before. So, Jennie, can you tell us a little bit about your journey here at Google, and how has your program evolved over time in terms of impact?

[00:05:22] Jennie Magiera: Oh, gosh, that's a lot. The TLDR is I always wanted to be a teacher. I started off in New York City public schools, then Chicago Public Schools. I taught for about a decade in the classroom, and then I did the traditional path of building leader, school district leader, instructional coach, all the things in between. I taught higher ed for a bit.

I helped work on the National Ed Tech Plan as part of the technical working group under the Obama administration. I kind of sat in all these different seats, PK to 20, and then along that path, I kind of found my way to Google for Education. And, you know, it's like the old thing: I'm not only a member, I'm also a client.

So, to be able to be someone who was uplifted and found such a light and support as an educator and an education leader through Google communities and solutions, to now be able to serve those same communities, is a gift. And so what my team really does is we look after all of the educator community as a practice around the world: connecting them, uplifting them, celebrating them, and then creating pathways to make sure that educators, education leaders, and now students understand how to best get the most impact out of our tools through training, professional development, and online resources.

So I get to serve my community that I love so much. 

[00:06:40] Alex Sarlin: I will get to you in a second; one quick question for you first. So I know that, you know, Google Classroom certification is a really interesting part of that world. I'm curious how you're thinking about AI certification for educators. I know you do a lot of work to support educators in sort of understanding and, you know, demystifying AI.

I'm curious what it's looked like when you're working with worldwide communities. 

[00:07:02] Jennie Magiera: That's such a good question, right? We're always getting the question, like, how are we certifying and enabling AI? And I think that's something I love about working at Google over, like, anywhere else in the world: at Google,

there are so many brilliant folks who are trying to do good and do well by the users that we serve, but with deep humility. And so that 'respect the user' value at Google is so prevalent. It's not just a thing that's on the website; it's the way people breathe and live here. Yeah. And in respecting the user and practicing that humility, AI is both new and not new.

You hear all these things about it: it's like suddenly here, and it's been here all along. And while it has been around for decades, the sudden explosion in the rapid growth and expansion and use of it in classrooms is something that we're being really respectful of. And so for us to say, like, this is a certification where you're a master of artificial intelligence pedagogy is not honest.

And so instead, I think we're trying to take a step back and be like, why do people want the certification? And I think it's because educators, parents, school leaders, system leaders, they want to know that as people are taking hold of these really powerful tools, that they're doing so in a way that is informed, that they've taken the time to understand it, like a driver's license of sorts. 

Yeah. And so while we're not saying, like, this is a certification the same way that we might have with, like, Google Classroom, which we more deeply understand as we're building the tool, we want to be articulate and transparent about what we intend for the tool to do, and then to help people say: are you using it the way that was intended?

Here's how you use it. And then naming that, so system leaders and school leaders can say, okay, you did the due diligence to understand the risks and opportunities; now you can use it. But I want to be really conscious that the reason we are not going full out, like we are with our certifications for educators around all of the other Workspace tools we create, is because of that respect for the user and operating with humility.

[00:09:03] Alex Sarlin: I think that matches how people are seeing AI, you know, cautious. And for you, you don't want to, you know, outpace the market and push people. Nobody knows what great AI pedagogy quite looks like yet. It's a great point.

[00:09:14] Ben Kornell: Yeah. So combining that with kind of our first conversation with you, Shantanu, which was really about all of the new feature sets that you were bringing to Google Classroom that leveraged AI.

It felt like a new moment. It felt risky. There was a lot of excitement. There was the OpenStax partnership, and now there's been a drumbeat of new capabilities and tools. You're almost kind of getting into the marathoner's pace here. Can you tell us a little bit about how Google Classroom has evolved since last spring, and also where it's heading going forward?

[00:09:46] Shantanu Sinha: Yeah, great question. So, you know, I think a little bit of what Jennie was hitting on: when we think about bringing in AI, it's about understanding the educator and learner use cases, and where we can turbocharge, where we can give that educator those superpowers. So when you think about where people are using it, they're using it in a lot of different ways.

That's one thing that's really interesting; it's kind of surprised me and been really amazing to see. Whether it's just brainstorming ideas of what to do in the classroom, or brainstorming ideas on what to write in an essay, whether it's taking content and transforming it from one form factor to another, generating quizzes: there are so many different things that a teacher can do with AI. So when we think about the role that Google can play, we're thinking about it a few different ways. One, we want to make our core models really good at this stuff, which is why we have LearnLM, and we're really trying to infuse pedagogy into that fine-tuning to make sure that we're really bringing that into the model.

Two, we're trying to make sure that all of our products, Classroom, Docs, Slides, Gemini, are taking those use cases that many people are already coming to these products for and making it as easy as possible to leverage AI. So in Classroom, we're creating tools where a teacher can do things like, you know, take some content and maybe relevel the text so that it's appropriate for a third grade student. In products like Gemini (and, you know, we recently rolled out that it's now available for teens), we're making things like Gems, where you can have different personalities, and now Gems can actually be grounded on content.

So you can create a Gem, and you can connect it with a textbook or with a specific set of curriculum and add a personality to it. And that just opens up all kinds of different possibilities. And then we also have, you know, across Google, products like NotebookLM, which I'm sure you've seen or heard about, where you can put a bunch of content in there and create totally different form factors, like an audio podcast.

And it's surreal when you listen to this stuff: it can take any content and really make it interesting and so much more digestible. So I think there are so many different ways across Google where AI is innovating in really exciting ways, and we're focusing on how we bring that to education users in a way that really ensures it fits their user journeys in a natural way.
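To make the grounding idea concrete, here is a minimal sketch of the general technique Shantanu describes: pairing a persona with source material in the prompt so the chatbot answers in character while staying anchored to a chosen textbook or curriculum. This is an illustrative sketch, not Google's Gems implementation; `call_llm`, `make_grounded_gem`, and the unit excerpt are all hypothetical stand-ins.

```python
# Illustrative sketch only; not Google's Gems implementation.
# call_llm() is a hypothetical stand-in for any chat-model API; it returns
# a canned string here so the sketch runs end to end.

def call_llm(system_prompt: str, user_message: str) -> str:
    return "(model reply grounded in the supplied sources would go here)"

def make_grounded_gem(persona: str, source_text: str):
    """Return a chat function whose answers stay anchored to source_text."""
    system_prompt = (
        f"You are {persona}.\n"
        "Answer ONLY using the source material below. If the answer is not "
        "in the sources, say you don't know rather than guessing.\n\n"
        f"--- SOURCE MATERIAL ---\n{source_text}"
    )

    def ask(question: str) -> str:
        return call_llm(system_prompt, question)

    return ask

# Usage: a debate-practice persona grounded in a hypothetical unit excerpt.
unit_text = "Unit 4 excerpt: arguments for and against school uniforms..."
con_side = make_grounded_gem(
    "a friendly debate opponent arguing the con side", unit_text
)
print(con_side("Open with your strongest argument against the motion."))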

[00:12:04] Ben Kornell: Yeah, it's such a fascinating point. My son Nico made a Gem for speech and debate. Yeah. One Gem is the pro, affirmative, and one is the con, negative, and then he could practice debating them. And it was really eye-opening around the diversity of use cases. And it's almost challenging edtech orthodoxy, which is: find a single pain point and drive value on that pain point.

And from what you're saying, it sounds like everyone's finding value in different pockets. It's almost like the flexibility of the AI is a feature. And so I think this creates real dilemmas for implementation, when there are so many examples of use cases, and at the same time, it's far easier to implement,

because the time to value for everyone is so ready-made. So especially across all of your surfaces, it's really interesting that the Google expanse is actually well suited, from a go-to-market standpoint, to the AI tech.

[00:13:03] Shantanu Sinha: Yeah, no, absolutely. I think that is where AI is just such a different technology because it is mastering language. 

And you think about that, and that's just mind-blowing, right? You're mastering language, and that starts to open up all kinds of different possibilities of what you can do. I wouldn't have predicted, like, a year ago or two years ago that these are exactly the use cases, these are exactly the ways that people are using it.

You have to create space for creativity, space for people to try out different things, but do it in a way that's controlled and safe and responsible, because in many cases, people don't know what the technology is fully capable of and where it's not. And in particular in educational contexts, you need to make sure that you aren't throwing it out there before it's really ready, which is part of the reason we really want to focus on the educators here, who can bring in that judgment and really make sure that when people are using it, they have that oversight, they have that visibility, and they're sure this is actually being done in the most pedagogically correct way.

[00:14:04] Alex Sarlin: Yeah. I mean, we've been trying to think about the sort of education AI tech stack, or the education stack. And what we increasingly are realizing is that the user, especially the teacher, is really an element of the product. I mean, they're not being handed something from a product the way, if you watch Netflix, you're sort of just absorbing. They are shaping the use case every day, doing all sorts of original, interesting things and tailoring it to whatever they need to be doing. I'm curious, Jennie, so you speak to educators all over the world.

I'm curious, for some of those who are really embracing that role of, oh, I'm going to do amazing things with this: what are some of the things that you're seeing them sort of jump into and say, oh, you know, I'm not just a passive consumer of this, I'm going to do something amazing for my students? What kind of things have you seen?

[00:14:48] Jennie Magiera: So many. I love this question. I was actually thinking, when I was listening to you two talk through how prevalent all the different use cases are, how we're seeing this wide range of, like, we don't even know how they're going to use it. Mm, totally. And I remember when I was a fourth grade teacher on the south side of Chicago using Google Docs, when it was consumer Google Docs, before it was Workspace.

That's such an open tool; it's literally a blank canvas. Literally, I open a Google Doc and it is literally a blank canvas. And I was a math teacher. I taught self-contained fourth grade math using Google Docs, and it just felt so magical. And I think it's because educators are a species of MacGyvers, right?

We're so under-resourced that you can give us a light bulb, a piece of duct tape, and a toothpick, and we'll build a car, right? And so, you know, AI all of a sudden is so much more than a toothpick and a roll of duct tape. It's a Ferrari. So instead of saying, build a car, it's like, okay, here's a fully built Lamborghini.

Now, what are you going to do? And what they're doing is incredible. We have this pilot network of educators around the world that we're calling the Google AI Fellows. And the 'how might we' statement we went into this with was: if we equip educators with the most powerful versions of LearnLM, of Gemini, of the practice sets that we have, and then give them support, community, and resources for three months, what will come out the other end?

And we started it because we've realized that, as a global education team serving the world, because our headquarters are in the United States, sometimes we start in the U.S. But let's be more equitable. So we started in Japan and Korea, where they've been doing some really wild things with AI forever.

And I don't say that just as a Korean person. At the end, we went to the showcase in Tokyo, and there's a music teacher, and the music teacher was crying, sharing their showcase. And they were saying how they have been teaching the same music class for decades, and in the limited time bandwidth they get, because they have to teach composition and music, they spend so much of it on the technical bits.

They never get to the human music composition. When they brought Gemini into the classroom, they said, okay, let me use Gemini to short-circuit this path to technical mastery, and then let's get right to the human part that AI can't do. And they said that in their decades of teaching music, they had never, ever seen more human, emotional pieces of music than when they brought AI in.

And AI wasn't creating the emotional bits; those were the humans, the students. AI was giving them a shortcut, so they had time, within a very concrete setting (you have X weeks of this course), to dive more deeply into it. And, you know, I wasn't there when they invented Gemini, but I don't think they were saying, let me help a grade seven music teacher in Tokyo get human compositions.

And it was one of the most powerful moments, because I was not expecting to hear that from the music teacher during the showcase.

[00:18:00] Ben Kornell: Yeah, there's a way in which the AI is actually creating the space to be more human than we've been able to be. I taught middle school: 150 papers to grade. By the time you're on the 20th paper, you kind of have some repetitive feedback for people, and this is really opening up educator capabilities. By the way, for those of you playing at home, we've got the Hair Club for Men reference and we've got the MacGyver reference, so if you've got your bingo card, just be ready. I've been really fascinated at this conference, talking about this reverence for educators and their role in the process.

Another thing that people have really been debating is the role of content. 

[00:18:43] Ben Kornell: And when open education resources were launched, more than a decade ago, many predicted this was the end of paid content. You know, content had been king, and now all the textbook companies were going away. And in a weird twist, it was the reverse: content remained incredibly valuable. And now, with AI-generated content, similar predictions are happening. And in so many ways, what you've built in Google Classroom and across the Google ecosystem is an incredible delivery system for content.

But how do you think about the importance, or lack thereof, or the role of content in education going forward, especially as you're supporting schools?

[00:19:23] Shantanu Sinha: Yeah, great question. And I think content is really interesting, because there are a few different trends that you're starting to see. One is the much more accessible creation of content, which is the trend we started to see with OER and with YouTube, for example, right?

Anybody can put up a video and start to teach something. And that's just continued; you know, with short-form video now, podcasts, etc., you're seeing more and more people participating. And honestly, teachers are some of the most prolific content creators out there. Every day, how much content are they creating? But often, being able to share that with the world, being able to get it out there, they didn't have all those tools, and now more and more of those tools are coming.

Now I think with AI, you're seeing a different dimension to this, which is the transformation of content, right? We can take a textbook, or we can take a PDF article, and transform that into a podcast, right? AI can allow you to do that, and it can allow you to move content into different form factors.

And I think we're in the very early stages of that. There are a few amazing demos that blow our minds when we see them. But if I think about where the next few years are headed, you really can imagine a world where you can move all content from text to video to interactive dialogue, all of this, seamlessly. So it's going to create a world where the creation and the consumption are much more democratized in a lot of different ways.

Now, what does that mean for the role of content? I think ultimately the importance of having high quality content, having high quality grounded content, remains with AI, right? And I think that's still going to be true, right? When you're working with an AI chatbot, you will be far more confident if you were doing physics and you knew it was grounded in the physics OpenStax book than if you're not. And I think that role is going to really remain, but I think it's also going to make this stuff much more accessible for a lot of different people to consume in a lot of different ways.

[00:21:28] Alex Sarlin: It's a great answer. And yeah, you mentioned the sort of grounded content, and this has been a theme throughout the day. I think it's relevant to everything we're saying here, which is that, you know, when Gen AI first came on the scene, people were saying, oh, but it hallucinates, and it's based on the whole internet.

So it's going to do all these weird things. And it starts to feel like we're entering a really new phase, where you can have content grounded in various types of things. It could be grounded, with LearnLM, in all this pedagogical research. It could be grounded in an educator's individual pre-existing material, or the standards. You know, the idea of being able to transform content with the context of exactly what you're trying to accomplish, without it just being general purpose, feels very rich, and it feels like something that's happening across the Google ecosystem.

So I'm curious about the grounding that you see, Jennie. You know, teachers can say, I want this lesson in this format: it can be a podcast, I want it to be an image, I want it to be a game. They can also say, and it has to meet these standards that I'll upload here. And they can say, and here are things that my students love.

So I'll also upload this. You can sort of combine things in a way that we've never seen before and create new content that juxtaposes all sorts of different aspects. I'm curious how you are thinking about that. Like you say, the teachers have a Ferrari. They can do things they've never even imagined before. 

How are you thinking about that as a team? And how are you seeing educators think about that capability? 

[00:22:51] Jennie Magiera: I think that it's really interesting for me. As you were saying, AI is, you know, trying to master language. And the other thing I think that's really compelling about AI is that AI is what we feed it, right?

Right. Like, if we starve it, it's less good. If we feed it more, it's even better. If we feed it not-so-great stuff, it's garbage in

[00:23:10] Alex Sarlin: and garbage out. That's the Cookie Monster reference.

[00:23:14] Jennie Magiera: I think that it's both a risk and an opportunity at this time. While we're just going down the road of pop culture: I was reading a book about a winery recently. I don't know a lot about wine, but while we're in the Bay Area, let's just say,

go there. Apparently they have, like, this starter that makes the wine, and every starter is different. That's why every wine is different, and you can create it with different things, everything from the pollen in the air, whatever. And each starter is, like, this specific alchemy.

And for any vintners or wine people out there who are like, that is not actually how it works: it's just a metaphor. Yeah. But it made me think about the beginning of the school year. Yep. And at the beginning of the school year, I remember every year we would start with our core curriculum. And so I'd sit down with my grade four math teachers, and it was like a three-inch binder, and we'd boom it on the table, like, all right, we're teaching lattice multiplication.

So let me bring out the same lesson I've taught for a decade. And then over time we got super savvy, and we'd open up our Google Drive, but it was still like, yoink, the same thing we printed, only now it was in Google Drive, which you could do a little bit of edits to. But imagine if the intermediate math teams got together in this way on pre-service teacher days, and we were like, okay, let's feed the starter. Let's bring in the latest research on math education. Oh, I just read this. I just listened to this amazing EdTech Insiders podcast about education; let me feed it into the model. I just went to NCTM; let me feed that into the model. Oh, I just learned about Singapore math. Boom.

Let me feed that into the model. Oh, I just learned about Singapore math. Boom. And then now I'm creating this like mixology of this like really powerful engine that I'm going to interact with all year. That's based on like the most magical rich up to date current stuff. And I think that content providers like the big box publishers can play a role in that. 

They can be part of, like, cooking the stuff to feed the initial mix, because we need the good stuff. Without the good stuff, we have nothing to feed them. So we need content creators to help us make that alchemy. But then I think it's the teachers who put the recipe together and work collaboratively to do that.

[00:25:22] Ben Kornell: Amazing. I mean, we see people on a spectrum from creator to compliance. And, you know, I was in a No Child Left Behind scripted-curriculum district. It felt so far skewed to compliance that we couldn't personalize it for the learners. And some of that was practical: we just had a lot of learners. It's very hard to personalize. So it does feel like the new technology is not just a breakthrough in what the learners can learn, but also in where on that spectrum the educator can sit.

[00:25:57] Jennie Magiera: Can I say something about that? Is that okay? [00:25:59] Ben Kornell: Yeah, yeah, do it.

[00:26:00] Jennie Magiera: So, like, I was also in an NCLB district, a big school district, and I had people come into my classroom and monitor.

They'd be like, oh, you're supposed to be on lesson 4.3. They'd check my

[00:26:08] Ben Kornell: page. Yeah, you'd be like,

[00:26:09] Jennie Magiera: Oh, you're on 4.4, demerit. And I think part of that was about control, because there was no progress-monitoring way to do that at scale. The only way they could do quality control, with a school district of, like, 500 schools, was: this is something I can monitor. Now AI can help me as an educator be more creative, and it can help system leaders too. Because back then I used to think, oh my God, my fascist overlord school district administrator. And now I get why they did that. They're soulful and humble people, and this was how they were trying to help.

But now, with Google building out more tools, I look at the opportunity of AI as a school leader as well: to be able to ask, how do I progress-monitor teaching and learning in a way that is less fascist overlord and more what we're all actually trying to do as school system leaders, which is just supporting the instructors in our system

[00:27:03] Ben Kornell: and sharing best practices and scaling what works.

Yeah, that really resonates with me. What it brings to mind is probably my favorite topic in education, which is assessment. There's a degree to which Google Classroom has resisted getting into the assessment wars, and Google has tried to be open. And now you have quiz features, and now, again respecting the educator, you're bringing assessment into the fold.

But do you see a future where AI-enabled assessment is part of the tools for educators going forward?

[00:27:35] Shantanu Sinha: Yeah, I definitely think, going forward, if you look at how learning and education are evolving, we really do have to rethink how we ensure we're assessing the right skill sets, right? Earlier, I was using this analogy around a bicycle, right?

When my son was learning to ride his bicycle, we put training wheels on it, and he got from point A to point B really great. He thought he was great at bicycling, but as soon as we took them off, he couldn't bike at all. And one of my friends came and told me: well, pedaling isn't the hard part of biking, balancing is the hard part of biking.

You're totally focusing on the wrong skill set here. It's not about the destination, getting from point A to point B; it's about the skills you're exerting along the way. And I think AI is doing that in many ways, right? A lot of people can get to the end point. They can write an essay that looks good.

They can get an answer on a sheet of paper, but that wasn't what learning was about. Learning was about the skill sets. It was the balance; it's the skills that you're trying to exert along the way. And so if we're still assessing people on whether they get to the destination, we're assessing them wrong, right?

We're not assessing the actual skills we're trying to develop. And it's really important as we evolve, because people are going to look at this and they're going to try to say: how do I make sure I'm developing that critical thinking? How do I make sure I have that communication skill? How do I make sure the group work, all of that, is really happening?

I do think that is where AI has the potential to help us rethink that as well, because the assignment can now be very different, right? If you're in a language class, instead of looking at a five-paragraph essay at the end of it, maybe you have a whole conversation debating a topic in French with an AI, and you record that and it synthesizes it. That is clearly exerting the skills that you're really trying to get at, using AI in a much more creative way to get there.

So I do think that is how it is going to evolve. It's going to take time; we're going to have to figure out all the different ways to apply it. But it is so important for us to focus on the skills that matter and make sure assessments are actually assessing the skills that matter.

[00:29:48] Jennie Magiera: I agree completely. And I think that's what gets me so excited, right? There's a story about my interview to work at Google. In my last interview, she asked me, okay, Jennie, if you had a magic wand wish for what Google could build, what would it be? And this was, you know, five years ago now, before all this new wave came through.

And I said, I would love for Google to build something that disrupts assessment. As an educator, I felt this magnetic pull towards testing, and it was not actually what I cared about in my students' learning, but it was the only way to assess. And the stuff I actually cared about, I didn't have a way to assess at scale.

And you know, it's like that Andy Hargreaves quote: don't value what you can measure, measure what you value. But we didn't have the tools to measure what we value. And now with AI, we can measure exactly what you're saying: not the five-paragraph essay, but the conversation in French, or the process of it.

And that to me, like that gets me so stoked. 

[00:30:50] Ben Kornell: Well, and a theme of the conference has been metacognition, and, you know, with the calculator, showing your work. Now let's take that to the logical extreme. What I think, product-wise, is so unique about this moment is that before, you had to choose one assessment system, but now you could essentially ingest any assessment system, or make a concoction of assessment systems.

So the product opinion is going to have to be about your own workflow, and less about, it's this state standard or that state standard, or we're ingesting these frameworks or those frameworks, because the AI is so good at adapting those frameworks to any system. So I think we're just seeing, uh, like

[00:31:36] Alex Sarlin: a renaissance in assessment. And I think Google Docs has been a step in this direction for a long time, because it has version control.

You can see the entire history of the creation, similar to your sort of assessing the process, and it has collaboration built in, right? I mean, as a teacher using Google Docs for math, I'm sure this was part of it: people can work together, and the entire process is structured. Do you imagine a world in which an AI can take the entire history of a Google Doc, and that's the process? It's not the end product, it's not the essay at the end; it's everything they did, including conversations with AI part of the way through, including collaborations, including how they respond to the teacher's comments halfway through.

I mean, it feels like it's there for the taking. 

[00:32:21] Shantanu Sinha: Yeah, I think one of the most exciting applications of AI that I see is the ability to pull insights from large amounts of data, insights that humans could find but never had the time to, right? And there are so many examples of this. I mean, even at Google, there are demos around, like, how you could take maybe your Nest video footage, right?

And say, well, who moved the ball? Right, exactly. Maybe I could watch all of that to figure it out, but now with AI, you can do that, right? Google Photos does really interesting things of that form. And I think similarly, in education, being able to pull insights from all the assignments, all the data we are collecting now that a lot more has been digitized, and bringing it in a way that a teacher can quickly understand what's happening there.

I think that is really, really powerful. And I think the challenge with that is: it is very important that when it's pulling insights like that, you put it into the hands of an educator who understands the limitations and can verify it correctly. Because the last thing you want to do is pull conclusions when maybe you didn't have all the data, or maybe you didn't see the full story there. But I do think it can help people identify the places to look deeper in a much, much more compelling way.
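As a concrete illustration of the idea Alex raised and Shantanu's caveat about oversight, here is a hedged sketch of how an AI might summarize a document's revision history to surface process rather than product. The `Revision` structure and `call_llm` are hypothetical stand-ins, not any real Google Docs or Classroom API.

```python
# Hedged sketch: pulling process insights from a revision history.
# The Revision structure and call_llm() are hypothetical stand-ins; this
# is not a real Google Docs or Classroom API. call_llm() returns a canned
# string so the sketch runs end to end.
from dataclasses import dataclass

def call_llm(prompt: str) -> str:
    return "(process summary, with items flagged for teacher review)"

@dataclass
class Revision:
    timestamp: str
    author: str        # student, collaborator, or teacher comment
    diff_summary: str  # what changed in this revision

def review_process(revisions: list[Revision]) -> str:
    """Summarize how a piece of work evolved across drafts and feedback."""
    timeline = "\n".join(
        f"- {r.timestamp} ({r.author}): {r.diff_summary}" for r in revisions
    )
    return call_llm(
        "Here is the full revision timeline of a student document. "
        "Describe the student's process: how ideas developed, how they "
        "responded to feedback, and where they might need support. "
        "Flag anything a teacher should verify before drawing conclusions.\n\n"
        + timeline
    )
```

Note the final instruction in the prompt: it bakes in Shantanu's point that such insights should land with an educator who can verify them, rather than be treated as conclusions.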

[00:33:39] Alex Sarlin: And that human in the loop, teachers staying at the center of the experience, has also been a theme of the day. I think we've heard that from everybody all day: there's absolutely no desire to come between teachers and students, you know; it's really about supporting teachers, educators.

[00:33:53] Ben Kornell: Well, and it feels like we're in an unlock moment, and it feels like we have these kind of pre-technology pedagogic challenges, and we finally have the technology intersecting with the education. You know, we always call it edtech because the ed comes first and the tech comes second, and we finally have some real opportunities to meet the challenges in education.

Well, this has been a fascinating conversation, a great way to end the day. Jennie, Shantanu, thanks so much for joining EdTech Insiders. Great.

[00:34:22] Jennie Magiera: Thank you for having us.

[00:34:24] Alex Sarlin: In this next conversation, we had the privilege of talking to Steven Johnson, who's the Editorial Director of NotebookLM and Google Labs, described by the Wall Street Journal as, quote, one of the most persuasive advocates for the role of collaboration in innovation.

Johnson is a bestselling author of 14 books on science, technology, and the history of innovation, including Where Good Ideas Come From, The Ghost Map, and his latest, The Infernal Machine. He's also the co-creator and host of the Emmy-winning PBS/BBC television series How We Got to Now and Extra Life, and is a contributing writer at the New York Times Magazine.

In addition to his work as an author and television host, Johnson co-created one of the first web-only online magazines, FEED; the Webby Award-winning community site Plastic.com; and the hyperlocal news service outside.in, which was acquired by AOL in 2010. He was awarded the Newhouse School Mirror Award in 2009 for his journalism, and he recently received the Pioneer Award in positive psychology from the University of Pennsylvania.

He lives in Marin County, California, and Brooklyn, New York, with his wife and three sons. We're here with Steven Johnson, author extraordinaire, also Editorial Director of NotebookLM and Google Labs. Welcome to the podcast.

[00:35:46] Steven Johnson: Thank you. Good to be here.
[00:35:46] Alex Sarlin: So first off, NotebookLM really came out of, absolutely came out of your mind. 

And out of your process, I think, or that's the sense I got. Tell us about the origin of NotebookLM. It's going viral now.

[00:35:57] Steven Johnson: Yeah, it's been really fun to see. I mean, it came out of a bunch of people's minds, but it did have some roots in my decidedly not-normal research and writing habits. And there's a kind of joke on the team that the vision of Notebook was to try and take all of my strange habits and turn them kind of more mainstream so that other people can do them

[00:36:17] Alex Sarlin: As safe as it can be.

[00:36:19] Steven Johnson: So basically, I've written a lot of books over the years, and I've always been really interested in using software to help me with that process. And so anytime there were new tools that helped you with research or with writing, new experimental word processors, I was always the early adopter, dating back to when I was in college and HyperCard came out for the Mac in the late eighties. That's how old I am.

And so over the years I'd written, I'd had kind of like a side hustle writing about the tools I was using to write. And I would write I had kind of deliberately started this practice in the late 90s of, like, capturing digital versions of all the quotes that I was using as research for my books, because I wanted to just be able to, like, search them, like, Command F search them. 

And over time, you know, new tools came along that helped you kind of organize. Notes and quotes and things like that. And I would use those and write about that. And so it had become a, you know, kind of like a little, I had a kind of side reputation as like a, uh, a research. Suffered so in early 2022, I wrote a piece for the times magazine that was about basically about GPT 3 and kind of saying, like, people like computers have mastered language. 

Like, this is. So don't forget about AGI and all these other things, like just this fluency is going to be revolutionary for everything we do with computers and which was a bizarrely controversial piece at the time. I get a lot of pushback for it, but two people at Google who just co founded really Google labs played before who since left and Josh Woodward now runs labs had read a bunch of my books and they read that piece and they were like, I wonder if we could get Steven to come to labs part time, Basically like build this software that he's always wanted. 

He's been chasing his whole life now powered by language models. Amazing. So it just kind of like cold called me out of the blue, like cold emailed me and we're like, Hey, they pitched me on this idea. And I said, that sounds really fun. Like, where, like, where do I report? So I started. Part time in the summer of 2022, like kind of five months before the chat should be T moment. 

And we just had like basically three of us in the early days, just building a little prototype. It was this idea that's part of the labs culture, actually at Google, which is to like co create with people who are not necessarily technologists. So like, if you're going to make a tool for thinking and writing and research, like have a writer in the room at the beginning. 

So suddenly different way to develop the software. And yeah, we've just been kind of like. Hacking away at it ever since I ended up like after about a year. I was like, okay, I'm thinking about this 120 percent of the time. I'm a full time employee here. It's a strange twist of fate, um, but it's been really fun. 

[00:39:02] Ben Kornell: You know, notebook LM has started to have a life of its own. You know, online people are liking it to Gutenberg printing press as like a moment in time. That might be a little overstatement, no offense. But, you know, when did you know that it was taking off and that this wasn't just a fun beta, but actually something that was providing real value to millions of users lives? 

[00:39:24] Steven Johnson: Yeah, it was really like that. It's exactly the kind of evolution of my thinking, which is like, when I first got here, I was like, could we make a prototype that might influence Google, like internally and also be like fun for me to use, you know, that was kind of that we did very quickly. And then at some point last year, I think we were all like, could we make something that. 

You know, millions of people would use and find like helpful and help them have better ideas or learn faster and things like that. And that really is, it started to have, we went international part of this is just completely, it's, you know, it's native in a hundred languages. And so you can have a conversation and, you know, Spanish about documents that are written in Japanese and, you know, so we did two things in kind of June, we moved on to the new Gemini pro model, we rolled out these inline citations, so we should. 

Say that, like, you know, notebook is all about. You upload your own documents, whatever material you need to do your work and the AI then effectively becomes an expert in the material that you've uploaded and every answer is grounded in that source material. And we have 1 of the features that really, we have a state of the artist. 

We have these inline citations, so that every answer has little footnotes and you can click directly on the footnote and go directly to the original passage and read it. So you're always like, it's a deeper way of exploring other material rather than just like. I don't know where the model got that information. 

It sounds plausible. Like, so in June we had kind of rolled out all those features and that was the point at which we were like, Oh, it's really working. We started to see it go viral. It went viral in Japan. First, like Japan was for a brief period of time, our like biggest market, which was crazy. And we were just hearing from like lots of different kinds of people were figuring out how to make it, you know, like there were corporate uses or obviously student, you know, educator uses role playing game enthusiasts, really. 

Putting their like Dungeons and Dragons campaigns in there and using that way. So you could see like it was starting to resonate, but the problem with it was. That the best way to really appreciate how useful it was, was to, you know, load a complicated set of documents, ask a very nuanced question, get a very sophisticated answer, click on the citations, all that stuff, which is very powerful when you do it. 

It is not something that plays very well on TikTok. Right. Like, it's not a viral thing. And so over the summer, we started developing this audio overviews feature that will, instead of answering questions about your sources, it will turn them into an engaging 10 minute. Podcast style conversation. Yeah, we 

[00:41:50] Ben Kornell: call it the EdTech Insiders Killer , uh, feature. Thank you. Thank you for that. That helps. 

[00:41:56] Steven Johnson: We can get into that. We can get into that. But so we started testing it internally inside of Google, and people were just like, what the hell? You know? And so this summer as we were developing it, we were like, okay, we were pretty confident it was gonna be ahead. I think we did not anticipate that. 

Become quite the global phenomenon that it was, that it became, but that was, yeah, I think when that started, when that dropped in early September, and we were suddenly like being talked about it, like late night talk show, you know, we're just like in the zeitgeist in this way. And I think. You know, there were a couple of things happening with the audio overview. 

So it's interesting. It's an interesting innovation story actually, because like if you had, I'm like a big believer in the jobs to be done kind of philosophy, like figure out what the user unmet needs are and build around that. We do a lot of that with notebook, but you could have interviewed like a thousand people who were talking students or whoever, and ask them like, what do you need? 

And no one would have said, like, I need a simulated podcast about my material. Like I just wouldn't have come up. So it turns out to be one of those places where like. The technology actually drove the exposed a kind of new possibility that ended up being really useful and magical in this way. I think part of it is obviously that the underlying audio tech is really good. 

Like the voices, the intonation of voices, all the subtle things that it does with the voices, like what I'm doing right now, that, you know, basically no computer in the world could do until these models came along, the voice models came along. But the other thing I think people were experiencing, like most normal people had not mainstreamed. 

Consumers had not actually experienced an AI that was grounded in their information. And so it was actually a combination, I think of audio tech. And then this idea of like, I gave it my journals and it generated this very sophisticated conversation about like me. And so that was, I think a big kind of eye opener for people as well. 

And so now like, So now we do have millions of users. Now our like ambition is like, you know, we really want to, we really think that notebook is a genuinely like kind of new kind of platform for interacting with AI and with ideas and maybe hopefully a way of. It could become a new kind of marketplace for it. 

Like, what happens if you sell kind of compilations of knowledge that can be explored or transformed in various ways inside a notebook? So our biggest problem right now is we just have too many things that we want to do.

[00:44:25] Alex Sarlin: Yeah, I can imagine. I want to double-click on that idea you just said, which I think was so interesting: that people hadn't experienced, you know, LLMs that were really 'trained' on a subset of information that was specialized to them, that they had chosen, uh, literally a notebook.

You can sort of check and uncheck any individual document and say, I want this to be included or not. And, you know, you've written many books, books about ideas, about innovation, about piracy, you know, and I'm sure you have such an interesting model of bringing together lots of things, lots of different primary research, and trying to put it together and synthesize it into a book. It feels like this podcast is a different way to synthesize information that could be incredibly useful for learners. So a couple of related questions here. One is: so NotebookLM is built on top of LearnLM.

[00:45:11] Steven Johnson: No, no, it's not. It's built on top of Gemini. Got it. And LearnLM is very cool too, but we basically build on the underlying Gemini model. There's this underlying audio model that supports the audio overviews, but basically we've built our own kind of set of prompts that control the flow of information inside of Notebook.

[00:45:31] Alex Sarlin: Got it. Got it. So when a learner or a journalist or a journal keeper, you know, anybody, uses NotebookLM, they're literally sort of training its mind to be thinking in the ways that they want it to be thinking: to be referencing the articles they've read, the books they've read, their own journals. I'd never even thought about that use case.

So that feels like a step change in people's sort of understanding of what LLMs can do compared to open models. 

[00:45:57] Steven Johnson: Yeah. So that's exactly right, except that we should put quotes around 'training,' because it is not actually training the model. Training is a long process that takes time, and that would then expose whatever information you share to other potential users of the AI.

So all we're doing is taking your information and putting it in basically the short-term memory of the model, the context window of the model. And we kind of show it to the model and say: answer this question or follow this instruction from the user based on this information we're showing you here. And we've built a system so that in just one notebook, you can have up to 25 million words of information.

Oh, wow. In the notebook. So effectively, think about that quote collection that I've been creating: I have a single notebook where I have, you know, 8,000 quotes from books that I've read over the last 20 years. And I now have the full text of, like, six of the books that I've written.

Eventually that notebook will have all those quotes and everything I've ever published. Sure. And so the AI in that notebook is, in a way, an incredible rendering of, like, my intellectual... Your mental model, yeah. And in some ways, it's better than my mental model internally, because it remembers things better.

Good point. And so the new kinds of things that are possible, that you're kind of alluding to, are like: I can go to that notebook, and I do, whenever I'm exploring a new idea. I'm like, oh, I'm thinking about something about AI and urban planning. What do I have in here that would be interesting or relevant to that?

Like, let's kind of brainstorm about this. And it's like, oh, well, that Jane Jacobs quote that you wrote about 25 years ago is kind of relevant here. And that other thing from Brian Christian in The Alignment Problem, he talks about cities and AI. And it will just weave together and make all these associative connections.

And so it really does feel like a true collaborator in this sense. But what's crucial is that I curated that. I set up all those documents; I did the work of creating that. So it's really kind of a three-way conversation: it's me currently, in real time; past me, who spent all this time curating; and then this new form of intelligence in the middle.
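To make the point about the quotes around 'training' concrete, here is a hedged sketch of the pattern Steven describes: nothing updates the model's weights; the curated sources are simply re-sent in the context window with each question. The `Notebook` class, the word budget, and `call_llm` are illustrative assumptions, not NotebookLM's internals.

```python
# Hedged sketch of in-context grounding versus fine-tuning: no weights are
# updated; selected sources ride along in the context window on every query.
# call_llm() is a hypothetical stand-in for a large-context chat model; it
# returns a canned string here so the sketch runs end to end.

def call_llm(prompt: str) -> str:
    return "(answer synthesized from the enabled sources would go here)"

class Notebook:
    """A curated set of sources, each of which can be toggled in or out."""

    def __init__(self, word_budget: int = 25_000_000):
        self.sources: dict[str, str] = {}  # title -> full text
        self.enabled: set[str] = set()
        self.word_budget = word_budget     # cap on words sent per query

    def add(self, title: str, text: str) -> None:
        self.sources[title] = text
        self.enabled.add(title)

    def toggle(self, title: str, on: bool) -> None:
        (self.enabled.add if on else self.enabled.discard)(title)

    def ask(self, question: str) -> str:
        # Past-me curated the sources; current-me asks; the model mediates.
        chunks, used = [], 0
        for title in sorted(self.enabled):
            words = len(self.sources[title].split())
            if used + words > self.word_budget:
                break
            chunks.append(f"## {title}\n{self.sources[title]}")
            used += words
        return call_llm(
            "Answer from these sources only, and note which source each "
            "idea comes from:\n\n" + "\n\n".join(chunks)
            + f"\n\nQuestion: {question}"
        )
```

Because the grounding is per-query context rather than training, unchecking a source removes it from the model's working 'memory' immediately, which is exactly the check-and-uncheck behavior Alex describes above.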

[00:48:06] Ben Kornell: You know, the thing is, of course, Alex and I have built custom GPTs, custom Gems, all of this, and in a way, my first interaction with NotebookLM felt like a derivative product of a custom Gem, because you provide the 'training' material. While I'm fascinated by the technical side, and also, you know, the revolution: in the early days, we thought it was going to take five years for the context window to be this big, and then here we are and it's essentially going to infinity.

I actually think a really compelling story of NotebookLM is the user interface as the unlock. If you look at ChatGPT, part of its success was the user interface. NotebookLM feels like a leap forward in the guided user experience at the beginning: when you go in, not only can you upload documents, but it suggests use cases for you.

And I feel like this is where humans want to be in the loop. We want to help 'train' it, but we also want to be guided to use cases and solutions. How did you kind of crack the code on that? And it's not just a podcast; I mean, there are a number of use cases that it prompts people with.

[00:49:16] Steven Johnson: Yeah. Thank you for that. 

That's a great way of describing it. Yeah, I was saying very early on, there's underlying technology here, but it's really a UI thing. And you've got to remember, when we were first working on it, even after ChatGPT, everything was like: the way you interact with language models is through, like, simulated text messaging, right?

You know, it's like, surely there are different ways. And the other kind of advantage we had was that the source grounding was built into it from the very beginning. The very first thing we had, with a very limited context window, was: take some paragraphs from this text, and answer questions based on them.

So we knew that was going to be there from the beginning. So everything we've built has assumed there are going to be documents associated with it. Just being able to view the documents that you are using as a source is incredibly important. It is. You want to be able to jump back and read the original text and do things with that.

And just over time, we started thinking, okay, maybe there are other things. So we created the tool that predates Audio Overviews, Notebook Guide: when you upload your sources, Notebook Guide pops up, and there are a bunch of options right there for you. Like: I can turn this into a study guide.

One click. Turn this into a briefing doc, one click. Now, one click into an audio overview. We wanted to show people the capabilities. And then the other thing we got from talking to students early on is the suggested questions. So, you know, it's always taking your sources,

looking at your conversation history, if you have some, and saying, hey, another follow-up question might be this. And that came from user research with students, where they were like, I don't know what to ask. And it was one of those things that had never occurred to me, because I'm a journalist; I have a million questions.

But I was like, oh, smart. We can just use the model to figure out what questions are answerable and relevant in these sources, and post them. And so we're constantly thinking about what the right surface is for interacting with this model. And in fact, we're about to roll out something new. We've underinvested in the writing and note-taking component of it.

And that is the part of the UI that's the jankiest and the most kind of embarrassing, because we just haven't had time to clean it up. But we have a really new, pretty radically new surface that is coming out very soon that will be even better. So there's a model we're developing where you have the sources, the inputs that you want to use to shape the intelligence and expertise of the model.

Those are there on the left-hand side. In the middle is where you have conversations. And in this new model, on the right-hand side are, in a sense, the outputs. So you want to take these inputs, talk about them, explore them, but then you want to create an audio thing or an FAQ, or maybe some new formats down the line.

And so there's going to be this kind of left to right flow. 
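
(A hedged sketch of the suggested-questions feature Johnson described a moment ago: the model itself proposes follow-ups that the sources can answer. The helper names are this sketch's assumptions, not the product's code.)

```python
# Hedged sketch of suggested questions: ask the model for follow-ups that
# are answerable from the user's sources and relevant to the conversation.

def generate(prompt: str) -> str:
    raise NotImplementedError("placeholder: wire this to your LLM of choice")

def suggest_questions(sources: list[str], history: list[str], n: int = 3) -> list[str]:
    conversation = "\n".join(history) if history else "(no conversation yet)"
    prompt = (
        "Sources:\n" + "\n\n".join(sources) + "\n\n"
        f"Conversation so far:\n{conversation}\n\n"
        f"Suggest {n} follow-up questions that are relevant to the "
        "conversation and answerable from the sources alone, one per line."
    )
    # Split the model's reply into individual questions, dropping blanks.
    return [q for q in generate(prompt).splitlines() if q.strip()]
```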

[00:52:13] Ben Kornell: Well, and then of course you're integrating the full Google suite, so it could be a Google Doc or Google Slides. So there's a way in which that embeds. I think the UI element often gets dismissed as just a wrapper, and I really feel like NotebookLM is a great example of how UI adds value but also creates stickiness.

Cause once you have a number of notebooks that you've created, you're not going to go anywhere else. It takes a lot of effort to build your mental context on an AI. And documents

[00:52:46] Alex Sarlin: uploaded as well. If you start to have it sort of replicate, you know, a school subject or the research you've done or your journals or all your Dungeons and Dragons campaigns, you don't want to upload those again to another place.

So you've mentioned several features that are really student-friendly: translating content into study guides, very directly student-friendly, into notes, into an audio podcast as a different way to study. To literally say, instead of reading this chapter of a textbook, I want to listen to a podcast about it.

You mentioned that it's not something people might have asked for, but it's something they would have loved to have, and obviously do love to have. We talk to a lot of educators who are on every different side of the table when it comes to AI. This feels like it could be a game changer for educators as well, particularly because you can really control the sources that it's using.

That could be a curriculum; it could be pedagogical research. I'm curious what you've seen in that space.

[00:53:36] Steven Johnson: Yeah, I love that use of it. So one of my favorite demos to do, which I actually may do later today, is using Notebook as a lesson plan builder. A lot of people think of the sources that you upload as the source of truth, but they can actually be used in a bunch of different ways.

So the example I give is: someone wants to create a project-based learning curriculum on urban planning based on the work of Jane Jacobs. Yeah. And so they upload some quotes from The Death and Life of Great American Cities and a YouTube interview with Jane Jacobs from before she died. So that's the content; that's what the course is going to be about.

And then they upload some handwritten notes they have, roughly sketching out an outline for the course. And it's like my handwriting; it'll just read handwriting. It's fully multimodal.

[00:54:23] Alex Sarlin: Amazing.

[00:54:23] Steven Johnson: So you upload those handwritten notes, and then you upload a kind of standard, stock curriculum guidance document from the state or whatever, like a high-quality project-based learning document.

And that's not so much the content as a set of, yeah, exactly, kind of best practices. And so the query is: build a first draft of a lesson plan based on this content and this handwritten outline that is compliant with this document, and explain why it's compliant for each section of the class. And Notebook is just, like, 30 seconds later:

There it is. It's kind of like: that was just five hours of work, and I got to a first draft in 30 seconds. So that kind of stuff is really, really cool. And you can think about it in terms of personalization with the students too. You're like, okay, I want to do this kind of creative class.

I have these 10 students who have these interests. Here's a list of the students and their individual interests. Here's the list of the content of the class. Create a custom curriculum for each of those students based on their interests, based on this material. It'll just do it. And so that kind of creative remixing of the sources, and using them in different ways, is really,

[00:55:36] Alex Sarlin: it's really powerful. 

And the interests don't even have to be things that teachers or educators know. No. I mean, if the kids all love Pokemon, you upload the Pokemon guide. If they love Taylor Swift, you upload all the lyrics. You don't have to know them, and it'll help put it all together. Yeah. Yeah.
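
(A hedged sketch of the lesson-plan demo and the per-student remix just described: sources playing different roles, content, an outline to follow, standards to satisfy, plus one custom plan per student. All names and role labels here are illustrative assumptions, not a NotebookLM API.)

```python
# Hedged sketch: sources in different roles (content / outline / standards)
# combined into one grounded request, plus a per-student personalization pass.

def generate(prompt: str) -> str:
    raise NotImplementedError("placeholder: wire this to your LLM of choice")

def draft_lesson_plan(content: list[str], outline: str, standards: str) -> str:
    labeled = "\n\n".join(f"[Content]\n{c}" for c in content)
    return generate(
        f"{labeled}\n\n[Outline]\n{outline}\n\n[Standards]\n{standards}\n\n"
        "Build a first draft of a lesson plan from the content, following "
        "the outline, compliant with the standards document; explain why "
        "each section is compliant."
    )

def personalize(content: list[str], interests: dict[str, str]) -> dict[str, str]:
    # One custom plan per student, steered by that student's interest.
    body = "\n\n".join(content)
    return {
        name: generate(
            f"Content:\n{body}\n\nCreate a custom mini-curriculum for a "
            f"student whose main interest is {interest}, grounded only in "
            "the content above."
        )
        for name, interest in interests.items()
    }
```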

[00:55:49] Ben Kornell: This is where I feel like a guided, educator-facing UI version, almost like selecting different UIs based on your role, could be really, really powerful.

[00:56:00] Steven Johnson: We're working on that.

[00:56:02] Ben Kornell: And just to go one step further, imagine a student's IEP and all of their records. A classic challenge in education is that kids move from school to school, or move up in grades and have different teachers, and you have all of their educational plans and resources, whether it's special education or general ed, and it's impossible to query that source of truth and gather insights.

I think this is why we're such long-haul techno-optimists. Can you tell us a little bit about challenges you're experiencing, and friction in the market or in use cases? Google obviously is excited about AI, but it also has a risk tolerance threshold. How are you navigating all of the nefarious things one can do with AI?

[00:56:50] Steven Johnson: Yeah, I mean, source grounding buys us a lot, in the sense that it reduces the hallucination risk dramatically, right? It lets you personalize, and it does all those kinds of things. It lets you, for instance, in the classroom, theoretically say: okay, you can have a notebook.

You can fill it with these sources that I've assigned to you. But the AI will exclusively restrict itself to the information in those sources when you're in that notebook. And so you have the confidence that the student can use AI, but they're using it in a grounded way.
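
(One way to picture that classroom restriction: a system instruction that confines the model to the assigned sources. A minimal sketch assuming a generic chat-style API, not NotebookLM's real internals.)

```python
# Hedged sketch: a system instruction that confines the model to assigned
# sources, the classroom guarantee described above. Wording is illustrative.

def classroom_system_prompt(assigned_sources: list[str]) -> str:
    return (
        "You are a study assistant for a class notebook. Use ONLY the "
        "sources below. If the answer is not in them, say so; do not "
        "speculate or draw on outside knowledge.\n\n"
        + "\n\n".join(assigned_sources)
    )
```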

So it helps with a lot of that stuff. But, you know, we have basic low-level safety guardrails in the product. So if you try to upload something really offensive, you won't be able to do it. If you try to ask dangerous questions, you won't be able to do it. But if you upload the Flat Earth Society manifesto or something, and there is such a thing,

then, you know, the AI will be grounded in the belief that the earth is flat. And that's actually interesting; it's possible the model would kind of push back on that.

But to some extent, you know, we had this issue with audio overviews and politics, right? Without any feedback from us, it seemed as though the hosts had a tiny left-wing bias to them. And so we added instructions saying: hey, if anything seems political in this material, announce that you're not taking sides, and just report empirically on what the document says.

So if you upload Project 2025, they should just say: we're not taking sides here, but this is what this document says.

[00:58:23] Jennie Magiera: Which we felt

[00:58:24] Steven Johnson: was the right choice to make, instead of saying we can't cover anything political, or taking sides, which we obviously didn't want to do. So there is a sense in which source grounding means that you are dependent on the quality and the perspective of the sources that you upload. And if you want to upload a lot of nonsense and learn more about that nonsense, for the most part, Notebook will help you do that. I just don't necessarily think there's a way around that. I don't think it makes the problem of nonsense in the world worse.
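
(A sketch of the kind of steering instruction Johnson describes adding to the audio hosts; the wording below is invented for illustration and is not Google's actual prompt.)

```python
# Illustrative version of the neutrality instruction described above;
# the exact wording is invented for this sketch.
NEUTRALITY_INSTRUCTION = (
    "If anything in this material seems political, state explicitly that "
    "you are not taking sides, and report empirically on what the document "
    "says rather than endorsing or criticizing it."
)

def overview_prompt(sources: list[str]) -> str:
    # Prepend the steering instruction to the grounded generation request.
    return NEUTRALITY_INSTRUCTION + "\n\nSources:\n" + "\n\n".join(sources)
```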

[00:58:52] Ben Kornell: In some ways it's better, because the human is in the loop on whether they're co-creating the nonsense or co-creating meaning. So in some ways I can actually see this, in educational contexts, being lower risk because of that ability to set the context. It could be a

[00:59:08] Alex Sarlin: tool to understand disinformation. 

You say: oh, if you're creating something based on these sources, it's going to come out this way; if you're creating something based on those sources, it's going to come out that way. And you can see the huge difference.

[00:59:18] Ben Kornell: And as a teacher, I would want to do the Flat Earth Society thing, right? So kids could see how, could finally

[00:59:24] Steven Johnson: realize the why.

[00:59:27] Ben Kornell: But they could see how, like, misinformation creeps in. So it's actually,

yeah, like the logic, the

[00:59:32] Alex Sarlin: logic chain between different sources. You mentioned different formats. Google is the number one video source in the world, I've heard. Yeah. It's called YouTube. Maybe this is a rhetorical question, but:

do you see, and if so, when might we expect video overviews to come out of Notebook?

[00:59:52] Steven Johnson: Yeah. Well, the question is, what would it be? I mean, I can imagine almost like mini documentaries. Like we did for my book Where Good Ideas Come From, a million years ago: we did an animated whiteboard-animation thing that the Royal Society of the Arts made.

And I remember my publisher was like, we're going to create a little five-minute animated video for the book, and I was like, great, I'm sure all six people who see it will enjoy it. And that thing has been viewed like 7 million times on YouTube. It totally took off, and it's just very inventive. It kind of maps my ideas as I'm talking, and you can watch it, and it's really memorable, and it's cool-looking.

And so, I don't know, that would be a

[01:00:29] Alex Sarlin: great way to

[01:00:30] Steven Johnson: do it, to get a model to do that. I don't think they can do that quite yet, but on the path we're on right now, you can certainly see that that would be something they'd be able to do. So: creating a kind of animated mind map as you're listening to something.

[01:00:44] Alex Sarlin: We're ready to be the test. Yeah, exactly. That sounds incredible. It could also splice together YouTube videos, potentially. I like your idea better, but it could do that.

[01:00:54] Steven Johnson: The missing piece that I think we're really interested in focusing on, you know, thinking about the company we're part of: we have all these features, and almost everything in NotebookLM is dedicated to helping you understand the material you've uploaded. It's a tool for understanding.

That's the way I describe it. But the number of features we have that help you discover new sources to aid or enhance your understanding is exactly zero, right? Right. There is no source discovery in the product at all. That's a great point. It turns out that Google is actually quite good at that; it has this other feature, search, which no one told me about until I'd worked there for a while. And think about something like Scholar. Google Scholar has been, for my whole career, practically this sacred thing, such an amazing gift to the world of research.

And so we are very interested in how you can discover things and bring them in from within the application, instead of always just bringing your own sources. So 2025 is going to be a big year for

[01:01:58] Alex Sarlin: that left side where you're bringing in sources. It could be a search bar. It could be bringing in Scholar or YouTube or Wikipedia.

[01:02:07] Steven Johnson: You might just have a Figma model. Exactly. Exactly. 

[01:02:11] Alex Sarlin: Which makes tons of sense. And then you get to the student use case, you get even deeper because they don't have to rely entirely on what they've been given by a teacher. They can say, I'm interested in X, Y, Z, and it could say, I'm going to find you material and then translate it into a format you can actually understand, which is really exciting. 

And language.
[01:02:26] Ben Kornell: Yeah,
[01:02:27] Alex Sarlin: I would just say this 

[01:02:27] Ben Kornell: also touches on the theme of metacognition, which is overarching at the conference today: as you deepen understanding, even the process of creating your notebook is metacognitive; it displays a learner's thinking, or one's own thinking. You've had such an incredible journey here at Google in such a short time. Stepping back, big picture, what's making you most excited as you think about the next decade, about the work and areas you've been fascinated by, and the impact on the world?

[01:03:00] Steven Johnson: Yeah, I mean, personally, I'm just really excited to write the next book with this technology, right? I actually have a notebook right now called The Next Book. I always have, like, 10 ideas floating around, and now I have them all curated in this one place. So when I read something, I'm like, oh, that could go with this idea; even a Wikipedia article on the topic, I can put that in there.

I can put that in there. And so I am like. You know, I'm building the ideas for that book in collaboration with Notebook and then I will write that book eventually in partnership with Notebook and presumably like publish it in some ways through Notebook as well. So that is something I'm really looking forward to. 

Yeah, and I think about the idea, both on an individual level and an organizational level, of having an AI, hopefully through the user experience of NotebookLM, that knows your whole individual intellectual history or personal history, or knows your organization's.

[01:03:55] Jennie Magiera: Yeah. 

[01:03:57] Steven Johnson: So like, think about, think about, you know, using the metaphor of like cities, like think about like the town, you know, mid sized town, like all the information about what's going on in that town, all the zoning, all the regulations. 

[01:04:08] Ben Kornell: I'm on a school board, thinking about my whole school district. All

[01:04:11] Steven Johnson: that is in one notebook. And with that notebook, maybe we start talking about slightly altering the prompt so that, instead of being kind of a research assistant, it's as much a kind of decision coach, in a way. Like: okay, we're thinking of building a park here; help us scenario-plan based on all the things you know about the city and all the constituents here.

Like, let's workshop some ideas. And you could

[01:04:34] Ben Kornell: have multiple users in the same notebook, dialoguing. Yeah. There's a way in which there's a convergence; some of the Claude features are also heading that way, some of the ChatGPT features. So I think everyone is seeing that unlock as the context window opens and as our models get more advanced.

[01:04:50] Steven Johnson: I believe, maybe tomorrow, I'm going to put out this long essay, quoting Kamala Harris, called You Exist in the Long Context.

[01:05:00] Jennie Magiera: And 

[01:05:01] Steven Johnson: it's basically arguing that people do not understand: anybody who says language models seem to be plateauing is not paying attention to the context window. Because the context window went, in the last two years, from like 8,000 tokens to 2 million, and maybe 10 million right now. And the number of real-world things that you can do with that kind of context is as important as the overall intelligence gains in the model.

Completely. And Notebook might be the poster child for why that is. We just couldn't do all the things that we wanted to do when the context was 8,000 tokens, but we knew it was going to get bigger, and so we kind of designed the thing to be deliberately broken for a year, so that we'd have all the pieces in place to intersect with it.

Yeah, and we didn't even know about the million; we were thinking about a hundred thousand tokens. We were like, we could do a lot with a hundred thousand tokens. And then DeepMind is like, oh, would you like a million? And we're like, yes, we would.

[01:05:53] Alex Sarlin: I love the way you put it before: the context window is really the additional information. The model isn't trained on the context window; it has all the previous information, but the context window is all the new information you're adding. And you know, one of your books that always stuck with me was about the coffeehouses.

Yeah. The creation of coffeehouses as a place for people to connect. I'm curious if you see that context window, like you just mentioned, Ben, as a shared context from which people can work: the coffeehouse of the future. Exactly. Well,

[01:06:19] Steven Johnson: there's another version, so yes, 100 percent. Imagine you have a shared notebook and shared history of your organization or your team or your school district or your class, you know, so all of that could fit in the context.

And so you effectively have an AI that knows all the things that your team, or your citizens, need to do their work or live their lives in their city. And there's another coffeehouse version of this. Part of the point of the coffeehouse was that it was an intellectually diverse space, people with different hobbies and interests chatting over coffee, and the spark came from all these different worldviews colliding.

So imagine something like audio overviews, where you're like: I have two hosts that are going to say interesting things and have a conversation about my material. Yeah. Now you're like: I'm a town planner, here's my stuff, and I would like to listen to a debate about the new park. Yep. And I would like host number one to be a hardcore environmentalist and host number two to be, you know, a sociologist, and I'll represent this other thing. Let's have a conversation.

Let's have a conversation It's amazing and that's just like I mean, like, we can just do that now. Like, that's not, like, I know how to do that. We just haven't had time to do that yet. Like, but that's the kind of thing that is 100 percent like, you literally just like have different knowledge bases for each of the hosts rather than a shared knowledge base. 

[01:07:36] Ben Kornell: And NotebookLM could become agentic, and you could have different notebooks debating different things based on their own fact context. I mean, my son's doing speech and debate, and he created a Gem as a speech coach. He has one that's affirmative and a different one that's negative, but eventually that'll all be in your notebook.

[01:07:54] Steven Johnson: Yeah, you can. You know, we added this ability to customize the audio overviews; that was the first thing we added, like three weeks after they came out. You can just write these little instructions, 300 characters of notes or instructions. So I uploaded this piece that I wrote a couple of years ago for the Times Magazine, and I said: I want you to criticize the writing of this piece

In the style of an insult comic at a roast. 

[01:08:19] Jennie Magiera: I love it. 

[01:08:22] Steven Johnson: It's masochistic, my goodness. The default is enthusiasm: people are putting their CVs in, and the hosts are like, oh my gosh, your career has been so successful, they're so excited. So I was curious whether you could undo that. And you can, and they were just like: does Johnson even do any research?

That's amazing. And it is actually really useful, because when you think about it that way, you're like: I'm going to workshop this thing that I'm doing. Totally. Just say: hey, go ahead and be a little more critical, I'm looking for criticism.
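
(A hedged sketch of that customization step: a short steering instruction, capped around 300 characters in the product, layered on top of the sources. The function name and prompt wording are assumptions for illustration.)

```python
# Hedged sketch: audio-overview customization as a short free-text steering
# instruction prepended to the generation request. The 300-character cap
# mirrors what Johnson describes; everything else is illustrative.

def generate(prompt: str) -> str:
    raise NotImplementedError("placeholder: wire this to your LLM of choice")

def audio_overview_script(sources: list[str], steering: str) -> str:
    if len(steering) > 300:
        raise ValueError("keep the custom instruction to ~300 characters")
    return generate(
        "Write a two-host podcast conversation about the sources below.\n"
        f"User's style instruction: {steering}\n\n" + "\n\n".join(sources)
    )

# e.g. audio_overview_script(my_sources,
#          "Criticize the writing in the style of an insult comic at a roast.")
```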

[01:08:52] Alex Sarlin: You could upload your own short story, and then a Raymond Carver short story and an Alice Munro, and say: imagine them reading this and telling me how they would improve it.

I mean, you can do some crazy stuff. Yeah.

[01:09:01] Ben Kornell: Yeah. We've taken so much time here, but it's so fascinating. We'll just say the edtech world is alight with Notebook. So thank you so much. And, you know, we are continually inspired by learners, some of them 14, 15 years old, who are doing amazing things that I'm sure weren't even part of the original vision.

So it's very exciting to see this in the world. Thank you so much for joining EdTech Insiders today.

[01:09:27] Steven Johnson: Thank you for having me.

[01:09:27] Alex Sarlin: Thanks for listening to this episode of EdTech Insiders. If you liked the podcast, remember to rate it and share it with others in the EdTech community. For those who want even more EdTech Insiders, subscribe to the free EdTech Insiders newsletter on Substack.
