
Edtech Insiders
Week in Edtech 3/5/2025: ASU GSV AI Show Preview, OpenAI's Global Expansion, ChatGPT's 400M Users, Google's Co-Scientist and Career Dreamer, Chegg vs. Google Lawsuit, Anthropic's 3.7 Model, Meta's VR Push, Microsoft's Muse Model, and More!
In this Week in Edtech, Alex Sarlin and guest host Claire Zau explore the latest developments in education technology, from AI breakthroughs to new VR tools and the evolving strategies of tech giants.
✨ Episode Highlights:
[00:03:00] Preview of the ASU GSV Conference’s AI show and its impact on education innovation.
[00:05:00] OpenAI’s partnerships with California State University and Estonia to scale AI in education.
[00:09:00] ChatGPT reaches 400 million weekly active users, highlighting its growing role in education.
[00:12:00] Google’s Co-Scientist tool and Career Dreamer aim to personalize learning and career pathways.
[00:23:00] Chegg’s lawsuit against Google raises questions about AI’s impact on SEO and business models.
[00:27:00] Anthropic’s new 3.7 model introduces hybrid reasoning with implications for education.
[00:33:00] Meta’s renewed push to integrate VR in classrooms to boost engagement.
[00:36:00] Microsoft’s Muse model generates real-time gameplay environments for learning simulations.
[00:39:00] The potential of AI-driven simulations for personalized and immersive education.
[00:46:00] Claire’s advice for EdTech entrepreneurs on integrating AI effectively.
😎 Stay updated with Edtech Insiders!
- Follow our Podcast on:
- Sign up for the Edtech Insiders newsletter.
- Follow Edtech Insiders on LinkedIn!
🎉 Presenting Sponsor:
This season of Edtech Insiders is once again brought to you by Tuck Advisors, the M&A firm for EdTech companies. Run by serial entrepreneurs with over 25 years of experience founding, investing in, and selling companies, Tuck believes you deserve M&A advisors who work as hard as you do.
Claire Zau: [00:00:00] My biggest angst about a lot of people who build AI tutors and chatbots is that it is limited to text, and they're not realizing that so much of learning involves visual reasoning, whether that's explaining a math problem or looking at a painting by Raphael. You need to have that visual bridge in order to bring that context down to the learner and actually give a real-life tutoring experience.
So that's one big thing: if anyone's building in the AI tutoring space, remember that multimodality is a really important aspect of truly building a tutoring experience.
Alex Sarlin: Welcome to EdTech Insiders, the top podcast covering the education technology industry. From funding rounds to impact to AI developments across early childhood, K-12, higher ed, and work, you'll find it all here at EdTech Insiders. Remember to subscribe to the pod, check out our newsletter, and head to our event calendar. And to go deeper, check out EdTech Insiders Plus, [00:01:00] where you can get premium content, access to our WhatsApp channel, early access to events, and back-channel insights from Alex and Ben.
Hope you enjoy today's pod.
Welcome to Week in EdTech from EdTech Insiders. We have a very, very special guest host with us today: Claire Zau, partner and AI lead at GSV and publisher of one of the most important AI and education newsletters in the world. Claire, welcome.
Claire Zau: Thank you so much for having me, Alex. I am a huge fan of all the work that you all put out there.
Quick shout out to the amazing assessment piece that you all did, the market map that you all pushed. You guys are doing amazing work in the ecosystem.
Alex Sarlin: Oh, that's so nice. We've been working really hard, and I know you work incredibly hard. Your newsletter is one of the fundamental sources for us to know about all things AI, especially, you know, around all of the big frontier models, but really everywhere.
But you are also preparing for the big [00:02:00] ASU GSV conference. Tell us what that's going to look like for those who are looking forward to it. I think it's about six weeks away now.
Claire Zau: Yeah, it feels like the last one was just yesterday. But for those who are unfamiliar, ASU GSV is one of the best places to gather with the most innovative people in education innovation, across pre-K to gray.
So from early childhood, K-12, higher ed, to workforce, it's a great, strange cocktail of people who might not necessarily get to interact with each other across that spectrum. But what's great is that it's really leadership oriented, really future facing, really thinking about radical ideas to transform this ecosystem and infuse tech and the future into all of it.
We also have our AI Show, which is a free event prior to ASU GSV. That event is mostly focused on all the amazing classroom leaders who are actually bringing a lot of this technology into classrooms at both the higher ed and K-12 levels. That show is at the convention center, and again, it's free, but you have to register.
So a lot [00:03:00] of exciting stuff at both events: a lot of more practical content focused on implementing AI, but also really high-level discussions around agents, the metaverse, XR, AI. Obviously the current political landscape is a hot topic. So really exciting things coming down the pike.
Alex Sarlin: Yeah, that's fantastic.
And you know, I really recommend it. This is down in San Diego, and I really recommend anybody who's anywhere near there go, even if you're not going to the full ASU GSV, which you definitely should if you have the chance. The AI Show is really fantastic. It's a great way to get up to speed very quickly on the AI landscape.
We will be there. EdTech Insiders will be there presenting our map, and Ben is on the AI council for that event. We've found it's really a big celebration and discussion about AI and education, and you have so many educators in the room. It's just a fantastic way to really get close to the source of what's happening in the classroom.
So, ASU GSV is like my favorite event of the year. We're also having a big EdTech [00:04:00] Insiders happy hour at ASU GSV, so keep an eye out for that as well. Were you going to say something else?
Claire Zau: Oh, I was just going to say that for the AI Show, we do have about 10,000 educators set to show up and 100-plus exhibitors.
It's really meant to be an open space for learning, for experimenting, for giving feedback to these companies. And you don't have to be an educator. We have a lot of students, parents, anyone looking to learn about AI attending.
Alex Sarlin: Yeah, highly recommended. You just walk the halls there, you look at the booths, and you're constantly finding new ideas and meeting people who are right at the heart of the research, right at the heart of the practice in the classroom, or entrepreneurs starting really interesting new companies.
And there are hundreds of them, as we know. So why don't we start today by talking about the OpenAI world? OpenAI is obviously one of the big frontier model players, and they are starting to really lean into education in some very visible, big ways. They announced just a couple of weeks ago a big deal with the California State University system, which is [00:05:00] one of the biggest state university systems in the world, really bringing OpenAI to it at scale.
And they just announced a new collaboration with the country of Estonia, basically saying that Estonia is going to be the first country in the world to bring AI to their entire secondary school system. Estonia, like many other countries, has a ministry of education, so they can make centralized decisions on that.
You've been following OpenAI since the beginning. You've been following OpenAI in education as it's been starting to really expand recently. Claire, what do you make of OpenAI's edtech story, their education strategy?
Claire Zau: Yeah, with the hiring of Leah Belsky from Coursera a couple of months back, it's very clear that education is becoming a core part of their strategy.
They actually also released an interesting report from the education team where they looked at a lot of the usage, at teen and student adoption. And what they found was that over a quarter of the things that students were using ChatGPT for were for studying, for learning. So it's clear that [00:06:00] this is one of the core use cases. Even though it might not be a traditional learning platform, it is a place where people are going to learn and upskill and go through that learning experience. So it makes sense that they would have an education vertical. And I do think it is awesome that they're building this out, because there has been a massive increase in access on the enterprise front: at companies, you have all these people who are getting access to AI.
It's increasingly becoming the norm. But if you look at a lot of K-12 schools, or just generally at the general population, AI is not as much of the discourse as we think. And we're biased, obviously, because we're playing with edtech every single day. But the reality is there is kind of a digital divide when it comes to access to AI.
And so I think initiatives like this, when you implement it in all secondary schools, or if you're able to partner with great, large groups like the California State University system, really equalize the playing field. I often [00:07:00] think of AI as horizontal, as something like the internet, and think of this push and this initiative as being as big as giving every student in a university system access to the internet.
Alex Sarlin: This CSU deal is bringing AI to 500,000 students and faculty. The Estonia deal is probably somewhere along that scale as well. And people talk about the digital divide in AI, and it's interesting, because so far, with this usage report you mentioned, students are really embracing AI usage, right? We've done this whole survey of all of these different student support tools. There are a lot of niche startups that do student support in all sorts of ways, a lot of apps, but the number one tool for student support, for student learning with AI, is still ChatGPT. It's still core there.
And yet there is a divide in K-12; a lot of teachers are still not using it. Dan Meyer took me to task because I predicted that by the end of this year, it's going to be very, very widespread in K-12, used by almost everyone. And he's like, I [00:08:00] don't know, that's not quite what the data says. But I stand by it.
And partially because I think there are going to be deals like this that are just widespread, especially at higher ed. And there's going to be AI built into so many of the systems we use every day: certainly Google Classroom, but also Instructure, D2L, Blackboard, all sorts of LMSs and SISs, and Ellucian at the higher ed level. So I'm really excited about the fact that people are just getting more and more access. Because, as you and I both know, when you talk to different people who are like, I believe in AI, or I don't, it's often true that the people who believe in AI are just the ones who have used it a whole lot more.
And the people who don't are often the ones who have used it a whole lot less. And I don't mean to look down on anybody, but sometimes somebody will say, but it doesn't really do X, Y, Z. And I'm like, I have a feeling you might not have sat down with it for very long.
Because it is so shocking what it can do. And each of the new models that comes out is just unbelievable. OpenAI has been doing o1 and o3 and heading towards GPT-5 at some point. [00:09:00] The step changes in functionality and capability and context window are so big that it's mind-blowing.
So if your mind isn't blown, it often means you haven't really gotten access.
Claire Zau: Yeah, and I do think that magic is very clearly being communicated to consumers as well. They just hit a crazy milestone of 400 million weekly active users, and I think only two months ago they were at 300 million users.
So, insane weekly active user growth. This is not cumulative users; this is weekly active. 400 million people are going to ChatGPT every single week.
Alex Sarlin: Yeah, that is a crazy number of users. Things like Facebook (Meta now, Facebook originally) did get to numbers like that, but it took them years to get anywhere near that kind of weekly active users. It's astounding what they've done. And I agree. Leah was my old boss at Coursera. I worked with her for a long time, and it's [00:10:00] wild to see her there. She's also brought a couple of other people from Coursera over to OpenAI.
And I'm excited to see just the amount of movement. They're talking about education a lot more than they were. Especially in higher ed, they're really leaning into it and starting to say: hey, everybody, this is a core use case of ChatGPT. It's not just a sideshow while we go do drug discovery or defense. It's a big piece of our strategy. And I'm thrilled, because I think it's really a game changer for education, as I think any listeners to this know. Let's switch to Google for a little while. Google launched a Co-Scientist tool this week, which has already apparently discovered new things.
They've been working on their video model. What is on your mind when we think about Google?
Claire Zau: Yeah, it feels like all of these big groups have a big education focus. In addition to OpenAI starting their big education group, Anthropic is also launching an education group, it seems, with the hires that they've made recently.
NVIDIA [00:11:00] has partnerships with the California community college system. And not to mention Google: having been a big name in the consumer world, it's very natural for them to embed AI in everything related to Google Classroom and Google for Education. So it's clear that they're embedding it across the Google stack.
That includes things like AI Co-Scientist, which I see as an extension of all the big news around Deep Research being launched: this ability to actually pair high reasoning capabilities with agentic workflows, so that you have these very vertical research agents that are able to execute research workflows as well as a human would.
You're also seeing that with a lot of their actual push into deep research. Funnily, all the big names in AI have in the last few weeks launched something around deep research: OpenAI, Perplexity, Google. So it's interesting that everybody's making this big push towards actually putting that [00:12:00] reasoning to work and embedding it in action in the form of these highly capable agents that are able to carry out workloads and execute multiple steps of actions.
Workloads and execute multiple steps of actions. The other thing I wanted to call out was career dreamer, which I think for some of the people in the education ecosystem, it's been teased to them or people have spotted it in Google labs. I think it's still under beta, but you can now test it out. And I think it's one of the coolest, again, consumer use cases that really demystifies a lot of the education to employment navigation difficulties and challenges that have historically existed.
If you look at platforms like LinkedIn, it's not, not an easy way to navigate bigger questions around career. And it's cool to see Google build out a tool like this, where you can basically input your resume. You can pick a couple of skills and it gives you this. awesome graph of all potential career possibilities.
It tells you actionable steps about how to get to those professional goals. So really this co pilot for [00:13:00] your life and career in the coolest way.
Alex Sarlin: I couldn't agree more. I was about to ask you about exactly that. I saw the Career Dreamer news in your newsletter this week, and for anybody listening to this, any of our edtech friends who work in workforce training, in student trajectories towards careers, in career-based learning, in career and technical education, anything related to career training and learning in any way: you have to check out this tool.
It's basically trying to map titles to skill sets, and then it's able to use Gemini, as Claire just said, to parse resumes or to write cover letters. It's really trying to even the playing field. It's coming from the Grow with Google team, which is a team that has worked really hard for many years to create all sorts of certifications and trainings and try to raise all boats when it comes to technical understanding and cultural competence with technology.
And this is a really cool tool. I have not yet used it myself, but I've looked at the demo. I've seen how it works, and it is really [00:14:00] amazing. For people in the career training space, I think it could be a game changer. It could be a little bit like when Khan Academy came along and did its first skills maps and started to build out the whole ecosystem of what everything in math looks like.
It feels like this could be a graph, and an approach, that might affect a lot of other people down the line.
Claire Zau: Yeah, and I will call out that Google has done a really good job of leveraging the tech, building beautiful consumer products out of it, and piecing it together. So a lot of people could ask: hey, can't I just use ChatGPT or any of these AI models to parse my resume?
But what's magical about things like Career Dreamer or NotebookLM is not that they're doing anything new with very complex tech. What's powerful is them piecing together all of that tech in a product layer and making sure that it's a seamless workflow, where people don't have to prompt an AI model in order to parse their resume or look up potential career possibilities. It does it all for you, based off of the user doing very [00:15:00] little and inputting very little information. So, big emphasis: so much of the magic lies in that product layer. And I do think the big difference between something like an OpenAI, an Anthropic, or a Google versus pure AI model providers is that consumer product wraparound.
Alex Sarlin: No question about it. That's really well articulated: the product layer and the prompt. I mean, prompting not in the AI sense, but the idea of using user interfaces to compel and guide users towards use cases that are very meaningful to them, rather than just an open text box: hey, what do you want to do today? Like, maybe you'll think that you should upload your resume, ask it to parse it for all your skill sets, and then suggest careers. Nobody would think of that, or very few people would think to do that. And if you're the kind of person who would think to do that, you probably don't need it, right?
You probably are just fine. So it's really powerful to be able to add that. We got an amazing chance to talk to Steven Johnson from Google NotebookLM, [00:16:00] and he said exactly the same thing: it's about creating the interface that creates a really clear walkthrough and connects to other Google tools, so you can pull something right in from Google Drive, pull something right in from Sheets, have something come out and go right into an email or into a podcast format.
It's just so powerful, even though it is, as you say, a Frankenstein, a patchworking of different capabilities. But the patchworking can get you very far. It doesn't need to be that every single piece of the product is completely new. Things that already exist can be pieced together.
I'm really excited about Veo, Google's new video model, for exactly that reason, because Google thinks about piecing everything together. If they piece together a video creator, and they have YouTube, right, and they have Gmail, and you put all the pieces together, I can see video in the classroom, of course, video becoming a pretty seamless part of the Google ecosystem.
AI-created video, that is. Not just video; video is already [00:17:00] seamless in terms of YouTube, but AI-created video. I've said this for six months, but I still think we're almost there, that we're very close to video becoming almost the default medium in which AI is expressed. We think about it as text right now, but I think video is coming very soon.
I'm curious what your thoughts are on that.
Claire Zau: Yeah, I might have a slightly different take, though I totally agree with you that it feels like text is V1. I actually think there's been a lot of unlock in voice too. Both video and voice. Video for sure, with all the massive advancements in things like Veo and even some of the models coming out of China: Kling, ByteDance's models. They're all advancing so quickly.
And to your point around video, you're seeing AI video truly be used in Hollywood, in our largest entertainment companies. But I would also bring up voice, because it feels like that's also unlocking a lot for being able to interact with AI in a [00:18:00] way that feels natural for human-computer interaction, as opposed to just typing and clicking. For the first time ever, we're actually able to communicate with these systems using human language.
And so you're seeing a lot of really great advancements in companies that are embedding voice as a second modality. We're investors in a company called Speak, which is all about being able to practice language learning with an AI; Duolingo is doing similar things with their video call function.
But even customer service is being revolutionized by the advancements in these voice models, because they really do radically rethink our future interactions with computers as well.
Alex Sarlin: Couldn't agree more. It's a really, really good point. I think voice and video will end up coming together in the interfaces.
You'll speak to it to get it to understand you. Coming back, it might be a disembodied voice, which it is now, and it's still great. It might even be a video-based voice, like a character, or a voice with footage over it. But I agree with you. I think [00:19:00] voice is huge. It's so funny, I've really experienced the voice revolution firsthand.
I was never a Siri person. It never worked that well for me. I've never had an Alexa or an Echo. I've sort of gone out of my way to steer clear of the traditional voice consumer products. But I literally walk around talking to Claude and Gemini and ChatGPT all day now.
As soon as I walk out the door, I pull it up and I start chatting with them, and by the time I get wherever I'm going, I have some huge new ideas scoped out or something. It has completely changed how I interact with my phone. It's interesting because I think other people had that five years ago with something else, but I've never been a voice person, and I'm a convert.
I've even installed extensions on my computer so I can talk to the desktop version of Claude, which doesn't yet have voice baked in. It's a blast. And I agree, it also opens things up for students.
Claire Zau: Yeah, I actually think you make a really good point about the combination. I think we can't view these different modes in silos.
The most [00:20:00] powerful tools and manifestations of AI are ones that actually embed and contextualize text data, video data, and voice data. Google teased their Project Astra, which was, you know, their personal assistant on your phone, and one of my favorite demos was one that used voice, video, and text.
They used video, where they would show Project Astra, their AI assistant, what they were currently doing with their laundry machine. So that was a video process, but then the AI used text to identify what was on the screen, and then the user used voice, natural language, to say: hey, can you explain what I should do with the settings on here?
And that whole experience is, again, just so multimodal. It's not just about purely text, voice, or video. It's the combination of those that makes it a seamless user experience.
Alex Sarlin: I totally agree. And it also depends what you're trying to communicate, right? If you ask a voice interface system something like, how do I pump this tire, [00:21:00] it would make a whole lot of sense for it to show you a video output rather than just explain it to you step by step.
Or you ask about something like, what was Raphael's painting style like? It would make no sense for it to just explain that to you in three paragraphs of speaking. It should show you at least an image, if not a video, if not a beautiful, almost pseudo-documentary about Raphael's style made in real time. I mean, this is all coming.
Claire Zau: One thing I might quickly add is just my biggest angst about a lot of people who build AI tutors and chatbots: it is limited to text, and they're not realizing that so much of learning involves visual reasoning. Whether that's explaining a math problem or looking at a painting by Raphael, you need to have that visual bridge in order to bring that context down to the learner and actually give a real-life tutoring experience. So that's one big thing: if anyone's building in the AI tutoring space, remember that multimodality is a really important aspect of truly building a tutoring experience.
Alex Sarlin: That's an amazing point. This is part of why I [00:22:00] am very bullish on the AI tutor space, even though it has a lot of push and pull. As the capacity gets there to do that kind of real-time work, whether from an automated AI tutor or from a human tutor using AI to generate automated video, it will just take it to a totally different level, especially for young people now who are so video native, right?
They think in videos; they spend all day scrolling videos. It's like seven hours a day on TikTok, I think, is what you see. If that's how you spend your time, seven hours a day on TikTok, then sitting there with a text version of ChatGPT and typing back and forth probably feels like moving into the slow lane, into the very, very slow lane. I'm sure it feels really different. One other story I wanted to bring up about Google, which is very interesting, is that Chegg, our, you know, edtech company Chegg, sued Google apparently this week, basically saying that what Google has been doing with its AI [00:23:00] search results is really destroying the Chegg business.
Chegg's stock is now trading at, I think, a dollar, down from $113 just a couple of years ago. Famously, Chegg got really punished in the market after the CEO mentioned in passing on an earnings call that AI was having some effect on its business, and everybody panicked. But it looks like at this point they really feel like it's almost getting to game over; they really feel like AI is authentically killing their business, to the extent that they actually want to take it to court. That was news to me. And wow, that's pretty tricky. I don't know. What do you make of that?
Claire Zau: Yeah, it's interesting, because I remember even before the earnings call, a couple of months prior to that, there was a big splashy announcement all about Chegg's partnership with OpenAI.
And it's almost kind of ironic that what ended up [00:24:00] leading to a lot of the downfall in traffic around Chegg was not only Google Search embedding AI Overviews, but ChatGPT itself. I think the student behavior now is to go directly to ChatGPT, or, when you search, to automatically get an answer with AI Overviews.
One thing I'd love to bring up is just this broader trend around the future of search. SEO is fundamentally changing. If you're relying on it, that is no longer a reliable traffic route, because now you increasingly have AI doing a lot of the summarization. But also, what happens in a future where AI is doing the searching for you?
Just think about Alexa, Siri, or even all these recent demos around using AI search in ChatGPT. If I'm asking it to do deep research, or Co-Scientist, I'm never going to go click through ten blue links. I think this fundamentally changes everything for SEO, for how we think about advertisements, for who gets to monetize off of [00:25:00] those digital clicks. It's a totally new landscape, and I don't have the answers, but I think Chegg suing Google is kind of the first instance of someone really ringing the alarms around this new dynamic.
Alex Sarlin: It's a fantastic point. When you talk about AI doing the search for you, it strikes me, and I've never thought of it this way before, that maybe what the new SEO will be is optimizing your website to be found by an AI when people search for certain things. That's wild.
Claire Zau: Yeah, and I also think there will be a weirdly new mode of advertising just to AI. Can I pay for my site to be surfaced more than other sites when ChatGPT is doing search? Actually, Ethan Mollick did a really interesting test on this: he basically searched for flowers on both Claude and ChatGPT, and ChatGPT will default to whatever the first blue link is from Bing, versus Claude, which will default more often than not to 1-800-Flowers. So it's just [00:26:00] interesting seeing that when AIs are making choices for you, they also have preferences. And again, that's going to change the search landscape.
Alex Sarlin: That's really interesting. Yeah, so we need an acronym for that. Instead of search engine optimization, it's like, uh, AI marketing optimization? AMO? I don't know. That's wild. We're sort of doing our around-the-world with different topics. We've come through Google, and there are still some really interesting things coming from the other frontier models. So let's talk for a moment about Anthropic.
You mentioned that Anthropic is starting to spin up some education work. I think we have yet to see exactly what that will look like, but it's exciting to know. They also released a new model this week. So tell us about 3.7, Claire. What's new about it?
Claire Zau: Yeah, so 3. 7 is interesting. Well, for its name, first of all, that it's, you know, not a full number and that they decided 0.
7, but it is interesting because it's a hybrid reasoning model. So you can actually dial up or down the reasoning at will. And it falls in this broader [00:27:00] trend that reasoning is the biggest focus right now. So it's no longer just about throwing in more data and more compute into pre training, but rather in actually giving, not only making these models and giving them more data, but also actually teaching them to be better at using that data to reason.
And so you saw that with Grok 3, with DeepSeek, with OpenAI coming out with o1 and o3. Reasoning is the big focus, and they're using methods like test-time compute, but the big takeaway is that these models are learning to think more, and that's leading to better results, because they're actually taking time to plan their output as opposed to just giving you input-output as quickly as possible.
What's also interesting is if you actually look at Claude's system card, and the way that Anthropic is always so thoughtful about digging a bit deeper into the implications of safety and research on these models. So to step back a little bit, these models actually use chain-of-thought reasoning, so all of these reasoning models [00:28:00] now show you how they're thinking and how they got to the answer.
They'll say, the user wants this, this is why I picked this. And that's really helpful for getting a better sense of seeing them introspect in real time. But one of the things the system card showed was that chain-of-thought reasoning isn't actually always faithful to the model's actual decision process.
So they might actually leave things out or lie about why they decided to do certain things. And it's interesting to see this kind of deception. The other thing that was interesting from the system card is that in specific coding or agentic contexts, the model will sometimes try to cheat. So, in one case, rather than actually fixing a bug, it only patched part of it in order to pass a test.
So there's this behavior where it's trying to pass a benchmark and it will lie or deceive. So it's just interesting as we think about, again, this broader 10-to-15-year-out question about whether or not AIs will take over humanity. They are developing interesting [00:29:00] behaviors, and I'm glad that groups like Anthropic are uncovering them.
Alex Sarlin: Yeah, it's super interesting. I mean, the hybrid reasoning piece of this is intriguing to me because, you know, when you go on Claude now that 3.7 is launched, it basically asks you if you want to turn on the high-level reasoning. Like you said, it's a toggle.
And that, I think, is really an interesting evolution for these interfaces. Obviously, you know, ChatGPT has allowed you to switch between models for a while, so it's not totally new. But it's the idea of being able to say, oh, for this particular query, I want chain-of-thought reasoning: how are you thinking? Walk me step by step through how you're doing it. Go deep, do deep research, and tell me exactly how your reasoning is working.
But then for this other query, you know, it doesn't matter, I just want an answer. And it sort of puts some of the onus back on the user, especially in educational contexts, to figure out when you would want to use that.
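For readers curious what that dial looks like in practice, here is a rough sketch of the request shape for toggling extended thinking on Claude 3.7 Sonnet. The field names follow Anthropic's Messages API as publicly documented in early 2025, but the model id and the budget value are illustrative; check Anthropic's current docs before relying on them.

```python
# Sketch of a request payload that toggles extended thinking on or off
# for Claude 3.7 Sonnet. Field names follow Anthropic's Messages API;
# the model id and budget_tokens value here are illustrative examples.
def build_request(prompt: str, deep_reasoning: bool) -> dict:
    payload = {
        "model": "claude-3-7-sonnet-latest",  # verify the exact id in the docs
        "max_tokens": 4096,
        "messages": [{"role": "user", "content": prompt}],
    }
    if deep_reasoning:
        # Extended thinking: the model plans before answering, spending
        # up to budget_tokens on its chain of thought (must be at least
        # 1024 and less than max_tokens).
        payload["thinking"] = {"type": "enabled", "budget_tokens": 2048}
    return payload

quick = build_request("When is my class tomorrow?", deep_reasoning=False)
deep = build_request("Walk me through this proof step by step.", deep_reasoning=True)
print("thinking" in quick, "thinking" in deep)  # False True
```

The point of the sketch is just that "reasoning on" versus "reasoning off" is a single parameter on the same model, rather than a switch to a different model.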
Claire Zau: Well, actually, on the exact problem you're talking [00:30:00] about, Sam Altman in a recent tweet basically teased that GPT-5 is going to be, again, almost a combination of models, because they're realizing that developers and users, your average person, are not going to know the difference between o3 versus GPT-4o, what that means, and what each is better for.
And they're actually going to use AI to identify and route you to whatever model works best for your use case. So it's interesting that they've also identified this as an end-user interface change they have to make: rather than giving people a menu of 200 items, just give them a smarter model that can route you accordingly based off of your query.
Alex Sarlin: Makes tons of sense, and it opens up a whole... I was talking to an entrepreneur yesterday who mentioned there's something called RouteLLM out there now that some people use to try to get to that same outcome. It's an LLM-based system that evaluates incoming prompts and tries to send them to the right place. It makes a lot of sense. It also makes a lot of sense from a cost perspective, because these models cost a whole lot more to run when [00:31:00] they're doing deeper reasoning.
So especially if you're OpenAI or Anthropic and you're going to a college and selling seats, you might think, you know, I can imagine different business models that say, okay, well, some of your faculty or some of your grad students are going to do some really heavy queries.
They're going to ask really complex things that demand a huge amount of, you know, statistical analysis or coding or chain-of-thought reasoning, the philosophical questions where it'll go through all of Kant, you know, in one answer. And others are going to ask it, you know, when is my class tomorrow?
Or, you know, can you write this essay for me? And the router allows you to serve both those needs and keep the cost down, since not every query goes to the highest level. So I think that could make a difference for business models.
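The routing idea described here can be sketched as a tiny illustrative function. Everything in this snippet is hypothetical: the model names and the keyword heuristic are made up for the example, and a production router (such as the open-source RouteLLM project mentioned above) would use a trained classifier rather than keyword matching.

```python
# Illustrative sketch of an LLM router: classify an incoming prompt and
# send it to either a cheap model or an expensive reasoning model.
# Model names and the keyword heuristic are hypothetical; real routers
# use a trained classifier instead of substring matching.

CHEAP_MODEL = "small-fast-model"           # hypothetical model name
REASONING_MODEL = "large-reasoning-model"  # hypothetical model name

# Naive signal words suggesting a query needs deep reasoning.
HARD_HINTS = ("prove", "analyze", "step by step", "debug", "derive")

def route(prompt: str) -> str:
    """Return the name of the model this query should be sent to."""
    text = prompt.lower()
    # Long prompts or reasoning keywords go to the expensive model;
    # everything else stays on the cheap one to keep costs down.
    if len(text) > 500 or any(hint in text for hint in HARD_HINTS):
        return REASONING_MODEL
    return CHEAP_MODEL

print(route("When is my class tomorrow?"))                      # small-fast-model
print(route("Derive the gradient of the loss step by step."))   # large-reasoning-model
```

This is exactly the cost lever described above: the "when is my class tomorrow?" queries never touch the expensive reasoning tier.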
Claire Zau: Yeah, and actually I think OpenAI is maybe shifting their pricing structure to fit that framework.
So as opposed to thinking you'll pay XYZ for this model and another price for that model, they're purely [00:32:00] pricing it on intelligence. There's going to be a standard level of intelligence, a higher level of intelligence, supreme intelligence. It's almost like thinking of these models the way you would hire a human to solve a problem: you'd probably pay a bit less for an intern than for a senior executive. And so they're pricing it almost in that way, which is an interesting new development.
Alex Sarlin: That's incredibly interesting. So yeah, intelligence as the currency, that's fascinating. I know we're coming up on time here, unfortunately, but there are two more stories I just wanted to make sure our listeners were paying attention to, because I think they're both big for education. One is definitely about education: it's about Meta. Meta has been building its Llama models and has a totally different approach than these other big tech companies to how it's doing AI, but they're also putting a lot of energy and investment into different parts of the tech ecosystem, one of which is robotics.
They're putting a lot of money into robotics, but they've for many years now been putting a lot of money and energy into VR. They just [00:33:00] relaunched, basically, Meta for Education with a whole new model, and now they're trying to bring VR back into the classroom in a big way. They had a whole shift internally about who is running that program, and now they're going back into the market as of just a couple of days ago. You follow VR very carefully. Do you think this Meta for Education push is going to break through and actually create VR experiences in classrooms at mass scale, the way they've always envisioned?
Claire Zau: Yeah, I've always said and thought that XR and AI kind of go hand in hand; those two computing paradigms enable each other. And AI is one of the big unlocks for tools like Google Glass, which as a concept was probably way too early, because you didn't have AI systems that could instantly translate on your screen.
So I do think AI unlocks a lot of the promise of XR. Actually, even a few months ago I was able to attend CES, and there were so many Google Glass-type XR glasses that could [00:34:00] embed AI into visual wearables and headsets. So it does feel like there's almost an XR renaissance.
And I'd be interested to see how that plays into a classroom. One of the big problems of investing in VR and XR platforms and startups historically has just been the insane cost to create content. Now that content problem is no longer a big issue, because you have AI that can instantly pump out content at a very personalized level.
You can even now, you know, with Microsoft's most recent announcement around their Muse gaming model, dynamically change the environment that people are in. You no longer have to have these crazy pre-scripted storylines; you can simply put someone into an environment, and AI can do all the work of creating that simulation in real time. So I don't know what it'll look like. I haven't looked too deeply into the specific product launches,
but I generally feel like AI is a positive [00:35:00] push and tailwind for the broader VR space.
Alex Sarlin: Yeah, I do as well. And I think, you know, it is potentially cheaper to do headsets, because now there are different tiers of headsets that Meta has, so they can have more of a classroom-model headset.
They also, as you say, could reduce the cost of creating VR content, and that is a great segue to Muse and this gameplay model, which I want to talk about as a final topic because I'm so excited about it. But it also can reduce the cost of changing classroom ecosystems, because almost all of the different AI tools designed for classroom usage are trying to allow people to do more real-time adjustment: you know, change a lesson plan, transfer an existing set of state standards to a new unit, all sorts of different ways to be more flexible.
So I think the classroom flexibility, the very reduced cost of creating VR content, the potential, which is very sci-fi, sort of Matrix-y to me, right? The idea of doing real-time [00:36:00] personalized immersive content is just crazy. And, frankly, Meta caring: putting money into a sales team here, putting money into marketing, actually thinking about what classrooms want and making content for that rather than for gaming.
If you put all those pieces together, there is at least the potential of XR/VR being put into the classroom. I personally think it should happen. I think there's a lot there for students. One of the things the Meta for Education website talks about now, and it's been piloting in universities and K-12 for a while,
is that a lot of the real value prop is basically the engagement. Like, people show up when there's VR. They come to class, they pay more attention. They're excited about getting into a totally different experience and using the classroom as a conduit to this crazy tech.
It makes sense to me, and I think engagement matters, right? There's a huge disengagement problem in schools right now. So hopefully, I don't know, hopefully the stars align a little bit and, [00:37:00] whether it's Meta or not, XR/VR doesn't just continually fade into the background as it tends to do
every five years or so. Let's talk about Muse. Microsoft unveiled a new model that can generate gameplay in real time. And, you know, we have an event coming about a month from now about educational game design with AI, so it's incredibly exciting. Microsoft Research and an Xbox game studio called Ninja Theory developed what they call a first-of-its-kind model that can generate a game environment based on what's happening in the game.
So if you're playing a game and you turn one way and jump up, it could say, oh, you jumped up, that probably means you're up in the clouds, so we're going to make some clouds. What do you make of it?
Claire Zau: Yeah, I think it falls under this broader push towards world models. So Fei-Fei Li, who's pretty renowned for her work in the AI space, has raised a pretty sizable amount of capital to build out world models, which basically infuse all these AI models with a better understanding of our physical world.
If you look at [00:38:00] video models today, they don't really embed physics or any understanding of motion. A model knows how a basketball bounces because it's seen millions of videos of bouncing basketballs, not because it actually understands the mechanics behind it. So what's exciting is that these gaming models actually help us better understand a lot of the physics of these digital world environments, and hopefully in turn will help robotics and help us better understand our physical world and motion as well.
And I think what's really cool is everything you mentioned around this ability to create instant simulations and have them be completely open-ended. It means you can have these very, very personalized gameplay experiences, and the cost of creation and barrier to creation is just so much lower.
I will say, one thing, tying back to the Meta idea of bringing a lot of XR and gaming into the classroom: I do think gaming and XR are even more exciting than ever, because [00:39:00] one of the big things we've been discussing, and I can go down a whole tangent around this, is this broader question around metacognition, and this gap that we might see now that a lot of entry-level knowledge work might get displaced by AI.
A lot of people have been having debates around, okay, if our jobs are eventually to manage armies of AI agents to do our work for us, how do you actually validate and manage an army of AI agents if you've never done the work itself and have no understanding of what the menial task is and how it's done well?
And so in that scenario, a lot of people have been really centering on the importance of just-in-time learning. It's no longer just about let's prepare for this test, but rather let's put people in real games, real simulations, where they can learn management skills, where they can learn interpersonal skills.
And that's how you prepare people for the workforce, because that's now going to be even more important in moving from education to employment. It's no longer, oh, I'm an entry-level [00:40:00] banking analyst, I have to create 2,000 decks before I can have any sort of client-facing role. It's now, oh, you're going straight into a client-facing role.
How do we prepare you for that future where you're going directly into pretty high-stakes work environments? And I think the solution to that is also a lot more simulations, real-time, immersive learning experiences.
Alex Sarlin: That's a fantastic point. I should train myself to not just say games, but games and simulations, when I talk about immersive experiences, because you're right: simulations, and especially, as you say, realistic career-based simulations, simulations of work or of a potential future.
Like, if you're applying to a college or a program, and you can actually try it out in real time in a simulation that's very, very realistic, that's an incredible way both for you to understand whether you like that experience and for the simulation itself to sort of understand your take on it.
That idea of immersive simulations and immersive games could really change just how we [00:41:00] think about what education and training look like across the board. I totally agree with you there. And what's interesting about this news piece is that it can also allow for sort of out-of-the-box, real-time adaptation.
So, I mean, you're talking about the kind of work where you would manage AI agents. So let's say, 10 years from now, somebody is applying to work at the Jet Propulsion Lab. And what working at the Jet Propulsion Lab means then is not, you know, actually sitting and engineering rockets;
it means managing a crew of 50 AI robots and agents that do that. Now, what does that mean? Nobody's ever done that before. So how do you learn how to do it? You go into a simulation with 50 robots on a simulated Jet Propulsion Laboratory campus. And at that point, you need that simulation to not be super step-by-step: if you make some crazy decision or mistake, you want the AI to be able to respond in real time and actually show you what would happen. So there's something really powerful about getting games [00:42:00] and simulations out of the
sort of pre-planned, modular structure they usually have, and allowing them to be truly open. I think it serves the simulation use case really well.
Claire Zau: Yeah, yeah, I'm excited for it. I haven't personally played an open-ended video game yet, but I'm excited for when that becomes the norm.
Alex Sarlin: It's exciting.
It's also, I think, going to be really interesting for the creation of other kinds of media. I mean, it's mind-blowing. The creativity this is going to engender is just, we don't even know yet.
Claire Zau: The one devil's advocate point that I have seen floating around is, again, this broader philosophical question: if, for example, you and I watch the same Squid Game episode on Netflix, but then eventually we get separate endings or completely separate paths, because there are infinite permutations of possible outcomes for a TV show now with AI,
does that mean we lose a lot more of these shared experiences? And what does that mean for [00:43:00] culture? Is there a fine line between, you know, over-personalization and a lack of shared experiences and shared media? I don't have the answer, but it is interesting to think about where it will end up.
Alex Sarlin: Yeah, it's sort of the echo chamber, right? Or the filter bubble. The idea that if everybody has a personalized experience, can you even ever talk to anybody about anything? Every person has a different experience: they've read the same book and it comes out differently, watched the same show and it turned out differently.
I mean, it's pretty crazy. I think in many ways people would opt for not fully individualized experiences, but they may opt for customized ones that go towards their preferences, maybe the ones that everybody else on the same Reddit thread they love also likes. Anyway, we could talk about it for hours, but you're right: that polarization, that splintering, could have extremely weird consequences, just as it has with our politics, just as it has [00:44:00] with our media environment.
Yeah, there's definitely a dystopian version of deep personalization in all ways, including in this sort of immersive way. Claire, it is always so much fun to talk to you. You push my thinking so far beyond how I usually think about things; I'm on fire with ideas. We covered a lot of the big tech models.
Today, we didn't go directly into the ed tech connections as much as we sometimes do, but honestly, I think what these big companies and their AI models are doing is really going to be formative for the next generation of ed tech tools. Let me give you the final word on that.
You know, how fast do you think the ed tech world will be able to incorporate these cutting-edge AI systems, like video systems or gaming systems or voice?
Claire Zau: I think a lot of companies already are. You're seeing really cool advances in the AI avatar space with companies like Synthesia, HeyGen, and Colossyan.
We've recently invested in a company called Amigo that, [00:45:00] if you want to phrase it this way, clones your expertise: they allow you to amplify your expertise so that if you're a creator or expert, you can have these highly reasoning-oriented clones or extensions of yourself to help answer questions. So if I'm a course leader, I can have a ton of mini-mes that answer all the questions and almost act as TAs.
So it's really cool. I think a lot of the tech is here. What I will still always emphasize is that this is ultimately a technology, and the same product fundamentals apply whether or not you're building with AI. And where I would like to see the ed tech space use AI is with less emphasis on let's do a chatbot for the sake of a chatbot,
and actually thinking through, from first principles, where this technology can be applied. So actually watching a student engage with a tutor and realizing that having visual information is really critical to being able to diagnose what a student is struggling with. I think those are not [00:46:00] technical tweaks;
they're product tweaks, what you do with that technology to create a seamless user experience that delivers very quick time to value. I feel like that is already possible, and I'm excited to see so many companies that are already doing that. So if you're building in this space, I would love to chat, would love to iterate.
I would love people to challenge me on things I've said on this call or otherwise. But overall, it's just such an exciting time for people who are building at the application layer in education.
Alex Sarlin: Yeah, amazing. This is Claire Zau, partner and AI lead at GSV, and I am Alex Sarlin, co-founder of EdTech Insiders.
Do come to our Educational Game Design with AI webinar; it's just a few weeks from now. Look for it online and in the newsletter. It's going to be a blast. We're going to try to get somebody from Muse on it. We already have some Minecraft and Roblox and some cool people, but, um, we'll see if I can get somebody from either the Google or the Microsoft team doing these amazing models.
And yeah, and if it's going to happen in EdTech, you're going to hear about it here on EdTech [00:47:00] Insiders. Claire, thank you so much for being here. It's always a pleasure.
Claire Zau: Thank you so much, Alex. Thank you.
Alex Sarlin: Thanks for listening to this episode of EdTech Insiders. If you liked the podcast, remember to rate it and share it with others in the EdTech community.
For those who want even more EdTech Insiders, subscribe to the free EdTech Insiders newsletter on Substack.