Edtech Insiders
Week in Edtech 11/12/2025: Google DeepMind AI Forum Recap, Duolingo Crash, Parents Turn to Screen-Free EdTech, AI Companions in Schools, and More! Feat. Michelle Culver (The Rithm Project), Erin Mote (InnovateEDU), & Ben Caulfield (Eedi)
Join hosts Alex Sarlin and Ben Kornell as they unpack the breakthroughs and backlash following the Google DeepMind AI for Learning Forum in London—and what it means for the future of edtech.
✨ Episode Highlights:
[00:03:30] Google DeepMind’s AI for Learning Forum sets a new global tone for learning innovation
[00:06:58] Google’s “Learn Your Way” tool personalizes entire textbooks with AI
[00:08:12] AI video tools like Google Flow redefine classroom content creation
[00:13:40] Why this could be the moment for teachers to become AI media creators
[00:18:36] Risks of AI-generated video: deepfakes, disinformation, and youth impact
[00:22:19] Duolingo stock crashes over 40% amid investor fears of big tech competition
[00:23:52] Screen time backlash accelerates: parents turn to screen-free edtech
[00:26:14] Why physical math books and comic-style curricula are surging in demand
[00:27:35] A wave of screen-free edtech: from LeapFrog alumni to audio-first tools
Plus, special guests:
[00:28:51] Michelle Culver, Founder of The Rithm Project, and Erin Mote, CEO of InnovateEDU, on the psychological risks of AI companions, building trust in AI tools, and designing for pro-social relationships
[00:51:48] Ben Caulfield, CEO of Eedi, shares groundbreaking findings from their Google DeepMind study: AI tutors now match—and sometimes outperform—humans in math instruction, and how Eedi powers the future of scalable, safe AI tutoring.
😎 Stay updated with Edtech Insiders!
Follow us on our podcast, newsletter & LinkedIn here.
🎉 Presenting Sponsor/s:
Every year, K-12 districts and higher ed institutions spend over half a trillion dollars—but most sales teams miss the signals. Starbridge tracks early signs like board minutes, budget drafts, and strategic plans, then helps you turn them into personalized outreach—fast. Win the deal.
This season of Edtech Insiders is brought to you by Cooley LLP. Cooley is the go-to law firm for education and edtech innovators, offering industry-informed counsel across the 'pre-K to gray' spectrum. With a multidisciplinary approach and a powerful edtech ecosystem, Cooley helps shape the future of education.
Innovation in preK to gray learning is powered by exceptional people. For over 15 years, EdTech companies of all sizes and stages have trusted HireEducation to find the talent that drives impact. When specific skills and experiences are mission-critical, HireEducation is a partner that delivers. Offering permanent, fractional, and executive recruitment, HireEducation knows the go-to-market talent you need. Learn more at HireEdu.com.
As a tech-first company, Tuck Advisors has developed a suite of proprietary tools to serve its clients better. Tuck was the first firm in the world to launch a custom GPT around M&A. If you haven’t already, try our proprietary M&A Analyzer, which assesses fit between your company and a specific buyer. To explore this free tool and the rest of our technology, visit tuckadvisors.com.
[00:00:00] Alex Sarlin: It's important that we get to this moment and we don't sort of reify the status quo and pretend that everybody is Mr. Chips, or To Sir, with Love, or Dead Poets Society. I think we should not romanticize how well education has worked up to this moment, because it hasn't always worked that well, and certainly not for many, many people.

And I think we have a real chance to rethink it. And I think Google is trying to just let everybody who's thinking about what rethinking it looks like be in the same room.

[00:00:27] Ben Kornell: I've gotta think about a classic challenge, which is safeguarding, and how do we make sure things are appropriate and limited? And then I have to think about this idea of co-creation and who has the power to create. And I guess what gets me over the hump is that this breakthrough technology is gonna be very powerful for good in the hands of teachers who are trying to make concepts come to life for their kids. I get super excited about what we saw at the DeepMind event.
[00:01:05] Alex Sarlin: Welcome to EdTech Insiders, the top podcast covering the education technology industry, from funding rounds to impact to AI developments, across early childhood, K-12, higher ed, and work. You'll find it all here at EdTech Insiders.
[00:01:21] Ben Kornell: Remember to subscribe to the pod, check out our newsletter, and also our event calendar.
And to go deeper, check out EdTech Insiders Plus where you can get premium content access to our WhatsApp channel, early access to events and back channel insights from Alex and Ben. Hope you enjoyed today's pod.
Hello EdTech Insiders listeners. It's Ben and Alex again with another week in EdTech. Oh man. Are you over your jet lag, Alex? It was so insane in London last week. It's one of those times where you feel like you have a glimpse into the future of what AI is going to do, and instead of getting more answers, it raised more questions.

Fascinating, fascinating week. We'll catch you up. Alex, what's going on in your world? What's going on in the pod?
[00:02:12] Alex Sarlin: Yeah, I mean, it's such an exciting moment for the podcast. In the last few weeks, we're putting out a great interview with Rohan Mahimker, who's the CEO of Prodigy Education. They've done incredibly interesting work with this hybrid model of free to schools, paid for families, where everybody works to keep everything aligned.

Really interesting interview. We talked to Brandon Baste from Ed Connick, who is a longtime higher ed critic and observer, but somebody who is really innovative around apprenticeships. That's coming out just next week, and we just continue to talk to absolutely incredible people. On this podcast, we're talking to Ben Caulfield.

He is the CEO of Eedi. Eedi just put out a paper with Google DeepMind talking about AI tutoring and how human-in-the-loop tutoring is actually outperforming, or at least performing as well as, human tutors, in a math context in this particular case. That's a big deal for the space. It's just great to be back.
Frankly, I'm a little travel fatigued, but it's been so much fun. Ben, let's kick off by talking a little bit about this Google DeepMind AI for Learning Forum event. It was really exciting. It felt like a real honor to be there with some incredible people. Give us some of your takeaways, for the people who were there and for the people listening who didn't get the chance to be.

It was in London this year, big trip. Tell us about some of your takeaways, and I'll tell you mine.
[00:03:30] Ben Kornell: Yeah, so last year we were in Mountain View for an AI for Learning event at Google, and I think it's meaningful that this event was in London, where DeepMind is headquartered, signaling a shift in Google's ultimate center of gravity around learning and LearnLM to DeepMind.

What's exciting about that is, we've got a lot of channels to distribute Google out, and that's run largely by the US team, Google Classroom being the predominant one in K-12. But you can also imagine YouTube as distribution. You can imagine all the Google Docs and Google Workspace infrastructure as learning distribution.

But from a technical standpoint, DeepMind is the mothership of how do we get large language models to actually structure for learning. And that was really exciting and apparent. They had James Manyika speaking. They had Demis Hassabis come and speak. They were bringing out the big guns, not just to talk about the softer parts of system change, but to talk about the deep technical parts of how do you get a large language model to actually anticipate a learner's needs and results, and not just give away all the answers and become a question-answering machine.
I'd say there were three speakers that stood out most to me. One was Tony Blair, the former Prime Minister. His main headline was, there's a backlash coming. There's an AI backlash, and it comes from a bunch of different sectors, but largely from the threat of job loss. And we should be prepared in education, because if AI is coming for jobs in other sectors, it will make educators and parents sour on it, and it could feel very threatening to educators.

So it was a great point. And he was really talking about attending to the political reality of AI and how it's handled. My two thoughts on that one are: if we make it about AI, it's a political loser. If we make it just about tooling that works for people, and instead of talking about AI companies and non-AI companies, they're all just companies, then I think we can get to a place where things are far more successful.

The second was the youth panel. My son Nico was on it, so that was really exciting for me. But just hearing from the AI-native youth, whose message was: we're all using it, we don't remember a world pre-AI, this is what we do, and we need your help to get the adults on board.
That was the main takeaway I took from that. And then the third was the round table discussions, where we had prompted discussions, and at my table there were seven different countries represented. And yet so many of the challenges are universal. And I think when you go to a global event like this, you think, oh, I'm gonna find out what's different in these different contexts.

The only major difference I found was around accessibility: it is still very hard to access high-value AI models in low-income countries. No big surprise there. But this idea of how do we create systems change that adopts AI into learning in the most constructive way? That was, bar none, universal.
What about you? What were some of your takeaways?
[00:06:58] Alex Sarlin: Yeah, those were fantastic, by the way. I mean, there were a few different takeaways. So one takeaway for me was from the demos. Google has been moving into AI and education in a lot of different ways for quite a while now. They've done Guided Learning, they've done Gemini throughout the Google suite.

And they've done this interesting tool that stood out to me, one they just put out a few weeks ago, called Learn Your Way. I found that just really exciting, because it's in Google Labs, it's experimental, but it's really about personalizing textbooks. You can basically say, this is what I need to learn, and let me tell you about my interests, and let me tell you about my grade level.

And let me tell you about a few other customizable pieces. And it'll actually restructure basically the entire textbook, with graphics, with text, with interactives, all to meet the needs of the learner. And that's just, I think, symbolically a really exciting direction. I feel like what's been really intriguing is that Google has all of these tools.

They have NotebookLM, of course, but the pieces don't weave together yet. And I think they knew that. And I think that was really interesting. It was very clear from talking to the Google team, from interviewing a number of the Google leads, that they're really excited about each of the pieces: about Classroom, about Chromebooks, about Gemini, about Guided Learning, about LearnLM. But it's not a cohesive strategy.
I think it's good that they know that, because they are now working very assiduously on weaving it together. And I think weaving it together is gonna be incredibly important for Google. But I think it's also gonna be very important for our AI and education story, because as of right now, that backlash Tony Blair talked about is real.

You see it all the time, and we're gonna talk today about a couple of New York Times articles. People are really feeling the stress about AI job displacement, about AI screen time, about students' cognitive offloading, about cheating. There is becoming a lot of worry about AI in education. And I say this all the time, but there is still not a cohesive story about why it's working, why it's exciting, and I feel like this whole event was working to put the pieces together to really build that narrative.
The student panel was amazing. I thought the video was incredible. I mean, what's coming, and what's about to come through Google's video tools, is absolutely mind-blowing. It's really very nuts how video is going to start working, how we're gonna be able to make video about anything, editable in any way, with any kind of character.

It's pretty crazy. They did a demo of what's going on with Google Veo and Google Flow and how they're putting the pieces together, and its relationship to education is still a little nebulous, except that we're just going to be in this world where video can do anything you can imagine, like literally imagine. And it blew my mind.
The video demo was actually the highlight of the whole thing for me. The other thing that was really interesting: there were lots of discussions about the school systems, and I think to your point, Ben, everybody is struggling with the same things. During the discussion I was at a table with the head of the IB, which has unbelievable reach all over the world and an incredible model for what curriculum and schooling could look like; with people from extremely high-end private schools; and with the superintendent of the Miami-Dade school district, which is incredibly large and very diverse, and certainly not uniformly well-heeled. There were people representing low-income countries across the board, without that much money to spend on schools at all. But everybody's wrestling with the same thing.

How might we use this moment, the technology, the AI, and even the dissatisfaction with the education status quo that feels like it's getting to an all-time high, to transform education in a way that we're really proud of as humanity, as civilization and society?

It was really inspiring to see everybody grappling very earnestly. There was a lot of humility in this event. I don't think people were there to showcase their solutions. I think they were there with the questions, trying to figure out what's the best way for all of these people with incredible reach and incredible resources, there were foundations there too, to put it together in a way that is really actually gonna move the needle for education. So I really felt that dissatisfaction with the status quo. There were some panels of professors talking about the history of the education system and how we really shouldn't take a whole lot of things in education for granted.
Even the idea that universal education was meant to be a positive thing: universal education was in many ways a way to keep people under wraps, which a lot of education reformers talk about. But it's important that we get to this moment and we don't sort of reify the status quo and pretend that everybody is Mr. Chips, or To Sir, with Love, or Dead Poets Society. I think we should not romanticize how well education has worked up to this moment, because it hasn't always worked that well, and certainly not for many, many people. And I think we have a real chance to rethink it. And I think Google is trying to just let everybody who's thinking about what rethinking it looks like be in the same room.

There were lots of educators there, and really, really interesting conversations, and I think people are just trying to find good answers. So I respected the humility and the openness with which everybody came to the conversation. And I feel like there's a lot of opportunity for crossover: between people doing innovative school models, between private schools using AI in interesting ways, between big districts, between micro school programs.

It's just an exciting moment if we keep the conversation open and going, and don't let anybody try to put their stamp on any one solution and run away with it. I think that's really important. But it was a really exciting event, and I came back very inspired about all the tech solutions and all the possibilities, but also about the philosophy that I think the entire edtech and education ecosystem is bringing to this.

We're saying, this is an amazing moment. Let's really try to get it right, and let's not let it become another fad or another thing that has unintended consequences. That's my rant, but I came away feeling very inspired.
[00:12:31] Ben Kornell: Yeah. One, I just love your rundown and there's so much there to pull apart.
And you know, part of what was special is it was a more intimate event. It wasn't thousands and thousands of people. You actually had real time for real discussions with people who've been thinking about these elements deeply. I'm curious on the video side, given what you saw. It really is rare in this work that you get a full look around the corner at what's going to come.

You and I have seen early examples of voice technology and thought, oh, omni-modal is coming. And when we interviewed Sam Altman, the parts that we had to redact from the interview were all about his own video released through Sora. But now that you've seen where the future lies with video, how does it change education and edtech?

What do you think are the use cases that now come into play? And you know, at EdTech Insiders we're always about, let's talk about the use cases first and then the technology. Where do you see this changing the game?
[00:13:40] Alex Sarlin: I mean, I think we have seen this rise of content creation, often in the form of video, over the last 10, 15 years, mostly in the context of social media, where people are trying to get likes and get views and get followers and all of those things. And I think we've sort of seen video through TikTok and Instagram become a little bit of a scary, weird tool, something that anybody can use, which is democratized, but they're using it in ways that are not particularly pro-social; they're self-involved, or just to get attention.

What these new tools do is they not only democratize access. I mean, Google Flow is not an expensive tool, and if you look at all the other video tools that are out there, Kling and Midjourney and Runway and the rest, none of them are remotely expensive compared to anything about where video was in the past.

Creating professional video used to be a very expensive thing, with professional cameras, with professional editing, with color correction. This is gonna bring the price down to a hundredth of the cost of creating video. I mean, creating professional movie-quality, television-quality, commercial-quality video. We can all see that coming.
It also allows anybody to literally invent actors. They can use themselves and turn themselves into actors. They can use characters, they can use historical figures. You can make a person out of whole cloth. So it takes the entire 'the most attractive people are the people who are gonna get the most views' dynamic of TikTok completely out of it. It just democratizes the ability for any person, and this is the education part, including educators, including edtech companies, including students themselves, to create video that is absolutely the most professional, the most beautiful, compelling, and instructive video you can imagine.

It's basically about to be in everybody's hands. You'll be able to create video from scratch. And we've been talking on the podcast for years now about how there's gonna be a 14-year-old from Baltimore who creates the next Academy Award-winning movie. I believe that more than ever now, after seeing these tools, because they were showing the ability to edit video by drawing onto the video and removing things.

Just circling something, saying, I don't want that, and it disappears. Saying, actually, there should be a person here, and they appear. It should be a pirate; make them come in screen-left, drawing it, and they come in. They were able to take real video and then add AI effects to it in this incredibly seamless way.

They had this Video Booth tool where you could take a picture of yourself and then say, okay, now I want birds to come in and fly into the picture. And it not only brings birds in like a Snapchat filter would; it does it literally in the context of where you are, what angle the video is, what you look like.

You can have your own hand come up and show something, and it looks like your hand, even though the hand wasn't in the picture. You can edit real video after it's taken. I mean, it's almost hard to describe how transformative this is going to be for the video industry, for any industry. But what I'm most excited about, and it sounds silly, is that educators, everybody who has played with the flipped classroom over the last 10 years, every educator who has tried to get onto Instagram and make a video, or who has drawn something Khan Academy-style and put it on YouTube, those creators can now do anything they can imagine, just like anybody else will be able to. And you can do that with a pro-social, pro-education lens.
I just think it's gonna change how we learn. There are so many functionally illiterate students in the world, even in the US, but especially around the world. There are so many people who do not like learning through many common learning media, yet they almost all do video, and they almost all do gaming.

When those media can be co-opted into an education viewpoint, yes, it'll be competitive, 'cause people will be creating unbelievable video in every format, but it just raises the bar for what learning media can and will look like. If I'm a teacher, an educator listening to this podcast, I really would go on and learn Flow.

I would play with these different video tools, because you can do educational video the likes of which we've never seen in the history of the world. At Coursera, people would spend so much money doing video that was, frankly, a professor standing in front of a room or writing on a board.

And it would cost a ton just to put in basic effects. All of that, the price is gonna go to virtually zero. So let's just take advantage of it from an educational standpoint. I know I'm still a little hand-wavy here, but the potential is so crazy. You're gonna have individual educators, a sixth grade teacher in, you know, Mississippi, who just loves this stuff, creating YouTube channels and videos that are just mind-blowing, things you've just never seen before, and getting millions of people to watch. It's going to happen. If you're an educator, be that person. Jump in; you'll be at the cutting edge if you do it right now.
[00:18:36] Ben Kornell: I love the optimism. You know, as I hear all of this, it's hard not to wear my Common Sense Media hat and think about all the bad things some people could do.

Let's say you drop infinite free video on a middle school. What do kids do with it? I think it's like 95% positive and 5% negative. But man, that 5% negative could be really, really destructive. So this is probably where it's like, old man, get off my lawn time.

[00:19:09] Alex Sarlin: No, I'm in the minority here by being so positive at this point.

[00:19:13] Ben Kornell: Of course. I think about my parents; they're not gonna have any ability to tell what's real and what's not real. That's true. But when I say my parents, I mean only partially myself too. But I'm thinking about the degree to which seeing is believing is now a human deficit, and this is going to be really challenging.
[00:19:36] Alex Sarlin: That's true. And I'm not trying to dismiss it, but I just think if we come at these questions purely from what could go wrong, we're going to limit our imagination. And honestly, the imagination is the key here. I mean, if you're a physics teacher right now and you're like, if I could literally show my students anything in the universe, in any context, in any style, what would I do?

Would I show them quantum leaps? Would I show them black holes? Would I show them Newton's laws in the context of sledding? Whatever you want. It's all just suddenly possible. We've never had that before. So yes, there will be deepfakes, and there will be lots of technology to tell deepfakes from non-deepfakes, even though that doesn't fully exist yet.

Google does have stamps, you know, meta tags for that. But it'll come. It's just, before we think about the fears, let's take a little while to think about the potential, because it is so crazy. It is so massive. And look, if we had gone back to the first day of YouTube, let's think about this, Ben. First day of YouTube, they come out with YouTube and say, by the way, people can put out videos online now.
And you know, streaming speeds are fast enough that you can upload a video and other people can watch it. What's gonna happen with that? You probably would not guess that a few years from now there'd be more video uploaded every minute than, you know, the Library of Congress holds. And I think that's where we're at now.

I don't think you'd necessarily guess that there'd be people whose entire jobs would be YouTube creators, lots of them. I don't think you'd guess that there'd be channels from every major institution and museum and school in the world. There's a lot of things we wouldn't have anticipated that have been good about YouTube.

And yes, there have been lots of bad things. There's been addiction, there's been people watching Twitch gaming videos all day. There have been lots of problems, but there's been some amazing stuff too, and let's not forget about it.
[00:21:19] Ben Kornell: I guess the question I have is, if I'm sitting here as an edtech entrepreneur, especially if I have under-18s using my product: I've gotta think about our classic challenges, like engagement, and video is a great way to engage. I've gotta think about a classic challenge, which is safeguarding, and how do we make sure things are appropriate and limited? And then I have to think about this idea of co-creation and who has the power to create. And I guess what gets me over the hump is that this breakthrough technology is gonna be very powerful for good in the hands of teachers who are trying to make concepts come to life for their kids. I get super excited about what we saw at the DeepMind event.

We should probably spend a little bit of time just going around the world on some headlines. Yeah, let's do it. What are two or three headlines around the world that popped up for you?
[00:22:19] Alex Sarlin: So, I mean, one that I think we should not ignore, it's a little scary, but Duolingo has seen a major stock correction over the last month or so. A month ago, the stock price of Duolingo was $330 a share. Today it is $178 a share. That is a loss of more than 40% of the value, in direct response, I think, to investor fears about all the potential competition coming from OpenAI, from Google. You know, Google Translate has done something similar; we've covered that slightly before, but it continues to be an issue.
We've covered that slightly before, but it continues to be to be an issue. I think that, you know, obviously speaks more broadly to the idea that big tech can do something as a side project. Some of the things that Google is doing are a good example and people will say, well, that totally undermines an entire branch of the ed tech industry, and they get punished for it.
So that's a big one. We saw OpenAI offer GPT plus access to US service members and veteran. I thought that was, it's not an education directly story, but it stood out to me because it's something that is a massive expansion and you can just see more and more of these humongous groups of people being introduced to ai.
This was an interesting one: we saw an article about a European edtech funding revival, 'cause there have been a lot of big rounds coming in Europe. And then, I'd love to hear your take on any of these, but there were a few big articles, especially in the New York Times, surveys of teachers, and a big article from Jean Twenge from San Diego State, basically saying, hey, post-pandemic, kids have been getting a lot more screens and a lot more screen time in school, and this is a problem.

This is very scary. A lot of people are starting to have that narrative, which obviously is not great for the edtech world. Ben, what do you think about any of those?
[00:23:52] Ben Kornell: Yeah, I mean, on the Duolingo one, I think people need to understand that how you get valued as a company is directly correlated to public market valuations and anything that's known around exits with private equity and so on.

And so anytime there's a public market decline in one of the bigger edtech companies, it's a hit. Duolingo is probably even more painful because it was the one company that had achieved some escape velocity from the edtech discount. And so now to have it back down... and by the way, Duolingo has had an incredible run and is an incredibly accomplished company, and I don't think this valuation reflects a sense that they're going away forever.

But the idea that AI is going to eat or replace total addressable market, that is what's driving this. If you wanna learn a language, do you go to Duolingo or do you use ChatGPT? These are the kinds of questions. On screen time, I think there's a larger question around the backlash that is here or that is coming: is the backlash an AI backlash, is the backlash a tech backlash?
But I'm just seeing so many people who are going old school. At Art of Problem Solving, we had one of our best years ever in terms of selling physical books. You know, we have these comic books for elementary kids to learn math. They are flying off the shelves, and it's nothing that we're really doing differently.

It's just that people don't want screen time to learn math. They want books. So I think there's a realization that there's an in-person premium and there's a physical-versus-digital premium. And as you balance out your strategy, you probably need to have a digital-and-physical strategy in education, where it's not all one or the other. Popping a kid in front of an app is becoming less desirable than it was before, when the thought was: physical materials are linear and digital materials are adaptive. Now there's a feeling that physical materials have a lot of engagement positives.
[00:26:14] Alex Sarlin: No question, that physical premium is a really interesting thing. You also see some edtech companies going screen-free, right? We talked to the head of Kibeam, who is a LeapFrog veteran, and they're doing screen-free edtech. Or you see Yoto or Tonies, you know, audio-only devices. You're starting to see edtech that is saying, hey, you wanna get your kid off a screen?

Come to us. We actually are doing screen-free edtech. And I think it's probably a pretty good moment for them. There's also probably a connection here: this Times article says that when they surveyed students before the pandemic, three out of 10 said their school gave each student a one-to-one device; when they surveyed them this year, it was eight out of 10.

So, you know, that's anecdotal, and obviously we can get the real stats on this, but basically there's been a humongous increase in the number of devices, in the amount of screen time, in the number of students who are required to use screens to do or submit their homework. And I think that's creating a backlash at home, and for the elite, saying, well, if school is using screens all the time, then let's do something different.

We'll buy comic books that teach math, or we'll get a tutor, or we'll ban tech at home, or do Montessori. There's a little bit of a cyclic nature, I think, to this as well. But it's a crazy moment. Let's go to our conversation, because it's a really big one today.
We talked to Ben Caulfield from Eedi, who just put out an incredibly important paper in collaboration with Google DeepMind, basically about what AI tutoring might look like.
[00:27:35] Ben Kornell: Yeah. Let's head over to that interview. And thank you all for joining Week in EdTech. If it happens in edtech, you'll hear about it here on Week in EdTech.
[00:27:43] Alex Sarlin: We have two very, very special guests this week on the Week in EdTech from EdTech Insiders: Michelle Culver and Erin Mote, two superstars of the education scene. Let me read a little bit about them, and then we'll get into this really interesting conversation. Michelle Culver is the founder of The Rithm Project, empowering young people to rebuild human connection in the age of AI. A former teacher in Compton and Teach For America leader, she has driven innovation at the intersection of equity and education.

She also advises many AI and edtech organizations and serves on multiple boards. Erin Mote, who has been on the podcast a couple of times, is CEO and founder of InnovateEDU, where she leads systems change through uncommon alliances in special education, talent, AI, and data. She's an enterprise architect, and she and her team created Cortex, a personalized learning platform, and Landing Zone, an innovative infrastructure-as-a-service data product.

She's also a frequent contributor to many things around AI and education policy, and that's some of what we're gonna be talking about today. Michelle Culver and Erin Mote, welcome to EdTech Insiders.
[00:28:51] Erin Mote: Hi Alex. Thanks for having us. It's great to be here.
[00:28:55] Alex Sarlin: It's great to have you both here. So just to kick it off, one of the things that is top of mind for all of us these days at EdTech Insiders: we put out a newsletter a few weeks ago about characters in AI and how AI could draw some of the ideas from the sort of companionship space.

And both of you have been paying an enormous amount of attention to this companionship space. And you both raised a flag and said, hold on, this may not be the safest, smartest way to go for edtech. We've gotta be really thoughtful and careful. So as we kick it off, I'd like to hear both of you talk about the companionship space for AI and the idea of AI characters that might be part of students' lives, and how there's a lot of risk there.

Erin, can I start with you? Tell us a little bit about how you're thinking about AI companions and characters.
[00:29:41] Erin Mote: Yeah. Well, I think it's how we're thinking about AI use cases in education as a whole, and that's really through the work that we do at the EdSafe AI Alliance, where we use the SAFE framework to really frame the discussion around the appropriate use of AI in education.

And that starts with the SAFE framework: safety, accountability, fairness and transparency, and then efficacy of these tools. We shouldn't always be using AI for everything, and so I want us not to lose the larger framework as we're thinking about the specific applications. But I think the thing that is really important, as we think about AI as a tool for personalization, of which there is enormous opportunity, I wanna say that this is a story of promise and peril.

And so the promise is obviously that personalized content drives engagement, that it can deepen students' engagement with passion-driven learning. Project-based learning can certainly get them more excited about what they're learning in school. But the reality is that we don't have sufficient guardrails and guidelines in place right now for that level of personalization in education.

And parents know it, and the American public knows it. So when you're looking at polling that's going on right now, parents all over the country are really not as enthusiastic about this technology as maybe some folks who are in the technology space. In fact, parents are more fearful than they are enthusiastic about this technology, and I think that's wisdom.

I think parents are instinctually understanding that we need more transparency around these tools, and that the stories right now of AI companions are stories about keeping kids locked in platforms for multiple hours, companies really prioritizing market capture, user capture, really thinking about driving solely user engagement.

And there we really do break the most fundamental rule of what powers technology progress in education, which is trust. And I'm a big fan of saying that innovation really moves at the speed of trust. And so while there is enormous potential for personalization with these tools, and an enormous potential to really think about how we meet all students where they are, we're not at a place right now where we have the guardrails around safety, transparency, and above all efficacy.

And Michelle has been doing some incredible work around this space.
[00:32:20] Alex Sarlin: A hundred percent. So Michelle, The Rithm Project is all about AI and its relationship to the healthy social-emotional development of young people, and the perils and promises and risks of AI companions. Tell us your take on it. You've been doing a lot of research, you've been ingesting a lot of research.

What do you think we should be doing at this moment in relation to AI companions?
[00:32:44] Michelle Culver: Well, I wanna begin just by acknowledging that for young people right now, even before we get to the arrival of AI companions, they are telling us they're lonelier and more disconnected than ever before. When you're looking at anxiety, depression, suicide rates, the number of minutes per week that young people spend in person with each other, all the data points are going in the wrong direction.

And then with the arrival of generative AI, it just gets a lot more complicated. And by complicated, to be clear, for us it's not inherently good or bad. No technology is inherently good or bad, but it is definitively not neutral. And so one of the things that we see with AI companions is that this marks a moment where we're moving from leveraging technology, like we're doing right now, to connect with one another, to actually connecting with the technology itself. There's not a human on the other end; there's a bot. And that is a radical disruption in the way in which humans have related for millennia. So let's not take that moment of disruption for granted.

And when we think about what makes bots compelling, young people tell us: they're judgment-free. You can ask advice whenever you want. You can use them any time of the day. They have zero needs of their own. They can talk to you endlessly. They can even talk and look and sound exactly as you design them to.

And they most often tell you how great you are and how right your ideas are. That's compelling in contrast to human relationships, which are inherently messy, where people do need a bi-directional relationship, where people do have needs of their own and differences in point of view, and where it's vulnerable: you risk being rejected, or having your ideas or needs not met.

And so part of what we're seeing is that the way in which young people practice engaging with bots will also influence the way in which they then show up with other humans, at a time when human relationships are already really vulnerable. So while we do see potential, and young people tell us all the time that you can use bots to ask for advice, to coach you on how to have a difficult conversation, to become less triggered, and that allows you to have healthier human relationships, it also poses real risks to the way in which we practice being with one another as humans.
[00:35:00] Alex Sarlin: Yeah, it's a lot to chew on. We've all worked in collaboration with a lot of different interesting organizations, including one that all three of us have worked with that thinks a lot about education equity. And one thing I'm curious about, again from both of you, but I'd love to start with you, Erin: you've highlighted both the promise and peril of character-based AI tools, and that's both commercial character-based AI tools and potentially educational ones, which are much newer.

Think about the huge spectrum of different students in a public education system. If we get to a place where every student has access to the type of pseudo-relationship that Michelle is mentioning, where they compliment you and you can sort of design how the character works and what it looks like, how might it affect different students? You have people who have never been connecting with technology in this one-on-one way having this super-personalized, super-customized, potentially very sycophantic experience. How can we think about this as we as an edtech sector roll out different products, so we don't inadvertently create another social media-type debacle, where we've inserted something into the conversation that is driving students into a really strange mental place?

How do you think about it?
[00:36:16] Erin Mote: Well, first, I mean, I think we need to work harder on this than we did on social media, to be honest. I think as an education sector, we abdicated responsibility on social media, and we left parents and young people in a space where we did not ensure that there wasn't just a mad dash for attention, optimizing for addiction and endless scrolling.

That is the situation we're in with social media, and we are paying the price right now. Michelle gave some startling statistics around disconnection. Recent medical studies from JAMA show that that disconnection has an impact on student outcomes as well: on the ability to read, the ability to discern, the ability to focus on core tasks.

So first we have to make up ground. We're already at a deficit when it comes to trust with parents because of the role that social media has played in education, and I think this is even more urgent when it comes to AI companions. I don't think we're in that space right now. Michelle and I are working on some shared work related to AI chatbots and education through the EdSafe AI Alliance, on a task team that brings together researchers and global experts and industry, and I think all of us acknowledge that there is a clear need for intervention in the education space.

Also because young people don't have universal AI literacy and digital literacy to be able to get to discernment. And so from an equity perspective, let's first acknowledge that, and second, know that we need to build those skills. But the science also tells us that it's really different for a young person like Robert, my son, who's 10, or a child who's 15 or even 20, to engage in conversation with a chatbot. That's because their brain is not yet fully developed. Here's what we know about brain science: the reasoning center of the brain, the prefrontal cortex, is still developing up to the age of 25, and it's even less well developed in boys than it is in girls.

That's what the science tells us; it just takes longer. And so the way you and I might interact with a chatbot, the way somebody who is an adult with that reasoning center fully developed might interact with a chatbot, is not how a high schooler is interacting with a chatbot. And we need to be really clear that the early data is that 20% of young people have either been involved in an intimate relationship with a chatbot or know somebody who has; that's data just released this month from CDT. Being involved in that relationship can take all types of forms, and it happens under this guise of connection. And so how do we avoid thinking, okay, we want personalization, so that means we have to give every child a chatbot, while losing focus on the work that Michelle is doing, which is re-anchoring in social and emotional relationships and connections to humans?
And so I think we need to design for a different future, and really work alongside industry to build a new generation of tools that acknowledges where we have been in the past, but also really takes advantage of that future promise of personalization. It means prioritizing some of the things that Michelle has said are important: human connection at the center.

And from a technology perspective, and I know a lot of the audience knows me as an enterprise architect: prioritizing model welfare, deliberate design decisions that allow the model to offload, put a young person in a pro-social environment, or connect with a human when they might show a sign of distress or anchor to a chatbot for too long.
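To illustrate the kind of deliberate design decision Erin is describing, here is a minimal sketch of a session guard that routes a student toward a human or a pro-social nudge. Everything here is hypothetical: the names, the threshold, and especially the keyword-based distress check, which a real system would replace with a vetted safety classifier and human review.

```python
from dataclasses import dataclass

# Hypothetical threshold: nudge after 30 minutes of continuous chat.
MAX_SESSION_MINUTES = 30

@dataclass
class SessionState:
    student_id: str
    minutes_active: float
    last_message: str

def looks_distressed(message: str) -> bool:
    """Placeholder distress signal. A real deployment would use a
    vetted safety classifier plus human review, not keyword matching."""
    keywords = ("hopeless", "hurt myself", "no one cares")
    return any(k in message.lower() for k in keywords)

def route_turn(state: SessionState) -> str:
    """Decide whether the bot keeps talking, hands off to a human,
    or nudges the student back toward in-person connection."""
    if looks_distressed(state.last_message):
        return "escalate_to_human"      # connect with a trusted adult now
    if state.minutes_active > MAX_SESSION_MINUTES:
        return "nudge_prosocial"        # suggest re-engaging with peers
    return "continue_chat"

# Example: a long session with a worrying message escalates.
print(route_turn(SessionState("s1", 42.0, "I feel hopeless about school")))
```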
[00:40:13] Alex Sarlin: Michelle, at The Rithm Project, you focus on rebuilding connection. As the social media age starts to go away, hopefully, and as AI starts to become more and more embedded in schools, what steps should communities take to ensure that relationships aren't lost, and that even if we do use AI a lot, we're not using it in this individualistic, isolating way?
[00:40:34] Michelle Culver: Yeah. Beautiful. Well, a couple things. Number one, we recently interviewed 27 young people about their relationships with each other and with AI. And one of the things we asked them to do was to chart, over the course of the day, when they were feeling most connected and least connected, and to tell us the stories of those highs and lows.

And we saw a pattern emerge: a sort of uptick in the morning, then a long flat low, a quick high, another long flat low, and then an increase as the day went on. What is that long flat low? School. The reason I'm saying this is the irony: we as adults are very preoccupied with what young people are doing when they're alone in their bedrooms behind the screen, and they're telling us that they actually feel more connected to others during that period of time than they do when they are in person with their peers and with an adult who is responsible for and cares about their development.

So we are just missing one of the most important levers, which is making schools a place of belonging again, of thriving human connection during the school day. That in and of itself is one of the most powerful things we can do to ensure that this moment of AI is applied in a context where young people know how to use it well, because they already have practice with healthy, thriving human relationships, which we're not giving them much practice for. So then, on top of that, I would say: okay, we know that this stuff is coming, and young people are experimenting.
They're always the first ones to play with tools. Most of what they're engaging with is not actually edtech; it's what they're getting on the consumer market. And so we need to actually open up the conversation with them about what they're experiencing with technology in and outside of the school day.

And we see already that 72% of young people are using AI for relational purposes. And yet they tell us, in that same set of interviews, we asked them at the end, what's the thing that adults are missing right now that we should know? And almost every one of them said some version of: we are using this, and we are not talking to you about it, because we can feel your judgment and your shame and your fear. You're gonna tell us not to use it, that it's bad. And so we're just not talking to you about it. And so this disconnect between the adult conversation and the youth conversation, especially about how to use AI relationally, feels so important.

And so one thing that any parent or educator can do is start to open up conversations in a nonjudgmental way, and not assume that as educators we even have all the knowledge we need to teach them what's right and wrong, but instead open up a bi-directional conversation about AI literacy. And by that I mean not just the technical aspects of using AI, but the relational implications as a part of AI literacy, and to explore that in a nonjudgmental way.
So I think there's a lot we can do just as educators immediately. And then, as edtech builders and developers, there are choices you make that can make this technology more pro-social or less pro-social, depending on those design choices. One example I think is particularly poignant is the use of voice.

I personally love voice mode. I can go on a walk and talk to ChatGPT, and it's so productive. But what we see in an early study from MIT and OpenAI is that voice mode does result in more time spent on ChatGPT per day. The problem is that the study also showed that more time spent engaging with AI directly decreased the amount of time people spent with humans.

So in trying to address loneliness, we could unintentionally increase isolation. And this is a huge responsibility for edtech developers and product builders right now, who have an opportunity to think about the unintended implications of each design choice we're making, and how we nudge young people to consistently go back and keep re-engaging with other humans, over and over and over again.
[00:44:35] Alex Sarlin: Yeah. I'm really intrigued by the potential for pro-social AI, for AI that actually increases human connections, brings people together, moderates, facilitates, rather than being that one-on-one experience, whether in a voice or a text format. A few people are starting to work on that, and it's really intriguing.

We're down to our last question. I know we're almost out of time here, so it's a little bit of a lightning round for each of you. Erin, let's start with you. Looking ahead, do you think the future of AI in education will be defined more by its ability to personalize learning? Do we feel like we're finally on the verge of that personal, customized learning? Or do you think it'll actually be defined by that pro-social AI, its influence on human connection?

I feel like we're at a crossroads. How do we balance those two forces?
[00:45:20] Erin Mote: Let's put them on the same road. I think this is a false dichotomy. I actually think you can do personalization and human relationships, and you can do that in a way that drives inquiry, drives dialogue and discourse, has young people debate their views live in class.

So that flat line that Michelle talked about peaks, because young people feel seen, loved, heard, acknowledged. And frankly, I don't think this is just something for a company to solve, or for regulators to fix after the fact. If we want that blended path, a path that is both personalization and human relationships, and that really renews that passion for education, we're gonna have to work really proactively and collaboratively to get there.

And we're gonna need guardrails in place before we deploy, again for trust building. That means transparency about what's happening in these models; designing for pro-social forces rather than for keeping a kid locked into a platform; and then protection for our most vulnerable. We know that some students, particularly students with disabilities or those who might have cognitive processing disorders, actually sometimes have a hard time distinguishing between a chatbot and a human, and that we have to over-index to protect our most vulnerable students, over-index to protect those students who might not have access to AI literacy. That's work Michelle and I are gonna be working on together, I hope, for many years to come. And we look forward to doing that really in partnership with industry.

This is something that, as a sector, we have to decide we wanna shape for good together, rather than decide that it's gonna be an either-or.
[00:47:06] Alex Sarlin: Yeah. Great points. And Michelle, how about you? Do you see a conflict between personalized and pro-social, or do you feel like we can have both?

[00:47:15] Michelle Culver: I agree with Erin completely. So well said. We have to just reject that false dichotomy. And I think in order to do this, part of what Erin is modeling so brilliantly is that we need the systems-level work, which is what she's leading, so thoughtfully and proactively. And we also simultaneously need to teach both young people and the adults who support them how to become critical consumers and producers of AI themselves.

One resource that we can make available is our five principles for pro-social AI. It helps you know what to look for, so that you have a good sense of whether you should proceed with caution or with confidence when using any AI tool. And I think if we, as both consumers and builders of AI, intentionally make pro-social relationships part of our North Star alongside learning, then we could really see a future where AI works to strengthen, versus erode, human relationships. But that choice is not a foregone conclusion, and it does require a lot of collective attention right now.
[00:48:18] Alex Sarlin: Phenomenal. Michelle Culver is the founder of The Rithm Project, that's R-I-T-H-M, empowering young people to rebuild human connection in the age of AI. And Erin Mote is CEO and founder of InnovateEDU, where she leads systems change through uncommon alliances in all sorts of areas.

Amazing. Thank you both so much for being here with us on EdTech Insiders. This is, I think, the beginning of a long and really interesting conversation.
[00:48:42] Erin Mote: Thanks so much, Alex.
[00:48:43] Ben Kornell: So, EdTech Insiders, we have a very exciting guest this week, someone who we got to hang out with at the Google AI for Learning event.

Ben Caulfield, CEO of Eedi. Welcome, Ben. Thanks very much, Ben. Thanks very much, Alex. Great to be here. So Ben has been a longtime friend, and we've been collaborating around Eedi for quite some time. So full disclosure, I'm all in on Eedi. But for those who don't know very much about Eedi, Ben, tell our listeners a little bit about how Eedi got started.
[00:49:14] Ben Caulfield: We've existed for about 10 years in the formative assessment space, and basically teachers have been using our content over that time, so there is a massive, huge amount of data. But the thing that makes us different is the use of a diagnostic question. The diagnostic question, in our instance, is multiple choice.

There's one correct answer, and then there are three incorrect answers. Those three incorrect answers are designed to uncover a particular misconception, so we can recognize not just that a gap in knowledge exists, but why a gap in knowledge exists. And it's this data that's led us to where we are today, which is that we understand, better than most companies, perhaps not all companies, but most companies, exactly why a student is struggling. That means we're better placed to provide that sort of idea of what an intervention might look like.
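For readers who think in code, the diagnostic-question format Ben describes maps onto a simple data structure: one correct option plus three distractors, each tagged with the misconception it is designed to reveal. A rough sketch; the field names and the example item are ours, not Eedi's:

```python
from dataclasses import dataclass

@dataclass
class Distractor:
    text: str
    misconception: str  # why a student might choose this wrong answer

@dataclass
class DiagnosticQuestion:
    prompt: str
    correct: str
    distractors: list[Distractor]  # exactly three, per the format Ben describes

    def diagnose(self, chosen: str) -> str | None:
        """Return the misconception revealed by a wrong answer,
        or None if the student answered correctly."""
        if chosen == self.correct:
            return None
        for d in self.distractors:
            if chosen == d.text:
                return d.misconception
        return "unrecognized answer"

# Hypothetical fraction-addition item: each distractor encodes a known error.
q = DiagnosticQuestion(
    prompt="1/2 + 1/3 = ?",
    correct="5/6",
    distractors=[
        Distractor("2/5", "adds numerators and denominators separately"),
        Distractor("2/6", "converts to a common denominator without scaling the numerators"),
        Distractor("1/6", "multiplies the fractions instead of adding"),
    ],
)
print(q.diagnose("2/5"))  # -> adds numerators and denominators separately
```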
Why a student is struggling, which means that we're better placed to provide that sort of idea around what an intervention might look like. So fast forward to today, I am a big believer in like Simon Sinek Golden Circle. I start with the why. So like why are we here? We're here to sustainably demonstrate measurable learning gains for 1 billion kids by 2030.
And I just wanna sort of stress this demonstration of measurable learning gains. It's what I think sets us apart to a lot of organizations is that when we do something, we try to prove that this thing works, which I think is really what was evident in the Google Forum last week. But how do we go about trying to help those billion kids?
Well. We've had a stab at being direct to consumer, but actually what we worked out was B2B was where we belonged. So what we really do is we help market leading learning platforms and publishers. So we work with people like the Achievement Network. And as was announced last Tuesday, imagine Learning create measurably effective learning experiences by providing this AI infrastructure built on academic research and real world evidence.
So I like to say we're the intelligence layer for the edtech platforms the world uses. We don't just build new apps; we make the existing ones smarter.
[00:51:11] Alex Sarlin: What's so exciting about Eedi and this B2B strategy you're pursuing is that you're taking this amazing data set of math misconceptions and these tools, this sort of ten-question diagnostic and now an AI tutoring tool, and you're able to provide them to many different platforms across the edtech ecosystem, along with that level of evidence-based support.
It's a really exciting model. And what's exciting about this moment is you just came out with a new paper in collaboration with Google DeepMind about this AI tutoring space. It's very exciting news because it's a very positive outcome. Can you tell us a little bit about that research?
[00:51:48] Ben Caulfield: Absolutely. I want to set this all up, because there are a couple of different ways of looking at this, and it was very difficult, both for Google DeepMind and ourselves, to work out the right way of describing it.
What this wasn't was AI-assisted tutoring. When you think of AI-assisted tutoring, you think of a tutor speaking directly to a student, maybe benefiting from some AI that gives them prompts they might want to utilize. This was bigger than that. This was a human-in-the-loop AI tutor.
So the AI tutor was, in effect, in control. It was the thing communicating with the student, and it was being moderated by a human. There was safeguarding from that perspective, but actually it was more pedagogical moderation: we wanted to make sure that what we had built was actually delivering those measurable learning gains.
So the headline thing that was reported was that we'd shown an AI tutor could outperform a human tutor, and I think that's a little bit misleading in some ways. If I just elaborate on that a little: the AI tutor did marginally outperform a human tutor. But actually, it's the fact that it was at the same level as a human tutor that is more important, because an AI tutor is infinitely scalable and a human tutor is not. And I think that's what we really proved: that we've built something that could be, and should be, public infrastructure.
Every child should have access to an AI tutor that helps them when they need it, in the areas where they need it most. At the moment, tutoring happens to be for the more wealthy, the more affluent, helping their kids perform in particular examinations and things like that. So the headline for me: not AI-assisted tutoring, but human-in-the-loop tutoring. And not to get lost in it, yes, it did marginally outperform a human tutor, but the point is that it performed as well as a human tutor and is infinitely more scalable.
[00:53:44] Ben Kornell: So one of your stats was that 82.4% of tutors found LearnLM's ability to support multiple students simultaneously to be its most useful feature. So in a future state, let's say something like this goes into the wild: I might be a tutor with 20 students simultaneously being tutored. I'm seeing and watching the interaction, and then I'm engaging in the areas where either the student is off or the AI tutor might be off, something like that.
But essentially the idea is, and I totally get what you're saying, that it's better to have an AI tutor than no tutor. But there's also a different way a tutor might work in the future, where they're actually supervising multiple students. And that's where you start looking at this range: there's self-directed learning, there's tutoring, there's AI. I might be like the conductor orchestrating tutoring, all the way to being a full classroom teacher teaching students. How do you see the role of the instructor evolving based on the breakthrough of this technology?
[00:54:53] Ben Caulfield: Great point, Ben. So I think another area where this has been potentially misconstrued is that we're not replacing the human, the teacher.
So we feel that the teacher is still all-important. They're the person who drives the direction, and actually that was a very strong part of this work. Again, you can think of the distinction: a tutor that might work at home has the cold-start problem. What does this child know, what don't they know, where should I start? Whereas this AI tutor was actually being informed by the teacher, in some respects: I've taught this, I want to test my class's understanding of what they've been taught, and then I want them to work with an AI tutor to remedy those different areas. But I think what's really interesting is thinking about this in practice. Version one, and I think we're all of a similar age, was that when we did homework, we handed our assignment in, a teacher would mark it and hand it back with some feedback, and that would probably be the next lesson, which could be the following week. At that point, you've probably forgotten why you wrote that initial response in the first place.
Version two is auto-marking: you know in the moment whether you're right or wrong, and here's a video you could watch that would help you remedy the problem so you could re-answer the question. What we've really demonstrated is version three: if you can provide personalized support rather than the static, generic support that videos provide, then why wouldn't you do that?
And really, for me, if you were just to take the human tutor versus static content, which was one of the arms of the experiment, you wouldn't be surprised that the tutor outperformed just watching a video. But again, it goes back to that point: an AI tutor performs as well as the human tutor, which in itself performs significantly better than watching static content.
And I think that's one of the things we were best placed to do. There are organizations out there that have spent years and years developing video content, and we were able to tear up the rule book and say: look, we have built this video content, but we're just going to replace it.
And actually, that's where the AI tutor performed: it provided that intervention, it provided the fluency practice within the intervention, and then we tested with the diagnostic again to understand the effectiveness of the intervention. Compared with static hints, if memory serves me correctly, it was nine percentage points better in its performance versus the static content.
[00:57:22] Alex Sarlin: That's right. We really recommend everybody look at this paper, because I think this could be a real inflection-point moment for the perception and understanding of what's possible with human-in-the-loop AI tutoring, just as you're saying, Ben. The way this was set up, you had static hints, then you had a human tutor, and then this LearnLM-based tutor with human-in-the-loop support.
And some of the findings, I think, are really exciting. You mentioned that 76% of the messages generated by LearnLM were accepted as is by the human tutors, roughly three out of four, or with just one or two character changes. That in and of itself is really powerful: roughly three-quarters of what's coming out directly from the AI is something human tutors can stand behind and say yes to.
And not only could they stand behind it, they were saying it showed some really innovative and smart techniques: it was doing a lot of Socratic questioning, it was encouraging transfer, it was doing things that were really powerful pedagogically. We're very early in the AI world, but the idea that AI tutoring can perform at close to, and what you're showing here is basically at, the level of a human tutor makes scaling tutoring possible literally for the first time in human history.
Tell us what it feels like to be at the cutting edge of what really could be a pretty massive movement, if we can get all the details right.
[00:58:37] Ben Caulfield: Yeah, I think you touched on a couple of really interesting points there around the safety aspect. There are common concerns that come up with AI generally.
One is safety, and you're absolutely right that of the three and a half thousand messages, I think, that were sent, 0% had any safeguarding concern whatsoever, so they could literally have been sent straight to the student. In 76% of instances, I think it was, the messages were left completely untouched by the tutors moderating, and even then, I think another 20% on top of those, as you said, had only one to ten characters adjusted.
If I remember correctly, it was five messages out of the three and a half thousand that were deemed incorrect. You could say they were hallucinations, but they were incorrect responses. And in the interest of transparency, that's nought point nought one percent, I think, if my math is right, or it might be 0.1%, but it's still a very, very low number.
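For the record, taking the figures as Ben recalls them (five incorrect messages out of about 3,500), the rate works out to roughly 0.14%, so on the order of 0.1% rather than 0.01%. A one-line check:

```python
# Quick check of the recalled figures: 5 incorrect messages out of ~3,500 sent.
incorrect, total = 5, 3_500
print(f"{incorrect / total:.4%}")  # 0.1429% -- closer to 0.1% than to 0.01%
```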
And where we want to get to is: how do we find those moments more quickly, how do we identify them as we try to scale? You mentioned this one as well, Alex: short-term transfer. This was a very short study, only six to seven weeks. We've got randomized controlled trials planned in the UK and the US with five or six thousand students, to show and demonstrate long-term knowledge gain. But those short-term transfer results are equally interesting, because one of the problems people see with AI is cognitive offloading: AI is just going to help kids get to an answer, and they don't learn anything.
But actually, this demonstrated that kids working with an AI tutor outperformed those with a human tutor, which again, of course, outperformed static content. The quizzes used to test understanding got progressively more difficult, and this demonstrated that the kids could transfer that knowledge to the next question and were more likely to answer it correctly.
And that's hugely, hugely exciting. Sorry, I'm going to come back around to answer your actual question, which is how does it feel? In this moment, I'm hugely excited. I think we will be one of many, many companies that do this. I feel that we want to show that there is a right way of doing it, and that might mean some level of moderation and oversight by humans, and that adds something to it.
I think one of the key areas is showing and demonstrating measurable learning gains. We talk about the Egen framework of diagnostics, so we understand why the child has answered the way they have, so we understand what the misconception is. We talk about a knowledge graph, so the LLM understands the relationships and dependencies between misconceptions and constructs.
So the student tried to answer this, but actually their problem is that, and we should go back and teach them this first. And then you've got inference, and this is something that Claire Zau touched on in some of her reviews of the latest study modes and things like that: large language models are missing inference, they're missing context and memory.
What we were able to do in this experiment was provide that inference to the large language model in the prompt. We said: hey, Ben's got these five questions; he's going to understand the first two concepts, but he's actually going to struggle with the latter three. And what we allowed the tutor to do was then think and act proactively, to help that student answer those questions.
And I think that's really interesting. There's more research we need to do, but we talk about this idea of the zone of proximal development, and this might help us better understand it: if the large language model is helping the student get the right amount of struggle, so they're engaging for longer, then there's a potential that these AI tutors are actually much more transformative than we think.
But I think it's the inference that is key to this: what does the student know, where are they going to struggle, and where might they struggle in the future? So yeah, I'm really, really excited to be a part of this story, and we'll see where it leads.
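As one way to picture the "inference in the prompt" setup Ben describes, here is a hypothetical sketch; the graph edges, mastery numbers, and wording are invented for illustration and are not taken from the Eedi/Google DeepMind study.

```python
# Hypothetical sketch: turning a knowledge graph plus predicted mastery
# into plain-language context for an AI tutor's prompt. Illustrative only.

# Prerequisite edges between constructs (a toy knowledge graph).
prerequisites = {
    "solving two-step equations": ["solving one-step equations"],
    "solving one-step equations": ["inverse operations"],
}

# Inference output from a (hypothetical) misconception model: for each
# upcoming question, the construct tested and the predicted mastery.
student_profile = [
    {"question": 1, "construct": "inverse operations", "predicted_mastery": 0.92},
    {"question": 2, "construct": "solving one-step equations", "predicted_mastery": 0.85},
    {"question": 3, "construct": "solving two-step equations", "predicted_mastery": 0.35},
]

def build_tutor_context(profile, graph):
    """Summarize predictions so the tutor can act proactively, not reactively."""
    lines = []
    for p in profile:
        if p["predicted_mastery"] < 0.5:
            prereqs = ", ".join(graph.get(p["construct"], [])) or "none listed"
            lines.append(
                f"Expect a struggle on question {p['question']} "
                f"({p['construct']}); consider revisiting {prereqs} first."
            )
        else:
            lines.append(f"Question {p['question']} ({p['construct']}) should go smoothly.")
    return "\n".join(lines)

print(build_tutor_context(student_profile, prerequisites))
```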
[01:02:39] Ben Kornell: Yeah. So I'm curious about how this space is evolving more broadly and where it's headed.
We heard about the Chan Zuckerberg Initiative, which is now called Learning Commons, rolling out basically a software development kit and underlying infrastructure. You're taking a different approach, but with the same kind of motion, which is basically providing this AI intelligence layer that people can plug into their existing systems.
How do you imagine this world playing out? Should everyone in EdTech be developing their own, or are they going to be figuring out which system to be interoperable with? Ultimately, do these systems play together, or do you have to choose what your chip is, Intel or AMD? Or can you actually layer these on top of each other? How do you think all of this plays out?
[01:03:32] Ben Caulfield: So, okay, I'm going to try not to be biased here and just say they should work with us. But I kind of feel that, from the biggest clients we work with down to the smallest, they will have perhaps one or two of these things, but they won't have all three.
So break it down into diagnostics, the graph, and the inference of the machine learning models. In most instances, people actually don't have good diagnostics. They have good right-or-wrong questions, or they'll have poor multiple-choice questions, and there are not many organizations out there that think about misconceptions in much the same way we do; in fact, we work with probably one of the strongest in the US, which is the Achievement Network. You could develop those items, but then you've got to get the data from those items over a period of time. The graph, again: every organization will have their own proprietary way of thinking about the subject.
There are different names for it, be it a graph, or a skill spine I think some people call it, or a coherence map, which is another term that gets bandied around. But a lot of people just think it's proprietary; nobody wants to share, it's their secret sauce. But, to pardon the phrase, math is math.
We think of mathematics the same way every country in the world does; maths is universal in that sense. So I think what's exciting about Learning Commons is that it's trying to move away from that proprietary piece and actually make it more universal, so that people can think about state standards more collectively through one route.
We have our own graph, obviously, and actually we are working with CZI and the Gates Foundation to make that more available. But we've also got a bigger ambition, which is to look beyond the States, where Learning Commons is really focused, and think about what that graph looks like globally. From our perspective, the more people that plug into that graph, the easier it is for them to work with us.
But then I think it's the inference side of things where we really come into our element, being this kind of research lab. There are lots of companies that have the capital but haven't deployed it on machine learning. I've been surprised at how many organizations we've worked with don't have a genuine AI lead who understands this space.
Then you've got the opposite end of the spectrum, which is people who just can't afford this kind of talent. So that's kind of the sweet spot for us: look, we've got something, it works bloody well, and you could put it into place today, whereas if you were to do this yourself, you're probably going to need a couple of years to get to that point.
And the fact is that we're just moving that point further and further out all the time, because we have a model today that predicts binary information: does a gap in knowledge exist? But we're already very close to releasing a distractor model, which doesn't just predict the gap in knowledge, but predicts the misconception that is holding open or creating that gap.
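To illustrate the difference between the two prediction targets Ben contrasts, here is a hypothetical sketch; the toy heuristics stand in for trained models, and none of the names are Eedi's.

```python
# Hypothetical contrast between a binary gap model and a distractor model.
# Toy heuristics stand in for trained models; illustrative only.

def binary_model(answer_history: list[bool]) -> dict:
    """Today's target: does a knowledge gap exist (yes/no)?"""
    error_rate = 1 - sum(answer_history) / len(answer_history)
    return {"has_gap": error_rate > 0.5}

def distractor_model(chosen_distractors: list[str]) -> dict:
    """The richer target: which misconception is creating the gap?"""
    most_common = max(set(chosen_distractors), key=chosen_distractors.count)
    return {"has_gap": True, "misconception": most_common}

print(binary_model([True, False, False, False]))
# {'has_gap': True}
print(distractor_model(["left-to-right evaluation", "left-to-right evaluation", "adds all terms"]))
# {'has_gap': True, 'misconception': 'left-to-right evaluation'}
```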
So yeah. Does that answer your question, then?
[01:06:39] Ben Kornell: Yeah, totally. And this is actually where each organization has to understand, from a strategic standpoint: what do we have that is fundamental, where math is math and this is universal? And where do we actually need to work on an integration, so that these models and this intelligence layer actually play with our content and curriculum and deliver?
Obviously, with assessment, once you formatively assess where the misconception is, the hard part, I know that's a hard part, but another hard part, is following up on that and saying: here's the content you should do, teacher and student, here's the next step, and then here's the reassessment.
So just as there's a learning graph with a map of the students, you also have to connect that map with the curriculum and content map, so that those things are speaking together. And we've talked many times, Alex, about whether this is a first-mover-advantage market or a fast-follower market.
This seems to me like one where it's the fast follower. Your advantage doesn't have to be in moving first; your advantage has to be in stitching these layers together in the most coherent way. And then it feels like less of a zero-sum game and more a question of how everyone is going to think about their learning stack differently, now that this stuff is kind of universally available and you don't have to have your own R&D to create it.
[01:08:14] Alex Sarlin: And putting those layers together allows companies to focus on different aspects of the actual delivery issue, right? How does it align to curriculum or standards, or outside of state standards, as you're saying, Ben? How does it get delivered in an afterschool setting versus in school, versus remote tutoring, versus remediation?
I mean, there are just so many things to figure out in the edtech world. The more we can inherit, the more we can build in that infrastructure layer, and the more of that that's deeply evidence-based, like what you're doing with Eedi, Ben, the better the entire ecosystem is. And it's really exciting to hear you talk about all the different initiatives coming together to make this possible.
[01:08:52] Ben Caulfield: I just think, to lean in on that point, it's the adage that it takes a village to raise a child. When we talk about a billion kids, it's by working with others that we'll achieve that, or get somewhere close to achieving it. And I think that's what education is about. There's not a single company that has the answer.
There's not a single company that's going to work with every child. But if we work collectively and show what the best way of doing something is, if we can demonstrate measurable learning gains, then surely that's a better place for us to be, and we can help more children in the process.
[01:09:23] Ben Kornell: That's a great note to end on. Thank you so much, Ben Caulfield, CEO of Eedi. It's been so inspiring to watch your journey from the early stages all the way to where you are now, and I get really excited about the future, not just for those billion learners but also for the EdTech community. Thanks so much for all you do.
[01:09:43] Ben Caulfield: And thank you, Alex, and thank you, Ben, for having me on EdTech Insiders. It's been great to speak to you both.
[01:09:48] Alex Sarlin: Thanks for listening to this episode of EdTech Insiders. If you like the podcast, remember to rate it and share it with others in the EdTech community. For those who want even more EdTech Insiders, subscribe to the free EdTech Insiders newsletter on Substack.