Edtech Insiders
Week in Edtech 1/7/26: Tech Backlash, PowerSchool Layoffs, Consumer AI Learning, Screen Time Scrutiny, AI’s Role in Schools, and More! Feat. Eli Luberoff of Desmos Studio & Rebecca Winthrop of the Brookings Institution
Join hosts Alex Sarlin and Ben Kornell as they kick off 2026 with a wide-ranging Week in EdTech conversation covering tech backlash, AI in education, market consolidation, consumer learning tools, and major voices shaping the future of teaching and learning.
✨ Episode Highlights:
[00:00:00] Growing tech backlash around screen time, phone bans, and distrust of edtech.
[00:03:55] PowerSchool layoffs reflect private equity pressure and profitability focus.
[00:06:30] Layoffs highlight the human cost for educators working in edtech.
[00:09:04] Screen time skepticism reaches adult learning and professional assessments.
[00:10:52] Big Tech ramps up AI competition as Meta, Amazon, and Apple reposition.
[00:12:42] Consumer AI learning startups draw VC attention amid edtech valuation gaps.
[00:13:58] Funding: Oboe raises $16M Series A for AI-generated, multimodal courses.
[00:17:16] UX, speed, and multimodality emerge as key edtech differentiators.
[00:19:10] Speechify secures NYC schools deal, blending accessibility with consumer-grade UX.
[00:21:08] Engagement-first consumer learning apps challenge traditional edtech models.
Plus, special guests:
[00:23:48] Eli Luberoff, Founder of Desmos Studio, on creative math tools and Desmos Professional.
[00:50:28] Rebecca Winthrop, Senior Fellow and Director, Center for Universal Education at The Brookings Institution, on how AI risks currently outweigh benefits for students without better guardrails.
😎 Stay updated with Edtech Insiders!
Follow us on our podcast, newsletter & LinkedIn here.
🎉 Presenting Sponsor/s:
Every year, K-12 districts and higher ed institutions spend over half a trillion dollars—but most sales teams miss the signals. Starbridge tracks early signs like board minutes, budget drafts, and strategic plans, then helps you turn them into personalized outreach—fast. Win the deal before it hits the RFP stage.
Tuck Advisors was founded by entrepreneurs who built and sold their own companies. Frustrated by other M&A firms, they created the one they wished they could have hired, but couldn’t find. One who understands what matters to founders, and whose northstar KPI is % of deals closed. If you’re thinking of selling your EdTech company or buying one, contact Tuck Advisors now!
This season of Edtech Insiders is brought to you by Cooley LLP. Cooley is the go-to law firm for education and edtech innovators, offering industry-informed counsel across the 'pre-K to gray' spectrum. With a multidisciplinary approach and a powerful edtech ecosystem, Cooley helps shape the future of education.
Innovation in preK to gray learning is powered by exceptional people. For over 15 years, EdTech companies of all sizes and stages have trusted HireEducation to find the talent that drives impact. When specific skills and experiences are mission-critical, HireEducation is a partner that delivers. Offering permanent, fractional, and executive recruitment, HireEducation knows the go-to-market talent you need. Learn more at HireEdu.com.
[00:00:00] Alex Sarlin: It just feels like there's this sort of almost Luddite backlash against all things technical. People are trying to brick their phones, they're trying to install apps that block things. They're trying to end what more and more people are considering toxic relationships with technology, either for themselves or for their children or for their students.
And we talked about how Blue Book sales are way up. We've talked about how phone bans have gotten incredibly popular in states all over the country, including New York State, really big systems. But I just wonder, as we get into this new year, what this generalized technology backlash and real distrust of the tech space is gonna mean for ed tech.
[00:00:41] Ben Kornell: Which to me then leads, as a solution-oriented person: let's put the AI under the hood and let's power the human interactions to be better and more efficacious. Let's make that accounting exam the best freaking accounting exam it could be, to assess people's competencies and not just their memorization of a multiple choice test.
So I think, as with everything, the public perception tends to be this binary, this or that. But in the trade, in the practice, there's a lot of learnings we can have that help us infuse the good of the technology while mitigating some of the bad or controversial.
[00:01:28] Alex Sarlin: Welcome to EdTech Insiders, the top podcast covering the education technology industry, from funding rounds to impact to AI developments, across early childhood, K-12, higher ed, and work. You'll find it all here at EdTech Insiders.
[00:01:44] Ben Kornell: Remember to subscribe to the pod, check out our newsletter, and also our event calendar.
And to go deeper, check out EdTech Insiders Plus, where you can get premium content, access to our WhatsApp channel, early access to events, and back-channel insights from Alex and Ben. Hope you enjoy today's pod.
Happy New Year, EdTech Insiders. We're back with another Week in EdTech. It's so great to have you all back on for another year of amazing, unpredictable, sometimes turbulent news in the world of EdTech. And boy, did we finish 2025 with a bang, so we're gonna catch you up on some of those stories. Before we do, though, make sure you check out our predictions episodes, V1 and V2. They've gotten a ton of downloads and attention online. You can really hear from a wide swath of people, from AI optimists to AI skeptics, to people who are going deep in K-12, higher ed, workforce, and early childhood. It really runs the gamut, so please be sure to check that out. Besides that, what else is going on here at EdTech Insiders, Alex?
[00:03:00] Alex Sarlin: Yeah, I mean, it's a new year. The prediction episodes were totally amazing. We had 20 different people, a record for us, come in and talk about everything that's happening. And Ben, we got a great chance to talk about what came true last year that we had predicted and what didn't quite come true, which is a lot of fun.
And we are talking to so many interesting people over the next couple of weeks. We're talking to Eli Luberoff from Desmos. We're talking to Richard Culatta, the head of ISTE and ASCD. We're talking to Rebecca Winthrop from the Brookings Institution, and it's coming out with some really, really interesting thought leadership about everything in education.
It's an incredibly exciting time for EdTech and education as always, and as we go into this new year, I think all bets are off. Things continue to change so quickly in both the education world and the AI world. What caught your eye over the break, Ben? As we were taking some time away from our day-to-day, what hung with you from the EdTech world?
[00:03:55] Ben Kornell: I mean, before we dive all the way into that, we also have our events coming back the week of February 9th. There's the Stanford AI and Education Summit, and then hopefully we're hosting our first happy hour of 2026 on February 12th. And for those of you following at home who are planners like me: in March, South by Southwest will be coming up; in April, we of course have the GSV Summit; and in May we'll be having a big EdTech Insiders happy hour or two. So stay tuned, we're getting those events back up and going. So excited to see everyone there.
Yeah, so, man, 2025 ended with a bang, and it's probably more of an explosion than a firework. We saw PowerSchool with massive layoffs. They recently were purchased by a private equity group, and it's really odd to have the timing of layoffs fall right around the holidays and at the end of the year. But we're seeing a number of other companies making big moves at the end of the year, reflecting growing concerns about the economy and a growing focus on profitability. And I would say, in general, EdTech financial markets are moving much closer to a private equity view of the world, where it's not about chasing top-line revenue growth and it's much more about creating unit economics and cash flow success.
That's against the backdrop of all of the major AI announcements. It's almost a logical move: if you've got everybody burning cash to chase growth in AI land, it's very hard and expensive to acquire growth in that environment, and so there's a renewed focus on the P&L. That was the big highlight for me.
And then the second thing that I've been seeing a lot of around the new year is just the steady drumbeat of "screen time is bad." AI for educators is seen as good, but AI directly facing students is kind of getting lumped in with screen time. And now there seems to be a number of articles and stories, and even studies, that show an inverse correlation between students' screen time and their success and engagement.
So I think that's something really, really worth paying attention to as people weigh in on what this next new phase of ed tech looks like.
[00:06:30] Alex Sarlin: So, quick note about the PowerSchool layoffs. We were looking at some of the online sites where people can talk about their layoffs, talk about their personal experiences, and it's pretty brutal what happened with PowerSchool and Bain right now.
I mean, obviously with private equity, the whole model is about trying to maximize profits, trying to make companies work more efficiently. That's baked into the concept. But at the same time, you actually see the human cost, and you see all of these people talking about it. Especially at a really seasoned company like PowerSchool, there've been many people who have been there a very long time. They've grown up with the company. They've seen it become this huge market leader in SIS, and in LMS with Schoology, and sort of be everywhere. And then suddenly the floor falls out: they have a new CEO, they have private equity ownership, and the decisions have nothing to do with their work, with any individual's role. In a lot of ways, it's just these tectonic shifts under people's feet, and they're so angry. And many are ex-teachers, many people who moved into ed tech from education.
So it's pretty heartbreaking to see that happen, and it's one of these moments where the business side of the EdTech world really hits. I won't say the impact side exactly, but the human side: people's livelihoods, their idealism, their desire to do good for the world, to really support students and support teachers and do all the things that PowerSchool has done to make data work for everybody. It's pretty sad to see, and obviously we're rooting for everybody in that space to land on their feet.
And then to your point about screen time, it feels like there's this backlash that's becoming very generalized right now. Ben, I'd love to hear you talk about that story. It's screen time, it's social media, it's phones, and increasingly, I think, AI is being caught up in it as well, just as the new tech thing. It's Silicon Valley itself, it's big tech. It just feels like there's this sort of almost Luddite backlash against all things technical. People are trying to brick their phones, they're trying to install apps that block things. They're trying to end what more and more people are considering toxic relationships with technology, either for themselves or for their children or for their students. And we talked about how Blue Book sales are way up. We've talked about how phone bans have gotten incredibly popular in states all over the country, including New York State, really big systems. But I just wonder, as we get into this new year, what this generalized technology backlash and real distrust of the tech space is gonna mean for ed tech.
[00:09:04] Ben Kornell: Yeah, and just to add, this is not just for kids. The ACCA, which is the largest chartered accountancy body, is ending its remote exams because AI makes it too easy to cheat. So even in the adult learning space, I think there's the zeitgeist that you're describing, and there's a bunch of different points of intersection where people are interacting with it.
There's this idea of technology versus human, in-person interaction, of remote and screen versus the human interface. That was one of my predictions: the backlash. I think there's gonna be a real premium placed on the human piece, and that is largely in B2C models. But even in B2B, you're going to see a way in which screen time will be monitored by schools and school systems to say, wait, are we really getting the best education here? Shouldn't we use our human teachers more effectively or efficiently?
Which to me then leads, as a solution-oriented person: let's put the AI under the hood and let's power the human interactions to be better and more efficacious. Let's make that accounting exam the best freaking accounting exam it could be, to assess people's competencies and not just their memorization of a multiple choice test. So I think, as with everything, the public perception tends to be this binary, this or that. But in the trade, in the practice, there's a lot of learnings we can have that help us infuse the good of the technology while mitigating some of the bad or controversial.
[00:10:52] Alex Sarlin: Totally. And I just put a piece out in the newsletter, I don't know if it'll be out by the time this comes out as a podcast, but it's exactly about that: putting the AI under the hood in service of relationship building, in service of supporting group work, in service of supporting teachers' understanding of all of their students, even if they have a hundred students.
Is there a way that AI can help them know how all of those hundred students are doing on any given day, or what they're interested in, or what they're struggling with, so that they can then go connect with them in a human, relational, meaningful way, rather than it having to be a text message or a screen intervention, or a popup, or a remedial software service? I think there is still lots of room for EdTech and AI, even if we are talking about improving human IRL relationships. There's no reason they can't go together. But at the same time, I think there's this huge backlash, this zeitgeist of backlash against tech companies, and they are just continuing to push the gas down, right?
I mean, we haven't talked about this: Meta acquiring Manus. We've talked many times on this podcast about how Meta is sort of feeling like an also-ran. They tried this huge open source play with AI, and it didn't quite pan out the way they wanted it to. It got traction overseas, but it's not really competitive here. So they just did this enormous $2 billion purchase of Manus, the Singapore-based, Chinese-founded agentic system, as a way to try to catch up. And we're seeing shifts like that at Amazon, where they're starting to do all sorts of AI literacy programs for hundreds of thousands of people. You're starting to see Apple change their AI leadership. While the rest of the world is starting to feel really nervous about this, the tech companies are still just looking at each other and saying, how do I get a leg up over the next building down the road in Menlo Park and Mountain View?
It's a really interesting moment.
[00:12:42] Ben Kornell: Yeah. I mean, on the financial side, part of this is that their valuation is based on 10-year, 20-year forecasts. Their valuation is also relative to the cash on hand, and a bunch of these tech companies have been sitting on massive stacks of cash. And what are you going to do with that?
One option for any company that's more mature is you give a dividend: you start depleting your cash and paying your investors back. But these companies don't want to do that. They want to chase that next horizon of growth. Meanwhile, that makes it expensive for everybody else to chase it too. Just given the sheer cost of the electricity required for AI, for anyone else to compete and purchase that compute, rather than going through these essentially subsidized channels with the large tech players, makes no sense. So it's actually shortening the runway of thinking for a lot of the more mature ed tech companies.
I think the other piece here is that we live in the ed tech space, and these platforms live in generalist space. Take Oboe's funding round. You and I talked about this just before the podcast: Oboe raised a $16 million Series A led by Andreessen Horowitz. The founder ran podcasts at Spotify. I've tried it out, and I like it. I think it's cool. You can basically build any course you want, almost instantaneously, and you can consume it any way you want: as a podcast, as something you read, in modules. You can even see how many minutes it will take you to consume.
For me, I am a recreational learner. I just like to learn things, so it doesn't feel like it has the rigor yet to really learn something technical. But if it's, yeah, what was going on with Napoleon back when he was exiled? I wanna learn about that, but I don't want ChatGPT to tell me and I don't wanna read a 600-page book.
So I will say there is this VC fuel in the generalist realm: basically people who don't know what the business model is going to be and haven't learned to be cynical enough about ed tech, 'cause they're not viewing it as a real ed tech thing. They're viewing it as consumer. They're almost viewing it as social media adjacent.
And so, anyways, I think that's creating such a weird environment to work in. You've got real companies with real traction, with real revenue, that are almost undervalued next to these things that are super speculative, with a very, very small probability of success. And there are real questions about what a billion-dollar home run would really look like for something like Oboe. How do you differentiate? If they're successful, they're gonna immediately invite tons of competition that will drive them down to the lowest common denominator.
So, to synthesize: there's an AI bubble that's intersecting with the tech downslide, and that is creating some weird reverberations in the force.
[00:16:01] Alex Sarlin: I tried Oboe as well, and it was really interesting, 'cause I saw the announcement about the $16 million Series A from Andreessen and thought, okay, that's interesting. They're going into EdTech in this particular way. I tried it and I was pretty impressed by it. I totally agree with you: it is not rigorous enough for technical subjects yet, but it's really good at creating courses very, very quickly on sort of anything factual you ask it to. I was at a museum this weekend and I saw some new artists I had never heard of before whose art I liked, and I could go, tell me about this artist, and it would generate an entire course in maybe 90 seconds, with quizzes and flashcards, the person's whole life, with images in it, with diagrams in it. It was pretty impressive.
And it was interesting. I did that and then I went back to the announcement, the TechCrunch article about Oboe. And you know what they said? It was the speed, that's what convinced us, the fact that there was no lag time. You could type something in and immediately get it. And I was like, to your point, Ben, it's definitely a consumer play, and that's exactly what they're going for here. They're like, I don't know if these people care about deep education in a meaningful way, but the idea that you can create a course, a very lightweight, pretty easygoing course, in multiple formats in seconds. It really is seconds before the course starts generating.
[00:17:16] Ben Kornell: And this goes back to, where is defensibility in this space? I think that defensibility is not in the AI layer, which is getting commoditized, but in the UX layer. UX has to do with speed, but also this idea that I could learn it through a podcast or a reading. Basically, the multimodal element of it made it awesome for me. I'm on vacation, so I'm on the move. I've got an earbud in. It's a lot easier to consume it that way than sitting and reading on the street.
[00:17:45] Alex Sarlin: To your point about UX and speed, there's the performance, which does have something to do with the technology and the AI under the hood, but there's also the UX itself. I mean, it was so easy, it felt on purpose. It felt like Google, right? You go in, it's just a bar. You put in what you wanna learn, and it just generates it. Within seconds it has podcasts, and it splits up the podcast into chunks based on the modules of the course. It has this sort of swipe-back-and-forth, flipping-flashcards feel. It's very, very simple to use, and it's already mobile friendly. It was impressive as a consumer app. I would call it education-light, but at the same time, if you go on the App Store, there are a lot of pretty high-performing education apps that are also education-light, a lot of language learning apps. I think it's pretty interesting.
The other AI news I thought was interesting this week: there's a company called Speechify, another company that is sort of debatably ed tech. It's basically a voice app. It uses AI voices to read any kind of text aloud, and it has some famous voices in it. I think it has Snoop Dogg; it has Ali Abdaal as a voice in there. It's designed to integrate into everything and make it very easy to do voice reading. They signed a big contract with the New York City school system, the biggest school system in the country, basically so that students can use the Speechify voice app to read anything in their systems.
There's obviously an accessibility and compliance component to that, but it also has this sort of fun, interesting component. And I think, to your point, the consumer AI space and the ed tech space are having this funny merge right now, where you're seeing some fast-growth companies that are debatably education, but the UX is so clean and the appeal is so obvious that people are finding it really intriguing. I had a bit of an aha moment when I saw that presentation. Speechify's founder had learning disabilities and talks about reading as a huge unlock for him that he wants to share.
It's sort of borderline ed tech slash consumer slash productivity tool, slash sort of fun because it has these celebrity voices. It is a weird moment, but potentially exciting. I mean, I love the idea that maybe the next Duolingo is coming; arguably Duolingo is the only thing that's really like Spotify for education, or maybe things like Headway or Blinkist. And it feels like there's a whole bunch of consumer, very user-friendly, juicy-UX apps and tools on the horizon, and I think EdTech tools may start to revamp their UX to chase that kind of market, that B2C market, as well.
[00:20:06] Ben Kornell: I mean, I think the New York City example also speaks to the fact that this blending of consumer and B2B is an interesting potential development. You know, B2B has been a slog. It's been really, really tough. But it's also annual recurring revenue, and that's been more reliable in the education space than consumer revenue. And the idea that you can now have consumer-quality UX and consumer-relevant features, but then also meet some of the compliance needs in B2B, is powerful.
I expect us to see a lot of dual plays, where a company starts as B2B and then launches a consumer product, or starts as consumer and launches some B2B. And frankly, I think school purchasers are getting more open to that idea, because what they've seen is that engagement is the key. If our students or our staff aren't engaging, then what are we buying it for in the first place?
[00:21:08] Alex Sarlin: Have you tried Paladin yet, Ben? This is a crazy app, again, I think worth looking at. It's a history-learning app in the App Store that uses AI, but it's extremely AI-sloppy, and I don't mean that in a bad way. You look at medieval history and it shows this generated video of knights falling over and all this stuff, but then you start going through and you're following a little Duolingo-style map through all the history.
It is really intriguing seeing that sort of hardcore gamified consumer UX coming to topics like history learning, again in this ed-light way. But it's pretty fun. I enjoyed playing with it, and I recommend other people check it out. I got turned onto it by Walker Mackenzie, a sort of EdTech Insiders supporter, and it's definitely a glimpse into a certain version of the future of consumer ed tech. And all the evidence hawks out there might be groaning at this, because I don't think they even care about changing people's actual learning outcomes or learning gains or test scores at all. They're really looking for engagement, the same way that games might, like arguably Duolingo. But it's a direction that has had some proof points. Meanwhile, Duolingo has lost a lot of its stock value in the last six months.
[00:22:22] Ben Kornell: The juice that Duolingo had has dissipated, because others, through AI, have been able to leverage some of the learnings. And, you know, just a few years ago, if you had said we could replicate Duolingo, that was a selling point. Now, from an investor standpoint, that could be a red flag.
Just as we transition to some of our interviews here: there are a lot of themes that we're coming into 2026 with, but I think one of the biggest themes that will perpetuate is that it really depends on where you are in the ecosystem as to what the compelling business path forward is. There's consolidation and a focus on cash flow among some of the bigger, more mature B2B players. But in the new space, where you and I spend a lot of our time with early stage companies, there's a lot of exciting stuff, where we're breaking old rules and seeing things have unexpected success.
So with that, we're gonna head to our interviews, but we hope you all will continue to stay tuned here on Week in EdTech for all of those stories. 'Cause why, Alex?
[00:23:36] Alex Sarlin: 'Cause if it happens in EdTech, you'll hear about it here on EdTech Insiders. Thanks so much to everybody for being here, and we'll talk soon.
Ben, have a great week.
[00:23:45] Ben Kornell: Happy New Year.
[00:23:46] Alex Sarlin: Happy New Year.
[00:23:48] Ben Kornell: Alright, EdTech Insiders listeners, we have an amazing special guest: a friend, a founder, a colleague, somebody who's great to have sushi with. Eli Luberoff is the founder of Desmos Studio, a public benefit corporation with a goal of helping everyone learn math, love math, and grow with math. Desmos calculators are used by more than 100 million people around the world.
Welcome to the EdTech Insiders Podcast, Eli.
[00:24:19] Eli Luberoff: It is so good to be here, but lower your expectations after that introduction. This is gonna be hopefully better than mediocre, but I'm just absolutely thrilled to be with you, Ben.
[00:24:26] Ben Kornell: Put this on while you're having some sushi and you are going to basically reinvent the Eli Luberoff experience. So buckle in, folks. For those who don't know about Desmos (have they been living under a rock?), you've grown from a scrappy startup to a global platform with a hundred million people. You've also had a bunch of pivots, with Desmos Studio and Amplify's classroom curriculum. Just looking back, what would you say are the pivotal moments or milestones in Desmos history, as you see it?
[00:25:01] Eli Luberoff: Yeah, so one of the fun things about Desmos, and maybe this is more true in the education and EdTech landscape than in other industries, is that we've been here, for all intents and purposes, forever. I remember at one point we were applying to Silicon Valley Bank, and they had a couple of options for how old your startup was: less than three months, three to six months, six to 12 months. And I think the longest option they had was more than two years, and it's now been, I think, 13 years or something like that. A lot of it has been a very consistent through line over those years, but there were, I would say, two very substantial changes over the course of our business.
So it started as a free graphing calculator. We put it on the web and had no idea if people would use it. That's what's grown into Desmos Studio today. And starting maybe three years in, we hired a bunch of math teachers, still a huge fraction of our team is math teachers, and asked them: what is another really painful thing that you think should be improved? Calculators are great, but what else? And across the board, it was curriculum. There was an opportunity for a lot of improvement, especially with tech coming to the fore.
And so we built a curriculum, and ended up following a very meandering path toward trying to make the best middle school math curriculum that we could. Right around COVID, that started to really explode in popularity. And I'm sure we're gonna get into this, but it turns out that running a curriculum business and running a tools business look extremely, extremely different. Running a curriculum business is also very, very people intensive, so our team exploded in size as well. Transparently, it was going very well. It was growing very fast. It was a profitable part of our business at the moment that we sold it. And I was very unhappy running it. I think some of that was being pulled in two different directions, some of that was the challenges of the curriculum industry, which I imagine we'll talk about, and some of it was honestly just the speed of growth and the size of the team.
And so I would say probably our biggest change was about three and a half years ago now, when we split the company into two different pieces. We did this absolutely wild transaction, and all of the people who advised us, lawyers, investors, whoever it was, said, we've never seen this. They said, someday case studies are gonna be written about this, and we don't know if it's gonna be "everyone should try this" or "here's why you should never do this." We don't know which case study is gonna get written yet. But we split the company: we sold the curriculum part of our business to a company called Amplify, and reincorporated the calculator part as the public benefit corporation that I now run, Desmos Studio. So I think those have been the two biggest changes, and through that there have actually been some very consistent through lines. The graphs that people made in 2012 on the first version of the calculator still work today, and I still meet people who say, I discovered Desmos in 2013. So there's a lot that's been the same and also some really dramatic changes.
[00:27:40] Ben Kornell: Yeah. Let's dig in a little bit on that move, just because a lot of our listeners are CEOs, and almost everyone I encounter in EdTech got into it because they want to make the biggest impact possible. When you were thinking about this deal and then executing on it, what did you learn through that process? How did you balance the impact you wanted to have in the world with the business side of things?
[00:28:06] Eli Luberoff: Yeah. I think one of the biggest takeaways for me is that there's a whole bunch of different forms that impact takes, and different people are good at different parts of it, and so I'm gonna describe the way that I see the calculator and the curriculum from an impact perspective.
One of the things that I've discovered about me, and that I love about our calculator, is that it's a very complicated product. Of course, we've invested centuries of incredibly high-talent engineering into making this tool that can do wild things. But the surface area of it is actually pretty small and contained.
And I've discovered that I'm just a wild perfectionist. Like, I want anything that has the Desmos name next to it to be as perfect as it possibly can be. And every single month it's getting better and more polished and faster and smoother and more intuitive and all of those things. And it's also a tool that is mediated through other people and other curriculum.
So a calculator on its own does nothing. A curriculum on its own also does nothing. A calculator with a curriculum, with a teacher, with students: that's when impact starts to happen. And so there's a kind of different level of impact you're able to have with the curriculum, because it's the thing that people are opening up every single day and using.
But there's a different set of challenges that come with curriculum that aren't with the tool. When you're making curriculum, you're subject to all sorts of constraints. I don't think that every single thing that we teach in math is good. I don't think it's a good use of time. I don't think that every single one of the state standard requirements is good.
I don't think that the changes that individual states make to them are necessarily all changes that I would wanna make in a vacuum. And so you end up in this situation where, first, the surface area of covering 12 grades of curriculum is enormous, as are the timescales of doing it. And then you multiply that by 50 states, and you end up with a thing that is less true to your vision of what you think product perfection can be.
But it also is the thing that people are using every single day. And so they're very different forms of trying to shape the world in the way that you want it to be shaped. I found that the former works much better for my personality type than the latter. And I think what we made is as good as we were able to within those constraints, but it never felt quite like striving towards making the just-perfect way to have people learn math and love math, if that makes sense.
[00:30:23] Ben Kornell: Yeah. I think one thing you're pointing to is that there's a recipe for impact that is always reliant on many other ingredients. One of those ingredients is really student engagement and exploration.
And when you describe the tool, I think it's easy for the casual listener to imagine what could a math calculator tool really look like? And then you get this incredibly engaging experience where the math literally comes alive from the tool. And I've had the pleasure of viewing the art gallery built with math as you envisioned this tool.
How did you go beyond just the functional needs to make something that was truly engaging? Is there a secret sauce or a magic to doing that? Is it just trial and error? Is it building for what you would've wanted? Like, as a product leader and visionary, what advice would you give to others, and how did you do it?
[00:31:28] Eli Luberoff: I do not know the answer to that. There have been so many surprises in building Desmos, and I think one of the biggest is, you're right, people use the calculator in ways that we never anticipated. People like using it. I'll go to a store and I'm getting bread, and the person checking me out sees my shirt, and it's Desmos, and they're like, oh my God, I love Desmos.
Like, that people love a graphing calculator is truly wild to me, and I don't know the answer, but I do have a few things that I think we've gleaned that have been really helpful. This is another place where being on the tool side gives you some affordances that you don't have if you're more constrained, if you need to teach a concept within 45 minutes in a day, whatever it is. So one of them is that the amount of freedom that you provide to a person inside of an environment is really proportional to how much they're able to do, how much control they feel, and how much they're able to then surprise you.
And I think like paper and pencil and a typewriter and a dry erase board are just completely freeform tools. And people do extraordinary things with them. Like you can make anything with a charcoal pencil. It's really wild. And then we put people into an environment on a computer where there's like one box that they're allowed to type a number into.
And by the way, it has to be a positive number. And then we're surprised that it's not fun, and we're surprised that they don't do anything surprising with the calculator. We've tried very hard to make it just as permissive and flexible and powerful as possible. And so an example of this is you can graph any curve that you want.
And it turns out with equations you can make any possible shape: circles, lines, parabolas. And now you can cut them off, and now you can combine them, and now you can pick colors for them. And we make it so that you can mathematically define the colors, too; there's really fun math behind color spaces. And you put those together and now you can draw anything you want.
And then you can add a parameter. You can say, I want to have the whole world of this graph depend on the value of a, and a is now a number that can vary from zero to one. Everything is dependent on it. And now you can make stuff that moves. I mean, you can make animations, and then you can make stuff that you can interact with.
So you add this flexibility, and the result of it is something where your creative juices are unleashed. Like, you can make anything that you imagine, and we see that. Our contest, I think, is the best example of this. So that's one: a more freeform environment ends up being more fun to use and also much more surprising in what people can make. And maybe a couple of smaller ones.
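As a toy illustration of the slider mechanic Eli describes, here is how a single parameter can drive a whole graph. This is a hypothetical Python sketch, not Desmos's actual expression language; `circle_point` and the radius formula are invented for illustration.

```python
import math

def circle_point(a, t):
    """Point on a circle whose radius is driven by a slider value a in [0, 1].

    One parameter controls the whole picture: sweep a from 0 to 1 and every
    expression depending on it updates, which is what turns a static graph
    into an animation. (Illustrative only, not Desmos internals.)
    """
    r = 1 + a  # radius grows as the slider moves
    return (r * math.cos(t), r * math.sin(t))

# Each slider position yields one frame of an expanding circle:
# eleven positions of a, one hundred sample points per curve.
frames = [
    [circle_point(a / 10, 2 * math.pi * t / 100) for t in range(100)]
    for a in range(11)
]
```

Sweeping `a` here plays the role of the Desmos slider: the same hundred-point curve is re-evaluated at each slider position, so "everything is dependent on it" falls out for free.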
So the second one, and this is research that has been kind of long understood, is the unbelievable value of low latency. This is, you know, how quickly do you get feedback on what you're doing? How responsive is an interface? I think Amazon is the one that did a study about an extra hundred milliseconds of latency in search cutting down on purchasing by a really measurable percent, I think like a 20% decrease in purchasing based on that hundred milliseconds of added latency.
And we put an unbelievable amount of effort into performance. And so we try to make it so that even incredibly complicated things render at 60 frames a second, you know, video game speeds. And so you drag something and you see really smooth animation; you type, you immediately see the answer. You don't need to switch screens, you don't need to press a button that says evaluate.
It's just genuine live feedback with very low latency. And I think that's the other thing: across tools, the quicker that you can make that loop and the more responsive it can feel to what you're doing, the more magical of an experience it is to use something. But I'm sure that you notice this everywhere.
Like, how annoyed would you be if you were on the phone and you noticed that it had a quarter of a second of latency? When you're talking to someone, it goes from a delightful conversation to a miserable experience.
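The 60-frames-per-second target Eli mentions translates into a hard per-frame time budget. A rough back-of-the-envelope check (illustrative numbers only; `feels_live` is a made-up helper, not anything from Desmos):

```python
# At 60 fps, all work per frame (parse, evaluate, render) must fit
# inside roughly a 16.7 ms budget for the interaction to feel live.
FPS = 60
FRAME_BUDGET_MS = 1000 / FPS  # about 16.67 ms per frame

def feels_live(work_ms):
    """True if a render pass fits within the per-frame budget."""
    return work_ms <= FRAME_BUDGET_MS

# The quarter-second phone lag mentioned above is about 15 frame budgets.
phone_lag_frames = 250 / FRAME_BUDGET_MS
```

The comparison makes the phone example concrete: a delay that feels merely awkward in conversation is fifteen whole animation frames' worth of time in an interactive graph.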
[00:35:03] Ben Kornell: That sounds obvious, and yet you and I both know countless educational products that accept that as part of the downside of a less-than-consumer-level app.
Just to push on process a little bit, where did that insight come from? Was that something you believed, that we've gotta build something that has zero latency, that has video-game-quality rendering? Or was it something that you trialed and narrowed, and you were like, wait, they love the fast stuff and they hate the slow stuff?
Or was it something you learned from somewhere else? Like where did that insight come from?
[00:35:40] Eli Luberoff: So I definitely can't claim credit for the insight, but this one doesn't require A/B testing. I think it's so clear to anyone using a product. So I think we would've done this even without any data to back it up.
It's just like, I don't know, make a tool that you want to use and every single time that it responds more to you, it is more fun to use. But then we actually have seen very, very concrete benefits of this. So when we started making the calculator, our assumption was that people would use it the same way that they use a handheld calculator.
And so, for example, you have 10 lines where you can write equations. We couldn't imagine that anybody would want or need more than 10 equations. But very quickly after releasing it (you know, we didn't put a hard stop at 10, of course, we just didn't think anyone would go past that), we get a graph in that's 150 equations long and was very, very slow to render.
It took, you know, 45 seconds to display. And so we huddle up and we try to find all of the sources of that performance problem, 'cause we wanted to open it up. And it was, you know, it was fun to show someone, look at this beautiful graph, but then they're sitting there looking at their phone spinning. We're like, well, this is stupid.
So we made it so that that one loads basically instantly. And then as soon as we did that, we got a whole bunch more graphs that were more complicated. This had been an outlier, and suddenly it became standard. And then the outlier was one that had 500 equations. And right now the biggest graphs that we see are literally thousands of equations long.
And those ones also now render in less than a second in a lot of cases. So every single time that we have improved the performance specs of our tool, the complexity of what people build scales proportionally to that, and the window of what is normal shifts in proportion to how powerful the tool is. And it's an endless arms race.
There are graphs now that we open up, and as soon as a graph comes in and it's slow, it gets shared with the team and people feel bad and they hang their heads. And I'm like, that's nuts. It's wild that it renders at all, but let's also think about how to make it faster. So that has been kind of the most concrete evidence: the creativity of what people make growing in proportion to the power of the tool.
[00:37:42] Ben Kornell: That's awesome. Just for our listeners, this is why I love talking with you, Eli: at the foundation of everything Desmos does, there's an intentionality, back to your point of make something you would want to use yourself. And I think there's a curiosity that you and your team have around what would a graph look like if we had 300 or 400 equations.
And anytime that someone uses a product in a way you don't expect, there are two reactions product leaders can have: now why the hell would they do that? Or: whoa, this is interesting, let's learn more. And I think that that's been part of your story and part of why, you know, kids and teenagers absolutely love you.
I do want to just spend a moment on the curriculum before we talk about Desmos Professional. On the curriculum itself, you really pioneered interactive problem solving, the idea that static multiple choice questions were too flat for real critical thinking. When you think about what makes good curriculum, and you can make it specific to math or more broadly, what were the elements that you always felt needed to be included to make something that really engaged students?
[00:39:10] Eli Luberoff: Basically everything about curriculum that I know, I learned from folks on our team. It started with Dan Meyer, who is kind of the original thought partner on some of the earliest lessons that we made, and then we built up this just incredible, incredible team of educators.
And they ended up writing up a guide to building digital activities, and later on they modified it to also include guidance for any kind of activity. And I think a lot of the components that make for a great lesson are true across paper and digital. You wanna have a whole variety of verbs and nouns as you're working.
It's not just doing the same thing over and over: a lot of different approaches that can get to the same place, a lot of opportunities to see your own thinking expressed in what you're doing. The place where I think we innovated the most, and where I put the most of my personal attention, 'cause this is kind of the intersection that I had, was on the specific affordances that you can do on a computer that you can't do on paper.
And I think my favorite insight from that is that there's all of this, I think, good thinking about the immediacy of feedback mattering, which feeds into what I was saying before. You put something in, but I think the quality of the feedback that you get matters even more than the immediacy.
And so, did you get this right and did you get this wrong? is not actually that motivating. Here's what you should have done instead: also not very motivating. We found that there were kind of two things that were more motivating. So one of them is interpretive feedback, saying here is the impact of the thing that you did.
So an example of this is, I don't know, you're playing around with parametrics and you're looking at trajectories of objects as you send them out of a cannon. And it could be: you want to get to this location, and you put in your angle and your velocity, or your x speed and your y speed or whatever it is, and it tells you, nope, you didn't hit it, or yep, you did hit it. Not at all exciting.
But you do it and you notice that it fell short given the information that you put in, and you're like, all right, I wanna revise this. So giving you this kind of actionable, here's an interpretation of what you did, that you then think about: all right, how do I revise my work to actually hit the target that I want?
It's so much harder to build something that will interpret arbitrary input that someone puts in, but so much more motivating for wanting to revise. So that's one. And then the other thing that we learned, that is just so, so, so powerful, is sharing work that you're proud of and seeing work that your classmates are proud of.
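The cannon example can be sketched as interpretive feedback in a few lines: instead of a right/wrong flag, the checker reports the effect of the learner's input. This is a hypothetical Python illustration, not Desmos's implementation; `landing_x`, `interpret`, and the 0.5 m tolerance are all invented for the sketch.

```python
import math

G = 9.8  # gravitational acceleration, m/s^2

def landing_x(angle_deg, speed):
    """Horizontal distance a projectile travels on flat ground."""
    theta = math.radians(angle_deg)
    return speed ** 2 * math.sin(2 * theta) / G

def interpret(angle_deg, speed, target_x):
    """Interpretive feedback: describe the impact of the input,
    not just whether it was right or wrong."""
    gap = target_x - landing_x(angle_deg, speed)
    if abs(gap) < 0.5:
        return "Direct hit!"
    side = "short of" if gap > 0 else "past"
    return f"Your shot landed {abs(gap):.1f} m {side} the target."
```

A learner who fires at 30 degrees and 10 m/s toward a target 20 m away is told how far short they fell, which invites revision in a way that a bare "wrong" does not.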
Learning is such a social experience. So a lot of what we built is: here's my solution, and let's look at solutions from three of my classmates. That combination I think is really, really, really powerful. And the second one you can do in other ways; non-permanent vertical surfaces, I think, are a good example of this, thinking-classroom-type ideas.
The former really is something that you can do on a computer and you can't do otherwise. Here's an interpretation of your work is just so, so, so hard to get in any other form.
[00:42:00] Ben Kornell: And what we find too is that when math moves to abstraction, that's where you have kids losing their deeper understanding. And actually when you can make math visible and interactive, even abstract math gets concrete again, and now all of a sudden you're shooting something from a cannon.
Now you understand concepts that in a textbook are, at worst, purely abstract and, at best, purely static. And, you know, I think you used the digital space as more of a playground for experimenting and understanding the mathematical concepts. What I interpreted is that what the Desmos curriculum really did is move away from functional calculation, from just going through the mechanics of I've-memorized-a-way-to-solve-this-thing, I take the a plus b out and I do this, and actually push students to really understand the relationships between all of these variables, or between shapes. I mean, we've played around before with some of the geometric configurations.
As you start understanding these relationships, not only do you answer the question, the student has new questions: well, I wonder if I do this, or what about that, or, that's strange, it looks like a circle again. And so I think, you know, it really almost brought the spirit of project-based learning, or exploratory learning, grounded in pretty intensive conceptual curricula.
So let's talk a little bit about the Desmos Professional version. So now Desmos is growing up, going to the workplace. Tell us a little bit more about how that's going. What is the product suite? What's the plan?
[00:43:58] Eli Luberoff: Yeah, so this is brand new, and I think that maybe nobody outside of this podcast has even heard the words Desmos Professional.
So get excited, uh, get excited.
[00:44:06] Ben Kornell: There we go. Let's go.
[00:44:07] Eli Luberoff: Here's the premise: when we started Desmos, our intent was to make an educational tool. We didn't actually have grand plans about what would happen after that. And it's been kind of an open question for me for a long time: what happens when you graduate high school? With your handheld calculator, you're probably giving it to your younger sib or maybe reselling it.
'cause for some reason those things hold value better than gold. You know, it's, it's wild. Or maybe it's something like a word processor or like Excel where this is now a tool that feels like an extension of your mind and that you wanna use in other circumstances. And so you go to college and you're still using word processing and you're still using Excel and you go into your job and you're still using Excel.
You go into your life, you're doing personal budgeting. And we honestly didn't know which one Desmos would be, 'cause we're kind of in a weird spot in between those. There are ways in which it's spreadsheet-ish and there are ways in which it is handheld-physical-graphing-calculator-ish. And the signals that we've been getting more and more are that folks are indeed bringing Desmos with them into higher ed and into their jobs.
And so one recent example of this: I visited a classroom at BU of engineering students and, you know, said, hey, are any of you still using Desmos? And all of them raised their hands. And I say, any requests, any feedback? And they launched into kind of a tirade about how mad they were that we didn't support three-dimensional graphing, because they were being forced to use other tools that felt much more user-antagonistic.
And they're like, Desmos is the friendly tool that I know, and you could obviously make it work for 3D in addition to 2D, and the fact that you haven't is hurting us. I thought it was gonna be, like, wouldn't that be so great if you did? And instead it was like, why haven't you? You know, it was wild.
[00:45:47] Ben Kornell: User pain point. When you find a pain point, it is a really strong reaction.
[00:45:54] Eli Luberoff: No joke. No joke. And so we built a 3D graphing calculator. And what's been happening is, you know, Desmos launched 13 years ago, something like that. And so the people who first learned about us in eighth grade have now graduated from college and are going into the workforce.
And so we've started to notice this increasing pattern of people using Desmos at their jobs as well. One of my favorite examples of this is at video game development companies: they have teams of technical artists, and they're thinking about all of the kind of algorithmic ways that light bounces off surfaces and combines, and how we perceive it, and shadows, and these kinds of things.
And all of this they're trying to model and they're trying to understand, and they're also trying to communicate with their colleagues. And across the board, we've been learning that Desmos is one of their favorite tools for doing that because it is this kind of balance of intuitive and powerful that makes it better than some of the much more powerful tools, especially for communicating with colleagues and much better than some of the weak tools that don't have that power.
So we're seeing it happening in finance a bunch. We've seen some just really wild applications in engineering; a bunch of car manufacturers use Desmos internally in various ways, you know, modeling how fast should the trunk close, and how much pressure should it take to decide that you don't want it to close, so that it doesn't chop off a finger but it can still, you know, go through snow.
Whatever the math is of all of this, it shows up all over the place. And so the epiphany that we had is that this is just a wonderful, wonderful opportunity in our mission, right? There are people using Desmos in their work life and in their life, and we wanna get that back into the classroom. Every single time that someone says, when am I gonna use this math?
If you can be like, well, do you ever play a video game? Have you ever been in a car? Do you ever take medicine and wonder about the dosage? The people who are doing that are using math, and not only are they using math, they're using the same tool you're using today. Like, what a beautiful circuit, if we could help close it.
But the thing that we realized is that there's a set of capabilities that we'd need to add to make it so that you could use Desmos in a workplace that are different from if you're using it to do your homework for free. And so we're starting to add in this layer of features that let you control privacy and who has access and that kind of thing.
It's not gonna change in any way the free product that everybody uses right now, but it will be an extra layer if you want to use us in a workplace in a safe, secure, authorized manner.
[00:48:22] Ben Kornell: Yeah, it seems to me like it speaks to the universal applicability of math. And at the same time, as you get to higher levels of math, it's connected to decision making in the professional context.
That is pretty high-stakes decision making, where either the confidentiality or the ability to share your work matters. So, you know, we have these artificial boxes around: oh, you're in K-12, the math stops here, leave your tools at the gate, move on. I just find it really fascinating that you've built something that's really more a learning product than an education product. Education is about the institutions, and learning is really about the act of expanding your understanding.
I think that this is a great testament to that fact.
[00:49:15] Eli Luberoff: Oh man, what a dream to try to live up to. A hundred percent. Yeah. I think it's both: it's the learning and it's the communication part that, yeah, hopefully are really universal. And I think you're spot on that the dream of what we do in education is that combination of skills building: how do you learn, and how do you communicate your ideas? How do you stress test them?
How do you try out hypotheses? And when we nail that in classrooms, we end up with things that are applicable the next year and four years later and 10 years later. And in your job and in your house. That's the dream.
[00:49:49] Ben Kornell: Eli Luberoff, thanks so much for joining EdTech Insiders. I'm a friend, a fan, and a father of a student who uses Desmos.
I really didn't come up with that beforehand, but the alliteration was profound. So I just wanna appreciate everything that you've done and, and for those who wanna learn more about Desmos and Desmos Professional, what's the best spot for them to go to?
[00:50:12] Eli Luberoff: Yeah, so desmos.com is the best way to learn about all of the stuff that we're doing.
And I'm eli@desmos.com, and reach out anytime, day or night, with questions. Just always, always happy to learn with and from our community.
[00:50:24] Ben Kornell: Awesome. Thanks so much.
[00:50:26] Eli Luberoff: Oh, thanks for having me.
[00:50:28] Alex Sarlin: We are here with a very special guest on this Week in EdTech. We are speaking to Rebecca Winthrop. She's a senior fellow and director of the Center for Universal Education at the Brookings Institution, where she focuses on global education research with an emphasis on skills development and innovation for marginalized youth.
She leads the Brookings Global Task Force on AI in Education, which we'll be talking a lot about today, and has advised numerous governments, international institutions, and organizations, including the UN Secretary-General's Global Education First Initiative, UNESCO, and more. She's everywhere. Prior to Brookings, she spent 15 years in education for displaced communities, and she has authored multiple books, including her most recent, The Disengaged Teen: Helping Kids Learn Better, Feel Better, and Live Better, with Jenny Anderson.
Rebecca, welcome back to the pod.
[00:51:20] Rebecca Winthrop: Thank you. It's lovely to be here, Alex.
[00:51:22] Alex Sarlin: I'm so happy to see you, and I wanna give you, first off, huge congratulations. You are just celebrating a big win. You are the recipient of the World Education Medal for Leaders 2025. Congratulations.
[00:51:36] Rebecca Winthrop: Thank you. I was very honored to get it and humbled because so many incredible finalists that I've learned from and talk to every day are doing incredible work.
[00:51:45] Alex Sarlin: It's amazing. And you are also celebrating an enormous launch, this huge paper from the Brookings Institution about AI. Ben, you wanna dive into some of this amazing stuff happening in this paper?
[00:51:58] Ben Kornell: Yeah, so the paper is called A New Direction for Students in an AI World: Prosper, Prepare, and Protect. And before we get to the prosper, prepare, protect, which is really your recommendation to the world,
You kind of start off with the premise that the AI risks are outweighing the AI benefits in today's climate for kids. Can you just talk a little bit about why and what your research found?
[00:52:26] Rebecca Winthrop: Absolutely. Well, we did not start our research with that. What we did is try to have a very objective, data-heavy research process where we looked at three things.
We said: what are the risks of generative AI to K-12 kids' learning and development? How do you mitigate those risks, and how do you harness the benefits? So that's what we did. We were doing a pre-mortem, which was to say, look, we don't know exactly how generative AI, agentic AI, embodied AI is going to evolve in the next couple years.
But we sure as heck know a lot about student learning and development. Mm-hmm. And so let's just get a temperature check. Are we on the right track or do we need to change course? That was our purpose. And what we found was we are not on the right track. Not because there aren't incredible benefits that generative AI and AI new developments could provide students, but that that's not really what the AI implementation currently is looking like.
And so the risks are overshadowing the benefits.
[00:53:33] Alex Sarlin: But what's nice about this report, as you say, it's very fair-minded. It's very objective. You looked at hundreds of research reports. You've looked at so many different aspects of the AI landscape, and even though the risks may be overshadowing the benefits at the moment, the report also says, that doesn't mean we give up and throw up our hands and say, AI is just not good for education.
And this is really the heart of this framework: to make it work, we have to think about prosper, prepare, and protect. And this is not the order you put them in, but I'd like to end this conversation with the prosper, because it's exciting. Sure. It's the up note. Let's start with the prepare, because I think it's a really interesting aspect of the report.
What do you recommend when it comes to preparing for this AI world?
[00:54:16] Rebecca Winthrop: First, before we get into the three big pillars of our recommendation, I just wanna flesh out a little bit where we see the benefits and where we see the risks. So we have in the report our big six, our big six benefits, our big six risks.
And the benefits are wide-ranging, from saving teacher time to expanding access to education for kids who aren't even in school, to especially helping neurodivergent kids or English language learners, or personalization, new forms of assessment, this type of thing. And we often find that AI can enrich kids' learning if it's done with very pedagogically sound implementation with vetted content.
So that's the context we want. What's happening is AI is not just being interacted with by students inside the four walls of the school. AI is everywhere in kids' lives. There is a blurred line between entertainment, communication, and learning and education. And we found a huge number of risks that really undermine the fundamental learning capacities of kids to even take advantage of the benefits that exist in the first place.
So it's things like risks to cognitive, social, and emotional development; degrading trust between students and teachers, which I'm particularly worried about, 'cause if you don't trust each other, how are you gonna pay attention? And then of course student safety, which we've heard a lot about, and deepening inequality divides.
So that's where we are in terms of risks versus benefits. And the risks, I will say, often come about because kids are having unfettered, unregulated, unstructured access to AI, often outside of school. So how do we make sure we get to the good place, the happy place, with AI? And you wanna start with prepare.
And so prepare is really about holistic AI literacy and supportive infrastructure. And when we say holistic AI literacy, we are not talking about rolling out how-to-use-a-chatbot training. We are talking about diving into what many people in the computer science education world have talked about for a long time.
I think about my colleagues Hadi Partovi and Pat Yongpradit, who for a long time have been saying, alongside biology and physics, we need kids to understand how computers work, how machine learning works. And I think that's absolutely crucial, because kids are living not just in the physical world, they're living in the online world.
And it's holistic in terms of the whole school needing to tackle this. This is not something for just the IT focal point of the district or the computer science teacher or the elective robotics club. It is for every teacher, every staff member to understand. And it's holistic in terms of it being absolutely something that students need to be at the table with educators figuring out: how AI should and shouldn't be used.
So don't just do this in your teacher staff room with your principal and roll it out. We heard over and over: teachers and students are learning alongside each other about AI, and students are freaked out too. They care, they wanna learn. But there's this weird paradox where they also don't wanna be left behind, 'cause their peers are using AI and doing better.
So do student councils where kids beta test products before you purchase them, have student AI councils co-create AI policies, have students try to get around AI privacy protections to make sure you've got things well set up. Like that part is really important. And I would say the third thing around holistic AI literacy and for everyone is make sure it includes ethical considerations.
Make sure it includes considerations around not just how to use AI and how to handle hallucinations, but how you create and solve problems with AI. How do you center what you wanna do in the world and then use AI as your additive instrument to help further that? And the fourth thing? There's a lot in this report.
The fourth thing in terms of holistic AI literacy: don't forget the families. This is not a problem that is gonna be solved by the education community buckling down and raising the four walls of the school. You have to do AI literacy for parents and caregivers and families, urgently, and that has to be done together, because this is actually where a lot of the risks are taking place.
[00:58:56] Ben Kornell: In both the prosper and prepare sections, you talk about titrated AI use, knowing when to use AI and when not to, and this idea that even positive uses can have downsides: AI substituting for cognitive lift or rigor, or AI substituting for relationships. Basically, whenever AI is a substitute, it has a potential negative.
Whereas when it's an enhancer or deepener, it's more positive. Can you talk a little bit about the data that led to that conclusion, and also the so-what of what that looks like in practice?
[00:59:37] Rebecca Winthrop: So the data that came through was broad and vast. We reviewed 400 studies. We did focus groups and interviews with students, parents, teachers, ed leaders, and technologists in 50 countries.
We ran a Delphi panel, which is a methodology that's really useful when a field is new and emerging and doesn't have existing frameworks. So what does this actually look like in practice? What we ultimately found was that it really depends how AI influences the participation and the capabilities of people, and the relationships between people, in the instructional core.
The instructional core being that fabulous interplay of exchange between teacher, student, and content. And that instructional core is crucial. We know from 50 years of education reform research that you can maybe have new content, but if the teacher doesn't use it to shift their teaching, it ain't gonna do anything.
Shifting the relationships in the instructional core is what actually changes education. And in this case, if AI shifts the relationships in the instructional core to exclude people (so there's a big inequality issue), to reduce the learning capabilities of students and teachers, or to damage the relationships between them, then it's gonna diminish learning. Now, if it does the opposite, if it brings new learners into the instructional core, it helps. We found incredible examples of really creative education innovators using AI, with lots of humans in the loop, to educate girls in Afghanistan who've been banned from secondary school, through WhatsApp.
It's incredible: AI is bringing new participation into the instructional core. There are also great examples of AI increasing the capabilities of people who are already in the instructional core. Neurodivergent kids are one incredible use case for AI. There are incredible examples of kids with aphasia, who have trouble communicating, creating synthetic copies of their voices so they can talk with their teachers and their peers in the classroom.
Incredible, right? That's deep capability enrichment. And then there's AI enriching the relationships between teachers and students and content. I have to say, we found far fewer examples there, though people are working on it.
[01:02:28] Alex Sarlin: I think that's what this year is gonna be about: people working on it and trying to get past the paradigm you're describing.
Your protect section is about regulation and privacy and security and making sure home use is done properly, as you said. And I think a key aspect of it is avoiding the sort of addictive, dark-UX product development that we often see in consumer tools. Avoiding doing that in AI, which obviously is being done right now, while the edtech world tries to counterbalance it and build that kind of relational, social AI.
You're seeing pockets of it in studies and in products. I'm so excited that maintaining trust and relationships was so core to this message. I'd love to hear you talk about how the edtech field can be on the positive side of that equation.
[01:03:18] Rebecca Winthrop: One of the things that I think is gonna be really important this year, and you guys have done a great end-of-year series teeing up reflections from the crazy year we had last year, is having what I call a big-picture view.
So not just the four walls of the school, but a child's whole life, because that's what's affecting their ability to learn. And part of the big-picture view for the edtech sector is, A, read this report (and, you know, there are a million people cited in it), and B, really think about what the risks imply for you.
So for example, one of the big risks we found is erosion of trust, across all components of the instructional core. We found some superintendents saying, you know what, I'm not using any AI content, because in a very politically sensitive school district it takes so much to get curriculum passed.
And, you know, my district is constantly being sued about the books in the library. Are you freaking kidding me? I'm gonna bring in AI content? No, we're gonna go back to primary sources, like the Constitution and the amendments. So, you know, think a little bit about the context people are in.
I've talked to people doing incredible work on things like feedback with AI, because AI is actually pretty good at giving students feedback, and that can really help teachers expand and change their relationship with students. On the positive side, it's: oh, this is great, my students will get more feedback. I don't have the bandwidth to constantly give them feedback, I've got 30 kids, right? I'll review the final essay, and they'll get feedback on their drafts before that, and it'll be better, and it'll be great. And what we found was: well, it depends. If your students have a less trusting relationship with you, sometimes they interpret that AI feedback as less care and attention from you.
So you've got to look at the big picture. Don't just narrowly hone in on what AI can possibly do; look at it in the context of what's happening. I think that's the biggest takeaway. And, as I said, I was really honored to receive this award, which is sponsored by HP, and I saw it as a sign that the field is really serious, and technology companies are really serious, about figuring out how to do this right: looking at the big picture and investing in evidence and data and research.
[01:05:52] Ben Kornell: Yeah, I think part of the award itself just validates how important research will be to figuring out these great use cases, and then also, most importantly, identifying those that are potentially harmful. The other piece that stood out to me in your report is that, despite the concerns you address, there's an optimism in it about a path forward.
You know, as a researcher, sometimes you can get mired in the negative stories or negative outcomes. How do you keep a balance but also frame research in a way that is actionable and gives people a path forward? Because this isn't the first research report on AI and education.
There's been a lot, and it's certainly not gonna be the last, but I do feel like this is one of the rare ones that balances both: here's what we found, but also here's the prescription. How did you approach that as a researcher?
[01:06:56] Rebecca Winthrop: Well, I appreciate that. We spent a lot of time making sure that we could give people something they could act on at the end. Again, this was a report I led for the Brookings Global Task Force on AI and Education, but there were many, many people involved: many people on my team who co-authored it, including a student author, and hundreds of people in the task force, with an incredible steering group. So everyone said, we've gotta figure out what to do.
We didn't cherry-pick the data. If the data had literally said, oh geez, this is terrible, it's horrible, children should never use it, we would've said that. But that's not what we found. What we found is that there are really cool use cases, but they have to be bounded and controlled, and what's happening now is sort of every person for themselves, I would say especially in the US, less so in Europe, and a slightly different scene in Latin America and China, for example.
So we really spent a lot of time looking at the data and seeing what the benefits and risks actually are, and figuring out risk-mitigation strategies. And I will say that the hardest part of the report was trying to figure out that three-P framework, because we probably had 75 recommendations in the first draft, which is overwhelming.
So part of the answer, Ben, is that for people to take action, you have to whittle it down to the most important things, in a framework that people remember: everybody will remember prosper, prepare, protect. There are 12 recommendations across the three Ps. You don't have to agree with everything in the report, but we absolutely are asking everybody in the ecosystem to pick at minimum one of the recommendations and advance it.
And that is how we're gonna bend the arc towards the positive, enriched version of life where AI supports kids' learning and development.
[01:09:03] Alex Sarlin: The 12 recommendations really give people lots of entry points to take action. I will tell you my favorite recommendation, the one that felt like the bumper sticker for the entire edtech space: use AI tools that teach, not tell.
I just loved that. I know we're just about out of time here, but can you give us the 30-second overview of what that means, just to get everybody listening to start their wheels turning around "use AI tools that teach, not tell"?
[01:09:33] Rebecca Winthrop: That's right. And we have a lot of technical recommendations for people who are developing models, like trying antagonistic development or progressive unfolding of answers.
And the basic gist is: any AI tool that replaces human effort and cognition and motivation is not a good thing.
[01:09:53] Alex Sarlin: There you go.
[01:09:53] Rebecca Winthrop: There you go. So try something else. There are many, many other ideas out there that are percolating and doing cool things, and we should lean into those.
[01:10:04] Alex Sarlin: Phenomenal. It is really an amazing report.
It is comprehensive, and I recommend everybody listening pick it up right away. It's out now, fresh off the presses from Brookings, as you're listening to this, so definitely check it out. Rebecca Winthrop, absolute pleasure to speak with you, and congratulations again on this incredible award, the World Education Medal for Leaders 2025, founded and sponsored by HP, as you mentioned.
Really, really appreciate your time here. We hope to talk to you again soon on EdTech Insiders.
[01:10:35] Rebecca Winthrop: Thank you, Alex. Thank you, Ben. Take care.
[01:10:37] Alex Sarlin: Thanks for listening to this episode of EdTech Insiders. If you like the podcast, remember to rate it and share it with others in the EdTech community. For those who want even more EdTech Insiders, subscribe to the free EdTech Insiders newsletter on Substack.