Edtech Insiders

Week in EdTech 10/29/25: Alpha School's Backlash, Chegg Layoffs, Kaplan’s AI Pivot, Mem0’s “Memory Layer,” Big Tech vs. Higher Ed, and More! Feat. Rebecca Winthrop & Jenny Anderson, Authors of The Disengaged Teen and Justin Reich of Teaching Systems Lab

Alex Sarlin and Ben Kornell Season 10

Join hosts Alex Sarlin and Ben Kornell as they recap a post–New York EdTech Week full of highs and hard truths.

Episode Highlights:
[00:00:00] Alpha School's backlash and what it reveals about AI-based education.
[00:06:58] 1EdTech’s new K–20 alliance with Microsoft and Google for responsible AI.
[00:10:09] The risks and lessons from Alpha School's rapid rise and fall.
[00:21:53] Chegg cuts 45% of staff amid AI disruption and market pressure.
[00:26:01] Kaplan launches AI tools built on 85 years of learner data.
[00:31:02] Mem0 raises $23M to build a universal AI memory layer.
[00:38:10] Cal State’s OpenAI deal sparks debate on Big Tech in higher ed.
[00:44:18] The media’s anti-AI narrative and its impact on innovation. 

Plus, special guests:
[00:50:24] Rebecca Winthrop, Director of the Center for Universal Education at Brookings, and Jenny Anderson, award-winning journalist and co-author of The Disengaged Teen, on student agency, engagement, and the four learner modes.
[01:11:14] Justin Reich, Director of the MIT Teaching Systems Lab, on AI in Schools: Perspectives for the Perplexed and how educators can experiment safely with emerging AI tools. 

😎 Stay updated with Edtech Insiders! 

Follow us on our podcast, newsletter & LinkedIn here.

🎉 Presenting Sponsor/s:

Every year, K-12 districts and higher ed institutions spend over half a trillion dollars—but most sales teams miss the signals. Starbridge tracks early signs like board minutes, budget drafts, and strategic plans, then helps you turn them into personalized outreach—fast. Win the deal before it hits the RFP stage. That’s how top edtech teams stay ahead.

This season of Edtech Insiders is once again brought to you by Tuck Advisors, the M&A firm for EdTech companies. Run by serial entrepreneurs with over 30 years of experience founding, investing in, and selling companies, Tuck believes you deserve M&A advisors who work as hard as you do.

Ednition helps EdTech companies reclaim rostering and own their integrations. Through its flagship platform, RosterStream, Ednition replaces costly data providers and complex internal infrastructure with direct, secure connections to any SIS or data source. The result: scalable integrations, lower costs, and seamless experiences for schools and districts of every size. With Ednition: Reclaim rostering. Own your integrations. Cut the cost.

Innovation in preK to gray learning is powered by exceptional people. For over 15 years, EdTech companies of all sizes and stages have trusted HireEducation to find the talent that drives impact. When specific skills and experiences are mission-critical, HireEducation is a partner that delivers. Offering permanent, fractional, and executive recruitment, HireEducation knows the go-to-market talent you need. Learn more at HireEdu.com.

[00:00:00] Ben Kornell: My fundamental truth here is that AI is great at some things like going to the learning gym and practicing your math, practicing your language arts, like there's a bunch of wasted teacher time going through repetitive tasks and with slow feedback loops. The Wired article really hits on interesting pedagogical things like extrinsic versus intrinsic motivation, stretch goals versus core goals.

That's not an indictment, in my view, of Alpha School. It's basically saying, look, these levers are at play and how kids respond is highly variable. And I would say this is like an opportunity to learn more and think more about those things.

[00:00:44] Alex Sarlin: We are in such a funny moment now, it feels to me in the AI space where like it is now ubiquitous, right?

Everybody thinks about it. Everybody's trying to figure it out. Many organizations, many individuals are jumping in, writing books, putting together frameworks, putting together professional development, launching companies. We've seen lots of students and teachers and people entering the EdTech space and launching companies.

There's so much momentum around it, and yet I feel like we're still grappling with some of the absolutely fundamental concepts of what AI in education is gonna look like. And this idea of whether there's an advantage in being an 85-year-old company versus a 2-year-old company feels like it should have an obvious answer, and yet I don't think it does.

And that's kind of crazy, actually.

Welcome to EdTech Insiders, the top podcast covering the education technology industry, from funding rounds to impact to AI developments, across early childhood, K-12, higher ed, and work. You'll find it all here at EdTech Insiders.

[00:01:50] Ben Kornell: Remember to subscribe to the pod, check out our newsletter, and also our event calendar.

And to go deeper, check out EdTech Insiders Plus where you can get premium content access to our WhatsApp channel, early access to events and back channel insights from Alex and Ben. Hope you enjoyed today's pod.

Hello EdTech Insiders listeners. It's Ben and Alex, back with another Week in EdTech. We are fresh off New York EdTech Week, feeling high after all of the great conversations with so many great entrepreneurs, system leaders, innovators, technologists, all committed to transforming learning and teaching. And then we get smacked in the face with the headlines this week.

So here to grapple with it all is my partner in non-crime, Alex. How are you doing? Alex, have you fully rested up from an intense week of New York EdTech Week?

[00:02:50] Alex Sarlin: Almost? I'm almost there, back home. It was so fun to see so many great people. We are editing all the video that we took at the event, and we're planning on doing an episode with all of the Shark Tank winners.

They had some really amazing Shark Tank competitions. We're interviewing them later this week for an upcoming interview. EdTech Week, you know, it felt fascinating this year. It felt more sober this year than other years. I think EdTech is filled with idealists. It's filled with people who are visionaries, want the future to be really exciting, want change to happen.

I felt this year like there's a real acknowledgement that we are in a complex time in terms of government, in terms of funding, in terms of school policy, in terms of mental health and absenteeism. Like, there's so much happening in the education space, and then of course, even us who are very positive and excited about AI.

When you see headline after headline and all this pushback about the technology, I think there's a realization that there's some major hills to climb. We stay really positive. I'm still very optimistic about EdTech, but wow, you just heard it again and again. I sat on a panel with some superintendents from LA and from Tulsa, and they're just dealing with so many different

aspects of the system sort of cracking, including from the technology side. It just feels like there's a lot of work to do, but it's still an exciting time. And man, we should get into some of the headlines today, 'cause there's a lot of sort of polarized news: some really exciting stuff happening, some pretty scary stuff happening.

What stood out to you this week? 

[00:04:13] Ben Kornell: Yeah, before we dive in, I mean, my three takeaways. Oh yeah, please. I agree. Like, enthusiasm on the long-term vision, sober reality on the near term. Exactly. I think the three main points I took away were: one, the genie's out of the bottle. Kids are learning directly with the major LLMs, but the lines are blurred between what is learning, what is cheating, what's entertainment, what is social.

And so, like, one fact: kids are using it whether you like it or not, and students of all ages are using it. Two is, like, systems are slow, and schools and universities are struggling to keep pace and stay relevant. And they've got all these other, like, real deep social challenges that they're navigating.

And so it's not just change management, it's also all the other things that a school is supposed to do. And of course, people talked about Alpha School, and we'll get into that in a bit. And then third, I think, is EdTech is scrambling. There's just limited defensibility for AI features, and so distribution and quality content are still the most important things.

And perhaps there's some data moats and some use-case moats, but the fast following is just really common. So, you know, from a tech standpoint, your latest innovation is just buying you a small amount of time, and especially in B2B, because the sales cycles are slow, it's highly likely that your competitors could catch up if you're onto something that's fire.

So, a really interesting time. I think the backdrop of this is a lot of EdTech news coming out. To me, the three biggest headlines are layoffs, so it's Amazon and Chegg, so just macro. Second is we're now starting to see the backlash that you predicted, or the takedown that you predicted, on Alpha School.

So in K-12, like, grappling with how innovative are we really willing to be, and the press cycles of, we have the inverse hero's journey here. And then I think the third one is higher ed really grappling with how to make AI integrated into the learning. And I think my bigger-picture take is moving AI from the what to the how.

And so this idea that computer science, AI majors, and so on, like, let's learn the what. And now I think there's a shift of, like, oh, this has gotta be integrated basically into everything we do, much like the internet has become integrated into everything. So those are some of my top lines. What stood out for you?

And we can double click on any of these too.

[00:06:58] Alex Sarlin: Yeah, those are great. I'd love to start with the Alpha School article today. Those stood out to me as well. A couple of things that caught my eye this week. One, 1EdTech announced a collaboration, a sort of K-20 collaboration, to try to shape what they're calling responsible AI in education.

They're thinking about open standards for responsible AI. 1EdTech is sort of one of the major standards bodies in education and EdTech, and they launched this week with an event with Microsoft and Google. And I think that's an interesting thing to keep an eye on, just in terms of figuring out how to put all the pieces together and make AI and education less of a clash. We just continually are seeing places in which they clash, whether it's integrity or bias or hallucinations or not enough training or interoperability; there's lots of places where the system sort of rejects AI, all sorts of things.

Now I think you're starting to see some really serious initiatives to put them together. So that stood out to me. Google announced a new skills platform that's putting together a huge number of different education initiatives, including lots of AI initiatives. And knowing Google, I'm sure that Gemini and their AI tools are gonna be part of that.

So I think that was actually worthwhile to cover and think about, especially for adult education. But they're continuing to make big moves in AI. And then one headline that stood out to me, and I'm not sure I can make a lot of sense of it quite yet, is that Microsoft, which has been sort of in catch-up mode,

I think they also announced they're doing a web browser, following on with OpenAI and Perplexity, but they also announced they're making an AI that, quote, you can trust your kids to use, which is interesting, an interesting positioning for Microsoft, which often is not as much a part of the cutting-edge frontier model conversation.

I just find that intriguing. And then we should talk about the higher ed piece as well, because I'm seeing this very clear narrative start to emerge from the journalists who cover this stuff, which is basically trying to say that big tech is trying to sort of enter education. They've been saying that forever, but why should we trust a Google or an OpenAI with education, even at the higher ed level? They're just for-profit companies.

They're in it for money. Why should we trust them? And that is, you know, almost like a populist message coming from journalism. And I find it very dangerous. I find it really scary, because industry has so much to offer, especially to higher ed. And the idea of turning this into a purely suspicious relationship, because any company is for-profit: what that does, if you get it right, is you shut everything down, and then you're back to square one, and you've made schools have nothing to do with the workforce or with industry or with tooling.

And it's like, that's not a win. We also saw HolonIQ put out a lot of reports this week, of course, with their top 150s on different continents. So there's so much to talk about. Can we start with the Alpha School article, Ben? 'Cause that one just threw me for a loop, even though, yes, I'm sort of predicting that MacKenzie Price and Alpha School and their 2 Hour Learning model, because they've gotten lots of attention as sort of what AI education might look like, as a standard bearer, they're making themselves a huge target.

And we saw a sort of first arrow hit the target this week with a big article in Wired, basically going really deep into Alpha School's Brownsville campus and talking about parent reactions. And you have some really interesting experience with this from your AltSchool background. Tell us what you make of the Alpha School article.

[00:10:09] Ben Kornell: Well, first, let's just call out the trope: we have a heroic model that will save all, and then it's built up by the press, and then, yes, torn down by the press. That's right. So what are we to believe? Well, ultimately, there's probably some truth in the middle here, that it was never, like, the heroic be-all, end-all.

And it is also not what's described in this Wired magazine piece as, like, this hellhole that kids are trapped in and they can't escape, they can't eat. Literally, they can't eat. Let's be real, too. I mean, this is a private school, so you have choice: you go to the private school or you don't. And I understand in Brownsville it's a low-income community, and part of the Wired article raises concerns about advertising or marketing the success of Brownsville as the case for change for other things.

But again, like, this is not just an Alpha School thing. This is how basically every for-profit education model in the public zeitgeist goes. Yep. And I think we also saw a version of this with Sal Khan, where he was basically like, AI's gonna change everything. And then everybody got out the punching gloves and just said, Khanmigo is X, Y, and Z.

So in that sense, my heart goes out to them, because it's so tough when you're, like, riding high. You're like, oh my God, people believe in it. And of course, you have to have irrational exuberance to start anything in education. So you're like, yes, we believe in it, and it's authentic. And then you feel it coming down with these hit pieces, and you get defensive.

And I think the main thing that they have going for them at Alpha School is, one, that they have a billionaire, Joe Liemandt, backing them, so they're not subject to capricious funding cycles. Two, they're still really small in the scheme of things, so their ability to continue to tune and generate quality is high.

And then I think third is they're inspiring a lot of, like, follow-on innovation. That's true. And so part of their theory of change could be, like, we're gonna open a bunch of Alpha Schools, we're gonna open some of these charter schools, but we're also just gonna inspire people to think differently about how to use AI in learning.

And my fundamental truth here is that AI is great at some things, like going to the learning gym and practicing your math, practicing your language arts. Like, there's a bunch of wasted teacher time going through repetitive tasks and with slow feedback loops. The Wired article really hits on interesting pedagogical things like extrinsic versus intrinsic motivation, stretch goals versus core goals.

That's not an indictment, in my view, of Alpha School. It's basically saying, look, these levers are at play and how kids respond is highly variable. And I would say this is like an opportunity to learn more and think more about those things. But in our black-and-white press culture and thumbs-up-or-thumbs-down world, I think this is probably just the beginning of a wave of takedown pieces.

And I think there are also real forces at play that like the system the way that it is. So, last thing to say: I think it's really interesting that this article really talks about more of a rote learning methodology, but it's important to note the lineage here. Montessorium, montessorium.com, is an elementary curriculum around personalized, Montessori-style learning. AltSchool was acquired by Higher Ground,

Higher Ground was acquired by Alpha School, and so Alpha School is trying to adopt some of these, like, learner-centered, more learner-driven methodologies. What comes out in the story is that it's kind of top-down, but I think the intention, and you've interviewed MacKenzie, we've met the team, we knew Alpha School before

it was called Alpha School. Like, I think at their base, the idea of self-directed learning and kids unlocking their potential has always been at the core. So call me an apologist, call me what you will. I've seen this story and this arc play out before. And I actually think if we throw everything away from Alpha School, we miss a profound learning opportunity to really rethink how we instrument education in the AI age.

[00:14:50] Alex Sarlin: I agree. And I think your point that Alpha School is sort of planting a flag in a particular area of, hey, an entire school model can be redefined with AI-based education as a significant part of it. Just planting that flag conceptually, for one thing, that's why they got so much attention. I mean, there are other reasons.

Joe Liemandt and MacKenzie and Bill Ackman, like, there are other reasons they got attention. The article mentioned that Linda McMahon has visited Alpha School and talks about it. So, like, they've built momentum and snowballed momentum in some ways. But I think at heart it's just planting a flag in a new concept that people haven't really thought about before that draws attention.

And I hope that flag does create that. I mean, we know it already is. We're already seeing John Danner has created Flourish Schools, a set of schools that are trying to do some of the same model as Alpha School. I don't know exactly what the overlap is, but explicitly for low-socioeconomic-status students at the core, not doing it as a high-cost private school.

Obviously, Alpha School is thinking about that as well. I think that's the brightest side, because we have to hold two really distinct ideas in our heads at once. Like, I had been worried about Alpha School bringing down the concept of AI-based education, basically by planting a flag there and having that flag on a piece of land that falls off the cliff.

Think about, like, a Wile E. Coyote cartoon, right? If they plant a flag and it falls off the cliff, it could bring down the whole concept of AI school models. But if there are fast followers, if there are others doing it in sensitive ways, and if Alpha School, as I'm sure they will, adapts and starts to think about, how do we make sure this isn't the beginning of a sort of downward momentum?

Because obviously, as a private school, reputation is incredibly important for them, right? If the last thing that most parents have read is how Alpha School isn't working, and they don't let their kids have snacks, and they're spending their time all in IXL till they want to cry, like, that's some of the things in that article,

parents are not gonna send them there. So they have to think about publicity; that's at the core of the model. So they're gonna make adjustments, others will come along. I think we should try as an industry not to get pulled too far in any one direction. But yeah, this happened a lot faster than I thought. I mean, I wanted to toot my own horn here, because I saw this coming really quickly.

As soon as that really positive article about Alpha School came out, I'm like, we could start counting down until the takedown happens. But it was months, it was like two months, maybe three. There wasn't even a honeymoon period where Alpha School got to sort of start to build and build all those campuses in different places. Like, the pushback was very rapid.

And even within our EdTech community, we've seen a lot of pushback, right? We've seen a lot of pushback within the WhatsApp group of the EdTech Insiders subscribers. I think there's an instant bristling when you see somebody try to take the mantle of a movement and just run with it in sort of a very particular direction.

And I think that's part of that instinct that the journalists have. They know that people love the heroes and villains and to say, oh, you thought this person was a hero, actually, they're a villain. It's like, it's such a catchy story. We should just try to, if possible, avoid going too far in any direction.

[00:17:50] Ben Kornell: Yeah. I think the lesson here is you could take two lessons. One lesson could be keep your head low. As much as press might seem to be a good thing for you, it always turns on you. Yeah. Just like grind and don't put your head up or you'll get cut down because education has really complicated politics.

The other lesson you could take is, you know what, haters gonna hate, and, like, if you really want to build something bold and innovative, you've gotta be willing to put it out there. And then you've gotta prepare your community that you're also gonna have people wanting to take it down. And I think that the Alpha School team is mature enough that I think they picked option two.

And if you look at the type of people that are backing them, they have a mentality of, like, we're happy to piss off other people 'cause we're gonna do things the way that we're gonna do it. And I think we're in a political climate where the divide on lots of issues means that people who have financial backing and are willing to go against the grain are more bold in what they're doing and less apologetic about it.

Now the question will be, one, does the model really work at scale? Two, if it does, can it actually reach many, many more kids, perhaps all kids, given that there's a political lightning rod brewing around AI replacing teachers as a core storyline? And then I think the third one is, what's the economic model here for affordability and scale?

Yeah. Because right now with the tuition that's charged, it's out of range for like most families or most schools. 

[00:19:42] Alex Sarlin: There's one last thought about Alpha School, and it's based on that affordability piece. I mean, I think one of the things that really surprised me about Alpha School from the beginning, actually, is, you know, I talked to a number of people at EdTech Week about this.

It's like, the idea of using technical tooling to do a lot of the direct instruction, and to shorten the period of time that kids need to be doing that, is, I think, a rich idea. I think there is power there, and I think that's part of the appeal of that model. But the fact that they were using IXL, and that now, according to this article, IXL is, like, closing the contract with 'em 'cause they don't want to be pulled down in the mud here.

That's a really strange choice to me. As you know, Ben, we've chronicled hundreds and hundreds of companies, many innovative, really interesting ones that are trying to figure out how to use AI in education in all kinds of ways: in multiple subjects, in project-based learning, in immersive ways, in creative ways, authenticity, just so many different ways. And the idea that they're like, oh, we'll do AI, we'll use

a pretty old-school tool, slash very old-school tool, that yes, maybe uses some AI in its adaptivity. But, like, the idea that IXL becomes the core tool for representing AI-based learning in education is just, I think, a little bit laughable. And I think, to me, like, as an EdTech person, that's their cardinal sin.

If you're gonna say you're gonna use technology to help students learn more quickly and in a more personalized way, you have so many options. You have so many interesting options right now. And to just go there is such a knee-jerk, dopey strategy that they deserve the negative press just for that. And hopefully, now that IXL, according to the story at least, is sort of not gonna be the solution,

maybe they have the opportunity to be a little bit of a distribution network for some of the most interesting and exciting AI in education. That's my last point, and it relates to the affordability piece, because IXL is not an expensive tool. It's used by many, many public schools. But there could be some really interesting affordability mechanisms where you can get really high-quality technology into a school model that may not have to be $40,000, $60,000 a year.

Anyway, we've talked about this a lot, but it is a lot. Let's talk about the Chegg piece. I think the layoff story that you mentioned is also really key. Are you cool moving to that?

[00:21:53] Ben Kornell: Yeah. I mean, with Chegg, I don't have much to say other than they were our first "ChatGPT killed X, Y, Z company" (Yep.) story that we ever really covered on the pod.

And I think we immediately said, whoa, this is overblown, because there's a lot of fundamentals that are good with Chegg. I don't know, maybe we were wrong. Maybe it wasn't overblown, because, I guess, what I had not fully accounted for is that so many of the use cases with Chegg were about getting answers to homework.

Yeah. And ChatGPT is quite good at replacing that. And so this big cut and refocus on, like, AI study tools feels like also an acknowledgement that their, like, market potential is much lower. I think the thing that probably is a bigger EdTech story is that we've been in a period where, when the interest rates went back up, there was a bunch of layoffs at big tech companies.

Right-sizing from the pandemic, and cost of capital was changing. It feels like there's another wave happening right now, which is, AI and our efficiency, we're pivoting our business, and there's another round of belt-tightening, which just floods the market with really experienced candidates from an Amazon.

And then, you know, in EdTech you have these layoffs, and it's already hard enough to get a job in EdTech. It just makes it a really tough labor market right now. And I'm seeing postings for director of product jobs that are, like, at 120, 130. That stuff was like 180 six months, two years ago. So I do feel like we're in, like, an economic recession period here in EdTech land.

Yeah, it's 

[00:23:50] Alex Sarlin: a great point. And the headline was that Chegg cut 45% of its staff. They also are bringing Dan Rosensweig back in as CEO, and Nathan Schultz, who's the current CEO, is moving to the board, or sort of a board advisor or something. And I think they have tried to do some other things in the last couple of years, right?

They've been trying to build up a Chegg Skills business. But yeah, I think maybe we were, for once, overly optimistic, and didn't anticipate that AI was gonna be as impactful at that time. Way back when, Ben, it was like they had mentioned in an earnings call that AI was having some impact on their business, and basically everybody fled the stock.

And I wonder about that sort of assumption, that anything AI touches in a negative way, anything it negatively affects, is gonna go off a cliff. It felt overblown at the time, but I think maybe now it's starting to become more of a common wisdom, or more commonly expected, that if AI moves into a business space and can drastically change the economics of it, then the whole space is at risk.

One interesting headline I caught this week, it was just a press release, but I think it's worth noting for the entire community, is that we saw Kaplan this week release a whole suite of AI learning tools, from essay grading to test prep to AI tutoring. And what interested me most about the headline, I mean, the move makes sense, right?

All the incumbents are doing things in that space. But what interested me about the headline is that they talk about it being based on over 85 years of data. That's how they're trying to frame it, and frankly, it's true; Kaplan's been around a very long time. And it just struck me as something we've been talking about forever here: the idea of what is the value of proprietary data in AI, in creating a real moat, a real differentiator.

And the question here, I think, just one for all of us, and I'm curious what you think about it too, Ben, is like, if Kaplan is building this set of tools based on decades of data from many hundreds of thousands of students, if not probably many millions of students, are they gonna be better tools?

Is Kaplan's AI tutor, is Kaplan's predictive model that predicts learners' test scores early on, is their essay grader gonna be better than those from companies that are much, much younger? I'm curious what you think about that. We talk about proprietary data as a moat, at least in concept;

this is it in action. Will it work?

[00:26:01] Ben Kornell: Gosh, you know, on the proprietary data standpoint, I think the more specific the use case, the better the moat is. And I think early on we covered Bloomberg creating their own LLM. I think when it's narrow, around test prep, let's say, that can be a definitive moat, because others don't have that data and it's such a specific knowledge base.

Yeah. But for this particular case, I worry that there's so much publicly available data that's already been hoovered up by the AI models, and most of the knowledge is not so specialized, that I do wonder about the technology play here. In a weird way, and I talked a little bit about this at EdTech Week, Kaplan's physical presence is actually an advantage over, like, LLM-based digital learning.

And I wonder whether, in a world of AI where, yes, you know, proprietary data should feed in, is that the moat, or is the moat that you actually have a physical location where students can show up and engage in in-person learning? That is actually a differentiator. So if you could combine those two and say, look, it's best in class,

you know, you've got basically a dual value prop. I think students and parents could buy it. The last thing I'll say is, price sensitivity for these kinds of things tends to be really low. So the idea that I have to compete with X or Y: a lot of students are gonna buy X and Y for stuff like high-stakes testing.

And so in that sense, like, maybe I don't buy the premise that it has to be totally differentiated, because what we've seen over the last 20 years is that out-of-pocket pay for test prep has basically had no pricing bound.

[00:27:59] Alex Sarlin: Yeah, it's a really good point. So, I mean, when you think about companies, big incumbents like Kaplan, releasing AI tools in the test prep space, and then you think of new entrants, like an AE or Atypical, that are doing test prep in the AI space:

they don't necessarily have to be in direct competition. It doesn't have to be that you're trying to, you know, force students to choose between one or the other. There is the possibility of using multiple ones, using them for different reasons, in that period where test prep is top of mind and has really high stakes.

I think it's a really interesting point in this particular space. We are in such a funny moment now, it feels to me, in the AI space, where, like, it is now ubiquitous, right? Everybody thinks about it. Everybody's trying to figure it out. Many organizations, many individuals are jumping in, writing books, putting together frameworks, putting together professional development, launching companies.

We've seen lots of students and teachers and people entering the EdTech space and launching companies. There's so much momentum around it, and yet I feel like we're still grappling with some of the absolutely fundamental concepts of what AI in education is gonna look like. And this idea of, is there an advantage in being an 85-year-old company versus a 2-year-old company?

Feels like it should have an obvious answer, and yet I don't think it does. And that's kind of crazy actually. 

[00:29:16] Ben Kornell: Yeah, I mean, well, one, there's the advantage of distribution. That is an advantage. Two, there's the disadvantage in being quick, nimble, technology-enabled when you're that old; the kind of typical calcification of a longstanding company would give you a disadvantage versus others. But theoretically you could fast follow or acquihire capabilities.

So that one, I think, can be neutralized. I think the third one, though, about data moats: how real or how differentiated are they? If we're being truly honest, I just don't buy that as a sole, you know, lever of differentiation right now. You know, we had Justin, who worked on a chemistry startup that was all about chemistry specialization.

There's so much, like, niche specialization in there, and the TAM is so small, not a lot of people are going after that. Those are the types of companies where I feel like the differentiation of data is a superpower. Yep. But when you're talking test prep, and it's so broad 'cause Kaplan's gotten so big, it's very hard to see that really changing the game.

And you know, my son does, like, math competitions and stuff like that. Every single week there's, like, a new company that, like, preps for a math competition. And, you know, can any of them make enough money or get enough scale? The answer's probably no. But are they useful tools for a student? Yes. So, you know, yeah, that's probably what learning looks like:

value destruction for education companies, but potentially a better learning experience for self-motivated learners.

[00:31:02] Alex Sarlin: Yeah, you can add a lot of value with some really niche players. A company called Debater Hub caught my eye. I know your son is a debater as well, but, you know, there's a company all about debate practice, which is a very niche and very specific field.

It's not that niche, there are a lot of people who do it, but it's pretty niche. You know, the idea of focusing on that, building data around that, creating features specifically for that use case, is really powerful. One other big tech story went a little under the radar. I sent this to you directly, Ben; I wanted to chat about it 'cause I think it's an interesting harbinger of things to come.

We've talked in the newsletter about how one of the real impediments to AI in education, especially AI in terms of, like, longitudinal, personalized education, is that AIs don't always sort of use memory in a very clear way, right? They forget things between chats. You have to sort of put pieces together to get it to remember.

It will remember sort of certain things about you if you let it, but those things may not have anything to do with you as a learner. They may just be, you know, what recipe you asked for six weeks ago. You might like, you know, chicken Kiev; that doesn't matter when you're learning math. A company called Mem0 announced this week that they raised, I think it was like $23 million, 20-something million dollars, and their goal is to build the universal memory layer for AI.

And this is not an EdTech story, right? This is the universal memory layer for AI, period. It would be for business, it would be for productivity, it would be for health, all sorts of things. But I am excited to see that competition starting about the universal memory layer for AI, because I think that understanding of what memory should look like feeds into something we've all talked about a lot in this space,

which is what it would look like to have a persistent learner profile: to know who somebody is year to year, within school, or between high school and college, or between college and workforce. So many ideas. I'm curious what you made of that story and whether you think this memory piece is gonna play into the future of AI in the short term.

[00:32:56] Ben Kornell: So my real question about the memory layer is, is it interoperable with basically every model one would want?

And I think that's the premise of the startup, is to hold the memory independent of these other platforms. If I were one of those platforms, one of the best ways to create lock-in is to solve the memory problem myself. And I think that the platforms themselves have a lot of leverage to prevent somebody else from owning the memory layer.

But let's just be real. This is data storage. This is Dropbox for the AI generation; let's call it what it is. It's actually a data repository. How is it that Dropbox, how is it that some of these data storage sites, actually made it, and made it for a really long time? It's because the mining and the, like, storage and the specific use cases are pretty painful.

And when you've got, like, amazing technologists who are trying to be on the frontier of an AI, you know, an AGI model, those types of engineering questions are challenging. By the way, Dropbox did open up a new AI agent with the idea that it could live on top of your own personal data. Totally failed launch. Like, you know, I don't know, I think they're Web 2.0, but it has, like, a Web 1.0 kind of feel to it.

But you know, I think you and I are both excited about this, and the real question is, in the value stack, is there room for something like this? Yeah. The other thing I've been doing is just, like, creating the Google folder of my, like, core intellectual property and then uploading it into the different LLMs.

But I think Gemini is probably the best, just because they have surface areas that allow you to kind of connect it with Google Drive. Claude is connected with our company's intel, you know, and so it can draw from that. But I think this would be a breakthrough for learning, if we could know you longitudinally. And also, just imagine what a gold mine it would be for advertisers, et cetera, to dive into that.

So that's my fear: if somebody owns this, how secure is it? How shared is it? That was one of Dropbox's primary value propositions, that, you know, it's protecting your data from the outside.
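A quick aside for readers who want to picture what's being discussed here: below is a minimal, hypothetical sketch of what a model-agnostic memory layer could look like. The names (MemoryStore, remember, recall) and the keyword-overlap retrieval are illustrative assumptions, not Mem0's actual SDK; a real system would use embeddings, vector search, and persistent, access-controlled storage.

```python
# Hypothetical sketch of a model-agnostic "memory layer" -- illustrative
# only, not Mem0's actual API. A real system would embed facts, run
# semantic vector search, and persist to durable, access-controlled storage.
from dataclasses import dataclass, field


@dataclass
class MemoryStore:
    """Durable facts about one user, held independently of any single LLM."""
    user_id: str
    facts: list[str] = field(default_factory=list)

    def remember(self, fact: str) -> None:
        """Store a fact (a real implementation would dedupe and embed it)."""
        self.facts.append(fact)

    def recall(self, query: str, limit: int = 3) -> list[str]:
        """Return the stored facts most relevant to a query.

        Keyword overlap stands in here for semantic search.
        """
        words = set(query.lower().split())
        return sorted(
            self.facts,
            key=lambda f: -len(words & set(f.lower().split())),
        )[:limit]


# Any model provider could read from the same store, which is exactly the
# interoperability (and lock-in) question raised above: who owns this layer?
store = MemoryStore(user_id="learner-42")
store.remember("struggles with fraction word problems")
store.remember("prefers worked examples before timed practice")
context = store.recall("fraction practice")  # inject into any LLM's prompt
```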

[00:35:24] Alex Sarlin: Yeah, I mean, I think that universally accessible and secure are probably the two sort of core killer-app features that you would need.

And you know, speaking of Google: Google, three weeks ago, invested in a 19-year-old founder doing something similar, a company called Supermemory, that's also trying to build a memory layer. So that might be a move for them to start to, you know, lay seeds. I'm sure they're thinking internally about what you just said: the idea of something that's built on top of everything we know about you from your Google Drive, which is often everything there is to know about you, in many ways, and being able to work from that.

Right. I think that's a really good point, that the Gemini-Google Drive connection could be a really strong player in this space. What's intriguing about it to me is that this is now becoming a known problem, like you're saying, with Box and Dropbox and others in that space. The idea that, oh, in this internet age, we all have so much data, so many videos and photos and documents, meant that storage, huge amounts of storage, and Google Drive, of course, became incredibly important.

That created this whole space of storage companies, and now it feels like storage companies may morph into these sort of AI memory companies; there's just gonna be this new category of AI memory. I think one of the most interesting things about it for the EdTech space is you can maybe start to presume that this is going to happen at some of the highest layers of technology.

And like you mentioned, it might be tempting to build it yourself. And if you do do a good job of it, it is a real differentiator. But if I were an EdTech company CEO right now, I don't think I'd put a lot of resources into trying to build the memory layer, for the reasons you just said. It's incredibly complicated.

It has to be super secure. There's lots of gotchas there, and I think it's being pursued by companies that are gonna be, you know, totally focused on it. But if you are an ed company, maybe you can assume that two years from now there may be a universal memory layer, and there may be a way for students to just have their Supermemory account, which has everything about them, everything they've ever learned, all their SIS data, all the different pieces, and be able to build on top of it.

Hard to say. I mean, it is a tricky space, but it just feels to me, when I saw this news, I'm like, okay, this is now a space being pursued really diligently by multiple different players in big tech, with lots of money behind it. It's probably going to happen, and it's most likely, I think, gonna happen outside of EdTech, although the compliance and regulatory nature of EdTech might mean that, you know, a PowerSchool or an SIS might find it.

You know, there still might be a real moat there for the educational use cases, because companies like this have many faster ways to make money than working with education, as we see in many parts of tech. But it's an interesting space. Can we talk about this article? I don't know if you've got a chance to look at it.

This one about big tech making Cal State its AI training ground.

[00:38:10] Ben Kornell: Oh yeah, yeah. Another hit piece. Yep. Big time. You know, like, my breakdown on this one: one, the interesting question is, how do we move AI from the what to the how, and what does higher ed that's AI-enabled look like? The second one is, what should people pay for it?

And there's a lot of criticism in this that, you know, the Cal State system was paying all this money, and then Google gave it away for free to another university system. And, like, it's just so hard when you're a school and you're trying to bid out stuff, and then you get these, like, press folks coming after you, when probably there's a lot of, like, services agreements in here.

You know, I know how these things work, and it's just painful to read it in the New York Times, because the New York Times is gonna have a better story if it's, like, you paid 15 million and they've got it for free, when in reality it probably looks a lot different. I agree. And then third is, like, we need to reimagine what community college and state colleges look like, and they need to be better at preparing kids for real jobs and also real citizenship in the future.

And it, like, almost presupposes that everything was going fine and now they're trying to put in AI. It has not been going fine for quite some time, you know? So you and I, we talk a lot about the first one, which is, like, what's higher ed going to look like? But man, the press is not doing us any favors in, like, making it easier for administrators at institutions to think differently about how they would proceed.

I mean, just, you know, for context too, the Cal State budget is so insane. I don't know what the exact number is, but it's billions of dollars. Yeah. And so, like, if you have an OpenAI and you could do a $15 million contract where you have their full attention and they're doing services and working with you, I'm sure that the counterargument is, like, what could you have done with that 15 million for scholarships or whatever.

But, like, if this is a technology that's going to reshape the future, and you have a major player who's going to make you the landmark university that they're going to invest in. That seems like a really

[00:40:36] Alex Sarlin: good deal to me. Me too. I agree with everything you're saying. There is this thing about defending the status quo, about this idea of trying to compare the cost of contracts. Yeah, I think it was, like, $16 million for giving OpenAI to, like, the entire Cal State system.

And they quote the person in there being like, it was an amazing deal. Like, they gave us a huge discount. It was an amazing deal. And yet then the piece has to go on and say, but, you know, they could have gotten it for free from Google. So what are you gonna do? Natasha Singer, she wrote the article a few weeks ago that we covered, about the coding movement.

You know, I'm not trying to do a hit piece on any particular journalist per se, but I do think that we're getting to this point where I read Matteo Wong in the Atlantic, and every single headline he writes is about how some big tech AI company is trying to, like, run the world. Every article, every week.

And you're like, there are these journalists who, I think, see the enthusiasm, the sort of unbridled, maybe naive enthusiasm, let's be real, techno-optimism happening around the AI space. And they're like, I'm gonna make a name for myself by going the opposite way, by being the contrarian, by being the truth teller.

And they create these narratives where, as you say, it's like, this is a moment where, let's be real, it is not yet proven, it is not yet evidence-based, that a Cal State, or any individual college, if it brings a big tech company like an OpenAI into its ecosystem, is going to have huge

positive effects on graduation rates or on grades or on, you know, career outcomes. It's not like people are doing this because they've studied it for 10 years, and that would be the ideal. They're doing it because it's incredibly exciting new technology. Kids are using it, professors are using it, people are really excited about it, and people want to be on the cutting edge of this new world. They want to shape

what this looks like. They wanna be part of the story of how this can work. So then these journalists come in and try to pull the rug out and poke holes, like, on day one, like, before anything has happened, positive or negative. I just think it's malpractice. And Natasha Singer specifically, I feel like I'm being a little mean here, but, like, she's about to come out with a book about how big tech companies have created this whole coding movement.

How they've injected themselves into education, how they've created this entire, you know, how they really think they can do better than schools. And it's this incredibly specific narrative. But underneath it is the presumption, which I think is exactly what you're saying, that this is a system that is, like, doing just fine on its own, and big tech sees it, is hungry, and is coming in to wreck it, coming in to inject its own

techno-optimist DNA and then, you know, run away with the money. It's like, you know, on this show we have interviewed people from IBM, we've interviewed Microsoft, we've interviewed Meta, we've interviewed Google, we've interviewed Anthropic, we've interviewed people from all over the ecosystem, and I have never met a single person who remotely thinks like that.

And they come from the exact opposite perspective. They're like, we wanna improve outcomes. We want to give access to college students. We wanna make things better. We wanna create the college-to-career pipeline. Like, they're so positive about why they're doing it. And then people come in and say, this is some kind of land grab.

I'm, like, offended by it. It feels like it is insulting to the entire education technology space. It's insulting to education administrators. It's a very dark worldview that I think is popular right now, right? Because we have seen malpractice from big tech companies. Like, we can't pretend that they don't do bad things. But the idea that they're trying to eat education for their own nefarious purposes,

I just find illogical and horrific. And the idea that the coding movement, that people trying to create coding classes in school, is some kind of self-serving, like, trick is absurd. I just feel so,

[00:44:18] Ben Kornell: Wow. Let's get Natasha Singer on the pod. Let's do it. I've written it down.

[00:44:24] Alex Sarlin: I've invited her on.

I want, 

[00:44:26] Ben Kornell: But you know, I think the reality is, if you just take who's right, who's wrong, what's going on, if you just said, what do we all care about? We care about learners reaching their full potential and having choice and opportunity in their future. When we have these, like, destructive narratives that are all takedown, whatever you're taking down, whether you're taking down existing systems or new systems,

what I find troubling is that there's no alternative solution offered. And so, what is the alternative? Like, let's take the extreme view that every company is acting in extreme self-interest, that anything they're doing in education ultimately has, like, a bottom-line profit motive or something like that.

One, they must be very bad businesspeople to continue to go after education, 'cause man, there's a lot of other profitable verticals. And by the way, like, finally we're getting some attention from big tech; like, please don't go away. But you know, even if you took that at face value, ultimately a student, or a student and their parent if they're under 18, they also have their own self-interest.

Then you have to apply the same ruthless self-interest to them. If they're attracted to that, and they're saying, this is what I want too, then you have to acknowledge that something in the equation is working, or you'd have to basically believe that the kids and their parents, or the students, are getting totally fleeced.

And you know, I think there's worthwhile arguments to have on that. But at face value, when you ask a parent, or over the last 20 years if you had asked a parent, should my kid learn coding skills, would you like access to that? A parent who only wants what's best for their kid would say, absolutely yes, because they see a big industry and big opportunity and, you know, a future state where having that skill set could be a differentiator.

So that's where, like, let's be constructive. Let's not tear everything that's new down. What ends up happening, the actual effect of all of this, is it just paralyzes institutions. And then you come back to, okay, so we're just gonna keep doing the same old thing, where we can see the outcomes are not what we want.

[00:46:49] Alex Sarlin: Other countries, I mean, Europe has done apprenticeship models with for-profit companies for decades and decades and decades, with Siemens and many big European companies. In South Korea, Samsung has huge impact in education. Like, it's just such a strange idea that everybody should sort of stay in their lane, that schools should do their thing and tech should do their thing, and that if they cross paths, it's because tech is trying to,

[00:47:12] Ben Kornell: I do wanna say for our listeners, we are conscious that we are a podcast talking about EdTech and then we're criticizing press.

I know. I will say, we don't see ourselves as press. Like, we're not investigative journalists. We're really, like, making meaning of what's out there. But I do think, and it's funny that we're bookending with the Alpha hit piece and the Cal State hit piece, but I think one of the things we care about, 'cause we care about kids and we care about learners and the EdTech space, is that if we let the narrative go, and the public conclusion is AI is net-net bad in all ways, right,

for learning, the outcome is more of the same. The opportunity is to shape the narrative in a way that connects to use cases that have evidence, that says this could be good for kids. And that's where I think we'd love to have a press corps that is able to tell a more nuanced story. And we'd love to have, like, visionary founders and leaders that don't over-rely on, like, a black-and-white story of pure success, or overstated success.

So not that either of those organizations do that, but that's, I think, the context. So before we wrap, anything else you wanted to end on? It's been a crazy week, and here we go, like, sprint to the finish of the end of the year.

[00:48:43] Alex Sarlin: You know, I just wanna introduce: we have amazing guests coming up on this podcast, so please keep listening, because these are two incredible sets of guests.

We were talking to Rebecca Winthrop and Jenny Anderson, the authors of The Disengaged Teen, which talks about the sort of four ways in which teens can be engaged or disengaged in school, and what EdTech companies can do about it. Fascinating interview. And then we talked to Justin Reich from the MIT Teaching Systems Lab, who just put out a really interesting report, AI in Schools: Perspectives for the Perplexed, basically advocating doing local experimentation rather than trusting that we have any kind of clear narrative on what to do with AI yet. Really interesting conversations with really smart people.

So stay tuned for that. Otherwise, take us out, Ben.

[00:49:25] Ben Kornell: Yeah, thanks so much for joining us. If it happens in EdTech, you'll hear about it here on the Week in EdTech. Talk to y'all soon.

[00:49:32] Alex Sarlin: On this amazing episode of Week in EdTech, we are talking to the authors of The Disengaged Teen: Helping Kids Learn Better, Feel Better, and Live Better.

First off, we have Rebecca Winthrop. She's the director of the Center for Universal Education at Brookings and an adjunct professor at Georgetown. She leads global research to transform education systems and advises governments and organizations focusing on evidence-based strategies to help every child thrive.

She's currently leading the Brookings Global Task Force on AI and education. Jenny Anderson is an award-winning journalist, author, and speaker with more than 25 years of experience. Her work has appeared in some of the world's leading publications, including The New York Times, where she was on the staff for 10 years.

Time Magazine, the Atlantic, the Wall Street Journal, and Quartz. Rebecca Winthrop and Jenny Anderson. Welcome to EdTech Insiders. 

[00:50:24] Rebecca Winthrop: Thanks for having us. Great to be here. 

[00:50:26] Alex Sarlin: Yeah, I'm so happy to speak with you. We've been crossing paths at education conferences. The book has been making a big splash. It's such an important topic.

Jenny, I wanna start with you. You come from a journalism and science of learning background. When you and Rebecca set out to write the book, what first drew you to disengagement as a topic, and what were some of the myths about disengagement that you've exploded as you've done this research? 

[00:50:50] Jenny Anderson: Yeah, we had a really simple question when we set out on this book.

It was: why do so many kids hate what they do in school all day long? I mean, it was really very basic. A lot of kids seem really unhappy. There was a thesis that it was all social media. We had a sneaking suspicion that that is powerful, and I'm sure we'll talk about that. But there was another component: something about how kids spend their days really wasn't working.

What was that? What could we learn about that? So that's really where we decided to dig in, and we came to engagement really from a kind of open research question. We looked at the evidence, and it was so profound: in third grade, 75% of kids love going to school. They love what they do at school. By 10th grade, that's about 25%.

That's a problem. If you don't love what you're doing all day, you're not gonna put in the energy and the motivation to do it. So it's like, Hey, what can we do as parents, as educators, technology? What are all the component pieces so that we can make the experience of kids' days better, more engaging, more exciting?

So we kind of landed on engagement. We didn't start out to go looking for it, but it was certainly where we landed with a bang. 

[00:51:53] Alex Sarlin: It's really the elephant in the room, I think, in a lot of the issues in school: the idea that students come in and they are disengaged, they're just not always interested in what's happening from the learning perspective.

They're not always connected to each other or feel connected to teachers. And you have really gone deep on this, Rebecca. The book emphasizes the idea of helping kids build agency and curiosity and emotional regulation. And these are things we don't always think of when we think of what happens in school.

Sometimes we think of worksheets and grades and assignments and essays. From your research, how do you think about agency as a leading force for how teenagers can actually be involved in school? And what can we do to encourage that agency? 

[00:52:31] Rebecca Winthrop: This is a very good question. And again, it came out very much from our research.

One of the framing studies, which I remember reading and thinking, oh my gosh, what are we doing to our kids, was a survey that showed that adolescents in the United States are subject to twice as many rules as incarcerated felons. And if you think about it, adolescence is a time when kids are trying to learn to stand out and fit in.

They're beginning to think about what they wanna be, how they wanna contribute, who they should connect with. Like, it's a very exciting time and the kids are amazing, but the contexts they're in are squashing their ability to do all those things. And so we really realized that engagement, which is where, as Jenny said, we landed, is not so simple as kids are engaged versus disengaged.

Right? They show up in a bunch of different ways, and actually you can pull apart engagement and agency. You could have kids who were super engaged and becoming great followers of instruction, because that is the context they were in and what they were told to do. And we found that those kids, and I'm sure, Alex, maybe we'll talk about the four modes in a minute.

[00:53:47] Alex Sarlin: That's right. 

[00:53:47] Rebecca Winthrop: Or do you want me to dive in here? 

[00:53:49] Alex Sarlin: Start it off and then we'll go further. Yeah.

[00:53:51] Rebecca Winthrop: Yep. Anyways, those kids who are becoming great followers have sort of a fragility to them, because they're not actually learning what they want, how to navigate the world on their terms, where they wanna go. And so we define agency as the skill and will to set a meaningful goal and then go meet it, which includes maybe asking for help, and that, when paired with very active engagement, really unlocks and unleashes kids. So in short, we found that kids show up in four different modes of engagement: Passenger mode, Achiever mode, Resister mode, and Explorer mode. And Explorer mode is where agency and engagement really meet.

And that is actually where kids are being prepared for a world of AI and fast change and uncertainty. And yet there's very little time for kids to spend in Explorer mode, especially in middle school and high school.

[00:54:46] Alex Sarlin: Let's unpack these modes. 'cause I think this is an incredibly powerful framework to understand engagement in schools.

As you say, it takes us away from the duality of engaged versus disengaged. Are they head on the desk or are they hands in the air? That's really not how it works. Jenny, unpack these four learning modes, the Resister, the Passenger, the Achiever, and the Explorer. How do they show up in schools? And we won't get too deep into this, but how might EdTech companies start to identify them as well?

[00:55:09] Jenny Anderson: Sure. So Passenger mode, which is the most prominent (more than 50% of kids say this is how they show up), is kids who are coasting along, doing the bare minimum. They're showing up, but they're not actually really doing the hard work of learning. Achiever mode, which Rebecca was just talking about:

These are the kids who aim to get gold stars in everything they do. They're the go-getters. Everybody loves 'em. Parents feel good about 'em, teachers feel good about 'em. And yet, as Rebecca pointed out, often very fragile. They're fragile learners. They do not like to take risks, and they are learning to follow.

They're learning to jump through the hoops we put up for them, and not actually identify the hoops that they care about. So that is a real risk, especially in an age of AI, and especially with technology that really wants to make a lot of those decisions for you. So that agency becomes very important.

Resister mode is kids who are acting out, withdrawing. We know them; they're dubbed the problem children. We argue they're children with problems; there's usually something going on there. We need to get to the bottom of that. And then Explorer mode is where we want more kids to spend more time. Especially, this is the skill kids will need in an age of AI, as Rebecca said: that ability to identify goals, a toolkit of strategies to go after them, and some resilience.

Right, that didn't work, I'm gonna try this. ChatGPT didn't write my essay; I'm gonna have to try to do that first draft. Making that really, really intentional decision to do the harder thing is gonna be, you know, a component of this Explorer mindset: being brave with learning and being able to take the risks of learning. And I just wanted to put it in the context of a question you asked earlier, which I didn't answer, which was myths.

I think there's a myth that a kid that is not trying is lazy. I think there's a myth that a kid that is getting all A's is set up for life. I think there is a myth that a problem child is going to be a problem adult. All of those are myths, and our modes really unpack why those kids are in those modes, because of the context they're in, and how we as the adults around them can shape those environments to really shift 'em into Explorer mode, with technology, but also being very intentional about how we use that technology.

[00:57:05] Rebecca Winthrop: The one thing I would add, Alex, onto that is your last question, what Jenny was just touching on there about technology. The thing that we really struggled with when coming up with these modes was that Achiever mode is what the system is asking kids to do.

[00:57:22] Alex Sarlin: Yes. 

[00:57:23] Rebecca Winthrop: And what we need them to be in is Explorer mode.

And in fact, when kids do have agency and are deeply engaged, they actually get better grades. So there is no trade-off on actual academic learning, 'cause that's another myth. But trying to distinguish between Achiever mode, where kids are doing well, they're getting good grades, they're organized, they have goals (there are good skills that come with being in Achiever mode), and Explorer mode is a nuanced and sometimes subtle difference, but it's really important.

So the most common thing that we heard from kids ('cause we followed a hundred kids, in addition to partnering with Transcend on a huge survey), from the kids who were really in Achiever mode, was about the things that they liked least about school. Like, you know, what was the worst part of your day?

Oh my gosh, my teacher just won't tell me what to do to get an A. Right? And that is real learning fragility. And so anytime that tech can optimize for Explorer mode, pushing kids into Explorer mode and not just keeping them stuck in basically learning the content to learn the content, that's where I think technology is gonna really add huge benefit.

[00:58:43] Alex Sarlin: Let's double click on that, 'cause I think what you just brought up, what both of you just brought up, is maybe one of the most important questions for the EdTech field, and one that I think could have some interesting ramifications in the world of AI, as you mentioned. So I think of the sort of drill-and-kill, the worst type of ed tech, right?

The drill-and-kill, where every kid is sitting on their own computer trying to answer their math questions, right? That is sort of the stereotype of the worst type of ed tech. But in that world, you think about your different modes, right? The Resister is not doing it, right? They're walking around, they're talking to their friends, they're sending notes; they're just, I don't wanna do it.

The Passenger is kind of disengaged, maybe moving slowly; they're just being carried along. The Achiever is going as fast as they can, trying to answer all the questions. What is the Explorer doing? And I raise this to bring up a bigger question, which is that I think sometimes the structure that ed tech puts into the classroom doesn't even allow for exploration.

Exactly. Not all ed tech, but the sort of stereotypical ed tech. Tell us about what it would look like for ed tech to create Explorers. What type of ed tech can do that?

[00:59:44] Jenny Anderson: I'm gonna lean into one, and I'm sure Rebecca will have a bunch. But I am super encouraged. I wanna go back to this question of bravery, because I think it is very hard to ask a question.

We were really surprised in our research to learn that a lot of kids say they only ask questions when they know the answers. And so again, a really great opportunity for EdTech is to sort of coach kids to be asking these questions, which can be harder to do in a big classroom. And so this idea of, how do I ask questions, both of a teacher and of my classmates: I think that prompting is really encouraging.

So I saw Dacia Toll, I don't know if you know Dacia, but she's at CourseMojo. They're seeing extraordinary results in a product that was built for a very specific curriculum and age group, to help kids who are very uncomfortable with reading, and to coach them when they can't get through something.

Hey, I think you can do this, but I think you need to try this. And that reflection capacity: what did you try that worked, and then what do you think you need to do next? So then putting it back to them. It's not always answering for them, here's what you need to do, or even, here's the question you need to ask.

Practicing that question, maybe offering the question the first time and then the next time around saying, what question would you ask? That to me is an example of a product that is really, I mean, I don't think you were looking for a product, but it was an example of a process that I really value.

[01:01:10] Alex Sarlin: And AI is at the heart of that, right? Because the ability to have that conversation and dig deep and respond in a real way through technology is really new. Rebecca, what do you think? How can we encourage exploration in EdTech?

[01:01:21] Rebecca Winthrop: Yes. First, I wanna underline what you just said, Alex: that the structure that we provide through technology, whether it's ed tech or even just straight-up normal pedagogy, is the thing that we need to innovate on, and not get distracted by the code. And technology can be incredible for this. One of the things I'm really excited by in our research on AI and education, around leading this Brookings task force, is the potential for multimodal ways of learning.

So AR/VR that brings kids to a different place in the world. You've had some of these incredible initiatives on your podcast, where they're learning about different people, and then it generates in-person discussion and inquiry, so you're not just stuck on a path to finish something that has been preprogrammed. Creative: there's a suite of creative AI tech where kids are creating their own stories. Some of the things that are super exciting are the ability to practice. So I'm a proud board member of Junior Achievement, on their international board, and they, as well as plenty of others, are using AI as practice sessions for kids to get feedback.

So Junior Achievement does a lot of entrepreneurship education, and they usually practice pitches in front of a business mentor. Hard to scale that. They do do it, which is amazing. But kids can start practicing in real time with real feedback through AI. It's incredible. So that is all amazing and excellent, and really you have to focus on the design principle first, then the tech later.

And I should say Jenny and I are not at all opposed to direct instruction. There's a time and place for direct instruction. Kids need to learn their phonics and the alphabet and their times tables; you need core content knowledge, that's essential. It's just that that is not the end goal of education, and it is not the end goal of Explorer mode. You need that content knowledge to do something with it. And that is really what we, I'm speaking for Jenny here, but what we would hope EdTech starts focusing on more.

[01:03:42] Alex Sarlin: A hundred percent. I think what you're both bringing up are such brilliant points about what exploration can look like in EdTech: the metacognitive type of work that CourseMojo does, allowing students to think about how they're learning and what they need to do next.

You know, strategic planning and self-regulation in that way. So huge. Virtual reality, or any kind of environment where students are offered open worlds to explore and try different things, that already takes you right off a path. Creativity. These are really exciting visions, and when I hear you both talk about this type of tooling, that feels like exactly what we could be embracing as an EdTech community.

It's really exciting to hear. I wanna ask about identifying disengagement. This is another thing that a number of different ed tech companies try to do, and there are a number of different ways to think about it. But given your framework, how would you encourage ed tech, but also, you know, parents or administrators or teachers or principals, anybody in a school environment, to start to think about looking at your classroom or looking at your student body and saying, here are the Passengers, we need to turn the Passengers into Explorers?

Those are the Resisters; we probably spotted them 'cause they're the ones that are acting out. How do we turn them into Explorers? Like, how can people think about identifying where students are at and how to move them to a better place?

[01:04:52] Jenny Anderson: I'll kick that one off and then pass over to Rebecca. I think the number one thing for parents is to recognize that in the teen years, teens are really looking to be respected.

They're looking to gain status and to earn prestige. And so the number one thing we need to do is to have really good conversations with our kids, but we can't go in there being like, did you cheat on this? Right? And what grade did you get on that? And which, you know, highly rejective college are you hoping to get into?

Like, I think when we really focus on the outcomes, it really distorts the process. And if we can focus the conversation on the process, you know, the reflection around the studying: you come back with a B, what's your take on what you did well and what you didn't do well? Like, what was the process that went into this?

I think we can have way more sophisticated conversations about learning itself, the metacognitive bit that you were talking about and about the content of their learning. I think both of those things can happen and Ed tech can be really powerful for this because kids have natural interests and we can dig into those.

And then as parents, we're coming in and saying, Hey, let's reflect on your learning. Let's talk about what worked and what didn't and what you could do better next time. Let's build that toolkit. Let's also really lean into the things you're learning that you're interested in, and let's build on those.

And maybe, you know, I have a kid who's super interested in art. I know literally nothing about art. I can use ChatGPT to help get me smart on art and ask good questions of her and know what she's doing, and then have better conversations, and then just more comes out of that and it's a really generative experience.

Rebecca, what would you add to that?

[01:06:20] Rebecca Winthrop: I would add two things, Alex. One, the entire second half of the book is an engagement toolkit, right? And we do have tools to know what mode your kid is in, whether you're an educator or a parent; the book is really for parents and educators, but we're finding ed leaders love it and are using it too. Two, kids go between modes, right?

They shift between modes and they can shift really quickly. And that is really important to know: if they go into an incredible virtual reality experience in chemistry and learn about, you know, the inside of a cell, and they can touch it and move it and explore it like they've never done before, they're probably gonna be in Explorer mode, even if in the subject before, boring English class (I'm making this up), they were just coasting along. So kids can move between modes quickly based on what we adults give them.

And then the third thing I would say, to add to what Jenny said, is for EdTech designers. Your question was, how could ed tech designers know what kind of mode kids are in? The number one thing I would urge everyone to do, and this is gonna be a big collective solution that we all need to move towards, is change your metric of engagement.

Meaning the metric where engagement, how much time you spend on a platform, equals success. I actually don't think that is going to incentivize companies well. So maybe VCs need to change, and then ed tech folks need to change, and then we all need to change. That metric is not gonna incentivize the type of ed tech design that's gonna really unleash Explorer mode.

We want kids who are gonna be using ed tech to explore and then generate offline interaction. Mm-hmm. Not stay on the platform forever. Hmm. 
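To make that metric shift concrete, here is a minimal, hypothetical sketch in Python of what it could look like to score a session by learner-chosen goals set and met per minute, with a bonus when the session generates offline interaction, rather than by raw time on platform. All of the field names, weights, and numbers are illustrative assumptions, not anything from the book or from a real product.

```python
# A hypothetical sketch of the metric shift Rebecca describes: score a
# session by learner agency (goals set and met) per minute, with a bonus
# when the session hands off to offline interaction, instead of by raw
# minutes on the platform. Field names and weights are assumptions.

from dataclasses import dataclass

@dataclass
class Session:
    minutes_on_platform: float
    goals_set: int          # learner-chosen goals started this session
    goals_met: int          # how many of those were completed
    offline_followup: bool  # did the session lead to an offline task or discussion?

def time_on_task_score(s: Session) -> float:
    """The incumbent metric: more minutes equals more 'engagement'."""
    return s.minutes_on_platform

def explorer_score(s: Session) -> float:
    """An alternative: reward agency per minute, plus an offline bonus."""
    if s.minutes_on_platform == 0:
        return 0.0
    agency = s.goals_met + 0.5 * s.goals_set
    per_minute = agency / s.minutes_on_platform
    return per_minute * (1.5 if s.offline_followup else 1.0)

if __name__ == "__main__":
    grinder = Session(minutes_on_platform=90, goals_set=0, goals_met=0, offline_followup=False)
    explorer = Session(minutes_on_platform=25, goals_set=2, goals_met=1, offline_followup=True)
    print(time_on_task_score(grinder), explorer_score(grinder))    # 90, 0.0
    print(time_on_task_score(explorer), explorer_score(explorer))  # 25, 0.12
```

Under the time-on-task metric, the long grinding session wins; under the explorer-style metric, the short, goal-driven session that generates offline discussion wins, which is the inversion being argued for here.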

[01:08:11] Alex Sarlin: That is a fascinating suggestion, and one that I'm sure has EdTech people listening, operators or founders, saying, that's right: all our metrics of engagement are time on task, number of questions completed, assignments completed, number of pitches done, you know, whatever the model is; it's about just doing more.

And that feeds right into that Achiever framework, right? You're finding Achievers, but you're not necessarily finding students who are actually making sense of it or thinking about their own learning. That's a really interesting suggestion. Instrumenting that is another piece, right? How do you get that loop?

[01:08:43] Rebecca Winthrop: It's a big, thorny topic, and it gets to financing, it gets to measures of success. But look, we can do it. Like, we've designed this magical technology that you can talk to like a person; we can change the fricking metrics for how to measure EdTech success for kids. Wow. You know, we can get a big working group together and do it.

[01:09:04] Jenny Anderson: We can be very intentional about incentives. I mean, I think anyone who's studied Economics 101 knows that incentives shape absolutely everything, right? And so we just need to recognize that. You know, we missed the boat on recognizing social media incentives and how they shaped behavior, and how those behaviors then shaped, you know, outcomes for kids.

I think we can be better this time around and say incentives matter a lot. So what are we gonna do to sort of protect kids and their time and the things they need to develop well? And how do we use these tools to supercharge Explorer mode?

[01:09:35] Rebecca Winthrop: I have one more little story to share, Alex, on this topic of shifting incentives. We're in the middle of this massive research project on the Brookings Global Task Force on AI and Education, and we have done consultations with parents and teachers and students and ed leaders and ed tech folks across 50 countries.

And in one of the last consultations, the CEO of a really large ed tech firm said, you know, I do have a problem, because my product with AI has gotten so effective that teachers are using it half as long to do the same amount of work. And the teachers love it; it saves so much time. But I got grilled at my board meeting

because my engagement rates and usage rates had fallen, right? And I was like, right there, we have the wrong incentive structure. So it really is something that I think the ed tech community would totally get behind, if we could align, you know, the investment community alongside them.

[01:10:38] Alex Sarlin: It's a really, really interesting topic.

We've gotta have you come back on and talk about this more in depth, 'cause I think there are a lot of very actionable pieces to your work, for EdTech and for teaching in general, about how we can think about engagement and disengagement and agency and autonomy and all these amazing things. Metacognition, mindset. These are the authors of The Disengaged Teen: Helping Kids Learn Better, Feel Better, and Live Better.

Rebecca Winthrop, director of the Center for Universal Education at Brookings. And Jenny Anderson, award-winning journalist, author, and speaker. Thanks so much for being here with us on EdTech Insiders. 

[01:11:10] Rebecca Winthrop: Thank you, Alex. Thank you for having us, Alex. 

[01:11:14] Alex Sarlin: We have a great interview on Week in EdTech this week.

This is a legendary ed tech researcher, journalist, professor, and author who has written all sorts of things. Justin Reich is an MIT associate professor and director of the MIT Teaching Systems Lab. He's the author of Iterate and Failure to Disrupt and hosts the TeachLab podcast. He earned his doctorate at Harvard and was a HarvardX and Berkman Klein fellow.

He's also a former high school teacher, and his work has appeared in Science, The Atlantic, and more. And recently the MIT Teaching Systems Lab put out a report called A Guide to AI in Schools: Perspectives for the Perplexed, which interviewed many teachers about this really weird, transformational, confusing, perplexing, high-perplexity moment in AI.

Justin Reich, welcome to EdTech Insiders.

[01:12:04] Justin Reich: Alex, thanks for having me.

[01:12:05] Alex Sarlin: It is really great to speak to you. We've known each other in passing for a long time. We talked to each other way back in our California days, but you have been doing incredible work, really trying to be a truth teller in EdTech for quite a long time.

Give us a little bit of an overview of your career and your relationship to the education technology field.

[01:12:25] Justin Reich: Well, I was a high school history teacher in the early two thousands, just as the web was becoming more accessible in schools. So when I started teaching, I had a cart of laptops in the back corner of my classroom. We had this cool intranet program called FirstClass, which, on local servers, did just about everything you can do with Google for Education now: you could send messages, you had shared documents, you could collaboratively write.

But that was in 2002, which was a lot earlier than other folks had access to that technology. And I've often had the instinct that new technologies can improve experiences for learning, but I've also been really attentive to the fact that our greatest hopes for them are essentially never realized. And maybe that's okay.

That a great thing to do with technologies is to work really hard to incrementally improve learning environments in specific places, in specific subjects, in specific contexts. My colleague Ken Koedinger at CMU says that step change is what 25 years of incremental change looks like from a distance. And so I'm often advocating for this shoulder-to-the-wheel approach to improving learning with technology.

[01:13:30] Alex Sarlin: Yes, that's very well put. I think your book Failure to Disrupt, which is a fantastic book, is mostly about the sort of MOOC movement, which we were both involved in: you were working with edX, I was working at Coursera at the time. And it had this incredible promise, this belief that this could transform education, it could transform higher ed, it could increase access, it could democratize education for those who wouldn't have had access. And you went very deep into this.

You've done all sorts of large-scale experiments around it. Can you tell us a little bit about some of the theses of that book, and how it's sort of a case study of what you mean about the difference between the promise and the reality of education tech?

[01:14:05] Justin Reich: Yeah, there are a few theses in there. I would say education technology advocates and developers are always hoping that you can, like, download a thing onto people's machines and transform learning, and that, especially in systems, almost never happens.

It basically never happens in education. Maybe it happens in individual learning. Our technologies are only as powerful as the communities that guide their use. If you want a new technology to improve human development, you actually can't just like download it onto people's machines or hand it as a machine to them.

Teachers need to develop new lesson plans. Systems need to develop new curriculum. Principals need to develop new rules. Students need to learn new routines. Families need to learn new ways to support things. That's one core idea. Maybe quickly, three other core ideas. One: computer scientists and learning scientists have partnered to improve learning for 70 years.

Since there were computers the size of your living room, people have worked on this problem, and basically with every new technology generation that comes along, I think one of the first things we should do is be like, wait, what happened with the thing that was like this last time? Because what happened with the thing that was like this last time can really help us make better decisions about the thing that we're doing now.

Two really common patterns: teachers use new technologies to extend existing practices. The first thing that teachers will do with a new technology is whatever it is that they were doing before. And it actually takes quite a bit of time, quite a bit of coaching, quite a bit of learning, quite a bit of collaboration and experimentation to do things that are sort of different.

Yeah, and new technologies disproportionately benefit the affluent. They benefit people with the financial, social, and technical capital to take advantage of new innovations. We don't really understand how to democratize education with technology and we shouldn't bet on that as like a first order thing that will happen with any new technology.

So those are some of the ideas in Failure to Disrupt.

[01:15:51] Alex Sarlin: Yeah, they're very powerful ideas, and I think they're in some ways the ghosts that haunt the ed tech field, right? This feeling that we're repeating mistakes from the past, that we're not learning. That with every new flashy technology, and I am probably the guiltiest of this of anybody, we hope it's gonna be that major step change.

You know, the moment, the inflection point, where education truly becomes democratized, where it can help people pull out of a lower socioeconomic status and actually succeed. And again and again, we sort of hit that systems wall that you're talking about. So we are now in another one of these moments: we have AI coming in.

There are many people, again including myself, who still have a lot of hope that this technology could be a game changer for what education looks like. Yet there are major, major systems that would need to change, and they're all wrestling with it. Your new guidebook is all about this, this Guide to AI in Schools: Perspectives for the Perplexed. You interviewed over 90 teachers. Tell us some of the patterns that emerged as you talked to people on the ground about this interaction between flashy technology that's being downloaded onto the machine and these massive systems in place that don't necessarily accept it. How is it working in practice?

[01:17:03] Justin Reich: Yeah. Unevenly would be the number one thing that we heard from teachers. There are some teachers who said, I'm conducting experiments and I'm really excited about these experiments. I'm really seeing new kinds of interesting things happening with my students. We talked with plenty of teachers who had major cause for concern.

I think the causes for concern are pretty widely circulated now. One of the big ones is kind of cognitive offloading, the idea that the machines do too much thinking for us. There's like a whole network of, I don't know, kind of policy advocates, sort of technology futurists, who are like, we just have to get past the cheating conversation.

Cheating isn't the right way to think about this. And I would just like to say, if you talk to teachers, the thing that they want to talk to you about with AI is cheating. Yeah. That it is a thing which is making what they're doing, making learning environments, not work right. And if you wanna connect with teachers where they're at, I would discourage you from going down the "cheating is not the real thing" route. I would say that in addition to that, a thing that I'm thinking a lot about right now is teachers will find different ways of telling you, even aside from cheating, how the output from large language models damages and violates trust. The most powerful technology that we have for efficiency in complex social systems is trust.

Schools are not places where like individual people shove coal into boilers. Like you don't make the system more efficient by being like shovel faster, or here's a lighter shovel or something like that. What happens in schools is zillions of little interactions between people. What makes those interactions work efficiently is trust.

I have a student this semester in my class at MIT who's from Russia, and she's like, oh yes, we have a lot of research in this in Russia, because trust is extremely low. Markets don't work very well because every time you wanna have some kinda interaction, people have to lawyer up because they don't trust each other.

What LLMs do is they predict sequences of text and then encourage us to represent those texts as our own thoughts. Sometimes that's cheating, and that is a violation of trust, which is a huge problem. But even when it's not, even when it's like explicitly authorized, constantly you as a teacher are sort of asking yourself: okay, a student wrote that, but do they actually understand it? Did they actually think it? I think students and parents are asking themselves, like, all right, I got this newsletter from my school; it has these kind of markers of ChatGPT production. Do they actually want me to read it? Do they actually care?

Like, my teacher made this thing. Do they actually care? Clearly they didn't write it; some machine wrote it. Is this actually a good learning experience for me? All of those moments, I think, are grinding the gears of schools, and I think we're misunderstanding efficiency. Efficiency is not about making people's capacity to generate text faster. Efficiency in complex social systems is having really high-trust interactions, so that we're not slowed down at every interaction by all these questions of, does this person really think this, do they really care, those kinds of things. So those are some of the insights that we had across the 90 interviews.

[01:20:02] Alex Sarlin: Yeah, I think that's incredibly interesting. I mean, the focus on integrity and cheating, the need for trust, the need for understanding, for literacy, and sort of understanding how this stuff even works if you want to use it, when you get down to where the rubber actually meets the road. These things are really important.

I mean, I think part of my read on the ed tech field over the years, and I'm curious about your thoughts on this, is that in moments like this, the system and the technology go directly head to head, like the cheating issue you're mentioning, or when AI throws, you know, sand in the gears of the system.

I think people who come up through a technology angle say: good, the system should have sand thrown into it. It's not working well enough. It doesn't work for the underserved students, and maybe we need to throw the sand in, break the system, and build a new system. And then people who are operating the system say no.

The system has come up for a reason. There's a high level of trust. We've built this really immersive and exciting learning environment, and this thing is coming in and ruining it. And those are such conflicting attitudes. What always confuses me is this feeling of, if you're attacking technology, does it mean you're defending the status quo in schools?

And I think the answer's gotta be no. But I'd love to hear you talk about how that works. 'cause you know, when people say, oh, I hate AI, it's just for kids cheating, and you say, okay, well, what would happen if you just banned this forever? Are you doing your students a service? It's really unclear at this moment what the right answer is to that.

[01:21:25] Justin Reich: I really appreciate bringing uncertainty and humility into the conversation, sort of saying we don't know. I think most people who know schools really well are not blind to all of the ways they fall short. Most people who spend their lives in schools do not have an overly rosy perspective on these things.

To me, the sort of empirical question is: when new technologies come along, do they lead to new systems that have substantially improved outcomes? Mm-hmm. And I mean, historically, the answer to that is no, they do not do that. Right? So then you have to say, like, these are not well-resourced systems. These are systems that operate on the slimmest of margins.

If three teachers call out sick, like, middle school doesn't work. They're not organizations that are staffed and resourced with tons of extra resources. So every dollar that we spend on some kind of initiative is an opportunity cost for something else. So to me, people who have a more conservative point of view would sort of say: it is just not the wisest spending of our funds to say, okay, let's assume this time is the one, which is the inflection point. Let's really pour our teacher dollars, our resource dollars, our teacher time into that moonshot, because I think this is the one that's really gonna work. There's no school in the world that wishes they had invested more in smart boards earlier.

Hmm. There's no school today that's kicking themselves, like, man, if we had just bought some more of those smart boards and spent some more money on that, we'd really be ahead, right? You know, I mean, probably virtually every school that invested in that is like, oh, all of those dollars could have helped kids more in some other direction.

Hmm. So I think for people who take a more incremental tinkering view, it's not opposed to the hope that we could make schools radically better. It's more like, what is the savviest spend of a dollar right now to improve things? For instance, in many schools across the country, one of the most substantial challenges is chronic absenteeism: kids simply not showing up to school.

If your number one problem is kids not showing up to school, and you have people from the technology industry coming and yelling at you, like, you really have to make some progress on AI, it's pretty easy to look at those people and be like, no, we have to make progress on the kids showing up to school.

There's nothing else that works. You know, and there might be a handful of places that sort of see a link between those things, like, oh, maybe there's some cool stuff we could do with AI that would get more kids in the building. But if you are advocating to help those schools, you have to meet them where they're at with the very serious stuff. You know, we have a teacher, and we have this seven-part podcast series called The Homework Machine, which is also a way that we've released a set of stories about this.

It's at teachlabpodcast.com, or The Homework Machine wherever you find your podcasts. And we interview a teacher who says, like, my building doesn't have hot water. I think a thing my kids should have is to be able to wash their hands in hot water. I actually think that I, as a teacher, should be able to wash my hands in hot water. And she's in San Francisco. So it's a little bit weird to have folks being like, this is a major inflection point in schools.

Like, we really need to see dramatic change. And she's being like, well, maybe, but one dramatic change I'd like to see is hot water. And so, you know, I think there are good reasons to think that if those marginal dollars are really scarce, spending them on more tried-and-true, more proven approaches to improving teaching and learning might be a better thing to do than spending a lot of money on speculative approaches.

Particularly speculative approaches where it's like, ah, history hasn't really, you know, it doesn't look kindly on these kinds of bets.

[01:24:57] Alex Sarlin: Your point about speculation is really interesting. I mean, all these points are interesting. You're right that technology coming in and making that kind of transformational change that everybody wants it to make, that the promise says should be there, doesn't have a great track record.

So there's a reason for, you know, some jaded attitudes about technologists trying to come into schools. At the same time, I mean, that hot water example: obviously it's true that if a school is so thinly resourced that it can't heat its water, it doesn't have a working boiler, then that's the sort of first-order need, the bottom of Maslow's hierarchy of needs.

It's like, you need to be clean, you need to be able to flush a toilet. At the same time, I think we in this space, in the ed tech space, get into this funny debate where people at the front lines of teaching say, we just need basic supplies. We just need basic resources. We just need to do what works.

But then when there's more funding to do that, you don't see results, right? We also haven't seen major improvements in the traditional school system in a long time. So, you know, it's easy, I think, for the two sides (and I paint myself a little bit on the technology side of this; it's just how I'm wired) to sort of point to the other side and say, well, you say that what I'm doing doesn't work, but it doesn't look like what you're doing works.

And then they go back and forth and do that. How can we get outta that cycle? 

[01:26:13] Justin Reich: I guess I have two things. One is, I mean, I do think history suggests that when we make concerted efforts to focus on things for a while, schools can get better. Schools have the capacity to improve. I think we spend those dollars most wisely, and people's time most wisely, when we're realistic about the sort of pace and scope of those improvements.

Like, a school system can usually get better at one or two things in a two- or three-year period. If you try to do more than that, I don't think you'll find a lot of places where you're like, man, that was super successful. I think if you went and interviewed all of the schools that are most regarded, that have, for instance, recovered best from learning loss in the pandemic: find all those schools and interview all those principals and say, how many of you would raise your hand and say you made one or two huge transformational changes and things got better, and how many of you would say you did a hundred little things a little bit better every day?

And I think every hand would go up in the second category. So let's help folks do that, if that's what works. I mean, I would like to live in a world in which people are like, no man, our school was broken and we did this amazing thing and now stuff is way, way better. I just don't think there's empirical evidence for that perspective.

I think there's empirical evidence for the shoulder-to-the-wheel perspective. Yeah. A second thing I think we can do is learn from history. So for instance, the thing I've been thinking about a lot is what happened when the web arrived in schools. There was a lot of enthusiasm to rapidly offer people guidance about how to deal with this brand new thing.

Kids were having access to it at home. It was changing lives. And almost all the instruction that we provided to education professionals from maybe 2000 to 2015 on that topic was wrong. We taught millions of teachers demonstrably incorrect approaches to teaching kids to search the web. And they went on to teach tens of millions of children demonstrably incorrect ways of searching the web.

And now when you do system-wide audit studies, people are horrifically bad at things that we need them to be good at for democracy to work. So what lesson do you take away from that experience? A lesson I take away from that experience is: do not rapidly disseminate advice that is either, A, wrong, or, B, you're not sure is right.

You know, in some ways it sounds commonsensical; I would say it's actually a sort of controversial position. So for instance, in this AI guide that we published, it is very clear in the guide that we do not know if any of our guidance is right. There is no good empirical evidence supporting the guidance in the guide.

We, you know, we say, this is like publishing a guide to aviation in 1905. We just don't know what airplanes are and what aviation is and what an air traffic controller is, and all the other parts of the infrastructure. What we can give you is some hypotheses: here are some ideas that teachers have about how we might improve the system.

You know, I'd be willing to make a bet that that's something we could do differently and maybe better. Like, all the AI guidance that is coming out right now: what if we were really explicit with people that these are untested hypotheses, and they're gonna have to make some choices about how they test these hypotheses?

'cause it's actually gonna take a while. It could take, you know, with the web it took about 25 years to get really solid peer-reviewed evidence about effective ways of teaching kids to search the web. Maybe we can make that shorter, but in the meantime, a thing we have to communicate to people is: all the stuff you're learning, you may very well have to unlearn. Like, if you get advice about how to write with AI as a fourth grader, it's entirely possible in your senior year someone's gonna come along and be like, remember all the stuff we told you? We're pretty sure that was wrong; that was a bad idea.

But if we tell fourth graders now, these are hypotheses, we're just exploring, we're not sure, join the uncertainty with us. To me that's an exciting way of framing it, and potentially a way of doing better than we did in any of these sort of previous generations of technology introduction.

[01:30:07] Alex Sarlin: I think it also feeds into your other points about doing a hundred things at the same time, about trying to build a new system up and not sort of come in with one solution, one professional development session, or one tool and assume it's gonna change things.

It's about actually joining hands in the uncertainty of this moment, which I think is very uncertain, and not, you know, being overconfident in what we're providing. It's easier said than done. I know we're almost at time, but I wanna ask you about the practicality of this, because I think it's so important at this moment when people hear, oh, it took 25 years to get it right for the web.

Their eyes get wide and they say, we can't afford to wait 25 years for this technology. It moves so fast. It's been here for three years, and it's being used by, you know, 85% of teachers; 75% of CEOs are saying they're seeing positive results. Like, it's sweeping across society, for better or worse in many ways, and I'm happy to admit that. But it's sweeping across in a world where uncertainty is there, where we are writing that aviation guide in 1905. How can we make those hypotheses you mentioned something that people can actually build on? Because just letting every single school be on their own, saying, here are a few hypotheses, we'll throw them over the transom, but really you have to figure this out on your own.

It feels like it's gonna be very messy. Is there any way to accelerate that process and really start to build that new system, the pieces that come together to build that evidence base, and actually make this work?

[01:31:31] Justin Reich: Yeah, there are long answers to that question, I'm sure. I think, I believe, we can do better.

That's part of the reason for studying history. I would say one thing we should not do is race to be fast. Like, having talked to a hundred-plus teachers and students, it is absolutely true that it is a terrible thing to hear that we do not know what the answers are. That, however, is the truth. And so I'm not a huge fan of organizations saying, we know what the AI literacy standards are, we know what the frameworks are.

We know what the right way of doing this is. 'cause we don't, and I know people wanna hear that. You know, I think about if you could go back to 2002, to all the people who were advocating for the CRAAP test, for all this flawed web literacy teaching. If you could go back to those people, would you say, just keep going, man, people really need something to work with?

Or would you be like, dude, you're wrong. This is bad instruction. You should stop. I mean, if I could go back, I would be like, stop. Because we taught tens of millions of people incorrect ways of using these tools, and we might be doing the same things now.

[01:32:36] Alex Sarlin: But if they stop, then what could they do after stopping?

[01:32:41] Justin Reich: Two things. Conduct little scientific experiments: think of what you're doing not as guidance, but as hypotheses, and you have to test those hypotheses. The number one thing schools can do is, you put out a policy, you tell everyone it's a test, you tell everyone it's conditional, and then you gather evidence about how student thinking is changing.

The best place to find that evidence is in student work. So if you're like, we're gonna let students use AI to write lab reports: you grab a pile of lab reports from 2019, you grab a pile of lab reports from 2025, you shuffle them up, you read them so that you don't know which is which, and you say, are we actually getting better thinking?

Are we actually getting improved learning? That is a simple thing conceptually; it's quite hard for schools to do. I think we need to reorganize science and philanthropy to be faster at answering these questions than they were before. A good thing is, we know the way that we got to better ideas about web literacy.

It's probably a second episode of your show to describe what all of those ways are, but if we want better science, we're gonna have to improve the way we think about funding and conducting research and things like that, to try to make it faster. Again, having talked to a lot of people, I'm deeply sympathetic to the folks who are like, we want an answer. I really don't think widely promulgating wrong answers is a great approach in this moment. I think in this moment we could do better: let's be honest and transparent with people if we do not know what we're doing, and if science can't get us answers faster, then we're gonna have to be little scientists.

Mm. We have to be mini scientists. We have to be scientists in our own context, saying, not, okay, the fastest people handed us the answers and this is what we're gonna do. We have to say, we've got a bunch of hypotheses, and until we get really good big science to test them, we have to do good local science.

And I do think that's possible. Also, this is not the last technology that is gonna show up in this way. Systems are going to have to be better at operating under extended periods of uncertainty with new technology. So let's try to get better at that, rather than chasing false certainty. That's my bet.

And you know, we'll know in 10 or 15 years whether or not it was right or wrong. 
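As a footnote to the lab-report experiment Justin describes, here is a minimal sketch in Python of that blind-review protocol: mix the 2019 and 2025 reports together, hide which cohort each came from while teachers score them, then unblind and compare cohort averages. The function names, the 1-to-5 rating scale, and the toy data are assumptions for illustration; the protocol is his, the code is not.

```python
# A minimal sketch of the blind comparison described above: shuffle the
# 2019 and 2025 lab reports together, let readers score them without
# knowing which cohort each came from, then unblind and compare averages.
# Names, the 1-5 rating scale, and the toy data are illustrative.

import random

def build_blind_packet(reports_2019, reports_2025, seed=0):
    """Shuffle both piles together; return the anonymized packet and a
    sealed key mapping packet position -> cohort label."""
    labeled = [(t, "2019") for t in reports_2019] + [(t, "2025") for t in reports_2025]
    random.Random(seed).shuffle(labeled)
    packet = [text for text, _ in labeled]
    key = {i: cohort for i, (_, cohort) in enumerate(labeled)}
    return packet, key

def unblind(scores, key):
    """After blind scoring, compute the average score per cohort."""
    totals = {"2019": [], "2025": []}
    for idx, score in scores.items():
        totals[key[idx]].append(score)
    return {cohort: sum(vals) / len(vals) for cohort, vals in totals.items() if vals}

if __name__ == "__main__":
    packet, key = build_blind_packet(
        ["report A ...", "report B ..."],   # pre-AI pile
        ["report C ...", "report D ..."],   # post-policy pile
    )
    # Teachers read `packet` in order, blind to cohort, and record scores:
    scores = {0: 3, 1: 4, 2: 2, 3: 4}  # made-up ratings for the demo
    print(unblind(scores, key))  # prints the average score per cohort
```

The sealed `key` is the whole trick: nobody consults it until all the scores are in, which is what keeps the comparison honest.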

[01:35:01] Alex Sarlin: It is amazing how quickly, like 10 different AI literacy frameworks were spun up by 10 different organizations. And you're right, there is definitely not a lot of empirical evidence about what AI literacy looks like and what it means to be AI literate at this moment.

So there's definitely a lot of truth to that. I love that focus on local science. I wish we had more time; this is really interesting, Justin. We have to talk more. I feel like you serve a really vital role as one of the consciences of EdTech. I speak as somebody who is a techno-optimist, who is coming from a very different perspective.

But I really appreciate your perspective and the way you look at this, and I think it needs to be a major part of the conversation for philanthropy, for science, for technologists, for schools. So I really appreciate all the work you do. The Homework Machine you can find as a podcast; A Guide to AI in Schools: Perspectives for the Perplexed you can find with a Google search from the Teaching Systems Lab; and check out Justin's books, Iterate and Failure to Disrupt, his work at the Teaching Systems Lab, and his regular TeachLab podcast.

Wow, you do a lot of things. Thank you so much for being here with us on EdTech Insiders.

[01:36:09] Justin Reich: It's been a great pleasure. 

Take care, Alex.

[01:36:11] Alex Sarlin: Thanks for listening to this episode of EdTech Insiders. If you like the podcast, remember to rate it and share it with others in the EdTech community.

For those who want even more EdTech Insiders, subscribe to the free EdTech Insiders newsletter on Substack.