Edtech Insiders

The Map Makers: Inside the Edtech Insiders' GenAI K-12 Education Market Map with Alex Sarlin, Laurence Holt & Jacob Klein

• Alex Sarlin • Season 10

In this special episode, we go behind the scenes of the Edtech Insiders GenAI K-12 Education Market Map, a first-of-its-kind framework connecting real classroom needs to AI-powered tools. Co-created by Alex Sarlin, Laurence Holt, Jacob Klein, and the Edtech Insiders team, this evolving resource helps educators, entrepreneurs, and researchers make sense of a fast-changing landscape. 

💡 5 Things You’ll Learn in This Episode: 

  1. Why the GenAI Education Map focuses on pedagogy and learning needs over tools
  2. How to identify "white space" opportunities for AI in education
  3. What makes AI feedback and teacher coaching some of the most promising use cases today
  4. How to move beyond the 5% of students typically served by EdTech
  5. Why "team teaching with AI" might be the next big frontier

✨ Episode Highlights:

[00:02:36] Holt & Klein introduce the origin story of the Generative AI Use Case Map
[00:03:29] Starting with teaching and learning needs, not tech capabilities
[00:06:03] Why learning science and research-backed practices still aren't reaching scale
[00:11:11] How and why the GenAI Map was built as a living, evolving framework
[00:15:20] Imagining “team teaching” between human educators and AI assistants
[00:20:16] The 5–10% problem: Why most EdTech tools miss the broader population
[00:23:18] Personalization, motivation, and redefining what success in school can look like
[00:30:05] From tool overload to comprehensive suites: the shifting EdTech landscape
[00:34:12] AI-enabled feedback loops that support both teachers and learners
[00:39:00] What’s ready for scale now—and what still needs real breakthroughs
[00:44:55] Risks and red flags: from dopamine loops to student dependence on AI
[00:48:35] Will GenAI in education be incremental or transformative?

😎 Stay updated with Edtech Insiders! 

🎉 Presenting Sponsor:

This season of Edtech Insiders is once again brought to you by Tuck Advisors, the M&A firm for EdTech companies. Run by serial entrepreneurs with over 25 years of experience founding, investing in, and selling companies, Tuck believes you deserve M&A advisors who work as hard as you do.

[00:00:00] Laurence Holt: Anyone who's been around this sector for a while knows this is a long, difficult road, but we now have these amazing new tools. Let's be grateful for that and acknowledge, and it's not just in our sector, right? The capability of the technology has gotten a long way ahead of how society can make use of it, and we're part of that.

[00:00:25] Jacob Klein: Some people would say AI in education, you know, really kicked off two and a half years ago with ChatGPT. I would call a student watching an AI-powered TikTok playlist when they should be listening to their teacher or doing homework. That is AI in education, and so we have really failed in this last decade to properly regulate this giant dopamine experiment that's happened on students worldwide.

[00:00:58] Alex Sarlin: Welcome to EdTech Insiders, the top podcast covering the education technology industry from funding rounds to impact to AI development across early childhood and

[00:01:11] Ben Kornell: work. You'll find it all here at EdTech Insiders. Remember to subscribe to the pod, check out our newsletter, and also our event calendar. And to go deeper, check out EdTech Insiders Plus where you can get premium content access to our WhatsApp channel, early access to events and back channel insights from Alex and Ben.

Hope you enjoyed today's pod.

[00:01:38] Alex Sarlin: We have amazing guests today on this episode of EdTech Insiders, true legends in the education and technology space. We are talking today to Laurence Holt and Jacob Klein. They are the co-creators of the Generative AI Use Cases in Education Map, but they also have a lot of amazing experience in EdTech. So let me give their intros and then we'll jump right in.

Laurence Holt has spent the last two decades leading innovation teams in for-profit and nonprofit K-12 organizations. He's a senior advisor at the XQ Institute and Teaching Lab Studio and the author of The Science of Tutoring. Jacob Klein is the Chief Product Officer at TeachFX, an AI coach for teachers.

He previously founded and led the learning games studio Motion Math. Jacob also advises early-stage founders and edtech companies on product strategy and on creating delightful learning experiences. Laurence Holt and Jacob Klein, welcome to EdTech Insiders.

[00:02:36] Laurence Holt: Thank you, Alex. Great to be here. Long time listener, first time caller.

Thanks, Alex. Great to be here. 

[00:02:42] Alex Sarlin: I'm really excited to have this conversation. I've spent a lot of time with the two of you over the years talking about EdTech, and we are here in a really special moment. Let's start. So Laurence, when we first connected, the two of you had done this incredible project, it's now, I think, 18 months ago or something, a year plus ago, where you put together

dozens and dozens of use cases in education from the perspective of what is the actual goal, what is actually happening in the classroom, what are the needs for students and teachers, and how can AI address them, rather than starting from what did we make and how could it fit in. It was incredibly clarifying for us at EdTech Insiders, and I think it's been clarifying for the whole space.

So, Laurence, lemme start with you. Can you tell us what inspired you to frame the AI landscape around teaching and learning needs rather than the tools that were being developed?

[00:03:29] Laurence Holt: Thank you for saying that, Alex. Well, Jacob and I, it's almost exactly two years ago, I think, we were walking along the boardwalk in San Diego and thought, how do we make sense of all that is starting to happen in AI?

That was back in the days when you could count the startups on both of our hands, pretty much. And one thing that came immediately was, maybe I'll put it this way: think about the landscape of everything that goes on, that teachers do, that students do and others do for teaching and learning to work, as an actual physical landscape.

So you've got everything from preparing for a class, putting resources together, running a small group, looking at data, assessing kids, you know, all of that. Then what started to happen was AI innovators, could be teachers, could be startups, could be whoever, were just taking pieces of that map and trying to make it better and smarter and quicker and easier

with AI. That was sort of phase one, and to some extent we're still in that phase.

We thought if all we do is sort of write a list of who's doing what, then you miss a lot of the white space in between, which is kind of where the interesting things often lie. So we started, as you say, with a map of everything we could think of, all the use cases. I think we had 40 in the first version.

And we felt that past tech advances in education, like laptops or algorithmic personalization, tended to focus on very low-hanging-fruit use cases. So this was a way for us to sort of see: is that what's going on here, automating things we already know how to do, or is it beyond that, and can it move beyond that? That was the idea.

[00:05:26] Alex Sarlin: Yeah, and it was a really clarifying way to look at it, because as you said, the white space in between, the things that students and teachers and others in the education space are dealing with that don't yet have AI solutions, creates this incredible opportunity space for all of us thinking about how to support them and how to improve education.

Jacob, you jumped in early and you helped shape the second iteration of the map, working really closely with Laurence, sort of putting together all these post-its, putting together all these ideas, thinking about what is needed and who's out there, and how they map together and what's missing. What appealed to you about Laurence's approach and what brought you into this project?

[00:06:03] Jacob Klein: Yeah, I mean, I just love Laurence's approach of starting from what students and teachers need rather than let's look at all the startups that are out there and all the edtech products that are being developed. I think it's the right place to start, and it continues to be a useful lens to look through, rather than starting from what already exists.

There's still a lot of white space on that map even two years later, which is exciting, really. And I also really appreciate Laurence's engagement with the learning science literature. I really wanna plug his book, The Science of Tutoring. It's quite good, among the things he's written. And so, you know, a lot of what we know about how humans learn has not deeply influenced school design, hasn't deeply influenced curriculum design yet, and this new, exciting technology of AI is an opportunity to redesign with stronger pedagogy in mind.

So I like that there are many spaces on the map that suggest possibilities that the learning science literature and experiments in universities have pointed to, but that haven't really been fulfilled yet in a meaningful way at scale.

[00:07:09] Alex Sarlin: Yeah. For those who haven't yet engaged with the map who are listening to this, maybe we could just zoom out and give a couple of examples of the kinds of things you're mentioning, Jacob: ideas that are in the learning science literature, or things we know to be true about learning, that educators want to instill into their practice or want to support their students in doing, but that don't

always make their way into the daily usage of technology or education. What are some examples that jump to mind of things that are sort of in the literature, but have not quite made the full leap into our daily lives?

[00:07:45] Laurence Holt: Yeah. Do you wanna go? Maybe I'll jump in and then Jacob can add.

Some that are probably close to all of our hearts: project-based learning would be an example. Competencies, so kids getting opportunities to improve their collaboration skills or empathy or self-efficacy. All of these things are known from the literature to be powerful, but are not widespread.

Active learning in general, good active learning practice, not widespread. Alex, the question then is, why are those things not widespread? In part it's because they require a lot of effort on the part of the educator and others. And so that's kind of what's exciting here. If it is possible for AI to make that much easier, then maybe we'll see these evidence-based practices

become much more widely used. That again is a reason to paint those parts of the map where it isn't happening yet, right, because in those areas I mentioned we've still not seen much progress, but we're hopeful that that will start to get filled in. I also wanted to just build on Jacob's point about the research.

There's still, I know you and Ben have covered this, Alex, there's still not that much research behind many of the claims made for AI efficacy, and that's in part because it's also new, so that's fine. But there's another way of looking at the map, sort of overlaying what we know to be

effective. Alex, to think about that, that's sorely needed, because it would be pretty sparse right now.

[00:09:23] Alex Sarlin: Yes. So, you know, at EdTech Insiders we've been supportive of trying to take your incredible work of building this generative AI map and giving it sort of a permanent home on the web, where people can visit it, they can dive deep into it, they can explore the use cases, they can read your really comprehensive descriptions of each one, Laurence, and how you think about them.

But they can also see all the tools and how they map. And one of the things, as you mentioned, that we're all facing in the EdTech space is the evidence, because, you know, we have evidence from the last 50 years of research about education practices. We have some evidence, not as much as I think any of us would like, but we have some really good evidence.

But we have almost no evidence about this new field. So, as you say, we've been working with Stanford. The Stanford SCALE Initiative has put together an EdTech research repository; they've basically collected a large set of studies already about how people are making sense of the space. And now, as you start to look at a use case in education, you can find out what is the most up-to-date research on it, which is really exciting.

So one of the things, you know, we kept talking about as we started working together, and I know you care a lot about this, Jacob, as well, is that this is all a very moving target, right? I mean, of course the edtech companies constantly change. There's new ones, or existing ones pivot or add features. You know, there's just lots of change in the edtech space, but there's actually even change in the use cases.

There's change in what we know about education, so there's a lot of moving pieces, and as we set this up, we said, we don't want this to feel crystallized and calcified. We want it to feel like a living framework that breathes, that can incorporate new research, incorporate new educator ideas, and of course incorporate new use cases and tools.

So with that in mind, you know, lemme start with you, Jacob. What do you hope that users take away from the map, given that it's this living document, and how would you like to see it evolve over time?

[00:11:11] Jacob Klein: I hope it inspires people to see and think of so many of the use cases and possibilities for AI. I hope it clarifies the potential value to learners and teachers, to cut through some of the marketing hype around certain tools and the kinda shiny newness of AI that I think can distract from focusing on what exactly is our pedagogical theory here that's gonna help learners. And hopefully it also raises the bar for

new entrepreneurs, to see, wow, there are some really smart people working on tools already in these spaces. How can I differentiate? How can I build on what's already being done? And also a higher bar for the buyers, for teachers, for administrators, for educators, to really look at many different tools and hopefully have a really high bar for what they bring into the classroom.

I'm just excited for the map not just to populate further, but for new use cases to spring up. I really hope that the categories continue to evolve as we think more and more about where this can fit in, and just new models for learning.

[00:12:15] Alex Sarlin: Yeah. And how about you, Laurence? I know you've been through many iterations already.

Where would you like to see it go as it evolves?

[00:12:22] Laurence Holt: I think a big question about future evolution is how we move from hundreds of points of light now on this landscape. And, you know, we should acknowledge it completely outgrew Jacob and me, it was just too much of a nights-and-weekends project, so we're so thankful EdTech Insiders took it on

and professionalized the whole operation. So you know firsthand, Alex, it's a tough job to track all of this stuff, and we should thank our friends at the Overdeck Family Foundation as well for making that happen. But a question's gonna be, teachers don't need another 60 tools, I think we can agree.

What's gonna happen there? Do we start to see some consolidation, or do big publishers start to take some of these innovations and put them into their programs, or do the platform operators, you know, does Google Classroom start to incorporate some of these things? We're beginning to see that happen.

I think that's the question: trying to make something more coherent than we have today.

[00:13:34] Alex Sarlin: I totally agree, and it's been incredibly interesting to see the appetite, I mean, even before we got involved at all, the appetite for this kind of framework, for making sense of this incredibly complex, fast-moving space through a pedagogical lens and a user lens.

Not just who's out there doing things, but what do we need? What's covered, what's not covered, what are use cases that are nascent, that people are only starting to touch, and which ones are already saturated, because everybody had the same idea the first time they tried generative AI: like, it can make multiple-choice questions really quickly.

Yep. And we do have like dozens of companies doing that. 

[00:14:07] Laurence Holt: Yeah. 

[00:14:08] Alex Sarlin: And just to add two more,

[00:14:09] Jacob Klein: 'Cause you asked earlier, you know, what cases haven't we seen? Teachable agents: there's great research from Dan Schwartz at Stanford. It's started in a few places, but it certainly hasn't reached scale, where students are the teacher and they're teaching an AI, which is a great way to consolidate learning.

Another one is spaced repetition. I haven't seen that well incorporated into a core curriculum. That's an opportunity. And another whole category on there that I don't see any company actively pursuing yet is really this idea of team teaching with AI. I don't even necessarily have a clear vision of what that would look like, but a teacher teaching with an AI present in the classroom,

filling in, supporting the teacher, not handing off students to a laptop for an AI to teach, but really trying to bring the best of both live into the classroom. That's an exciting possibility.

[00:15:05] Alex Sarlin: Yeah, that idea, when I saw that, and I think that came from Jean-Claude Brizard, right, Laurence?

[00:15:10] Laurence Holt: Yeah, he said something like team teaching with AI as a member of the team.

That's pretty interesting. Yeah. As Jacob says, we've not seen it happen yet.

[00:15:20] Alex Sarlin: I've seen some people heading in that direction, and I don't think I've seen a tool that I would quite say is fully there, but there's some people sort of trying to bring AI in from the data analysis, grouping, recommendations

side. And then there's some people trying to bring AI in really on almost like the actual classroom management side, like shutting down all the computers when they need attention, sending differentiated supports out to different students as you go, almost like acting like a teaching assistant in real time.

People are sort of biting off different parts of it, and I think it's a really interesting space, but I agree, I don't think any company has framed it that way of, like, you're a teacher and you really should be in a team, and AI can serve some of the areas of the team consultatively and in real time. It's a squishy barrier, but a really interesting one.

Laurence, one really interesting insight from this project: both of you from the beginning were thinking about tools that have students as their core users and tools that have educators as their core users. And that's sort of a big split in the categories here. And one of the things you quickly realized is that, well, learning

is not meant to always be the smoothest, most frictionless, easy process, like something that's just always fun and easy. Productive struggle, grit, actually wrestling with something is an important part of learning. So good AI tools for students actually often introduce some kinds of friction, but for teachers, for ones that are supposed to aid

grading or aid lesson planning or build efficiencies, it's not really about adding friction. Teachers have enough friction in their lives. So that was an interesting split. How do you sort of create friction for learners, but remove it for teachers? How did you come up with that insight, and where do you see that going as you have more and more companies serving both audiences, and even single companies trying to serve both sides of the equation?

[00:17:05] Laurence Holt: Yeah. So one of the things that sort of took all of us a while to figure out was that, out of the box, the frontier models, as amazing as they are, kind of have the wrong stance for teaching, right? They have been trained to be helpful and tell you all the answers. So everyone started writing prompts saying, please don't do that.

And you know, in capitals, right? And we've all tried that and it sort of works, but there is still an open question about how do we teach these models to understand what learning is like and how it can be most effective. Some people think that means

Another is, even if you create something effective, it often turns out that it's effective for a very small percentage of students, because it doesn't really fit in the fabric of a classroom, so it doesn't get used. And so I'm regularly meeting entrepreneurs who have something that is really cool, and I would wanna use it, but when you say, how would you make sure that you get past this kind of limit where you're only serving sort of five to 10% of the kids?

They haven't thought about that. It's almost like we don't need more amazing products that only serve 5% of the kids. We need a way to get beyond that. Hopefully as part of that, and it goes to what you were saying, that's really about helping teachers make it a coherent part of what they believe is helpful,

and we haven't paid enough attention to that. And those, you can tell, are actually not tech problems. Hiring more engineers, as much as I would love doing that, doesn't actually help you solve those problems. So impact is gonna come from that kind of deep thinking about how this technology can actually be used.

And we're at the beginning of that.

To do it well, you need to do what a teacher does, which means you're gonna have to be able to put together the pedagogy, your objective today, the data on this set of kids, whatever the district's telling me to do, whatever the pacing guide is telling me. I've gotta mash all of that together and figure out the way to deliver this maybe differentiated experience.

That remains extremely difficult for humans, and certainly beyond machines right now.

[00:19:38] Alex Sarlin: Yeah. This brings up so many thoughts, hearing your answer on this. You have been a really vocal advocate of this approach of not just limiting edtech to the 5%, you know, the few, but really trying to build education technology that works for the masses, for the vast majority of

children in the K-12 system that need support. Actually, before I even ask any more questions, can I ask you to double-click into that idea? 'Cause this is an idea that you've been championing, and it's been a really important idea in our space, and I'd love to hear you unpack it even further.

I think our audience of listeners, really everybody will have a different reaction. I think they'll find it really valuable. 

[00:20:16] Laurence Holt: Sure. Yeah. It is a very simple finding: many of the tools that you'd be familiar with in a classroom, edtech tools, when you look at them, they have evidence of effectiveness, and they kind of have to have that evidence in order to get

through the purchasing process. And so they all report that. What they also put in their reports is how many of the kids in the population they studied actually got that. And to be fair to them, it's about getting sufficient dosage, you know, as the medical world would put it. Clearly, if you don't use my product, you can't really blame me.

You can't really blame the drug for not getting the impact, but it is very telling that when you look across a whole range of products, the percentage of kids who actually are getting the dosage they need is very, very low, five or 10%. And so we don't really know why that is at this point. We've got some hypotheses.

One is, well, you know, we make these beautiful products, why aren't they used properly? But then I think people get such a low dosage. And so it's a really active area to figure out what's going on there and how we can make products. And again, AI opens up, it doesn't tell us the answer, but it opens up lots of possibilities for how that might work. One of them is around teacher practice, which is Jacob's specialist area, just sort of using AI to try and help teachers change their practice over time.

[00:21:56] Jacob Klein: I think it's right that a lot of these products only work for very self-motivated learners, but that's something to be celebrated. This is gonna be a golden age for that subset of students that want to accelerate really quickly, work with an AI system to quiz them, and even if that doesn't benefit all students, that has value for the student and for society.

I think part of the limitation here is we have really narrow bands for academic achievement. So maybe it's only five to 10% of students that are motivated by our current definitions of academic success. But I think it's the teacher's job, I think it's the family's job, it's everyone's job to find what really motivates that student.

Maybe it's not traditional math and ELA. Maybe it is designing leotards. Maybe it is native plant gardens. Maybe it is a very niche interest. And I do think AI is gonna be helpful in letting that kid pursue their passion in certain ways, not by itself, but once their community has helped them find that passion.

So I'm still optimistic that there is gonna be more fuel for self-motivated learning, and if 90% of kids aren't motivated to learn, let's change the curriculum.

[00:23:18] Alex Sarlin: Yeah, you're both bringing up such incredibly important points that I think cut really to the heart of EdTech, of the challenges we have and the opportunities we have.

Speaking about the 5% piece, just really quickly, you know, my mental model for this is sort of the denominator problem. It's something I encountered a lot in my time working with MOOCs, which are all about, you know, self-motivated learners, and in my time working with bootcamps, where you have this feeling of, okay,

everybody in edtech wants to help the most people. Very few are in it to sort of support only a niche group of learners. But what ends up happening is there are all these hurdles that people have to get over. You have to get the product sold into a classroom, or somebody has to start using it for one reason or another.

They have to keep using it, and then they have to get to, as you say, Laurence, the dosage. They have to use it and implement it in the way that was originally prescribed, whether that's a certain number of minutes a week, or the teacher does a certain amount of preparation and training. There's all these sort of hurdles that you have to jump over, and every time somebody doesn't jump over the hurdle, the denominator of the people who are exposed properly to the technology gets smaller and smaller, and then you end up, as you say, with only evidence from that small group.

It's not that the evidence says that it doesn't work for most people, that's not what it is at all. It's that the only people we even know whether it works for or not are the people who jumped through all these hurdles, the five or 10% who actually get implementation fidelity. Or, you know, in a bootcamp setting, it's the people who went through every part of the bootcamp, said they wanted to get a job out of it, did all the pieces of the job training, and attended every session. And of that group, how many people actually got the job?

It's the people who went through every part of the bootcamp and said they wanna get a job out of it and did all the, the pieces of the job training and attended every session. And of that group, how many people actually got the job. And the numbers get so strange. And then I. Is the core to everything, and I, this is what I'm most excited about, about AI is that there's so much disengagement for school age kids and teachers, frankly, in in our system.

And there's so much opportunity, if we really think about it, to expand the curriculum, to expand motivation, to meet kids where they're at. I think it's just a really amazing moment where some of the things we take for granted, like the implementation fidelity issue or the motivation crisis, the fact that you can just assume that most people aren't gonna do the thing you're asking them to do unless there's some huge penalty against it,

hopefully we're gonna turn these things around. Jacob, let me throw this back to you. I'm sure you both have lots of thoughts about that, but Jacob, lemme turn back to one thing. This EdTech AI map has hundreds of tools that are AI-native in various ways, and we've gotten at least a hundred more from people submitting.

There's so many tools out there, and there's a lot of feeling that it's a little bit like a patchwork quilt. We already know from LearnPlatform that teachers use hundreds of tools, if not thousands, in any given school year, and you're both product people. How do you think the ecosystem can be a little more cohesive and help educators make sense of this crazy number of tools and needs without it just feeling like they're constantly drinking from the fire hose?

[00:26:14] Jacob Klein: Yeah, I mean, I don't expect most teachers to explore even a fraction of those tools. They have enough responsibilities. There's, you know, a bleeding edge of early adopters that are gonna go out and experiment with a lot of tools, which is great. There's also other teachers that are gonna say,

stop, put away the Chromebooks, let's take out paper and pencil to work on these math problems or write essays. I think that's a fine approach, maybe a healthy approach as well. The problem's not gonna get better, I think. I don't think it's gonna get easier, because the investment needed to create software is gonna continue to radically decline. The investment needed to write a comprehensive, fully adaptive curriculum,

it's gonna continue to decline. Reading an enormous state procurement checklist, that's gonna decline. That's exciting, but I think it's gonna mean a continued proliferation of more and more tools. I was really excited about your interview with CC recently. You know, just the ambition to create a

comprehensive school platform wouldn't have existed a few years ago. It would just be too daunting from a technical and investment perspective. So I think there's gonna be fewer moats, more competition, which ultimately is a good thing. But I don't think teachers should feel the burden of what's the latest five tools that came out last week.

I think it's an ecosystem problem, and hopefully the ultimate winners will be ones that really build trust with educators, listen to educators, and are built on sound learning science.

[00:27:40] Laurence Holt: Yeah, and I think that's a hundred percent correct. I would say also, you know, it's a journey. I think what often happens with tech is

people are incentivized to make exaggerated claims so that they can kind of rise above the noise. So somebody can always be relied upon to start making exaggerated claims, and that causes the critics to point out that they're exaggerating, and it makes it look like the whole field working to try and figure out how we can improve

teaching and learning is sort of foaming at the mouth. And the truth is, anyone who's been around this sector for a while knows this is a long, difficult road, but we now have these amazing new tools. Let's be grateful for that and acknowledge, and it's not just in our sector, right, the capability of the technology has got a long way ahead of how society can make use of it.

Something worth, uh, jumping outta bed in the morning to do. 

[00:28:45] Alex Sarlin: Yes. And Laurence, I think something that you realized very quickly while making these use cases and the map was that there are a handful of companies that have been going out of their way to try to become sort of as close as we have to, you know, comprehensive solutions, having many tools, like suites.

We call them tool suites, right? Dozens of different AI-specific tools for educators that are all under one roof. Just this week we saw two of those, SchoolAI and Brisk Teaching, both get, you know, 10-plus million dollars, I think $25 million for SchoolAI and 15 for Brisk, in their recent rounds.

So there's a feeling of, hey, some of the companies that are trying to break through the noise of all these point solutions and become a cohesive suite of solutions, like CC, like you mentioned, Jacob, is trying to do for sort of the school management systems, are getting a lot of traction, and they're getting lots of teacher users and starting to really have a very clear value prop.

How do you see that evolving? Do you think that's gonna continue to happen, where this field will start to merge and all of these interesting use cases, like the teachable agents or being able to auto-grade or all these different things, are gonna, you know, basically get munched up by this handful of companies taking on more and more use cases? Or do you think there'll be a limit to what they can do and there'll be all sorts of different players?

I'm curious how you see that evolving. 

[00:30:05] Laurence Holt: As you've described it, that's exactly right, and I think it's just part of the journey, right? Early on, there were lots of things that were quite easy to get a model to do that were kind of fun and amazing. So like, take my unit rate lesson and put more Halloween in it, because it's Halloween right now.

It's like, oh, okay.

Someone should study that, maybe the Halloween version pops, but everyone maybe is realizing that that was the first wave. And to go beyond that, what makes it difficult is you have to put together these different dimensions: the learning science, right, as Jacob mentioned, and the data on this set of kids, and the lesson objectives, and how to motivate them.

And some of that is just very difficult to get into a machine. So I think, just as we know, even just getting the data is hard.

[00:31:12] Jacob Klein: It's difficult to get to a human. I mean, I think that's the point. You know, as you're talking about all the skillful moves that a tutor makes, AI is very good at taking something that a human can do and then scaling it across lots of different use cases and doing it much more efficiently.

It's not great at figuring out these subtle human moves, the social interactions that are at the heart of teaching and learning, because it's really hard for humans to do it. So I think it's asking too much, in a way, of AI. I'm not saying we won't get there, but it's much harder than a lot of the GenAI tasks.

[00:31:48] Laurence Holt: Right. I'm an optimist. I think we will get there, but we won't get there by claiming we've already got there.

[00:31:55] Alex Sarlin: Right. Well, I think it speaks to the sort of sycophancy piece that you were naming before, Laurence, that, you know, the sort of default AI personality. I always say I think of it as like C-3PO.

It's like a little bit of a subservient, butlery, oh, what a great idea you just had, and let me help you and do it for you. And we know that that is not what great tutors do. It's not what great teachers do. It's not what great therapists do. It's not what great guidance counselors do.

You know, there's so many different types of human interactions that do not fit that pattern at all. Where challenge and, and showing your own passion and, you know, there's so many different. Pieces of the skillful moves, as you call it, Jacob, that we just don't know much about or, you know, that, that we as humans do, but that it's hard to program an AI to do.

It's hard to sort of teach it so far how to act like that. I think, you know, we're in a really interesting moment trying to figure out how to take this default personality and put it in a whole different system. And you know, Laurence, you mentioned, is it fine-tuning? Is it prompting? Is it retrieval-augmented generation? What are the techniques we can use to actually get AI to support the learning use case in a much more interesting way?

[00:33:03] Laurence Holt: Right. I suspect that this is real basic research we're likely gonna need. If the big AI labs are listening to this, hello big AI labs, OpenAI, Anthropic, Google, et cetera, they would be saying, well, you guys always say that,

and what we don't have is the data set that allows us to train. And they're kind of right. We don't have data on all of the things we do. You and Jacob both said it: we don't have the data. What might it be? Audio, video? We're starting to think harder about that, but that's a big gating factor that's gonna slow this down.

[00:33:40] Alex Sarlin: Yeah.

And my best guess is it's gonna come from the tutoring world. It's gonna come from some kind of provider who has 10 million tutoring sessions that are ranked by effectiveness, where we see the outcomes of them. And then you can go back and say, what did the tutors do in the times that it really worked, where students got over the hump, they felt motivated, they felt excited, they actually saw outcomes?

I think, again, if anybody's listening to this from the data side, if you have that kind of data set, you might wanna think about how, in a safe, privacy-respecting way of course, we might use that.

[00:34:12] Laurence Holt: Fun story. We were working with a research team that was trying to do what you described: reinforcement learning using tutoring transcripts.

So what they had to do was go find, where's the moment in the transcript where clearly the student got something? It'll be when the tutor says, you're right, right? But these were really great tutors, and they almost never said, you're right.

[00:34:47] Alex Sarlin: I remember when I was a tutor, I was a tutor for years, and the number one thing you'd have to do is just figure out, when a student tried to, I'm exaggerating a little bit here, but when they tried to sort of take you off track, they'd say, oh, I know we're supposed to be learning this, but lemme tell you about this TV show I just watched.

Or, hey, let me point over here so we don't have to do this work. And you have to figure out how to sort of gently get them back on track, concentrating on the things you all should be concentrating on, and you have to use all your, you know, social skills to do that. It's not like that's some kind of obvious thing, but you'd wanna do it without making people defensive, without making tutees feel like they're stressed or attacked.

It's psychology. And look, we've just seen studies that say that AI therapy is working, it's working for depression. They've literally just put that out, which means if it can work for therapy, I think it can work for teaching, which is a whole lot like therapy. Jacob, lemme pass this to you.

What does this all make you think of? You know, you're at TeachFX. It's a fascinating place, right? We've interviewed Jamie on the podcast before, and TeachFX does incredibly interesting work. Talking about data, it's basically, you know, helping teachers make sense of their own data in classrooms, what's happening in their conversations.

What gets you excited about AI? What are the use cases or trends, either through your work with TeachFX or just your oversight of the space over many years, that you think are most exciting about what's happening right now, and what deserves more attention over the next year?

[00:36:11] Jacob Klein: I joined TeachFX because it is a very different use of AI.

We mostly think of AI as a human or a teacher interacting with a chatbot or some kind of digital interface. TeachFX is about improving what happens between teachers and students. So it's really using AI to improve human capital, not to replace teachers, but to improve teachers. And that's a really humanizing, exciting vision.

That's why I joined the company, and I think there's a lot of other use cases where, if you shift the lens and think about not how we're gonna replace a teacher, but how we improve their capacity to be great at their job, motivate them to be great at their job, assist them, I think it's a different lens, and it can be used for tutors, it can be used in lots of different jobs. And with a lot of AI in EdTech, the promise is often the same:

we're gonna save teacher time so they can focus on what really matters: working with students, building relationships. I think TeachFX is actually focused on what really matters. So I'd like to see more AI that doesn't just emphasize the kind of administrative work. Let's try to minimize that, and let's get really good and really excited about the core of teaching work: inspiring students, listening to students, challenging students, drawing students out.

[00:37:36] Alex Sarlin: Allowing them to express themselves or to discuss with each other. The data that TeachFX surfaces is so fascinating: how many open questions were asked, how much time was taken by the teacher speaking versus the students, and which students and how many students were participating.

These are the things that are just pretty much impossible to figure out on one's own as a teacher, but they're such powerful insights, and then they're combined with the learning science research.

[00:38:04] Jacob Klein: One that we added this year that I love is how much specific praise did you give in this lesson?

Not just saying good job, but, I really liked how Terrance cited the text in that answer, or, I really like how the class is listening now. Those kinds of specific praises model for students, let them know what behavior is expected, and also help build student confidence and build relationships in the classroom.

So yeah, impossible to keep a real track of that as a teacher. That's something that our AI can, can pick up. 

[00:38:36] Alex Sarlin: Yeah. Measure what matters, right? It's really powerful. Laurence, how about you? Let me throw the same question to you. What is a use case or trend that gets you really excited in your work

with Teaching Lab Studio? You work with a whole bunch of incredibly farsighted entrepreneurs trying to figure out, you know, what's around the next corner, how do we move things forward. What excites you most when you look at, you know, the new ones that are starting?

[00:39:00] Laurence Holt: Oh, there's a proven one and an unproven one. Sometimes people sort of look at the whole landscape or the map, and they say, this isn't really ready yet, or it isn't quite working.

But the truth is there's a lot of variation in the map. There are some things that, in my view, are totally ready for prime time. What Jacob's doing is one of them. Feedback is another, and really what Jacob's doing is a version of that, right? Just getting more and more detailed, timely feedback. In almost any walk of life, that would help.

And in school it's relatively rare for you to get detailed feedback on your work, just because it takes so long. And people were firstly attracted to doing that in writing, because language models know language, and they were worried that the math feedback would just be plain wrong.

I've seen for myself that if kids are getting detailed feedback on their thinking, not just what they wrote, but their thinking, almost every lesson, that itself is transformative, and we can do that right now. It's almost like that's the chlorine we should put in the water.

Everyone should have that one. Beyond that, though, I think, and TeachFX and Jacob's work is an example of this, change is hard. Early on, I installed this little app on my Zoom that would tell me when I was talking too fast. By the way, I'm always talking too fast, and so it was basically always beeping: slow down, beep, beep, beep. And that's something that's a whole area in itself, change management, behavior change. There's lots still that can be learned in that area.

[00:40:53] Alex Sarlin: Yeah, BJ Fogg has a really interesting taxonomy of different kinds of changes, a behavioral change framework, and, you know, I've never thought of that actually in the AI era.

The idea of how do you actually change behavior with that kind of feedback. But I think what both of you are saying in common is that there are these limiting factors in education that often have to do with how many informed experts there are in the loop. For teachers, they don't have that many chances to get feedback from professional development

coaches, certainly ones that are observing their classrooms. That just happens so rarely in teaching, and when it does, sometimes it's almost like a high-stakes environment, not a friendly one. So the idea of being able to get, after every class, a detailed report about all the things we were just talking about, open questions, praise, how much you were speaking, that's something we've just never been able to offer before.

And then to your point, Laurence, the same thing is true for students. Feedback is really rare; detailed, specific feedback is really rare. And I think, you know, these AI tools are creating this whole new space for students to try things out in any way, right? They can say, I have an idea for an essay,

does this make any sense? And they can ask about 50 different ideas before choosing one. That just was not true before. If you were lucky, you had a teacher who would sort of give you the opportunity to propose possible ideas and get feedback, or discuss in a group, or submit drafts, or that kind of thing.

But even that would be a long turnaround. You'd probably get one or two rounds at most. A student now could sit and talk to AI for three hours and come up with a hundred ideas before deciding even what to think about, what to focus on for a research paper or an English thesis. That's just never been true before.

And I think we're all adjusting to that world of on demand feedback that's targeted. It's really exciting. 

[00:42:34] Jacob Klein: I like your optimism, Alex, but, so, yes, idea generation is fantastic. I use it as a writing companion all the time: what are 10 different ways to phrase this? But I have many decades of writing and reading experience, so how is that 10-year-old gonna pick the right, appropriate

one, the one suggestion out of 10 or 50 or a thousand that the AI gives back to 'em? That's something I'm curious to see how it develops. I'm not sure yet we know what kind of baseline knowledge a student needs before they can make an informed decision in that interplay with AI.

[00:43:16] Alex Sarlin: It's a great point, Jacob, and I was thinking recently about how they used to always talk about how the questions were really what mattered, not the answers.

That was like this trope in education, you know: asking the right question is the key. And I feel like that's actually truer than ever in this AI era, because you have somebody ready to answer any question you might have, but only if you know what question you wanna ask and how to evaluate the answer.

Yeah, no, it's a great point, and I don't wanna take that for granted. Just 'cause it's possible doesn't mean that students will want to do it or be able to do it. But yeah, that kind of skill you just named is exactly what we need to be teaching now, the same way as when the internet was born, we needed to start teaching a different type of research.

Now you have these incredible conversational bots. You need to learn how to converse with them in a way that's gonna actually get you to your goals. But we're in the very first inning of that. We have one question left, and I wanna send it to both of you, because I think you both think so deeply about this.

I'm just gonna lay it up and step back. We've surfaced a lot of different questions here, and there's incredible excitement. I am very optimistic, and I think there's a lot of excitement in this field, but there's also plenty of people saying, hold on, this is moving very quickly, I'm not sure where it's gonna really go.

There's a lot of open questions. People talk about integrity, or the idea of outsourcing thinking, that's their word, that students are gonna outsource their thinking and their work to an AI and not do it themselves. Or falling in love with AIs, or becoming dependent on them, or student privacy, or bias, or hallucination, right?

The list goes on and on, or artificial general intelligence, if we wanna go down that route. For both of you, I know you're both optimistic, but what keeps you up at night? What is the nightmare scenario of where we go if we get this wrong? Jacob, lemme start with you, and then I'd love to pass it back to you, Laurence.

[00:44:55] Jacob Klein: Sure. Well, some people would say AI in education, you know, really kicked off two and a half years ago with ChatGPT. I would call a student watching an AI-powered TikTok playlist when they should be listening to their teacher or doing homework. That is AI in education. And so we have really failed in this last decade to properly regulate this giant dopamine experiment that's happened on students worldwide with social media and mobile phones.

And I give a lot of credit to Jonathan Haidt and others that are starting to push back. So I understand the skepticism that says AI's just gonna make this problem even harder. It's going to hurt students' ability to focus, to read deeply, to think deeply. And I think we need to marshal the collective will to be able to say, sometimes, stop.

There are spaces where we put away all technology to really slow down and think, and to be able to not shame that impulse, and to understand that it's gonna be an alternation, I think, between technological exploration and also creating spaces away from technology that will help students.

[00:46:11] Alex Sarlin: Really interesting. How about you, Laurence? What keeps you up at night when you think of how this whole AI movement might go off of various cliffs?

[00:46:20] Laurence Holt: Well, Jacob wrote what I think of as the definitive list of things to worry about in the very first version of the map we did.

If I'm feeling too optimistic, I go back and read this beautiful summary he did, and it really is every worry you should have, and it's kind of cathartic to know that they're there and they're right. But I wanna put you both on the spot.

Will we look back 10 years from now and say, well, we got better textbooks, right, with completely convincing video of Einstein teaching anything I like, but it was incremental, essentially? A reason to think that, of course, is that's what happened every single time so far, you know, laptops and so on. Or it

begins to add up to something transformative, or there is a single move that we just haven't discovered yet that is transformative. A fun dinner party game is to ask, how do you handicap those two? What percentage would you allocate to those two scenarios in the next five to 10 years? I'll tell you my number, but would either of you?

[00:47:39] Jacob Klein: It's already been transformative for certain students, right? Being able to whip up prototypes a hundred times faster than you otherwise would, to iterate on visual ideas with diffusion models, to be able to refine writing, it's exhilarating. And already, for a lot of adults, myself included, and a lot of students, it's a

completely exciting, exhilarating, transformative experience. On the scale of our entire education system, I think it's much more incremental, particularly with the headwinds, again, of student attention being decimated through social media and mobile phones, and then also a political environment that doesn't seem to be investing in, you know, national education capacity.

So I think it'll be transformative on the individual level and incremental globally.

[00:48:35] Laurence Holt: He's so brilliant at sidestepping these questions. I could not have come up with that. Well, Alex, what about you? Can we get a number outta you?

[00:48:44] Alex Sarlin: I think that was a good answer. I think it's usually not a great idea to bet against incrementalism, because change is hard, right?

As we said earlier in this call. But the thing that gets me most excited, and this may sound counterintuitive, the thing that gets me most excited about AI is that our first paradigm of AI is text-based. It's pull-based, right? You have to go actively ask an AI to do something for you.

You have to formulate a question, or ask it to do something in a certain style, or for, you know, an image in a certain style, and you have to know what that style is. And that really does benefit people with more life experience or with more cultural literacy or all sorts of things. And I think those are the sort of individuals you're naming, Jacob. There are incredible people of all ages who are doing

unbelievable things they could never do before with AI. The thing I'm actually most excited about, and this also could sound dystopian, is the shift where, first, we get away from text and it starts to be that AI is voice and podcast, it's video-based, which is where kids are, it's game-based.

So it speaks in the language that people wanna speak. But also where it becomes proactive, where the AI, just as in your example, again, a dystopian example, about TikTok having this behind-the-scenes algorithm that says, I can figure out what you love and I can keep you on this for seven hours a day.

By doing that, like that's what TikTok did. If we could get that in education, if we could start to say, Hey, we understand what you care about, we know what motivates you, and we're not gonna wait and have you have to type something into a search bar to get your answer, we're gonna come to you and say, your homework today is about this.

We know you probably don't care a whole lot about this, so we're gonna change it. We're gonna make it about leotards. We're gonna make it about all these things that you care about. We're gonna start giving you more motivation. We're gonna tie it into other goals that you have.

We're gonna connect you to other students who may have the same opinions as you. We'll connect you to a tutor. The proactive nature of AI, if it starts being part of the flow, I think that's when it becomes transformational in a way that could be beyond incremental, because then you don't need students to buy in.

You don't need implementation fidelity. You don't even need teachers, in some cases, to buy in. There could be systems that literally are just there with no other goal than to help students succeed in their learning outcomes, and they can pull in all kinds of interesting ideas to do it. And if we get there, then I'm really bullish.

[00:51:07] Laurence Holt: We may need some kind of breakthrough to make that happen, I think, since agents are still not very reliable in the field as a whole. But I agree with you, that would be interesting. Neither of you managed to come up with a number. I'm gonna give a number: I was 80-20 in favor of incrementalism, just because that was the right answer every other time, change is hard.

Although a wiser person than me once said, change actually isn't hard, being changed is hard. So change itself should come; we just have to think about it differently. I think I'm now maybe 70-30, because, and I think it's sort of what you're both saying, it is possible that there are some use cases that just become so compelling and cheap.

The pandemic at least did that for us, right? We have wifi, we have machines, so maybe something that is very, very compelling could catch on very widely. And so it's an exciting time.

[00:52:07] Alex Sarlin: It is, it is. It's a great question to end on, so thank you both so much. This has been Jacob Klein and Laurence Holt, the co-authors of the Generative AI Use Cases in Education Map, which is now hosted at EdTech Insiders, and we'll continue

expanding it with more use cases, with more tools, with more nuance, with more research. You two have done a great service to the industry, and you continue to add value in so many different ways in all the things you do. We appreciate you being here with us on EdTech Insiders. Likewise. Thanks so much, Alex.

This was fun. Thanks so much, Alex. Thanks for listening to this episode of EdTech Insiders. If you like the podcast, remember to rate it and share it with others in the EdTech community. For those who want even more EdTech Insiders, subscribe to the free EdTech Insiders Newsletter on Substack.

