Edtech Insiders

Bridging AI and Human Connection in Education with Libby Hills and Owen Henkel of Ed-Technical Podcast

Alex Sarlin and Ben Kornell

Libby Hills co-leads the Jacobs Foundation’s global EdTech investment portfolio, focusing on the intersection of technology, research, and education. With over 15 years in education, including roles in teaching, tech leadership, and charity boards, she also co-hosts the Ed-Technical podcast. Libby holds degrees in Education and Social Business from Oxford and LSE.

Owen Henkel is an AI in Education researcher at the University of Oxford, specializing in natural language processing for foundational literacy and numeracy. He leads the R&D team for Rori, a chatbot for personalized math tutoring on low-cost mobile phones, and co-hosts Ed-Technical with Libby. Owen has a dual MBA/MA from the University of Michigan and is completing his Ph.D. at Oxford.

💡 5 Things You’ll Learn in This Episode:

  • The future of AI-powered tutoring and its potential impact.
  • Key components that make tutoring effective.
  • Jacobs Foundation’s approach to cost-effective hybrid AI-human models.
  • Current limitations of AI in understanding student learning.
  • New UK government initiatives supporting AI in education.

✨ Episode Highlights:

[00:00:22] Generative AI’s limitations in understanding student learning.
[00:00:57] Carnegie Mellon’s AI-human hybrid tutoring model.
[00:04:37] Four pillars of effective high-dosage tutoring.
[00:08:17] History and evolution of intelligent tutoring systems (ITS).
[00:11:26] Rori, a math tutoring chatbot for underserved regions.
[00:17:18] Hybrid models reducing tutoring costs and training needs.
[00:24:42] AI modeling new thinking processes for students.
[00:26:35] UK government-backed AI content hubs and resources.
[00:31:32] Raising minimum quality standards in education with AI.

😎 Stay updated with Edtech Insiders! 

🎉 Presenting Sponsor:

This season of Edtech Insiders is once again brought to you by Tuck Advisors, the M&A firm for Education Entrepreneurs.  Founded by serial entrepreneurs with over 25 years of experience founding, investing in, and selling companies, Tuck believes you deserve M&A advisors who work just as hard as you do.

[00:00:22] Owen Henkel: Where I think large language models actually don't help much is modeling student understanding. What they're great with is language, so they could ask the question, but they don't have some theory of mind of like, oh, because the student said this, they probably don't understand this. That would be a totally different, very complex student knowledge model.

And so that question of how can you do checks for understanding, how can you ask good questions? I don't think generative AI actually helps with that. I think you kind of need a different model. Obviously the LLM would output whatever that model told you in a conversational way, but it's not any good at that.

It's not built to do that.

[00:00:57] Libby Hills: Addressing this cost issue we were talking about at the start of the session, the second example that I think's, you know, doing some really fascinating work to really think about and fine-tune what this sort of hybrid setup could look like is Carnegie Mellon.

So they're thinking about, you know, how to design these models where you have students working with an AI tutor, working independently their way through AI-powered software, but then you have tutors able to drop in and provide some support to the students that might benefit the most from that support.

[00:01:33] Alex Sarlin: Welcome to Edtech Insiders, the top podcast covering the education technology industry, from funding rounds to impact to AI developments across early childhood, K-12, higher ed, and work. You'll find it all here at Edtech Insiders.

[00:01:46] Ben Kornell: Remember to subscribe to the pod, check out our newsletter and also our event calendar.

And to go deeper, check out Edtech Insiders Plus, where you can get premium content, access to our WhatsApp channel, early access to events, and back-channel insights from Alex and Ben. Hope you enjoy today's pod.

[00:02:13] Alex Sarlin: Today on Edtech Insiders, we're talking to Libby Hills and Owen Henkel, who work with the Jacobs Foundation in Switzerland. Sometimes we Americanize that as the Jacobs Foundation, but the Jacobs Foundation in Switzerland is a leader for technology, research, and practice in education. Libby Hills co-leads a global ed tech investment portfolio at the Jacobs Foundation.

She is involved in research projects on AI in education and co-hosts a sector-leading podcast about AI called Ed-Technical. Libby has been working in education for over 15 years, which includes experience as a teacher on the Teach First program in the UK, chief technical officer for an award-winning network of low-cost private schools, and experience on the board of Lively Minds, a fast-growing education charity. Libby holds a master's in education focused on emerging technologies and AI from the University of Oxford, a master's in social business from the London School of Economics, and a BA from King's College London, and she also has two professional teaching qualifications, the QTS and the AST.

Owen Henkel is a researcher at the University of Oxford where he focuses on applications of artificial intelligence in education with a particular focus on using advanced natural language processing to support the teaching of foundational literacy and numeracy skills. He also leads the R and D team for Rori, an AI powered chat bot that provides personalized math tutoring on low cost mobile phones.

He is a former high school teacher and an established podcaster; he co-hosts the Ed-Technical podcast with Libby. Owen holds a dual MBA and MA from the University of Michigan, where he studied statistics, ed tech, and impact investing, and he's in his final year of a PhD program at the University of Oxford.

[00:04:10] Ben Kornell: Well, I am so thrilled to have Libby Hills here with us and also Owen Henkel of the Jacobs Foundation, or for those of you who like to Americanize everything, the Jacobs Foundation. You say tomato, we say tomato. But, you know, there are no more passionate ed tech advocates than the team over at Jacobs. We're so excited to have you on the podcast today, excited to join and merge our pods.

Welcome to the show. 

[00:04:37] Libby Hills: It's amazing to be here. Thanks for having us. 

[00:04:39] Ben Kornell: So Libby, let's start with you and just talk a little bit about the work you do on efficacy and effectiveness. Specifically, let's talk about high dosage tutoring. How has this method evolved over time and what does the evidence tell us about its impact on student outcomes?

[00:04:56] Libby Hills: Yeah, this is such a great topic to talk about when it comes to efficacy, because it's a space where we actually know a lot, which is really exciting. So I have no doubt that all of the Edtech Insiders listeners know what tutoring is, but just in case some non-education folks have wandered on: when we talk about tutoring, we mean tutoring individual students or small groups of students towards a learning goal. And tutoring has been around for a really long time.

It's probably our oldest form of instruction. But these days, unless you're British royalty, it tends to be a supplement to normal classroom instruction, not a complete replacement as it is for some of the royal family. And it's gotten loads of attention over the last 40 years or so, and increasingly more following COVID.

There's been lots of attention on learning loss recovery, and tutoring's been seen as a really promising way to help address that. And what's been really cool for those of us who get really excited about, you know, evidence-based interventions is that's been coupled with a real investment on the research side.

So we're now at a point where we know quite a bit about the efficacy of tutoring, and not just that it's actually really effective when it comes to supporting students' learning gains, but also specifically what the secret sauce about tutoring is that actually helps deliver some of those learning benefits.

And I'd probably summarize those aspects of the secret sauce as four things: three more pedagogical, and then one more interpersonal. And these things aren't really surprising, because they're pretty similar to what works for normal classroom pedagogy. So the first thing I'd call out in terms of what we know makes it effective is students practicing at the right level of difficulty.

So the content's aligned with where they're at. Secondly, assessment: being able to check for understanding and ask probing questions. That's a second really important aspect of what we know makes tutoring work. And then thirdly, immediate feedback, rapid feedback, being able to give students feedback on, you know, how they're doing and how they can improve.

And the last one here, which I think is going to be really interesting to talk about today and unpack a little bit in relation to AI, is interpersonal relationships, and that being valuable for students in terms of motivation and being able to have someone to grapple with challenging problems with as well.

And so with high-dosage tutoring, so tutoring that's happening more than a few times a week, if all of this stuff is in place, tutoring can be really effective. So we know that now, but I think really the question that we're trying to grapple with now is how to do it cost-effectively. So how to do it in a way that's scalable and realistic for districts to be able to cover the cost of, and one way to help with that is technology.

So hopefully we'll be able to talk a little bit more about that today. 

[00:07:40] Alex Sarlin: That's really interesting. And, you know, I think we're all hearing that of those four aspects, the one that definitely can't, at least right now, be replicated with AI is those interpersonal relationships, but the others possibly could. So, you know, it leads to some really interesting questions about what the future of tutoring might look like.

And Owen, I want to pass that to you. You've been studying this in great depth. What have we learned about intelligent tutoring systems and computer-based tutoring over the last few decades, before the rise of generative AI? What do you think we should take from those learnings? And what do you think may be changing based on what's happening now in technology?

[00:08:17] Owen Henkel: Yeah, absolutely. I mean, I should say I probably only know enough to be dangerous on the ITS history, but I do know a bit. So I think that, you know, this was something that really came up in the nineties or early two thousands with the rise of the internet and kind of, you know, normal software computing.

And the basic idea is: the earliest ITSs were probably something analogous to a really good interactive textbook. So you'd have a curriculum, you'd have your practice questions, you'd maybe be shown a chunk of text. You might be able to ask a few questions, but it would be very rule-based dialogue, and then you'd get practice questions.

And then it would, in the background, you know, automatically grade. You'd find out if you got it wrong or right, and you'd progress through. You know, some famous ones: ALEKS, that's A-L-E-K-S, was one of the early ones. These were usually mainly in mathematics and science, because the domains are a bit more hierarchical, a bit more well-defined.

You know, there's some other examples, like Mindspark, which is a really interesting example of this, in India, who does great work. I'm really impressed by that team. But I think that the key idea there, going back to these four ideas, was it allowed you to practice at the correct level of difficulty, and it gave you rapid feedback.

You kind of immediately knew how you were doing. What I think those things probably didn't do a great job of was, obviously, this interpersonal relationship angle, but also this idea of checks for understanding, clarifying, you know, kind of understanding when the kid gets it right, but doesn't fully get it right.

That level of detail was missing. And then obviously the conversational interface was extremely limited. Right. So, you know, there's some studies, and people have some pretty spicy takes about this. I think we could get into a really, really deep-dive debate, probably boring for everyone but me, but I think the evidence is, like, probably when they're narrowly defined and used in a smart way, they work pretty well.

But if you think that, like, oh, the ITS is going to, like, replace teachers, you're just in la-la land. So that's kind of the old state of the art. And maybe quickly: what's interesting about the explosion of large language models is that it makes the conversational interface, or at least has the potential to make it, way more organic, way more engaging, way more natural. And it at least potentially opens up the idea that it might be able to tackle these other two areas that the intelligent tutoring systems couldn't do well.

That would be potentially having some type of interpersonal interaction with the students in a meaningful way, and also potentially opening up a much richer way of checking for understanding, addressing student misconceptions, circling back, and things like that. So at least that's the potential that these new LLMs, if you kind of combine them with the architecture of an ITS, could tackle.
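That classic ITS loop Owen describes, practice at the right difficulty plus immediate feedback, can be sketched in a few lines. This is a toy illustration only: the item bank, difficulty levels, and promotion rules below are invented for the example, and don't reflect any specific product.

```python
# A toy version of the classic ITS loop: pick a practice item at the current
# difficulty, auto-grade the student's reply, and adjust the level.
# All items and thresholds here are made up for illustration.

ITEM_BANK = {
    1: ("2 + 3", "5"),
    2: ("12 + 9", "21"),
    3: ("3 * 14", "42"),
}

def next_level(level, streak):
    """Rule-based adaptivity: up after two correct in a row, down after a miss."""
    if streak >= 2:
        return min(level + 1, 3)
    if streak == 0:
        return max(level - 1, 1)
    return level

def run_session(replies):
    """Simulate a short session; `replies` is what the student types each turn."""
    level, streak, log = 1, 0, []
    for reply in replies:
        question, answer = ITEM_BANK[level]
        correct = reply.strip() == answer
        streak = streak + 1 if correct else 0
        # Immediate feedback would be delivered here; in a hybrid system an
        # LLM could phrase it conversationally instead of a canned string.
        log.append((question, correct, level))
        level = next_level(level, streak)
    return log

log = run_session(["5", "5", "21", "20", "21"])
```

The student climbs from level 1 to 3, misses once, and is bumped back down, which is exactly the "practice at the correct level of difficulty" behavior, with the conversational surface left as a separate layer.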

[00:10:52] Ben Kornell: That's so fascinating. I think one of the challenges that we have today is, like, imagining what the future could look like, and then also dealing with what the reality is today. Let's talk a little bit about the work of NLP to enhance foundational literacy and numeracy skills. We were talking a little bit about tutoring; let's talk about foundational skill building. What are your insights from your work with Rori, the AI-powered math tutoring chatbot?

How has it impacted students, especially on low-cost mobile phones and in underserved communities?

[00:11:26] Owen Henkel: Great. Yeah. So Rori is a project with Rising Academies, which is an educational organization that works in sub-Saharan Africa, mainly West Africa. And they do a whole bunch of great stuff. You know, they partner with governments, they run their own schools, and they also have some really cool ed tech products.

And I'm lucky enough to be helping them out with Rori, which is a math tutor. I think you guys actually interviewed Kumar about the Tools Competition recently, but he's also kind of the lead for the LEVI program, which is backing Rori and a lot of other really cool innovations out there. So, you know, thanks to those guys.

Basically, the idea with Rori is that it's a shot at trying to say, hey, can we take an ITS and can we add some of these conversational elements? The design case is that, because it's West Africa, solutions that rely on video or laptops are going to be cost-prohibitive if you want to scale them. They're just not going to work. And our thought was, wait a minute: you can do a lot in conversations, and WhatsApp's ubiquitous. It's data-light. Not everyone, but, you know, maybe two-thirds of people are starting to have access to at least budget smartphones.

So the idea is, what could we do? What kind of tutor could we come up with that combines ITS and some of these more flexible LLM elements to provide foundational math skills to students? I would say that, you know, it's really cool. We're learning a ton. I think that it's really good at understanding, "understanding" in air quotes, like I'm going to use lots of anthropomorphic language here, but interpreting unexpected or kind of awkward student expression.

So it's gotten really good at interpreting all the ways a student might answer a question correctly. So we can do open response, and, you know, it can handle out-of-the-blue random questions in short conversational turns. So this promise of making the conversational interface better, I'm really impressed with.

On these two other questions, like interpersonal relationships, I don't think they're quite there yet. But there's this whole ethical question, you know, a whole ethics of trying to establish an emotional relationship between a kid and a chatbot. And we've kind of said, like, we don't really fully want to go there.

We think it's just too murky. We just don't want to take that on. So when it gets too personal, we have a lot of safeguards in place to try to be like, hey, I'm a chatbot. Like, I don't have feelings about this. I'm mainly good for math. So, like, there's this open question, but I think that's just too big for us to tackle at the moment.

Where I think large language models actually don't help much is modeling student understanding. So what they're great with is language, so they could ask the question, but they don't have some theory of mind of like, oh, because the student said this, they probably don't understand this. That would be a totally different, very complex student knowledge model.

And so that question of how can you do checks for understanding, how can you ask good questions? I don't think generative AI actually helps with that. I think you kind of need a different model. Obviously the LLM would output whatever that model told you in a conversational way, but it's not any good at that.

It's not built to do that. And I guess that's on tutoring. We don't have to go too far into this, but some of my research that's not related to Rori, my own research on the side, is on formative assessment. And I think that in low-stakes environments, where you're not, like, deciding if someone gets into college or something like that, but it's like, hey, can they read a second-grade story, these tools are really good.

Like, they can do really good stuff for tasks that would take a long, long time. So Libby and I did a few papers on this, but, you know, just today I was getting some results of students retelling a story they read. So they read a short story about, like, a brave sheep, and then you just ask them at the end, like, hey, what happened in this story?

That's the type of thing a parent might do with their kid. Then I had expert teachers grade these, and the model does almost as good of a job. You know, again, it's very vibe-based, like what the teachers grade, so I'm not saying that this is going to give you a 100 percent accurate, complete mapping of all the student's literacy skills, but if you're trying to get a temperature check in terms of formative assessment, they're pretty good.

And so I think that's a huge potential area, especially for foundational literacy and numeracy, where the questions are kind of easier to sort out.
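The kind of check Owen describes, comparing a model's scores on story retellings against expert teacher scores, often comes down to a simple agreement calculation. The rubric scores below are invented purely for illustration; they are not data from the study he mentions.

```python
# Hypothetical 0-2 rubric scores for ten story retellings, one set from an
# expert teacher and one from a model. All numbers are made up for illustration.
teacher = [2, 1, 0, 2, 2, 1, 1, 0, 2, 1]
model = [2, 1, 1, 2, 2, 1, 0, 0, 2, 1]

# Exact agreement: the fraction of retellings where both graders gave
# the same score.
exact = sum(t == m for t, m in zip(teacher, model)) / len(teacher)

# Adjacent agreement: within one rubric point, a common leniency when
# scoring on a small ordinal scale like this one.
adjacent = sum(abs(t - m) <= 1 for t, m in zip(teacher, model)) / len(teacher)

print(f"exact agreement: {exact:.0%}, adjacent: {adjacent:.0%}")
```

For a "temperature check" use case, high adjacent agreement with a human grader is often the more meaningful bar than exact score matching.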

[00:15:15] Alex Sarlin: The nuances of how to first, as you mentioned, sort of reverse engineer what makes human tutoring work, and then start to see where that can dovetail with the current technology, is a really interesting model and a way to think about things.

We recently talked to Chuck Cohn, the CEO of Varsity Tutors, and one thing that he said that stuck with me is, you know, they do, of course, match human tutors for the most part, with some AI assistance. And he said that what he's seen is that when you just put a sort of chat tutoring bot in front of a student, they don't know what to say.

They don't want to engage. And really, he basically mentioned accountability as a core function of a tutor, which is something, you know, that hasn't come up, but it's sort of implicit in just being there: there's a human who cares whether you showed up, or whether you got a question, or all that.

[00:16:00] Owen Henkel: 100 percent. I think there's also, like, longer-term accountability, but even in the short term, there's just the question of who's leading the conversation, right? And a lot of times we think, like, oh, the student comes with all the questions. Like, no, if you look at actual tutoring interactions, it's mostly led by the tutor. And then of course the student can pitch in and, like, lead parts of the interaction.

And, you know, an interesting limitation, not limitation, but kind of the fundamental nature of a lot of large language models: they've been trained as assistants. They've been trained to respond to queries. They haven't been trained to lead a conversation. And so I think they can lead a conversation, but then you have to develop a whole software layer, like a model of how the conversation goes, which you then interface with the LLM to proceed with the tutoring conversation.

And it's possible in theory, but that's just really complicated, because then you have to define what you want to happen in the conversation.
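One way to picture the "software layer that leads the conversation" Owen describes is a small tutor-side state machine: the controller decides the next pedagogical move, and the LLM's only job would be to phrase that move in natural language. The states and transition rules below are hypothetical, a sketch of the pattern rather than any real tutor's design.

```python
# A hypothetical tutor-led dialogue controller. The controller, not the LLM,
# decides what happens next; an LLM would only render each move as friendly,
# conversational text.

def next_move(state, student_correct):
    """Map (current move, whether the last answer was correct) to the next move."""
    transitions = {
        ("present_problem", True): "harder_problem",
        ("present_problem", False): "give_hint",
        ("give_hint", True): "present_problem",
        ("give_hint", False): "worked_example",
        ("worked_example", True): "present_problem",
        ("worked_example", False): "worked_example",  # stay until it lands
    }
    return transitions[(state, student_correct)]

# Walk through a short exchange: a miss, another miss after a hint,
# then a recovery after seeing a worked example.
state = "present_problem"
history = []
for correct in [False, False, True]:
    state = next_move(state, correct)
    history.append(state)
```

Defining these transitions explicitly is exactly the "you have to define what you want to happen in the conversation" burden he mentions: the hard part is the pedagogy in the table, not the language generation.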

[00:16:46] Libby Hills: I think the accountability piece is so important. It's so much harder to walk away from an in-person tutoring session. I'm just thinking about the constant lingering guilt I have for ignoring my Duolingo.

It's so easy for me to just like put the phone down and not engage, but maybe I'd do a better job if I had a German teacher in front of me. 

[00:17:02] Alex Sarlin: Exactly. And it's interesting as we think about these different models. I totally agree that, you know, the accountability matters, but who leads the conversation is an interesting way to look at it as well.

And Libby, I guess the Jacobs, Jacobs, Jacobs Foundation does.

[00:17:18] Libby Hills: We welcome either. 

[00:17:21] Alex Sarlin: There's so much great work in sort of education efficacy. I'm curious: so far, you've already talked about some of the research that you are both doing, but when you look at the AI tutoring landscape as a whole, do you see models that are starting to break through on some of the issues we just named: on the check-for-understanding issue and the formative questions, or on, you know, leading, or motivation, or building an interpersonal relationship?

Has anything broken through, and how does it compare to traditional high-dosage tutoring?

[00:17:48] Libby Hills: Yeah, I think that AI tutors generally have a lot of potential. I think there are these limitations that we've been talking about. And so I think there are some really promising models that do a really great job of reflecting the secret sauce, like harnessing the secret tutoring sauce we were talking about with these four key aspects.

And some of the ones that I'm particularly interested in are what some people term, you know, hybrid models. So human-AI hybrid models that I think do a good job of leveraging the strengths of the AI, but also recognizing some of the limitations, you know, particularly when it comes to the importance of the interpersonal relationship aspect of tutoring.

So a couple of specific examples that I think are really promising. One is, no surprise potentially, Saga Education. So, you know, a very well known, leading organization with a really high-impact tutoring model. But I think some of the work they're doing to think about how AI can help lower the costs of the training and coaching that they provide to their tutors is really interesting.

So you still have your human tutor able to provide the same quality of support that we were talking about, you know, and the interpersonal aspects of tutoring, but you're using AI to provide feedback to those tutors in a more rapid way than would be possible otherwise. And that enables them to work with more novice tutors than they might be able to otherwise.

So that's maintaining the same quality and impact, you know, potentially, of their model, but at the same time addressing this cost issue we were talking about at the start of the session. The second example that I think's, you know, again, doing some really fascinating work to really think about and fine-tune what this sort of hybrid setup could look like is Carnegie Mellon, who are doing some really interesting work to think about how to design these models where you have students working with an AI tutor.

So, you know, working independently their way through AI-powered software, but then you have tutors able to drop in and provide support to the students that might benefit the most from that support. And so I think there's some really promising evidence coming out of that work that they're doing, again showing that this could be a model that can deliver quality but at the same time lower the cost. So really addressing that scalability challenge. I think it's pretty early days, though. This is really cutting-edge stuff, so there isn't the sort of huge bank of research and impact evaluation yet.

But yeah, I know they're both really committed to building that. So really excited to see where that goes.

[00:20:14] Ben Kornell: Yeah. Ultimately what is so interesting is we're talking about the technical change and the adaptive change together, and, like, what behaviors do we need to evolve or double down on? And I think one thing that hasn't come up in our conversation: clearly an empty cursor where you just enter a question, that is not tutoring. And so, to go a little off script here: you know, I'm living day to day in Silicon Valley, and I hear a lot of talk around how everyone's going to have a personal AI tutor and assistant, and it very much reflects a non-pedagogical view of what that might look like.

If you had to give advice to the big tech players, like an OpenAI, a Google, Meta's Llama team, or anyone else who's imagining a future where AI is unlocking access and impact, what would be your kind of core recommendations to big tech?

[00:21:18] Alex Sarlin: We'll be right back. This season of EdTech Insiders is again proudly sponsored by our M&A partners, Tuck Advisors. Thinking of selling your company? As experts in mergers and acquisitions in education and EdTech, Tuck can help. At Tuck Advisors, their motto is, "Make hay while the sun shines." If you want to start planning the harvest, contact Tuck Advisors now.

[00:21:49] Owen Henkel: I'm of the opinion that, like, LLMs are fantastic with language: language manipulation, interpretation, processing tasks. They're great. But it's a fundamentally different task to come up with, like, a theory of mind, or assess mastery, or stuff like that. And so it kind of makes sense: like, oh, if you train them in this way, they could get really good at, you know, nudging the student or using the right tone or something.

But in terms of really understanding what the student's misconceptions are, that's just, like, a different domain of knowledge. And so I think that, invariably, you're going to have to have different models or different software layers that interact. So, like, maybe, you know, in some sci-fi future, 15, 20 years from now, I don't know, the LLM will become sentient and it'll be benevolent and, like, everyone actually will have a perfect tutor, or some weird sci-fi scenario, I don't even know. But any time in the foreseeable future, the LLM itself is not going to be monitoring the student's knowledge state and adjusting its behavior independently. And so that's going to have to be a different program, a different functionality that interfaces with the large language model. It won't be the same thing.
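The "different program that monitors the student's knowledge state" Owen points to is a long-standing idea in the ITS literature. One classic instance is Bayesian Knowledge Tracing (BKT), which is sketched below; the parameter values are illustrative assumptions, not any deployed system's settings, and the LLM's role would simply be to verbalize what this model decides.

```python
# A minimal Bayesian Knowledge Tracing update: a separate student-knowledge
# model that tracks the probability a student has mastered a skill.
# Slip, guess, and learn rates here are made-up illustrative values.

def bkt_update(p_known, correct, slip=0.1, guess=0.2, learn=0.15):
    """Update the belief that the student knows a skill after one answer."""
    if correct:
        evidence = p_known * (1 - slip)
        posterior = evidence / (evidence + (1 - p_known) * guess)
    else:
        evidence = p_known * slip
        posterior = evidence / (evidence + (1 - p_known) * (1 - guess))
    # Account for the chance the student learned the skill on this attempt.
    return posterior + (1 - posterior) * learn

p = 0.3  # prior belief that the student knows the skill
for answer in [True, False, True, True]:
    p = bkt_update(p, answer)
# A conversational layer (the LLM) would then verbalize a decision such as
# "move on" or "re-teach" based on this running estimate.
print(round(p, 2))
```

The division of labor matches Owen's point: the knowledge model does the probabilistic bookkeeping, and the language model only handles how the resulting decision is said to the student.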

[00:22:54] Ben Kornell: What do you think, Libby?

[00:22:55] Libby Hills: Yeah, I don't want to be too academic in an answer, but I also think it sort of depends on how we define a tutor. Like, I think, you know, language models can be super helpful with things like retrieval practice, like really kind of building up base knowledge. But I agree with Owen that, unless we are in a really kind of sci-fi world where students are surrounded by sensors,

and they're able to detect, you know, how anxious or demotivated a student's feeling in a way that's equivalent to how well a human's able to do that just on sight, I struggle to see AI-based tutors being able to get to the same level of effectiveness as a human tutor without having to make some potentially uncomfortable ethical compromises about data collection.

[00:23:38] Alex Sarlin: It's really interesting to hear this breakdown of how an AI tutor may or may not work, and what's missing: this concept of sort of missing the theory of mind, not being able to understand the misconceptions a student comes up with, or the emotional state, which, to your point, Libby, feels really important.

It almost makes me think a little bit about AI for what we now consider sort of older use cases, like chess. You know, people learn a lot from AIs in sort of gaming environments with lots of constrained rules, but part of how they learn is that they're watching an expert at play. Right. And I wonder if there's a capacity for an AI tutor to act a little bit differently than a human tutor, in that it can actually reason out, especially with models like, you know, OpenAI's new model: it can answer the question, break down its own thought process with chain of thought, and sort of explain it, and maybe use that to model for a student how they might want to be thinking about it, rather than being able to maybe perform the impossible task of understanding what a student is actually thinking.

I wonder if there's something there.

[00:24:42] Owen Henkel: I really like that idea, in the sense that, in all kinds of technology, we make the two mistakes where you try to make it act like the human equivalent or the previous technology, right? It's the two things when people talk about large language models: some people will be like, oh, it's the way humans think. Absolutely not. And some people are like, oh, it's just autocomplete. And you're like, no, absolutely not, either. That's just, like, a gross trivialization. It's something else. And so you're like, oh, what can it do well? What's it well suited to? And so I think this idea of, like, elucidating thinking or logical steps, and kind of having a student watch someone else reason, is something that it can do really, really well. And then you're like, oh, well:

We shouldn't expect an AI tutor to tutor the same way a human tutors. It's a different type of tutoring, a different type of support tool, and then we should explore that. And I think there probably just hasn't been that much deep thinking about this, because they're so capable in some ways that we try to copy a human model, and then we find out, well, it's not a human, so of course it's not going to copy a human that effectively. So, like, what can it do? That's a really interesting idea. Yeah.

[00:25:46] Libby Hills: Yeah. 

[00:25:47] Alex Sarlin: When you play with it and ask it to explain its reasoning, some of these newer models really can break it down into some great detail and sort of explain why it did it and how it's thinking about it.

And you're almost seeing the reasoning at work. So Libby, well, both of you actually, but we'll start with you, Libby. You know, you really have a 10,000-foot view of what's happening in education and advocacy and what's happening in AI. You work with many different organizations and lots of really cutting-edge research.

What are some of the most exciting initiatives or projects you've seen, whether they're AI-driven or not? What gets you up in the morning saying, I really can't wait to check in on the progress of this group or this portfolio company? And I'm curious how some of these relate to the research that you're finding as well.

Are there some that are just really putting research into action? 

[00:26:35] Libby Hills: What a fun question. That's a great question. I'm actually going to use it as an excuse to talk about the UK, where I'm biased, because I'm obviously British. And I think it's actually a really exciting time for what's happening in AI and education in the UK at the moment.

And some of that's actually being led by government, which isn't something that folks in our space often hear, or recognize, or talk about too much. The Department for Education in the UK is making some really exciting moves to support innovation in the space and to encourage evidence building, recognizing that this is a great time for innovation, a great time for experimentation, because we don't know that much.

And so some of the things that I think are really exciting and interesting: the government is setting up what they're calling a content store. They're bringing together high-quality materials and, I think, potentially opening up large datasets to encourage companies to use them to build new products and new tools.

So that's the kind of interesting public-good angle from the government there. They're also really listening to the sector. They've run quite a few surveys and interviews, just to really hear: okay, there isn't any evidence on this stuff yet, but how are people actually using it day to day?

What are teachers actually finding value in with these tools? How are they using them? What does that mean for what we do from a policy perspective, for how we support the sector here? So I think that's really cool as well, that they've invested so much time in not taking a position on whether this is good or bad, but just saying, hey, teachers are actually finding value in this.

They're using it, so how can we support and encourage that? And the last one I'd call out in the UK that I think is cool is a platform called Oak National Academy, which is government-backed, and the folks there are doing some really cool work to experiment with AI at pretty large scale. They've just launched a lesson planning tool which, you know, I think Owen and I really rate.

We think it's a really great example of a lesson planning tool. So I think it's an exciting time for the UK. A shout-out there for my homeland.

[00:28:34] Alex Sarlin: Absolutely. Yeah. And as always, we will put links to each of these initiatives that you're mentioning in the show notes for the episode to make sure that people can find out more if they want to dig deeper.

Owen, how about you? What gets you most excited about education right now? You can even go beyond the UK if you'd like.

[00:28:56] Owen Henkel: I guess maybe the three things that I'm pretty excited about, and two of them are more short term and the third is a bit more ambitious. The first is the ability to raise the bar on a minimum quality standard. Sometimes you ask, hey, is this lesson plan as good as a really good teacher's lesson plan? You're like, probably not. But I was a teacher, and sometimes I had good lesson plans; other times I had a big weekend and my lesson plan on Monday was not up to snuff.

You're like, probably not, but you're like, I was a teacher. I sometimes I had good lesson plans. Other times I had a big weekend and my lesson plan on Monday was not up to snuff. And I think there's idea of like kind of having this. Raising the minimum bar is a really interesting use case for all kinds of tools.

So that's one area that I'm pretty excited about. Second, in terms of in-classroom use, I think there are these micro use cases. Going back to your point about these big fancy models, you know, like AlphaGo and that stuff, part of the reason they're successful is that games have a really well-defined objective function that you can optimize against.

There's a point system, right? And if you say, how do I score an entire class of 30 kids, I don't even know if there's an answer to that. But if you say, hey, your task is to have a 30-second check for understanding with a student, that's a lot easier to narrowly define, and then you can train a model against it.

So these kinds of more niche use cases during class, like think-pair-share, right? But what if half the time you also chat with a chatbot that asks you probing questions for 30 seconds? Those kinds of more narrowly defined use cases.

I think there's a lot of interesting low-hanging fruit there. And then the third is more in the sci-fi realm, and I think it also raises lots of really important and tricky ethical questions, but it's other modalities, mainly vision and voice. Because obviously written language is important, but in the classroom, as we talked about, you're reading people's emotions, you're hearing sounds, and there's just so much data that we as humans are engaged in translating. And so, of course, I can report it back in language. I can say, oh, you know, they were a little grumpy today. But the data that led to that is totally different. Now, obviously, there are all kinds of issues about data privacy and surveillance.

And I'm not qualified on that, and don't even know the right answer. But we were interviewing the folks from Eedi the other day, and I was reminded of simple checks for understanding, where you just have students raise a number of fingers for the right answer. And I remember, as a teacher, little whiteboards, those are such an efficient way to check student understanding, because it's just a seamless visual process.

And so I think there's a lot of interesting computer vision and computer voice stuff. But then again, there are obviously really serious ethical considerations that have to be worked out.

[00:31:32] Alex Sarlin: I feel like such an interesting through line of this conversation is what makes great teaching and tutoring, and then which parts of great teaching and tutoring are possible with the current technology.

What parts are not possible? And how might we put the pieces together, whether it's a hybrid model like you mentioned, maybe a tutor with AI enhancements or an AI bot supporting them, or whether it's using data in new ways. We talked to TeachFX, which is a company that does a good job of that.

Yeah. There are some really interesting ideas in here. I can't wait to see how this all evolves. I'm really happy that you guys are on the case, thinking about how to take the research about what works, put it into action, and get money, funding, and support into the hands of people who really want to crack these problems and get to the next level, and not just say it's going to be a tutor for everyone and put an empty chatbot in front of them, which we've tried.

We know that doesn't work right now. So thanks so much for being with us. I hope we can do a follow-up and go even deeper on this, because I think there's so much to unpack here. We appreciate you both being here. This is Libby Hills and Owen Henkel from the Jacobs Foundation. For our American listeners, the Jacobs Foundation is a European foundation that supports all sorts of technology and education research.

Thanks for being here with us. And by the way, they are also podcasters. They have a podcast called Ed-Technical, and of course we will put the link to that podcast in the show notes as well. And maybe we can come be guests on your podcast someday. That would be fun.

[00:33:08] Libby Hills: Yeah, let's do it. We'd love that. Yeah, we'd love that.

[00:33:13] Alex Sarlin: Thank you so much, both of you, for being here with us on Edtech Insiders. Awesome. Thanks so much.

[00:33:16] Libby Hills: Thanks for having us.

[00:33:18] Alex Sarlin: Thanks for listening to this episode of Edtech Insiders. If you liked the podcast, remember to rate it and share it with others in the edtech community. For those who want even more Edtech Insiders, subscribe to the free Edtech Insiders newsletter on Substack.
