Edtech Insiders
New Year, New Ideas with Google Part 4: Yossi Matias and Obum Ekeke on AI’s Transformative Role in Learning
Yossi Matias, Vice President of Google and Head of Google Research, leads groundbreaking efforts in foundational machine learning, quantum computing, and AI for societal impact in education, health, and climate. A world-renowned AI expert, Yossi has pioneered conversational AI, driven Google Search innovations, and launched transformative initiatives like AI for Social Good and Google for Startups Accelerator. His work focuses on leveraging AI to address global challenges and improve lives on a global scale.
Obum Ekeke, Head of Education Partnerships at Google DeepMind, champions equitable access to AI education and fosters diversity in the tech industry. An OBE recipient for his contributions to computing and inclusion, Obum has led initiatives that have reached millions of learners worldwide, including founding Google Educator Groups in over 60 countries. His mission is to prepare students and educators for the future by making AI knowledge accessible and impactful for all.
💡 5 Things You’ll Learn in This Episode:
- How Google Research is driving breakthroughs in generative AI and education.
- The global impact of LearnLM and other AI-powered educational tools.
- The role of AI in promoting equity and accessibility in education.
- Insights into Google DeepMind’s Experience AI program and talent pipeline.
- The future of personalized learning and AI literacy for students worldwide.
✨ Episode Highlights:
[00:02:19] Yossi on AI’s transformative potential in education, health, and climate.
[00:09:46] Obum shares how Experience AI empowers teachers and students globally.
[00:19:24] Discussion on LearnLM’s development and its focus on pedagogy.
[00:29:22] Insights into how AI is making personalized learning a reality.
[00:41:50] Reflections on the importance of equitable access to AI education.
[00:55:36] Envisioning a future where AI-driven tools enhance both teaching and learning.
😎 Stay updated with Edtech Insiders!
- Follow our podcast.
- Sign up for the Edtech Insiders newsletter.
- Follow Edtech Insiders on LinkedIn!
🎉 Presenting Sponsor:
This season of Edtech Insiders is once again brought to you by Tuck Advisors, the M&A firm for EdTech companies. Run by serial entrepreneurs with over 25 years of experience founding, investing in, and selling companies, Tuck believes you deserve M&A advisors who work as hard as you do.
[00:00:00] Yossi Matias: That's where we actually need to work with the ecosystem. And once we do that, and suddenly we have these models that we could measure against other models and base models and see, oh, we can actually give a lot of advantage by doing that. We can suddenly unlock new capabilities. We can now provide more opportunities.
And now, from this point, you can start dreaming about what's next: how to make learning and education more equitable, how to help out teachers, educators, tutors, parents.
[00:00:31] Obum Ekeke: The goal there is not for them to necessarily go to university and study AI, but for them to know enough, right, to be able to play a role in AI, or just applying it or using it, right?
And it could also be that for a certain percentage of those learners today, we spark their curiosity enough for them to say, hey, when I go to university, I'm going to study computer science or AI, or some of these courses that eventually lead to AI.
[00:01:02] Alex Sarlin: Welcome to EdTech Insiders, the top podcast covering the education technology industry, from funding rounds to impact to AI developments, across early childhood, K-12, higher ed,
[00:01:14] Ben Kornell: and work. You'll find it all here at EdTech Insiders. Remember to subscribe to the pod, check out our newsletter, and also our event calendar.
And to go deeper, check out EdTech Insiders+, where you can get premium content, access to our WhatsApp channel, early access to events, and back channel insights from Alex and Ben. Hope you enjoy today's pod.
[00:01:42] Alex Sarlin: We at EdTech Insiders recently had the great privilege to be behind the scenes at Google's AI Summit in Mountain View, California. We got a chance to sit down and interview all eight of Google's learning leads across the organization, and these episodes are the full interviews with two leads at a time.
Enjoy these incredibly interesting interviews. Honestly, these are some of the most interesting and compelling interviews I feel I've gotten a chance to do in my entire time at EdTech Insiders, just so full of rich and interesting dialogue. And so many of these Google leads have been at Google for a long time.
They've seen the space from a lot of different angles. These are really interesting, and I think it's a fantastic glimpse into the future of AI and learning. Enjoy. Our next conversation is with Yossi Matias, the head of Google Research and a vice president at Google. Under Yossi's leadership, world class global teams are leading breakthrough research on foundational machine learning and algorithms, computing systems and quantum computing, AI for societal impact in health, climate, sustainability, and education, and the advancement of generative AI, driving real world impact and shaping the future of technology.
Yossi was previously on the Google Search leadership team for over a decade, driving strategic features and technologies, and pioneered conversational AI innovations to help transform the phone experience and help remove barriers of modality and language. He was also the founding lead of the Google center in Israel, and supported other global sites.
He founded and spearheaded initiatives such as Google's AI for Social Good, Crisis Response, the Google for Startups Accelerator, and cultural and social initiatives and programs fostering startup sustainability, STEM, and AI literacy for youth. Prior to Google, Yossi was on the computer science faculty of Tel Aviv University.
He was a visiting professor at Stanford and a research scientist at Bell Labs. A prolific computer scientist with publications in diverse fields, Yossi is a recipient of the Gödel Prize and the ACM Kanellakis Theory and Practice Award, and is an ACM Fellow. He's a world renowned expert in artificial intelligence with a track record of impact driven breakthrough research and innovation, advancing society centered AI to help address global challenges with impactful and transformative technologies. He's committed to advancing research and AI to help improve lives, transform society, and create a better future for all.
We're here with Yossi Matias, the head of Google Research and vice president at Google. Welcome to EdTech Insiders. Great to be here. Thank you, Alex, for having me. I'm really excited to speak with you. So first off, you are the head of Google Research. You research so many different things, including a lot of AI.
Can you give us a little bit of an overview of what it means to be the head of Google Research in this AI era?
[00:04:37] Yossi Matias: You know, Google Research has obviously been foundational to Google since its inception. And when we think about the AI era, people would immediately think about the Transformer. This actually was born out of Google Research.
So I would say Google Research is really, you know, we have amazing scientists and engineers looking into how to drive breakthrough research in various fields that are all relevant to the mission of Google. You know, how to drive new algorithms for optimizations, for systems, how to look into various application domains that are later relevant to our products or to advancing science, how to look into building the next quantum computer, and so forth.
Now, with the new era, of course, there are so many new opportunities. In fact, the pace of innovation and technology and research breakthroughs is increasing in an unprecedented way. You know, I recently shared my excitement about what I would call a golden age for research in general, for research across the world, in tech and academia.
But obviously, there's an amazing opportunity to do that at Google, and one of the reasons is that one thing I've always been passionate about is this notion of what I call the magic cycle of research, which is the opportunity to identify a problem that one needs to solve that could make a big difference, then drive some research to solve the problem, which unlocks opportunity, and then take the solution and apply it back to actually solve it.
Now, this is something that I've always been very passionate about since early in my career as a researcher at Bell Labs. The thing is that it used to take years sometimes to actually have this kind of cycle. And what we're seeing now is that the opportunities are immense in terms of scope, and the pace is so fast that sometimes you can have it within months: coming up with a new breakthrough and actually applying it in a very meaningful way.
There's still a lot of opportunity for long term research. For example, our work on quantum computers is a multi year kind of effort with constant progress. So in Google Research, we're looking into how to advance those areas. And as it relates to the AI era, you know, AI is essentially solving problems in many disciplines.
Now, obviously, people are probably familiar with generative AI, with Gemini and ChatGPT and other ways in which AI is playing a role. And here, of course, it unlocks some new opportunities that I'm sure we're going to talk about. But AI also applies to many other areas, for example, in healthcare and in the climate crisis.
You know, we're using AI to help mitigate the climate crisis, to help by finding ways to reduce carbon emissions with various techniques. We're using it to address crises such as floods and wildfires and more. But what we're seeing now with generative AI in particular unlocks so many new opportunities in areas such as healthcare and education and learning, and today we're here to talk about AI in learning, which I think is a pretty phenomenal opportunity.
[00:07:43] Alex Sarlin: Yeah, before we get into AI and learning, you know, you do research across many fields, and I think the health example is one that feels really intriguing. You know, Google Gemini is a general purpose model, and Google has lots of different models, but you've fine tuned models for particular use cases, including healthcare.
Tell us about that story, and then maybe we can segue into how some of that same thinking can work in education.
[00:08:07] Yossi Matias: Right. So, of course, what we've seen over the past few years with large language models, made possible by the Transformer and other techniques, is the ability to build these foundational models that we can then use for all sorts of applications or use cases where understanding language and being able to generate content is a critical aspect. And again, this is pretty exciting, because I had been working in search as part of the Google Search leadership for over a decade, and at some point realized that the conversational experience is going to be so critical for the future of how we access information, and with large language models, suddenly this is accelerating.
Now, as we started looking into those language models, we started asking ourselves: if we want to actually use them within various domains, and obviously they were already making a lot of progress, can we do better? Now, in the health space, there's actually a benchmark, which is essentially US medical exam style questions. And for years there was some steady progress, but no one had actually managed to get to a passing score using AI. And we had a team that made a concerted effort to look into: can we fine tune a language model? Fine tuning means essentially taking the language model, adding more training data, and doing additional training so that it actually learns to operate better for various tasks. Can we do that for the particular use cases of the medical domain? And what we could show is that this fine tuned model, which we called Med-PaLM, was actually, for the first time, passing those US medical exam style questions. That got a lot of excitement, of course. But talking about the pace: by the time we actually had this work published in Nature, we already had a better model that passed those medical exam style questions at an expert level, 85 percent. And soon after we had Med-Gemini, essentially taking a Gemini model and fine tuning it for healthcare information, which passed at 91 percent, adding all sorts of capabilities on top, like being able to answer questions that had images within them, so making it multimodal.
So think about putting in a CT image and then getting an explanation of what it is, which is pretty phenomenal. But interestingly enough, by the time we published the paper, we already had players in the healthcare system starting to try it out and see how they could use it for their various healthcare applications in a way that could actually be beneficial to people at a pretty large scale.
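To make the fine tuning step concrete for readers, here is a minimal sketch of the general recipe Yossi describes: take a base language model and continue training it on domain text. Everything in it, the base model choice, the two toy medical Q&A pairs, and the training settings, is an illustrative stand-in; it is not how Med-PaLM or Med-Gemini were actually built.

```python
# Illustrative sketch only: continue training a small open base model on
# domain Q&A text, the core move behind domain fine tuning described above.
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          Trainer, TrainingArguments)
from datasets import Dataset

base = "gpt2"  # any small causal LM works for the sketch
tokenizer = AutoTokenizer.from_pretrained(base)
tokenizer.pad_token = tokenizer.eos_token  # gpt2 ships without a pad token
model = AutoModelForCausalLM.from_pretrained(base)

# Toy domain examples; in practice this would be a large, expert-curated set.
examples = [
    {"text": "Q: What does an elevated troponin suggest? A: Possible myocardial injury."},
    {"text": "Q: First-line treatment for anaphylaxis? A: Intramuscular epinephrine."},
]

def tokenize(batch):
    enc = tokenizer(batch["text"], truncation=True,
                    padding="max_length", max_length=64)
    enc["labels"] = enc["input_ids"].copy()  # causal LM: predict the next token
    return enc

dataset = Dataset.from_list(examples).map(
    tokenize, batched=True, remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="domain-ft", num_train_epochs=1,
                           per_device_train_batch_size=2),
    train_dataset=dataset,
)
trainer.train()  # afterwards the weights are nudged toward the domain
```

The real systems differ enormously in scale and method, with instruction tuning, expert-written data, and careful evaluation, but the shape of the loop, base model plus domain data plus further training, is the same.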
So this is an example of not only the opportunity to take and adapt language models, but also to have impact at scale. And again, this is still a work in progress in many areas, but the opportunities are quite clear. In fact, this was actually an inspiration for us to look into the question: can we do something similar for education and learning?
When you think about learning and education, obviously, if you just take any of the amazing language models, if you take Gemini and other models, you can already experience various activities where you can say, oh, I could see how that can help with learning. I can actually ask questions. I can get a digest of a document. And obviously, you know, we just made it possible for people to learn about, for example, papers, using the likes of NotebookLM and Illuminate, which many people are excited about, and rightfully so.
So obviously we already have these capabilities. But when you think about the more comprehensive opportunity with learning and education, and you ask yourself, what are the capabilities that we'd like to have, and can I get the models to actually build more of these capabilities into them, then we asked ourselves: can we actually build this kind of fine tuned model for learning?
And that's how LearnLM came to be. And it turns out that LearnLM indeed could have these additional capabilities. For example, one way to learn material is to have some quizzes and to engage. Or, for example, when you ask a question in a learning setting, sometimes you don't want to give the full answer. Perhaps you'd like to give a partial answer and then encourage the learner to look into the next step. Now, the way to get there is actually to work with the experts, with the educators. And the way to work with the experts in the community is both to set up the goals, to set up the capabilities that you'd like to get to, but also, importantly, to set up how you evaluate these models.
Because in many cases, the way you build these models is by setting up this objective function: what do I want to solve for? How do I actually measure progress and success, much in the same way that we do in any educational system? And in order to do that properly, that's where we actually need to work with the ecosystem.
And once we do that, and suddenly we have these models that we could measure against other models and base models, and see, oh, we can actually give a lot of advantage by doing that, we can suddenly unlock new capabilities. We can now provide more opportunities. And now, from this point, you can start dreaming about what's next: how to make learning and education more equitable, how to help out teachers, educators, tutors, parents, how it can actually fit into systems, and, importantly, where it is going to lead us in the years ahead. I mean, this technology is pretty nascent. And once we start seeing that the opportunities are there, then the question is, well, where are we going to be in the future, and how can we drive there in a way that would have a positive impact on society?
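The "objective function" idea above can be pictured as a small evaluation harness: define pedagogy criteria with educators, then score a base model and a tuned model on the same tutoring prompts. The rubric, the keyword scoring stub, and the sample responses below are all invented for illustration; in a real effort the ratings would come from educators or a validated autorater, not a heuristic.

```python
# Toy harness for rubric-based model comparison, in the spirit of the
# evaluation setup described above. Everything here is hypothetical.
from statistics import mean

RUBRIC = ("gives a hint rather than the full answer",
          "encourages the student's next step",
          "stays grounded in the problem")

def rate(response: str, criterion: str) -> int:
    """Stand-in for a human or autorater judgment: 1 if met, else 0."""
    keyword = {"gives a hint rather than the full answer": "hint",
               "encourages the student's next step": "try",
               "stays grounded in the problem": "units"}[criterion]
    return int(keyword in response.lower())  # crude placeholder heuristic

def rubric_score(response: str) -> float:
    """Average score across all rubric criteria for one response."""
    return mean(rate(response, c) for c in RUBRIC)

# Hypothetical answers to the same student question from two models.
candidates = {
    "base-model":  "The answer is 42.",
    "tuned-model": "Here's a hint: check your units first, then try step two.",
}
for name, resp in candidates.items():
    print(f"{name}: {rubric_score(resp):.2f}")  # tuned model should score higher
```

Scaled over thousands of tutoring scenarios, a harness like this gives a number to climb, which is what lets a fine tuned model be measured against base models, as Yossi notes.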
[00:13:49] Alex Sarlin: Yeah. So in this magic cycle, the distance between research and application has shrunk. You can really take something, prove it out, and then actually use it in the real world and test the results of that.
[00:14:01] Yossi Matias: It's a great question. So one thing about my point of view on research is that you don't have this huge separation between the research and the application.
Of course, you have the foundational research, you have the applied research, you have the deployment, and traditionally some research labs in the world would sometimes say, oh, we're just going to do the research and then pass it on to somebody else. That's not how we do it. So I think that part of making this magic cycle actually work best, quite often, is to have it happen in a much more ongoing way.
Now, it's true that initially we start with the very basic questions that we try to make progress on from the research side, using the scientific method, publishing papers in top venues, whether it's Nature or Science or NeurIPS, ICML, and such. So that's really important, of course, to make tangible progress.
But in order to then take those ideas and start applying them, quite often these are the same teams that do that, and then of course they eventually work with product teams to deploy them. So I think that the nature of Google Research is that we have a much closer collaboration between developing the foundations and actually taking them and applying them.
That's especially important in times of such rapid progress. You're not finding a formula and then handing it over to somebody to just implement, right? You're building the know-how all the time. Plus, when you're doing the theory, of course, you're sometimes trying to ask yourself more isolated questions in order to have tangible answers, and once you want to apply it, then you need, of course, to learn how to make the biggest impact for whatever you'd like to do. So I think this part of the magic cycle is indeed to find the right flow, which is ongoing, especially at a high pace. So we have teams that are at the same time looking into how to deploy research results and breakthroughs that were developed in previous months or sometimes years, and how to get them to impact the experiences or the capabilities of people and users, while also looking into the next research questions they would like to pursue.
And this applies to every area that we're working on. This is true for how to make large language models more efficient, how to make them more factual, how to find even better models for health, for education, or, within education, how to build the various capabilities that we want.
[00:16:36] Alex Sarlin: So in your Med-PaLM example, the model was able to pass the medical exams at an expert level, and that was, in some ways, the go-ahead to start testing it on real medical problems in the world, obviously with lots of checks and balances. And it feels like LearnLM is in that cycle as well. You're using your internal assessments, the set of learning assessments, to say: this is actually working. It's doing some of the capabilities of a teacher. It's teaching effectively. And then, hopefully without too much time lapse, you start using it to teach in the world. What do you envision that looking like? I know that LearnLM is a powerful model and it's just starting to really reap those types of benefits.
People are starting to figure out, well, what can we build on top of this that we could actually test?
[00:17:21] Yossi Matias: Yeah. So first, importantly, the development itself and the research itself are done in conjunction with the community. It's not done in the lab in isolation. As I mentioned earlier, even just defining the objectives and the way you evaluate what you're doing is done with, in a way, the experience from the field. Now, indeed, importantly, once you have something that is working and you can start applying it and actually have impact, that can accelerate things: you learn what to do better.
You get better feedback. You can actually bring it back. So, yeah, I think the hope, of course, is to see more of that done. And the way in which this comes to play is, you know, we just announced that it's available through AI Studio, but some of it is also available through Gemini, right?
Or sometimes you have some capabilities that are, or will be, or may be available through Search. And then, of course, there are multiple ways, multiple platforms, multiple surfaces in which this could actually come into play.
[00:18:25] Alex Sarlin: We spoke to Ben Gomes earlier about how some of the LearnLM capabilities that have been trained are finding a way to flow upstream back into the Gemini model in some ways.
And then we talked to Jonathan Katzmann about YouTube, and I think there's all of this interesting potential for using intelligent learning models in the context of YouTube, where so many millions of learners are constantly seeking, you know, educational outcomes of various types. I'm curious, as a researcher, what applications of LearnLM get you most excited?
You mentioned audio before, and I think it's a really interesting one. You know, when we talked to Stephen Johnson about NotebookLM, he was mentioning that the voices, the very nuanced, sort of emotional podcasting, were a product of a completely different strand of research, the ability to do that incredibly good voice simulation, and now it's being used in a learning capability.
I'm just curious, as you play out all of these different strands, you know, audio, video, learning, medical knowledge, how might these tools play together to create a much richer world for education?
[00:19:33] Yossi Matias: Yes, I think one of the powers of AI is, first and foremost, the ability to unlock this notion of interaction and communication.
And one reason why I got really excited about, for example, conversational AI a few years back is realizing that this is the natural way for us to interact. I mean, we work really hard on products, on icons and visual design and various techniques for how to interact. But every kid and every person of every age can just say what they'd like to do, have a conversation, and learn and understand the other person very well, not only by the words they say, but how they say them.
You know, we had this project years ago called Duplex, Google Duplex, which was an AI system that could actually carry on a phone conversation and make restaurant reservations. This was one of my baby projects. At that time, we actually made it a goal to have a very natural conversation, including speech disfluencies, because that's the way to actually communicate and get stuff done, and of course while letting the person having the conversation know that they are talking with an AI.
At that time, it was a little unusual, unlike today, perhaps. But what we found out is that this is actually quite important for people. Once you do that, it becomes so natural that you can actually get things done in a very simple way. And similarly, think about how you interact with your device, with your phone, with your computer.
This is a very natural way to do it. So I think this is one of the opportunities that we unlock: we just make it seamless, and then we can build on that and reach, for example, this notion of having things across modalities, the fact that you can look into a document and just listen to it, something that I've been using for years now, actually. For me, it's a convenience, but a friend of mine was telling me that his daughter has dyslexia. Once I suggested to him to look into that, he told me it changed her life, and she's a top student. Yeah.
Or how we used AI in order to unlock communication for deaf people. You know, we had this engineer who came one day, a few years back, even before LLMs, and she had this idea of how to help people who cannot speak on the phone because they are hard of hearing, to use AI to show the text on the screen. And a year later I saw this blog post of somebody writing that this was the first time they ever spoke on the phone.
So you can think about these basic communication skills, and now build into that the ability of AI to also play a role in getting you the right content, in taking a long technical paper and making this beautiful podcast-like experience, and suddenly you make it accessible to many more people. So one of the premises I see is that the technology becomes so seamless that I would call it ambient intelligence: it just works.
And the beauty of technology is when you actually stop paying attention to it. You just assume that it's there. And you can think, through every discipline, about any new technology: initially it's kind of magic when it works, then suddenly you kind of expect it to work, and then you actually don't pay attention, because it just works.
So think about all these capabilities, where you will soon not remember which language somebody was actually speaking to you in, because you can hear it in any language. You may not remember if you read it or listened to it, and that's great. Or you can enrich it: you can now both listen and view.
Plus, you know, everybody knows that if you want information, you just Google it. Now, even the notion of search has evolved so much. You know, when I joined Google, people asked me, why join, isn't search a solved problem? It was 10 blue links, and of course the expectations keep evolving. And now just think about the fact that when you're looking for something, you're not going to just find the text for that. You can actually ask something a little more vague, not totally well defined, something that may have some context.
And with AI, you're still going to be understood. You're still going to get the answer. And the answer is going to be tuned to your level, whether you're a five year old kid or 15 or an older person, or depending on your education. It's going to be adjusted for you, and you just expect it to work like that, as you would when you speak with another person, whom you expect to be sensitive to the context and the situation.
So the premise is really that the AI is going to be good enough to just do whatever we'd like it to do. And importantly, there's always the human in the loop. There's always the teacher and the inspiration and the tutor, which are actually going to be the most defining things, but they are going to be more available, because a lot of the stuff that they are perhaps busy doing right now can be aided with technology that just works.
[00:24:43] Alex Sarlin: Yeah, search has become so ubiquitous as a way to find information over the last couple of decades that I almost feel like, in this AI era, people are having to shift their paradigms. I think they're used to going on an AI, and the first thing they want to do is something that they would put into a search engine. They sort of ask for a fact, or they ask for a recommendation in that way. And I think it's really interesting how the conversational AI paradigm that you're mentioning, adding context, adding, you know, purpose, really changes everything when it comes to education.
[00:25:16] Yossi Matias: Well, yeah, in a way, we've seen it in search, right?
I mean, one of my projects when I was in search was autocomplete, right? And, you know, we take it for granted that you just type a letter, and perhaps you need another letter, and another one, but you don't need to be overly specific. And you know, if you're searching for MoMA, then it will probably give you a different result if you're in New York or in San Francisco.
So this is something that you already expect to be right, and people very quickly adjusted to that. In fact, they just expect it to work. And I think that's what we're going to see very quickly, because eventually, to me, the reference point is: what would you ask a friend?
Yes. How specific do you need to be? And people also adjust to that, right? If they ask a stranger who doesn't understand, they try to be more elaborate. If they ask somebody who just gets it right away, then they would just ask right away. So I think it's quite nice that we very quickly get used to whatever capability is there, and therefore it just becomes part of what empowers everything that we do.
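As a toy illustration of the context sensitivity Yossi is describing, here is a deliberately naive completion ranker that boosts suggestions matching the user's city. The data and the ranking rule are invented; this is not how Google's autocomplete actually works.

```python
# Toy context-aware autocomplete: same prefix, different ranking by city.
# Candidate table and ranking heuristic are invented for illustration.
SUGGESTIONS = {
    "mom": [("moma new york", {"new york"}),
            ("moma san francisco", {"san francisco"}),
            ("moments app", set())],
}

def complete(prefix: str, city: str) -> list[str]:
    """Return candidate completions for a prefix, local matches first."""
    candidates = SUGGESTIONS.get(prefix.lower(), [])
    # sort key is False (0) for local matches, so they come first
    return [text for text, cities in
            sorted(candidates, key=lambda c: city.lower() not in c[1])]

print(complete("mom", "New York"))       # MoMA New York ranked first
print(complete("mom", "San Francisco"))  # MoMA San Francisco ranked first
```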
So that's actually my expectation. I mean, think about today and how people from previous years or generations would think about what we're doing today. We just take it for granted, and if you look at a kid taking a phone and just starting to use it, you know, for them it's as natural as anything else.
And that's actually the power of technology: when it actually works, it's suddenly an enabler, it's empowering. And then at the end of the day, what really matters is what always matters: the human relationship, the inspiration, who your teacher is, the encouragement you're getting from your parents, from your friends.
I mean, these things matter. You know, I was once asked, how do you view the change five years from now? And my answer was: in terms of technologies, capabilities, empowerment, et cetera, this is all going to be so much more powerful than ever. But the basic values of what's important, of experiences, of human interaction, et cetera, these are going to be as they are today, as they probably were 20, 30 years ago, in our previous generations.
[00:27:41] Alex Sarlin: It makes me think, you know, in the pre internet, pre Google era, if you asked somebody a super specific question, say, what was this person's batting average in 1972, they would just go, how on earth would you expect me to know that, right?
There would just be this sort of universal shrug. And, you know, you ask somebody that now, and they're going to Google it, or they're going to ask you to. The assumption of being able to get information of any kind at any time, in your pocket, is there. Conversational AI is going to take it to a whole other level, right?
I'd love to hear you on that, when you say everything will change and nothing will change, you know, if you ask somebody a question, or you're in the middle of a conversation like this and you just want to enhance the conversation with outside information.
[00:28:21] Yossi Matias: Right. You know, I remember when one of my children was at school and Wikipedia had just started, and people were saying, hey, what is it going to do, now that suddenly it's going to be easier?
And my thinking was, well, it's just going to up level the conversation. Exactly. And up level the expectations. I think AI is again going to significantly up level the conversation and the expectations, right? Because, again, if you reflect back, some of the conversation in the past was just about discussing facts.
[00:28:50] Alex Sarlin: Yeah,
[00:28:50] Yossi Matias: It's not very interesting. Now AI is going to take it to another level, because it can also do more processing of the facts and be smarter about it. But then, hey, there's so much that needs to be discussed about values, about ideas, about things that need to be done. So in my mind, it's really just going to up level what people, children, but people of all ages, are going to learn, and up level the conversations that they're going to have.
[00:29:22] Alex Sarlin: Yeah, it's a really exciting future. I know I've kept you over time, and I think it's a good note to end on. This is Yossi Matias. He is the head of Google Research and vice president at Google. It's an exciting world that we're entering right now.
And, you know, he mentioned that LearnLM is now available in AI Studio. We will put more information in the show notes about how you might access that, because it is a big deal to be able to access something that powerful and build on top of it, especially for ed tech entrepreneurs.
[00:29:50] Yossi Matias: Well, thanks, Alex, for having me here. You know, education is probably the most important investment that any society can make. This is really the future. So I'm really excited that we can actually use AI in order to make some progress on that.
[00:30:03] Alex Sarlin: You and me both. Thank you so much. In our next conversation, we had the great privilege to speak to Obum Ekeke, who's the head of education partnerships for Google DeepMind.
Obum is a leader in educational innovation with over 20 years of experience. In 2022, Her Majesty the Queen appointed him an Officer of the Order of the British Empire for his work in computing and AI education and his efforts to champion diversity in tech. Obum currently leads education efforts at Google DeepMind, where he focuses on making AI knowledge more accessible, diversifying the AI workforce, and creating impactful learning experiences for everyone.
His educational programs have helped millions of students worldwide develop the essential skills needed to thrive in a rapidly changing world. Previously, he co founded Google's Educator Groups, now active in over 60 countries. In 2019, the Financial Times recognized him as one of the UK's top 10 most influential ethnic minority leaders in tech.
Obum also serves on the Governing Council of the University of Essex and is a trustee of UK Youth. We are here with Obum. He is the head of education partnerships for Google DeepMind. Welcome to the podcast.
[00:31:20] Obum Ekeke: Thank you, Alex, for having me.
[00:31:22] Alex Sarlin: Yeah. So tell us a little bit about what education partnerships means in the context of DeepMind. You work with a lot of different types of people in a lot of different places. What does it mean to have partnerships with educators and education institutions from a place like DeepMind?
[00:31:36] Obum Ekeke: Yeah, great question. So when I think about partnership across the education work we do, I think about how to make sure that we are working across the education ecosystem with a diverse community of learners and educators, but also partner organizations that have access to those communities, for many reasons. Because our programs are around AI literacy, the talent pipeline, and using AI as a tool to power learning, we want to make sure that we are bringing those diverse voices into the programs.
[00:32:09] Alex Sarlin: Yeah. So AI as a topic has been around for a little while, but generative AI is so new, and educators are really trying to get their heads around it for themselves, let alone start teaching it to their students. I know this is something that, you know, everybody who's paying a lot of attention to AI says: oh, this stuff is coming, we really should help people understand it. And it feels like that is a big part of what you're doing with educators: helping them understand AI enough to use it, but also to teach it. What does that look like?
[00:32:37] Obum Ekeke: Exactly. I think the approach we've taken with the work we are doing with teachers, around educating them in AI, is actually taking a step back, right?
So rather than focusing on teaching these teachers or learners how to use AI as a tool, we wanted to fundamentally help them understand how these tools are developed, right? Who makes the decisions on what goes in there? What is the role of data? What are some of the ethical considerations in building these tools?
And I think that's very critical for so many reasons. One being that the more you understand how a tool is developed, the more it sparks your curiosity: how could this be good for society? How do you make sure that you use it responsibly? How do you also play a role in actually making it work for society?
So that's one. And on the other hand, we also help educators understand that having that foundational understanding of what AI is could actually help the learners: spark their curiosity, help them strengthen their problem solving skills, their creativity skills.
So that's what we decided after speaking to educators. We heard strongly that they wanted that foundational understanding. We partnered with an organization in the UK called the Raspberry Pi Foundation to then create what we eventually called Experience AI.
Experience AI is a set of lessons, activities, AI challenges, and so many other resources that help to equip teachers with that foundational understanding so they can then pass that knowledge to students. There are always light bulb moments when a student learns how to create an AI model and says, oh, I can actually do this, you know. And that's really good for any student; whether you're studying biology or history or geography, you can start to think very early on about how you can apply this to biology, to history, and so many other things that students do.
And it's also critical in the sense that it helps to prepare learners for the world that we are currently in and are moving into, where AI has an increasing role in every area of our lives.
[00:35:03] Alex Sarlin: So that spark of curiosity that you're mentioning feels like a really interesting approach. And it sounds like the Raspberry Pi Foundation is involved. Raspberry Pis are these sort of computers that you can physically make and piece together, sort of a maker concept, which is really cool. Are these projects ones that are done in classrooms and schools, or are they done at home? I'm curious how it manifests when people are trying to get their feet wet and just start to understand this stuff and feel excited by it.
[00:35:31] Obum Ekeke: Yeah. Yeah. And I think that, before I get to that, the reason we actually partnered with the Raspberry Pi Foundation is because, as you know, they are known globally.
[00:35:39] Alex Sarlin: Yes.
[00:35:40] Obum Ekeke: So they brought the pedagogy expertise, right? We didn't want to just create yet another program for learners. It was: how do we bring that pedagogical rigor into what we are building, so that it will resonate with educators?
So the Raspberry Pi Foundation brought that. And they are really great and amazing at engaging with very diverse communities of learners. And, to your question, they engage with learners both in the classroom setting and in informal education, right? So they brought all of that learning science, those pedagogies and design principles, into designing Experience AI. Experience AI was developed primarily with teachers in the classroom, but we also see instances where it is increasingly being used in after school clubs and other settings outside the classroom.
[00:36:32] Alex Sarlin: So you work across the entire spectrum. That would be people's very first exposure to the foundational concepts, whether they're an educator or a student.
And then you work on the talent pipeline side with universities, trying to figure out, well, how do we make sure that people coming out of the post secondary world are ready to contribute to this world as well? I'd love to hear you talk about what that looks like.
[00:36:55] Obum Ekeke: Yeah, exactly. So the way we viewed our work in education: ultimately, the goal is, how do we make sure that this amazing technology, AI, works for everybody?
[00:37:04] Ben Kornell: Yeah.
[00:37:05] Obum Ekeke: Whether you're in Latin America or in Africa or here in the United States or wherever you are, whatever communities, whatever cultures you come from, how do we make sure that AI as a tool, not just the tools we are building but the tools built across the ecosystem, works for people everywhere?
So for us in education, we took a long term, end to end approach to it. First is the very early years, which is where Experience AI comes in: how do we make sure that we are empowering teachers and learners with that foundational knowledge? Acknowledging that the goal there is not for them to necessarily go to university and study AI, but for them to know enough, right, to be able to play a role in AI, or just applying it or using it, right?
And it could also be that for a certain percentage of those learners today, we spark their curiosity enough for them to say, hey, when I go to university, I'm going to study computer science or AI, or some of these courses that eventually lead to AI. So that's the early years. Experience AI was developed for ages 11 and above.
And then at the university level, right, we say: how can we support those who are already on their journey to AI, whether they are studying STEM or more focused computer science, maths, or engineering courses? How can we support the undergraduate students who want to go on to study at the postgraduate level, masters or PhD? What role can we play in supporting them?
So we partner with universities all over the world to fund scholarships and provide mentorship, matching those scholars to Google DeepMind employees as their mentors, and really focusing on students from underrepresented groups who may not ordinarily have access to quality AI education at the postgraduate level.
So that's the middle of the pipeline: getting people from undergraduate programs, from underrepresented groups, and empowering them to go on and do masters or PhD level study. And at the top of the pyramid: if you have a PhD, how do we support you to go on to a postdoc level and really become a leader in AI?
And leadership in AI could mean different things, right? It could be that you stay in academia and become a professor and actually help train the next generation of researchers and engineers, or you go to industry and work, or you go run your own startup. But at the end of the day, the goal is really: how do we diversify the ecosystem, make the broader ecosystem of people building and developing these tools, or practitioners using them, much more representative of society, and get their voices and perspectives to contribute to building an AI ecosystem that will work for everyone?
[00:39:55] Alex Sarlin: Yeah, it's an amazing vision. I remember reading, you know, that there's been a gender gap in computer science for a long time, for example. And there was some interesting research that said women were more likely to study computer science if the field moved away from the sort of lone coder, just trying to make something happen by themselves and make money, toward a more socially minded framing: coding can change the world, can change health, all of these things.
And I feel like AI is a field where that's even more true: there are ethical considerations, and there are going to be social goods and social bads coming out of the leadership. I'm curious if you think that's going to accelerate the trajectory of underrepresented groups going into AI versus, you know, how slowly, unfortunately, they went into computer science over the last couple of decades.
[00:40:45] Obum Ekeke: Yeah, I believe so. And it's not just me believing so. We also heard that from across the ecosystem when we were building the Experience AI program for young people. We kept asking these educators: you have so many resources out there, what's going to be different about this one, right? And the example they kept giving us at the time was: as Google DeepMind, you have something that is more relatable, or will be more relatable, to different audiences. At the time, that was like two, three years ago, we had launched AlphaFold, which is this great protein folding breakthrough. So they said, well, if you use that, if you bring that into talking about AI, teaching people about AI, it will be more relatable to girls in the classroom and to other diverse communities.
And there are so many examples around the role that AI is playing in climate and sustainability, in education. And these are more relatable to these diverse communities than what you typically have with coding, which at the time, again, there are different ways of teaching coding now, but at the time sounded more abstract, you know, and it was all based on games and all that.
So I think we leveraged that opportunity to make AI more relatable. You can point to these use cases where people are addressing problems that people face on a day to day basis. I'm from Nigeria; growing up, I saw all sorts of disease outbreaks around crops and so many other things in my community.
And that was one of the reasons I got into AI: because I could see, I had a strong conviction, that this thing could change some of the problems I saw growing up, in healthcare, in agriculture, and so many other things. And that's the same message today. If you go back to my community, more people like me can relate to that, and they will be really excited about these technologies.
Again, the ultimate goal is to make quality AI education much more accessible and relatable to people everywhere. It doesn't necessarily mean that they go on and become scientists in the future, but they will play a critical role in one form or another, whether as practitioners using AI as a tool in their various disciplines, or at the forefront of groundbreaking research in AI as scientists and engineers.
[00:43:10] Alex Sarlin: I would never have thought of that, but that's such a good example of using AI and really complex science for something that is obviously so good for the world. And yeah, climate, sustainability, agriculture, solving real problems, is something that is so inspiring for people around the world, because there are just so many problems to solve.
But I love that you're connecting Experience AI to the ethical pieces of AI too, because, as you say, even if people don't lead a giant AI team, they're going to be using it. They're going to be telling other people about it. They're going to be figuring out when and how they can use it for themselves.
And individuals will be able to solve problems with AI, even without the level of expertise they used to need in the past, which is really interesting. So to recap your ecosystem: you have Experience AI for ages 11 and up, you have a talent pipeline system, and then you also work with the AI research at Google DeepMind.
And in particular, you're making sure that the research is being guided by a wide diversity of different stakeholders in the education ecosystem, among others. Tell us a little bit about how that work works.
[00:44:13] Obum Ekeke: Yeah. So at the summit today, we've talked a lot about LearnLM, which is this fine tuned version of Gemini, specifically for education and focusing on pedagogy.
One of the biggest challenges in working on this kind of technology is that it's very hard to verbalize pedagogical intuition into a set of standards or evaluations, you know. So it's pretty hard to do. I won't go into the whole LearnLM story, because other people may have spoken about it, but what we've done there is to use what we call a participatory, research driven approach. And what that means, in line with everything I've said, is actually just figuring it out together.
I don't think anybody, any one person, knows what good pedagogical practice or principles are. The people we are building for, the teachers and learners: what do they need? What would be most helpful for them? What would good look like, what would success look like? We don't have answers to these things.
So what we've done is to take this participatory approach. First, internally, assembling a multidisciplinary team that involves researchers, engineers, people with cognitive and learning science backgrounds, and so on. But also, more importantly, engaging with teachers.
Teachers are the ones that do the teaching, right? And they know all this pedagogy, but it's hard for them to even codify it.
So we got them involved, got different kinds of students involved, university students, policymakers, and a whole range of people in the ecosystem, with a view to making sure that we get teachers who teach in diverse communities, right, so that we can get that diverse perspective. Teachers who teach all kinds of learners and in different settings, right? Again, incorporating all their feedback.
And it was an eye opener, just realizing the differences in perspective and opinions across different cultures and settings. We then used the feedback from those kinds of engagements, through workshops, focus groups, and so many other channels, to shape what you see today as LearnLM.
And that's also not the end, right? Because, as was announced today at the summit, we've opened up access to LearnLM in experimental mode. That feedback loop continues, and we are encouraging more educators and ed tech companies to play around with the model and give feedback, and we'll continue to use that feedback to refine and improve the model. The reason that's critical is that, eventually, this is something that will be used by both educators and learners all over the world. So it matters so much that we get as many diverse views and as much feedback as we can from the broader education ecosystem.
[00:47:18] Alex Sarlin: Yeah. And one of the things that's also very interesting about LearnLM is, as you say, this broad diversity. There are so many authors on the LearnLM paper, right? So many people coming together from different disciplines to make it. And also, there are lots of ways to improve any model, but one of them is: is it working? That's really interesting when it comes to LearnLM, because learning science has never worked at that scale. The idea of being able to build something, for example, working with LearnLM, to teach, and then actually coming back around and saying, well, which aspects of this worked? Which aspects of it actually helped students be excited about the material, do better on different kinds of assessments and performance tasks? I'd love to hear what role you think that sort of learning loop, that reinforcement loop, might play in enhancing LearnLM in the future.
[00:48:02] Obum Ekeke: Yeah, I think that's really important. You actually made a very good point, right? Because it goes beyond engagement. All the things I've talked about, getting these diverse perspectives and all that, are really great. But at the end, what you want to evaluate is: to what extent is this tool improving learning outcomes for learners? That's the ultimate goal. And I think that's where we are also taking a humble approach, you know, making sure we continue to evaluate, even coming up with a set of benchmarks and all that. So the process we are going through now is, as we continue to get this feedback: how do we evaluate it? How does it lead to improved learning outcomes? If we don't see improved learning outcomes for learners, then we wouldn't have succeeded. That's the bottom line. The end goal is: is this helping you to go from point A to point B, and to what degree? So I'm hopeful about the approach we are taking of releasing this out there, getting feedback, getting people to use it and engage, which is one part of it, but also then using all of that to continue to improve the model and actually validate that there are concrete learning outcomes for learners.
And we've also seen that at the pilot stage, right? In some of the experiments, it is very clear that the students who were using the tutor performed better than those who were not. So we've already seen some of those early signals.
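The outcome check Obum sketches, comparing students who used the tutor against those who did not, boils down to comparing learning gains across groups. Here is a toy version with entirely made-up numbers:

```python
# Toy comparison of learning gains (post-test minus pre-test) between a
# group using the tutor and a control group. All numbers are invented.
from scipy import stats

tutor_gains   = [12, 9, 15, 11, 14, 10, 13]
control_gains = [7, 8, 6, 10, 5, 9, 7]

t_stat, p_value = stats.ttest_ind(tutor_gains, control_gains)
diff = (sum(tutor_gains) / len(tutor_gains)
        - sum(control_gains) / len(control_gains))
print(f"mean gain difference: {diff:.1f} points (p = {p_value:.4f})")
# A credible claim needs randomized assignment, adequate sample sizes,
# and pre-registered measures, not just one significant difference.
```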
[00:49:37] Alex Sarlin: Yeah, exactly. It's so exciting. I remember reading something about AlphaGo, and how, similarly, they trained it with all of these techniques from Go masters, and it did really well at it. I mean, it was incredible. It did better than almost any human, or maybe pretty much any human.
But then they retrained it with no prior knowledge. They said, you know what, just play yourself at Go infinite times, or whatever, millions of times, and learn these techniques on your own. And it did even better. I wonder if there's going to be a moment like that in education. Not to say anything bad; I'm a learning, you know, engineer, I'm an instructional designer. But I wonder if we're going to start seeing ideas in pedagogy that we've just never come up with, coming purely out of the AI.
[00:50:20] Obum Ekeke: I think so. I think what we've seen with the AlphaGo example you used is how that same learning has now translated into many other research breakthroughs we've seen to date, whether from DeepMind or Google broadly.
I think a lot of that learning will continue to apply even in education as a domain. It's probably still early days, but what we learn from LearnLM will hopefully open so many doors, you know, so many doors to what we probably haven't thought of today. Who knows what that will be? I'm really particularly excited about the whole idea around personalized learning, and the impact that could have, coming back to learning outcomes, especially around one to one tutoring: the impact that could have in changing this whole thing of education being a one size fits all approach, right?
I have six year old twins, and I spend a lot of time with them, helping them with their homework. And it's just unbelievable how different they are. Even though they are twins, they learn in such different ways, right? The things that one of them struggles with on a topic are completely different from what the other one struggles with.
I'm hopeful that personalized tutoring using AI could help them, you know, to bring out the best in them, but also help their teachers to draw insights that will then help them to better support their needs.
[00:51:55] Alex Sarlin: Yeah, we've talked about personalized learning in the ed tech community for a long time now, and there have been these different hype cycles of it.
And I truly feel like it almost goes without saying that personalized learning is now coming. What form it takes, I think, is still unknown, but AI can incorporate data from the asker, the learner, in so many different ways that we've just never seen before, and can differentiate in so many different ways, that it just feels like we're finally on the right track to actually reach personalization, which is very thrilling for an ed tech person. And yeah, twins are an amazing, like, controlled experiment: same genetics, same house, and they still learn differently. It's funny. I have one more question for you.
It's a little bit of an out there, science fiction-y question. But one thing I've chatted with a couple of people about is the concept of synthetic students. I'm not an AI expert by any means, but I know in AI, sometimes they have these teacher models and student models. And when you think about the AlphaGo example, they have AlphaGo, or, you know, Deep Blue, or chess models, play themselves. But in a teaching context, playing yourself means having a teacher and a student.
So one of the things that's held us back in ed tech is that we don't want to do huge experiments on real students. But is it fathomable that we might have student models, basically, and then we could try different types of teaching with them and see what works? Obviously, every student's different, so a student model would have to be some kind of average something.
Is that just totally nuts, or is that something that could actually happen with the geniuses at DeepMind?
[00:53:31] Obum Ekeke: I wouldn't say that's nuts. But it's also something I haven't given any thought to, actually. I'll be curious what our researchers think about that.
[00:53:42] Alex Sarlin: Yeah, me too. Me too. It's either the worst idea in the world or something really interesting.
Yeah, I don't know.
[00:53:49] Obum Ekeke: I'll think about that, but I haven't given it any thought, actually.
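For readers wondering what Alex's synthetic-student idea might even look like, here is a deliberately hand-wavy sketch: two role-played agents, a tutor and a student, converse so that different teaching strategies can be compared on the resulting transcripts. The chat stub, prompts, and scoring idea are all hypothetical and reflect nothing about how DeepMind trains or evaluates models.

```python
# Hand-wavy sketch of a tutor/student self-play loop. The chat() stub must
# be wired to a real LLM client; everything here is illustrative only.
def chat(system_prompt: str, transcript: list[str]) -> str:
    """Stand-in for one call to any chat-completion API."""
    raise NotImplementedError("connect your preferred LLM client here")

TUTOR = "You are a patient tutor. Give hints; never reveal the full answer."
STUDENT = "You are a 12-year-old who confuses area with perimeter."

def simulate(n_turns: int = 3) -> list[str]:
    """Alternate tutor and student turns, returning the full transcript."""
    transcript = ["Student: Is the area of a 3 by 5 rectangle 16?"]
    for _ in range(n_turns):
        transcript.append("Tutor: " + chat(TUTOR, transcript))
        transcript.append("Student: " + chat(STUDENT, transcript))
    return transcript

# Run many simulations per teaching strategy, then score the transcripts,
# e.g., did the synthetic student reach the right concept, and how fast?
```

The obvious caveat, which Obum's caution hints at, is that a synthetic student is only as realistic as the model role-playing it, so anything that wins in simulation would still need validation with real learners.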
[00:53:54] Alex Sarlin: So Google DeepMind is doing some of the most interesting work. Let me ask one more question. You know, as you're putting together the talent pipeline, the Experience AI initiative, and thinking about the research, putting them all together: if you zoom forward three years from now, and hopefully we're on track with personalization, what do you hope to see in the learners that you're reaching now, either at the PhD level or at the 11 year old level? What do you hope they feel about the AI world that's sort of coming at us?
[00:54:23] Obum Ekeke: I think it's optimism, right? If you understand it very well, if you even know how it's created, then you have a higher chance of being optimistic, or at least of being better prepared for the outputs that will come from those tools. So I think it's having a high level of optimism. This thing has come to stay, so how do we make sure that we are knowledgeable about not only how to use it, but potentially how to contribute to building it, so that it works for everyone?
For the students we are reaching today who are already in universities, I think what I ideally want to say is this: if we're able to make high quality AI education accessible to those learners equitably, then I'm hopeful that we'll see more research breakthroughs in five to ten years' time. And some of those breakthroughs might not be something that you and I are thinking about today, you know, because they will be driven by those learners, who come from multicultural, multidisciplinary, diverse audiences, toward something that is actually much more transformational than what we are seeing today. That's my hope, right? If we give everybody access to high quality education, we'll see much more impactful AI breakthroughs than what we've seen to date.
And I think, more broadly, across the whole value chain, from the early years all the way to older students, it's really making sure that, you know, many years ago we were talking about the digital divide, about people being left behind in the age of all the digital transformation we've seen. It's making sure that people, that young people, are equipped with this knowledge, so that no communities are left behind five, ten, twenty years from now. There is no one company or organization that will do this alone, and that's why I really love my job. It's so cool: partnering with people across the ecosystem, making sure that governments, corporates like us, literally everyone, is playing a role in democratizing access to AI literacy, especially the very basic knowledge level of it, so that no one is left behind in the years to come.
[00:56:46] Alex Sarlin: That's an amazing note to end on. Optimistic, empowered, being part of the solution, and an AI future influenced by the wide diversity of learners you're reaching now. Super exciting work.
Thank you so much. This is Obum Ekeke, who is head of education partnerships for Google DeepMind, doing really exciting work, really shaping the future of AI and education. Thanks so much for being here with us. Thanks for listening to this episode of EdTech Insiders. If you like the podcast, remember to rate it and share it with others in the EdTech community.
For those who want even more EdTech Insiders, subscribe to the free EdTech Insiders newsletter on Substack.