Edtech Insiders
Week in Edtech 7/19/2024: $600B AI Investment Bubble, Karpathy’s AI Assistants, OpenAI’s 'Strawberry', Meta’s Largest Llama 3 Model, US ED Issues Guidance for AI Edtech Developers and More! Feat. Chris Hess of Pearson and Mark Naufel of Axio AI
Join Alex Sarlin and guest host, Claire Zau, Partner at GSV Ventures, as they explore the most critical developments in the world of education technology this week
🎙️ Andrej Karpathy’s New Venture: AI assistants for education
🚀 Meta’s Largest Llama 3 Model Launch
🔍 Anthropic’s Push for Third-Party AI Evaluations
🍓 OpenAI’s new AI model under code name 'Strawberry'
💸 AI Investment Bubble: Navigating a $600B market shift
📉 LAUSD's $6M AI Chatbot Controversy
🤖 $2.3M Contract between the New Hampshire Department of Education and Khan Academy
📘 U.S. ED Issues Guidance for AI Edtech Developers
📊 Quizlet’s AI Survey: Higher education leads AI adoption
📈 Tyton Partners Report on Digital Learning Tools
🏫 KKR emerges as front-runner for Instructure Buyout
🎓 Salesforce AI Tools for HigherEd
💡 $20M Opportunity@Work Grant from MacKenzie Scott
Plus special guests, Chris Hess, Director of AI Product Management at Pearson and Mark Naufel, Founder/CEO of Axio AI
Upcoming Events:
📺 Live Webinar: Solving EdTech Problems with a UX Mindset with Nicole Gallardo, Founder at Founders Who UX, and Alex Sarlin (Thursday, August 1, 2024, 1-2pm ET / 10-11am PT)
📚 Book Club with FOHE: Discussion on Sal Khan’s new book Brave New Words: How AI Will Revolutionize Education (and Why That’s a Good Thing)
Stay updated with the latest Edtech news and innovations. Subscribe to Edtech Insiders podcast, newsletter and follow us on LinkedIn!
This season of Edtech Insiders is once again brought to you by Tuck Advisors, the M&A firm for EdTech companies. Run by serial entrepreneurs with over 25 years of experience founding, investing in, and selling companies, Tuck believes you deserve M&A advisors who work as hard as you do.
Alexander Sarlin 01:12
Welcome to This Week in Edtech with Edtech Insiders. We're talking on the week of July 15th to 21st, and we have an incredibly special guest host today. Ben is still out on vacation; he's in Europe having a great time, and I'm really happy for him. We have an incredible guest, Claire Zau from GSV Ventures, who writes the best AI and education newsletter there is out there. Claire, welcome to the podcast.
Claire Zau 01:39
Thank you so much, Alex, it's so great to be here. I've been such a fan of this podcast and everything that you all put into the space. It's so helpful and informative for everyone in the ecosystem. So thank you as well for having me.
Alexander Sarlin 01:52
You're so nice. But you know, you are such an important voice in AI and education, and you keep your eye on not only the AI in education space, but the entire AI space: what's happening with chips, what's happening with all the big companies, and how it relates to education. It's so valuable. Anybody who's not reading the GSV AI and education newsletter should sign up for it right now, because Claire's work is amazing. So a little bit of quick housekeeping before we jump in. We have some really exciting guests on today's podcast. At the end of this episode, we have Chris Hess, who is the director of product for all things AI at Pearson, the global publisher and tech giant, and we have Mark Naufel, who is the CEO and founder of Axio, formerly known as Primer, which is an AI companion for education that spun out of ASU, and a really, really interesting story. So stay tuned for that. And then, other podcasts coming up soon: we talked to Scott Kirkpatrick, the CEO of BrainPOP, who is amazing, and Joe Connor, CEO of Odyssey, which is doing incredibly interesting work with education savings accounts. And we have SNHU President Paul LeBlanc coming up soon, who is obviously an edtech legend. So stick with us. It's been a really fun summer here at Edtech Insiders. The event I wanted to make sure people are keeping their eye out for is a series of UX, user experience design in edtech, webinars coming up soon. The very first one is with Nicole Gallardo. She's the founder and Chief Design Officer at Founders Who UX, and it's all about solving edtech problems with a UX mindset. That is a free webinar; check out the show notes for the link to register. A lot of people are already signing up, and we're really excited about that one. Thanks so much for being here with us. So Claire, this has been a big week in AI in education. The story that caught my eye sort of all week
was this Andrej Karpathy startup, Eureka Labs. Can you give us a little bit of an overview of what the news is, and what is your take?
Claire Zau 03:53
Yeah, definitely. So for those of you who maybe haven't followed the AI space for a while, Andrej has kind of been this cult character, a star of the AI space. He was a founding member of OpenAI. He did his PhD at Stanford, where he worked with Fei-Fei Li, who is obviously another big name in the space. And he also worked very closely with the founders of Coursera, Daphne Koller and Andrew Ng; I'm sure many of you have learned from their insights in the AI space as well. And then he went on to be the Director of AI at Tesla, where he led a lot of their computer vision work. So just a massive name in the AI space. He also does a ton of work on YouTube, teaching people about neural networks and AI. But he finally left OpenAI in 2024, and just started this new AI education company. And right now, we don't know too much. It was mostly teased in a Twitter post, and the website doesn't really have a ton of information. But what we do know is that he's calling it an AI-native education platform. So he's characterizing it as built from the ground up with AI at its core, and the vision, I believe, is to use generative AI to create AI teaching assistants that can guide students through course materials. So I think, similar to a lot of other companies in the space building AI assistants or personalities, these would work with a human teacher to allow anyone to learn anything. In his post, he outlined that teachers would still design their own course material, but they'd be supported by this AI assistant. But right now, as you mentioned, the website's still very bare bones. We don't know if this offering is something focused on MOOCs, like Coursera or Udemy, for adult learners, or if it's something that's more implemented for real classrooms and K-12 learners, or even just higher-ed specific. What we do know right now is that their first product is going to be an AI course called LLM101n,
and that's just an undergrad-level AI course where students can build their own AI models and such. So that's all we know right now. It's definitely really exciting to see one of the biggest names in the space build in the education space and recognize how important AI can be for the ecosystem. So I'm excited to continue keeping track of this.
Alexander Sarlin 06:01
What jumped out to me is that Andrej Karpathy has been at the dead center, as you mentioned, of Tesla, of OpenAI, of the Stanford labs. He's been at the dead center of all sorts of cutting-edge AI innovation for many years now, and could literally do pretty much anything he'd like in the space, like anything. So the idea that he's focusing on an education startup, and on a teaching assistant, is striking. He mentions in his announcement the idea of Richard Feynman as an example of a legendary teacher, and he's like, well, not enough people have access to that kind of teacher. But with AI, maybe we could actually combine human teaching with very high quality teaching assistants and feel like we all have a Feynman on our side. And it wasn't a surprise to me that he was in that Andrew Ng and Daphne Koller circle, because back in my days at Coursera, the language that he's using here reminds me a lot of those early Coursera days. This was mentioned in the Edtech Insiders WhatsApp channel as well. I mean, the idea of, hey, let's break down the walls of the university, let's break down the walls of teaching and actually democratize it. He literally uses the phrase "all 8 billion of us," right? Every human being on Earth is the vision for this idea. And as you said, Claire, very little is actually described on their website, very little described anywhere else, and the LLC is signed just by him; there are no investors talking about this yet. It sort of came out of nowhere, and it seems like a little bit of a personal pet project for him. But he's such a huge deal in the space that the idea that he's even focusing on education, I think, is great for all of us in edtech.
Claire Zau 07:39
Yeah, and I'm curious if it potentially gets integrated into a Coursera-like product where you'll have access to an AI version of experts. That's kind of where I feel like, or predict, he'll go with this, rather than a K-12 specific AI tutor. But yeah, definitely excited to see what he does with this.
Alexander Sarlin 07:59
Yeah, agreed. And I mean, there's always been this huge conflict in edtech, and a lot of people have talked about this at different times, about scale and quality, right? It's the idea of having a really high touch teacher or educator who's paying attention to you, who's with you, who sort of addresses that Bloom's two sigma problem; they're there with you. And that's the sort of highest quality education. And then the highest scale education is, you have a Richard Feynman or an Andrew Ng, or a Laurie Santos, who has one of the top courses on Coursera, who can literally just record videos, put them out there, and then have millions of people take the course. But the quality, the personalization, the ability to actually ask questions, for example, just goes to zero when you have that kind of MOOC model, which we've always known in the education and work space. I think he's trying to square the circle there and say, what if you could have amazing teachers, but also have access to incredibly high quality assistants who can basically simulate, whether they're actually trying to act like a Richard Feynman, or just can answer anything? It's a really interesting idea.
Claire Zau 09:01
Yeah, definitely.
Alexander Sarlin 09:02
You have mentioned that Meta is making some really interesting moves in the space right now. Meta has been this sort of dark horse, funny player in this, because they have been launching their Llama models sort of behind the scenes and then opening them up in many ways, allowing people to build on top of them. There are rumors about Llama 3. Tell us about that.
Claire Zau 09:23
Yeah, so Llama 3, their latest iteration of that, is supposedly launching in the next couple of weeks. I believe it's going to be about 405 billion parameters, it's going to be multimodal, and capable of understanding and generating both images and text. And so I think it's quite similar to the direction a lot of these other models are going in, like GPT-4o, which is also multimodal and able to take in inputs that are images, text, audio and such. What's really exciting is that this model is open source, and as you might know, Meta has been very proud of its open source approach to AI. That's kind of been the big differentiator among the other big players in the space, whether that's OpenAI, or Google's Gemini, or Anthropic even. But really, this comes on the back of what I think was even more interesting: just this massive week of model releases in the AI space. I think there were maybe seven or eight that made the news, everything from Mistral, which is the French AI foundation model builder, sometimes touted as the European OpenAI; they just collaborated with NVIDIA to also launch a small model. OpenAI also launched a smaller, cheaper version of GPT-4, which I'm sure we'll touch on later. Apple finally open sourced a ton of their smaller models, which outperform Mistral's as well; a lot of the models they use in their Apple Intelligence demos. And then companies like Hugging Face and Groq also released a ton of models. So it's really just been a massive week for new models coming to market, and just seeing the exponential increases in computational and technical capability is so exciting. Thinking about the models, and this aligns a little bit with Meta as well, we're seeing probably two trends. One, generally, we're seeing more domain specialization in models.
So, more models specialized across horizontal domains such as coding, function calling, math, etc. The one that Mistral released recently was specifically focused on math and coding. So we're going to see increased domain specialization for specific domains. And then the second trend is really just small models, and we'll probably touch on this with GPT-4o mini, but just so many smaller models that can run on edge devices, so run on your phones or IoT, and they're much cheaper, and we'll discuss all of that. But from the big model release week we've just had, those are kind of the two trends we're observing.
Alexander Sarlin 12:04
I think you put that so well. I see it as almost pulling in opposite directions at the same time, but both are developing very quickly. So there are these massive, multibillion parameter, hundreds-of-billions-of-parameters foundation models that are basically meant to be totally generalizable. And that's the original ChatGPT, your GPT-4o right now; that's Anthropic's Claude model for the most part, and Gemini, as you mentioned, and they just keep getting better. And Meta, Facebook, has sort of played this slightly spoiler role, or trying to be the Robin Hood, kind of, of the space, by putting out their Llama models, open sourcing them, and allowing people to keep up with these incredibly expensive proprietary models from the bigger companies. But then you also have these mini models coming out. I think we should just jump into it, because it's so interesting. In some ways, they're getting bigger and bigger and bigger: the numbers of parameters, number of tokens, huge. And in some ways, all the companies are realizing at the same time, well, you don't need a nuclear weapon for every situation. Sometimes you want to solve math problems very quickly, or very complex math problems; you don't need a chatbot that can do absolutely anything to solve math problems. You don't need a large language model that knows everything about the internet if you're just always going to ask it the same thing about, you know, finance or about one specific domain. So these mini, specialized models are starting to build, and you mentioned this, but let's dive right into it. OpenAI, the industry leader, put out their 4o mini model this week for exactly the reasons you said, so that it's small enough that it can be much cheaper, and it can be hosted on devices.
So instead of always going to the cloud and being extremely heavy to access and expensive, it can be much faster. So this felt like a big week across the board, but this 4o mini particularly seemed interesting, because OpenAI has continued to sort of be the trendsetter in the space. So tell us about the mini model.
Claire Zau 14:00
Yeah, definitely. So it is basically GPT-4o mini, and essentially what it is, is a smaller sibling of the GPT-4o model that we saw demoed, I think back in May, when they did their big demo day right before Google I/O, when they had that Scarlett Johansson-like voice and showcased all their multimodal capability advancements. And so this mini model is, as you mentioned, much smaller and cheaper. It's about 60% cheaper than GPT-3.5 Turbo, so much more budget friendly. And it actually also outperforms a ton of other small models on the MMLU benchmark, which is kind of the benchmark for general intelligence; it actually surpasses even GPT-3.5 and some larger models. And similar to its big sibling, it's multimodal, so it can support both text and vision inputs. So really, what that means for developers and edtech builders is that if you're using something like a GPT-3.5 API or Turbo API in your apps, you can now, and should, switch to 4o mini, because you are getting the same or maybe even smarter capabilities without breaking the bank. And it's really good, as you mentioned, not necessarily if you want to tap into a really big brain, but it's really good at lower-logic tasks like translations, rewriting, pulling data from forms, the stuff that is always happening at scale in an enterprise or school product; there you can rely on a smaller model like GPT-4o mini. So yeah, just really exciting. I think it trends along with what we believe, and, tying to what you were saying around Meta being the Robin Hood and kind of equalizing, these models are increasingly commoditized. And it's much more a plug-and-play system now, where you use smaller models like GPT-4o mini or Llama 3 for certain tasks, and then you tap into these larger foundation models for other tasks.
And I think this is going to be increasingly the trend, as you see better advancements even in model routing tools that, depending on the task, will automatically route for you, or in model observability tools. So all around, I think, a flattening of access, or lowering of the bar to access, to the different tools available to builders out there.
Alexander Sarlin 16:19
I'm so glad you brought up that concept of model routing, because this is something I think is such a fascinating idea. It's a meta idea, right? I mean, the idea that, look, all of these big frontier foundation models are getting better really quickly, but there are also all of these sub-models; some are open source, some are made by the big companies, like this GPT-4o mini. But these systems are smart enough, or at least potentially have the potential to be smart enough, that if you ask a question, if you're building an edtech product, it gives you a totally open slate. It says, look, ask me anything. You can ask me to write an essay, you can ask me to solve a math problem, you can ask me to brainstorm with you, any of that stuff, which is where we sort of are now. Right now you ask anything, it goes to 4o, and it costs a certain amount of money, and there you go. But the idea that, if you are asking it to translate, like your example, or if you are asking it to answer a specific math question, it can figure out what you're asking and then say, instead of going to 4o, I'm going to do this locally on your phone for 60% less cost. But if you ask for something really, really complicated, maybe it escalates back up to 4o. That feels like such an exciting future for this entire AI space. So let's talk about the other AI announcement this week. It's kind of a big one, again, going in the opposite direction. OpenAI has had this vision of getting to artificial general intelligence for a long time. It's this sort of very speculative concept, but they're apparently building a new AI model, and if I'm understanding it, I'm not sure if it's actually bigger or smaller, but it's supposed to be smarter.
That's for sure. This Strawberry: tell us about Strawberry, and how that might play into this sort of model routing future.
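The routing idea described above, sending cheap high-volume tasks to a small model and escalating open-ended reasoning to a large one, can be sketched in a few lines. Everything below is an illustrative assumption: the model names are placeholders, not real API identifiers, and the keyword heuristic stands in for the classifier a real routing tool would use.

```python
SMALL_MODEL = "small-model"  # stand-in for a 4o-mini-class model
LARGE_MODEL = "large-model"  # stand-in for a frontier-class model

# Keywords that suggest a lower-logic, high-volume task
SIMPLE_TASKS = ("translate", "rewrite", "extract", "summarize")

def route(prompt: str) -> str:
    """Pick a model for a prompt using a naive keyword heuristic.

    Real routers use a trained classifier or a small LLM to make this
    decision; the keyword check just illustrates the shape of it.
    """
    text = prompt.lower()
    if any(task in text for task in SIMPLE_TASKS):
        return SMALL_MODEL  # cheap, fast, good enough for the task
    return LARGE_MODEL      # escalate to the big brain

print(route("Translate this form into Spanish"))
print(route("Design a novel curriculum for AP physics"))
```

The design choice is exactly the cost trade-off discussed in the episode: the router pays a tiny classification cost up front to avoid sending every request to the most expensive model.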
Claire Zau 18:03
Yeah, I guess they're going both big and small right now, in all directions, it seems. But yeah, it's called Strawberry. That's just the kind of secret project name. And apparently, people have shared rumors that this is allegedly built on top of the Q* model that was teased a couple of months back. But basically, it's a model aimed to enhance reasoning capabilities, so, as you mentioned, get smarter. We don't know if it's going to be bigger or smaller, and the project is still under wraps, but really, they're hoping to use Strawberry to advance reasoning and human-like intelligence in AI systems. And so for me, for your average human, it does already seem like these AI systems are highly intelligent. But interestingly, OpenAI released a new five-level classification system for their employees, which they recently externalized, to determine how smart their AI systems are. And it was crazy to me, but they say their current systems are only at about a level two, which is so interesting to think about: how much more we have to go to get to what they claim to be AGI, artificial general intelligence. They previously defined that as a highly autonomous system that could surpass humans in most tasks. But now they've laid it out where today's chatbots are level one, something like ChatGPT. And then level two is systems that can actually solve problems at the level of a person with a PhD. I don't know how you define and measure that, but that's kind of how they're framing it. And then level three is when AI models are able to take actions on an end user's behalf. They're still not really able to do that reliably; that's why we're seeing so many startups trying to build in this AI agent market. But level three is basically the vision that you're almost removing the human from the loop, and you have AI doing things on your behalf.
And then level four is when AI actually can help us as a species create new innovations. And then level five, as they laid out, is just the final AGI step, which is, interestingly, when AI can help us perform the work of an entire org. And it's so interesting to think about: all these orgs are thinking about how AI upskills their workforce, but OpenAI is thinking about how AI replaces the entire company, not even the individual employees, but how do we replace the company itself. I don't know what that looks like, but it's just interesting to see how they're framing it. Their view of AGI is so much wider than how we've been thinking about upskilling and reskilling and AI replacing employees. They're already thinking about this century-wide, or species-wide, view of what AI looks like working with humans.
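The five-level framework Claire walks through here can be restated compactly as an ordered enumeration. The level names below are shorthand based on the descriptions in this episode, not official OpenAI terminology.

```python
from enum import IntEnum

class AGILevel(IntEnum):
    """The five capability levels as described in this episode."""
    CHATBOT = 1       # conversational systems like today's ChatGPT
    REASONER = 2      # solves problems at the level of a person with a PhD
    AGENT = 3         # takes actions on an end user's behalf
    INNOVATOR = 4     # helps create new innovations (e.g. drug discovery)
    ORGANIZATION = 5  # performs the work of an entire organization

# Per the episode, OpenAI places its current systems at about level two,
# so agentic behavior and beyond are still ahead.
current = AGILevel.REASONER
assert current < AGILevel.AGENT
```

An `IntEnum` fits here because the levels are explicitly ordered: each level presupposes the capabilities of the one below it.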
Alexander Sarlin 20:53
It's a very exciting and bold vision. I'm laughing because I'm thinking, if level two is somebody with a PhD, level three is almost like somebody with a bunch of teaching assistants who actually do the work, right? They could actually go out and do things on their behalf. It's a silly comment, but I love the idea. And level four, I think, gets me really excited. Again, this is all very high level, speculative thinking, but they're moving on it as well. This level four, I've wondered about sometimes, even with the current level of chatbots: if you were to go in and say, I'm going to spend two straight weeks trying to work with this chatbot to solve X, to cure a certain kind of cancer, to solve some aspect of climate change. Even with a PhD, you don't often get to work with people with PhDs at that level of intensity, and brainstorm and think and go deep and have access to all the research ever. Already, that seems like it's maybe almost within reach. So the idea that level four is specifically about that, how might AI be able to solve meaningful problems for humanity and for society, is obviously very exciting.
Claire Zau 22:03
Yeah, and I feel like you're already seeing steps towards that, with all the investments going into, you know, AlphaFold and all these AI models that are able to aid us in pharmaceutical and drug discovery. I think that's what level four is hinting at.
Alexander Sarlin 22:19
And what's interesting, to our earlier discussion about going big or going small, is that you can imagine there might be two different routes to amazing drug discovery. One might be an incredibly powerful, generalizable level four model, in OpenAI's parlance, where you can ask it anything and you say, hey, by the way, can you make a drug for Alzheimer's, make a drug for cancer, and it can find it. But it also might be a specialized model. It might be something that does nothing but try to understand cancer and knows all the research; it's trained on absolutely nothing but cancer research. And maybe that would be a way to get there in a different way. Maybe not nothing but, but it specializes in drug discovery, and if it specializes in drug discovery, maybe it would accelerate the process. So it feels like the whole field is trying to figure out the right route to get to different things. But there's this parallelism happening: some people are focusing on smaller, more efficient, more specific domains, and others are continuing to go bigger and bigger and think about things at the level of, how do we replace entire companies and industries, which is pretty wild to think about. Claire, let's talk about Anthropic. OpenAI is making lots of news with their mini and their Strawberry. You mentioned in passing here the concept of some of the benchmarks that are used in the AI model world right now to assess how effective models are at different types of tasks. Anthropic put out a couple of announcements this week that are sort of more field-building types of announcements, which I found very interesting. One was this desire for third-party AI model evaluations.
And it basically is this giant list of all of these different things that Anthropic wants to be assessable within AI models: safety, capabilities, all sorts of different, pretty subtle but meaningful aspects of what a model should and could be able to do. And it's basically saying it's going to start funding all sorts of different people to do third-party evaluations of different models. And even though the funding obviously would come from Anthropic, Anthropic's whole ethos is that it would not be about trying to make Anthropic's products win; at least in theory, it would be about building the field and making sure everybody's playing on the same playing field in terms of safety and capabilities. This seems incredibly relevant to education. Even though they didn't mention education specifically in this announcement, this is something that feels so important for the education space, where we're seeing these models grow and evolve so quickly, but we aren't sure which ones are safer, or more private, or have less bias, or any of the things we're worried about in education; we really can't assess them against each other. So that was one of the big announcements. And they made another one, which is an AI startup fund, a $100 million announcement, which went right by me, but you caught it, with Menlo Ventures. Tell us about these two things. These feel like really interesting, big field moves from Anthropic.
Claire Zau 25:07
Yeah, I feel like Anthropic has always had this approach. So, to take a step back for people who are maybe less familiar with the lore and history, Anthropic was founded by people who left OpenAI believing it wasn't taking the best approach to safety. And so the people who left OpenAI started Anthropic; they wanted to start with constitutional AI. A lot of their work has always been rooted in this ethos of, let us try to build without breaking things. And so they've actually, just generally, done a lot of really interesting research and work in the space. I wrote about this, but they recently made a big breakthrough in a field called mechanistic interpretability, which basically is the field of trying to get an understanding of how the AI mind works. And so they were able to kind of peek a little bit inside the AI black box, which is obviously of huge importance for us to understand how these biases work and how we get from input to output, which is highly important for education, or any setting where you're working with people who can't have hallucination-prone outputs. They also did, I think, a really interesting study, and you can check it out on their website, they put out a lot; it's research on how different AI models can develop personalities, even. So, just really cool research in the space. But this announcement is kind of another iteration of that, and feels very aligned with that broader mentality. We're seeing this trend across the board where the frontier AI models and advancements are kind of outgrowing a lot of the evaluation methods we have now. I mean, if you see, like, seven new models this week, all of them are acing the tests we keep throwing at them. And so they basically want this industry-wide initiative where they're asking people to develop better evaluation benchmarks, so that we can test for things like AI systems hacking, or whether a model is designing bioweapons, or if they're advancing towards being autonomous.
I think there are basically four different domains. They're encouraging people, and they will actually pay people, to develop these third-party evaluations, so that as a field we're all able to measure progress; you can't test how good something is without measurements and assessments for it. So they're really encouraging the field to build these new evals.
Alexander Sarlin 27:30
It's hugely important for the whole world, in almost any field, but I think it's doubly important for education. I mean, we have two different criteria in education that are really different from, you know, correctness, which is what a lot of the existing benchmarks are about, problem solving: how well does this do on an AP exam, how well does it do on really complex math problems, how well does it do on writing? And it's like, well, okay, that's correct, but that's not actually what we care about in education. We care about two very different things. One is, can you teach, right? Can you get someone else to understand it? Can you get someone else to succeed? And none of the benchmarks do that yet. And the other is, is it appropriate and safe for younger students, whether those are teenagers, college students, or even down to 13 and below, which is technically not even in scope for a lot of the frontier models. To me, those two issues are actually, in a sort of subtle way, keeping the entire AI education ecosystem a little bit off balance, because we have no way to measure whether Gemini or 4o mini or Claude is more pedagogically sound, or is better at catching problematic, inappropriate asks from a student. We don't know, and nobody knows, because there are no assessments for it. So I love the idea of having a much more robust set of benchmarks. I know the kinds of things I'm asking for here are not exactly what Anthropic is looking to fund quite yet; they're looking at, like you said, bioweapons, things that are outside of the education space. But I think the same thinking is in there. Frequent listeners of the podcast may have heard me say this in the past, because I've been stuck on it for a little while now.
But if we had a really good benchmark assessment developed by a third party (we don't want one of the edtech companies to develop it and then ace it; it should be an independent, third-party benchmark), then you could actually say: okay, we now know which of these models, or which of these small models, or which of these edtech tools does the best job of teaching, or the best job of creating a safe environment for students and educators. We'd be in a totally different space, and I hope it's coming soon. Tell us about the $100 million AI startup fund.
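A third-party pedagogy benchmark of the kind described here might, at its simplest, be an eval harness that scores model responses against a rubric. The sketch below is purely illustrative: `model_answer` is a stand-in for a call to whatever model is under evaluation, and the keyword-based rubric is a placeholder for the human raters or judge models an independent evaluator would actually use.

```python
# Illustrative sketch of a pedagogy-focused eval harness.
# A real benchmark would use human raters or a judge model;
# here the "grader" is a stand-in keyword check.

PROMPTS = [
    {"student_ask": "Just give me the answer to 3x + 5 = 20.",
     "good_tutor_signals": ["what do you think", "step", "try"],
     "bad_tutor_signals": ["x = 5"]},  # giving away the answer outright
]

def model_answer(prompt: str) -> str:
    # Stand-in for a call to the model under evaluation.
    return "Let's work through it step by step. What do you think we should do first?"

def score_response(response: str, case: dict) -> float:
    text = response.lower()
    # Reward tutoring behavior, penalize just handing over the answer.
    good = sum(sig in text for sig in case["good_tutor_signals"])
    bad = sum(sig in text for sig in case["bad_tutor_signals"])
    return max(0.0, good - 2 * bad) / len(case["good_tutor_signals"])

def run_eval() -> float:
    scores = [score_response(model_answer(c["student_ask"]), c) for c in PROMPTS]
    return sum(scores) / len(scores)

print(f"pedagogy score: {run_eval():.2f}")
```

The point of the sketch is the shape, not the scoring rule: a shared prompt set, a scoring function any vendor can be run through, and a single comparable number at the end.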
Claire Zau 29:42
Yeah, I think it's in line with a lot of the other big foundation model providers. If the trend that everything is increasingly commoditized holds true, a lot of these foundation model providers are looking to partner with early-stage AI startups. People have always drawn the analogy to the cloud, the big race between AWS, Google Cloud, and Azure, and I think you're seeing that play out here: big foundation model providers are rushing to partner with great venture funds to support AI startups, so they can close the distance between the big tech players and early-stage startups. A lot of people continue to question the upfront costs of getting started in this space, especially if you want to compete in AI, and that's why you're seeing all these large funding rounds. Anthropic partnered with Menlo Ventures to launch a $100 million vehicle, what they're calling the Anthology Fund, to support AI startups; they'll basically offer credits, access to the latest AI models, and various other perks. So overall I think it's really great for nurturing the startup ecosystem, because there is a big cost barrier to entry that stops a lot of people from building compute-intensive products.
Alexander Sarlin 31:06
It's not yet clear from the announcement, at least from my perspective, whether edtech or education companies would be of particular interest, or even viable options, for some of this funding; all of these frontier model companies are going to get applications from absolutely every subsector of tech. But I will say, I've been consistently, positively surprised by how much Google, OpenAI, and Anthropic all really do care about the education use case. They really do. They want it to work: they certainly want it to work in higher ed, they want it to eventually work in K-12, and they want it to at least be the kind of positive force for teaching assistants, for tutoring, for democratizing access to information, for search in an educational context, that we all want as well. So if I were an edtech company starting out right now, I would definitely look at this $100 million fund and see if there's an overlap of interest. You never know; they're going to get a lot of applications and have a lot of different directions to go, so who knows whether education will be a focus there, but let's not count it out. We've been talking a lot about AI from a very broad perspective, and I love it, and we have the privilege of having you here, Claire, because you really keep your aperture wide open. On this show we always talk specifically about the edtech use cases, but you've really looked at the entire space. There are two more quick things I want to talk about in the big-picture AI space, and then let's go through some of the really interesting things happening directly in AI and edtech. The first one, and I know you've written about this in your newsletter, is this concept of: are we in an AI bubble?
There were two reports that came out this week, which I haven't read yet myself, asking whether there's going to be a return on all of this investment. Tell us about that; it sounds like a pretty interesting issue.
Claire Zau 32:55
Yeah. People generally always talk about hype cycles, and I do think that's highly relevant; I'm not going to say we'll stay at the peak of the AI hype cycle forever. There have to be adjustments in the market, and we've already seen that with a couple of developments. For example, early on we saw companies like Jasper raise massive rounds to build AI writing-assistant tools, and now that's obviously become very commoditized across every application: you're seeing it in Google Drive, in Microsoft Copilot, in Apple on your phone. So I think you're going to see those types of corrections in the space. As you mentioned, I'm definitely biased toward a more positive outlook on the space, toward what AI could look like when everything goes right more than when it goes wrong. But generally, a lot of people have raised the question of whether we're in a big AI bubble. Specifically, two big reports came out this week from two big names, Sequoia and Goldman Sachs. The Sequoia piece, from David Cahn, is what he calls the $600 billion AI question, asking whether we're reaching a tipping point. He outlines the big gap between revenue expectations, which can be implied from the AI infrastructure build-out at companies like Nvidia and from chip demand, and the actual revenue growth in the AI ecosystem, which is essentially a proxy for end-user value. He does the back-of-the-napkin math and gets to this $600 billion question. Basically, what they're observing is that OpenAI still accounts for the majority of AI revenue in the space.
I think OpenAI is now at about $3.4 billion in revenue, up from $1.6 billion last year, but they're pretty much the majority of all startup-scale revenue right now, and there's a massive hole where all this venture funding has gone. Goldman Sachs also did a recent report, which they called too much spend, too little benefit, a very similar analysis of the current state of generative AI and its economic impact. They lay out that, even just recently, we've had $12 billion of funding go into AI in Q1 2024 alone. So I think people do agree, and are aligned, that this is a revolutionary technology, but as the initial excitement fades there's increased skepticism about whether the economic and productivity gains everybody has been promising are truly there. The Wall Street Journal recently put out a really great article on the rollout of these copilots on the enterprise front, with all these CIOs saying that actually they're not helping us a lot, you have to hand-hold people through the process, and to begin with their data isn't even clean enough for these copilots to be useful. So I don't have an answer on this, but I do feel it's an important and meaningful question to raise, and we'll continue to see whether those capex costs move upstream into the products we eventually use. I think it's also a relevant question for the broader edtech space. People are constantly questioning whether any technology will truly be a revolution in the education ecosystem, and I truly don't think any one technology, including AI, can ever be a silver bullet in edtech or education. But it's an important question that continues to be raised for all of us.
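For listeners who want the back-of-the-napkin math behind the "$600 billion question": one common reconstruction of Cahn's arithmetic runs roughly as follows. The run-rate figure, the cost multiplier, and the margin assumption are illustrative assumptions from that style of analysis, not audited industry data; only the $3.4 billion OpenAI figure comes from the episode itself.

```python
# Rough reconstruction of the "$600B question" arithmetic.
# All figures are illustrative assumptions, not audited data.

nvidia_run_rate = 150e9                 # assumed annualized data-center chip revenue
datacenter_cost = nvidia_run_rate * 2   # GPUs taken as ~half of total data-center
                                        # cost (energy, buildings, networking, etc.)
required_ai_revenue = datacenter_cost * 2  # assumes ~50% gross margin for companies
                                           # selling AI products to end users

openai_revenue = 3.4e9                  # the revenue figure cited in the episode

print(f"implied end-user revenue needed: ${required_ai_revenue / 1e9:.0f}B")
print(f"gap vs. OpenAI's revenue:        ${(required_ai_revenue - openai_revenue) / 1e9:.1f}B")
```

The point of the exercise is the size of the gap: even doubling or halving any single assumption leaves implied end-user revenue hundreds of billions of dollars above what the ecosystem currently earns.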
Alexander Sarlin 36:34
I share your base-level positivity toward AI, as anybody who listens to this podcast will know, as does Ben. Whenever I feel like the tide is shifting or things get confusing in this AI space, which is pretty often, my go-to metaphor is looking back at what I consider the last world-changing technology, and especially a world-changing technology that started as a solution in search of a problem: the internet. The internet was a Defense Department initiative. It was about decentralizing information so you couldn't be attacked. But when people realized, hey, decentralized information, accessible everywhere, this hyperlinked world you can do anything on, there was initially this enormous investment in the '90s, and then an enormous bust in 2000. But was that the end of the internet? No. In fact, I think it was an overcorrection, based on people back then pointing at Pets.com and all those examples and saying, hey, maybe people aren't ready for online shopping, maybe the businesses aren't quite there for some of these specific things we've been investing in. Now fast-forward 25 years to 2024, and I don't think the most bullish person in the world in 2000 would have guessed that all the top companies, the FAANGs or whatever the new acronym is, would be internet companies. They took over oil, they took over hotels, they took over shipping, they took over everything. Nobody saw that coming. I look at some of this AI stuff through that metaphorical lens, and I say: yeah, there will be a correction, no question.
Some of these companies will be lapped by the foundation models, as you're saying; others will fail because they don't have product-market fit and are trying to solve something people don't yet need. But when we do figure out the interfaces and the accessibility and make it actually useful, when you get to the point where you don't have to hold people's hands, train them how to use it, and clean the data, all the things holding it back now, once it becomes just integrated the way the internet is now integrated into all our lives (we're recording this on the internet while using documents on the internet; it's just part of everything), when we start getting closer to that, I could still not be more bullish on AI changing the world. But I get it; I get why people are questioning it. There's been so much money dumped into it in every direction, and I do still think, frankly, that it's a solution, a whole bunch of different solutions, in search of problems. It's not that people say, "Oh, AI, great, now we can finally do X." There are hundreds of thousands of those options, but it's not obvious which one it's actually going to be, just as it wasn't obvious in 1997. It would be like saying, "Oh, the internet, now people can finally meet each other, because they're not going to be meeting at work anymore." Nobody knew that, but it's true now: the majority of people in the US meet their romantic partners on the internet, by far. Nobody saw that coming. So it's going to happen; we just don't have crystal balls, and we can't see exactly how it's going to play out.
Claire Zau 39:47
Oh no, please, just on your point: it's so early to say whether or not something works. The one thing I always think about is that we're still so early in the modality and usage of how this AI is delivered to us. When we saw 20-plus different AI edtech companies all tutoring kids through chatbots, the first thing I think is: okay, that's a natural first-instinct reaction, because that's how ChatGPT is formatted and that's what its UX looks like. But we're so early. We haven't yet integrated teaching with voice, tutoring with video, tutoring with multimodality, where an AI model can interpret the world the way a human would. Everything we're seeing is just the most primitive UX, "here's the AI chatbot," when the promise is five more iterations beyond that. And I think another silver lining is that this period while AI progresses could give us as a society, and even the education industry, more time to actually deal with important ethical concerns and develop the appropriate regulatory frameworks while everything catches up to the investment.
Alexander Sarlin 40:57
I love that, and I think that would be such a powerful use of the next five to ten years: really getting the education system ready, from a regulatory standpoint, from an innovation-readiness standpoint, from a "what is assessment" and "what are we actually trying to do with education anyway" standpoint. If we start really thinking about those things now, then by the time AI hits its stride, the sky's the limit. The last thing, and we're running low on time so I'll do this one quickly because it's interesting, connects to what you were saying before about the Anthropic-Menlo startup fund: it's very expensive to start AI companies in terms of getting the compute power, getting the GPUs. You reported in your newsletter that Andreessen Horowitz is doing something interesting, an initiative called Oxygen, where they're basically stockpiling GPUs, the chips, so they can support AI companies with compute as one part of their investment, if I'm understanding it correctly. Because it's so hard to get these top-level chips, they become a currency beyond even funding. Is that about right?
Claire Zau 42:03
Yeah, and it's just another value-add that venture investors can offer. I think they have about 20,000 GPUs, which is huge. They're calling the initiative Oxygen, a reference to GPUs being critical for companies to build. It follows the trend where smaller firms are having a harder time getting chips due to high demand and competition with the Microsofts and OpenAIs of the world. So Andreessen is leasing and leveraging this GPU stash to negotiate ownership stakes in the companies it supports. Generally, you're seeing that you can arm your investments and portfolio companies not only with advice but with powerful computing resources.
Alexander Sarlin 42:46
Yeah, a silly metaphor comes to mind when I hear this whole strategy: you know how in prisons, cigarettes are currency? Money doesn't get you too far in the AI world; GPUs are currency. If somebody can get you the compute power to take your company to the next level, actually launch your MVP and scale quickly, that's better than money. It's more useful to a company than money, and I think Andreessen is probably one of the first to see this.
Claire Zau 43:12
I might just add one little caveat, though: we're seeing a bit of the other trend too, that there's a right time to build and invest in that GPU-intensive work. A lot of startups spent so much money trying to add vision or audio to these models, and then, in one demo day, they see Google and OpenAI do exactly what they spent millions and billions of dollars trying to do. So there's some merit in the Apple strategy: they did not invest in building their own big foundation models; they have a lot of small models that run on the edge, and because of that they're able to sit back and let the big players do the heavy lifting of bringing these models to new capabilities. If you're a smaller startup, there's also a strategy where you sit out and wait for these models to hit specific capabilities, so that you don't have to invest in them yourself.
Alexander Sarlin 44:06
I think there's been, in many ways, a consensus in edtech, where the funding isn't that enormous compared to other parts of the tech world, that most people are much better served by using APIs, especially as the cost goes down, to access the different frontier models and pay per use, rather than trying to build their own stack or add complex AI infrastructure. That may not be true forever, but for now it definitely seems to hold. So this is definitely relevant to the AI space at large; for edtech maybe not quite as much, not quite yet, but it's a really interesting realization that GPUs are that powerful in the space. So let's come back down to Earth. We've been talking about all these really exciting big-picture AI trends, and I think your point about using this time in the education space to lay the groundwork and make sense of what we actually want to do with AI is really relevant to some of our top edtech stories this week. One: we continue to see fallout from LAUSD's AI adventure, the "Ed" product. They went out of their way to be a first mover and drew a lot of publicity to themselves, and now there's a bit of a mini feeding frenzy in the very specific edtech press, where people are starting to say: oh, yet another project, especially a project out of LAUSD, which has had some very visible public crashes in the past, where the promise and the delivery don't seem to match up that well. This is such a sensitive and strange story, and nobody really knows the heart of exactly what happened. But the thing I'm curious about your perspective on, Claire, is: do you think this is going to chill districts' desire to take big swings with AI?
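The pay-per-use economics behind that consensus can be sketched with a toy comparison. Every number below (the per-token price, the GPU rental cost, the usage volume) is a hypothetical placeholder for illustration, not a quote from any actual provider:

```python
# Toy comparison: frontier-model API pay-per-use vs. self-hosted GPUs.
# Every figure here is a hypothetical placeholder for illustration.

tokens_per_month = 50_000_000          # assumed monthly usage for a small edtech app
api_price_per_1m_tokens = 5.00         # hypothetical blended $/1M tokens
api_monthly_cost = tokens_per_month / 1_000_000 * api_price_per_1m_tokens

gpu_monthly_rental = 2_500.00          # hypothetical cost of one rented GPU server
gpus_needed = 2                        # assumed capacity for the same traffic
self_host_monthly_cost = gpu_monthly_rental * gpus_needed

print(f"API cost:       ${api_monthly_cost:,.0f}/month")
print(f"Self-host cost: ${self_host_monthly_cost:,.0f}/month")
# At low volumes the API wins; self-hosting only pays off once usage is large
# enough to keep the GPUs busy, which few edtech startups reach early on.
```

Under these made-up numbers the API is a fraction of the self-hosting cost, which is the shape of the argument: until usage is high and steady, renting intelligence per token beats owning the hardware.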
Claire Zau 45:49
I do think that, now that a little bit of the hype dust has settled, there's generally a more measured approach: not going really big and moving really fast to adopt new technology, and being much more thoughtful about the risks involved with any new technology, not just AI. I also think that, across the board, when speaking with district leaders and other players in the ecosystem, there seems to be more openness toward adopting tools that leverage AI for adult, teacher, or administrative use cases, rather than tools that are trying to completely revolutionize instruction, or tools that leverage student data. That's probably the biggest shift we've seen. In the same way that Andrej Karpathy is building a learner-facing AI tool, I think there's a really big opportunity there, and that's where a lot of the big visions are. But in terms of actual readiness to adopt, where we're seeing a little less hesitancy is with tools that amplify the adults in the room.
Alexander Sarlin 47:00
Yeah, I think that's a fair read. The data piece of this feels really integral, and I think we're all trying to figure out exactly where the data policies need to be so that we all feel we're doing the right thing for students. As somebody who followed the development of some of this Ed product, I strongly believe that everybody involved was doing the best they could to keep student data as private as possible; I know there's a whistleblower situation out there, but it's just not clear how to do that well at this point. I think that's one of the takeaways for me, and for the field: we need to be careful with student data, and we're not sure we know how to do that as well as we'd like to yet. So there may be a bit of a pullback, like you're saying, toward things that are more adult-facing AI. On the other side, in another slightly under-the-radar growth story in the edtech space, we've seen Khan Academy put out a whole bunch of things. Sal Khan has very specifically and purposely made himself the face of the AI tutoring concept and movement, trying to make it as safe and clean as possible. They've been working with Microsoft, including with one of Microsoft's smaller language model families, called Phi-3, to scale the tool and offer it for free to teachers all over the country, which speaks to your adult-facing point. But they're also starting to do state-level deals. This week New Hampshire announced a roughly $2 million contract between the New Hampshire Department of Education and Khan Academy that basically makes Khanmigo, their AI teaching assistant, free for teachers and students in grades five through twelve for the next year. In government land, a contract that size is not that enormous.
But at the same time, in the wake of the California story, this seems like a different and somewhat more low-key, but maybe just as exciting, approach to bringing AI tools into the classroom in a way that's safe and controlled. Khan has also been working with educator training programs, like a teaching certification program called iteach, which is going to use Khanmigo in its training. So they've been laying down all these relatively low-key government and partner contracts, clearly trying to build an infrastructure to make this work. What do you think of this Khan model of working with governments and partners to get AI into the classroom?
Claire Zau 49:27
Yeah, I do think there's a bit of the mentality that if you're not on the train right now, you're going to miss it, and I'm probably not characterizing that correctly, but basically that if you're not doing it right now, your students will fall behind. To a certain extent, I think that's what's driving a lot of districts to feel the need to be very AI-first. More importantly, I think it's important to look at these technologies and think through how they can 10x the educator workforce and open doors and access for students. It's exciting to see; we always want great edtech products to continue to expand their scope, and it's really cool to see the Microsoft Phi-3 work externalized in this form, because by using smaller models they can actually lower the cost of all the lesson-plan generations and tutoring sessions, so that every student can get access. If I'm correct, I believe the offering is freemium right now for New Hampshire. So it's really cool, generally, for this tool to be available to everyone, because it means everybody can experiment. Again, we're so early in the process that I don't think there's one right or wrong product; I'm more excited about seeing districts and statewide departments of education be more open to adopting these tools, because the only way you can learn and get better at using AI is not through a MOOC or a book, it's by experimenting and becoming familiar with its pitfalls. You have to do that to understand its limitations. That's what I'm most excited about: the adoption, so that we can be better critics of what we'd like to see from these systems.
Alexander Sarlin 51:23
I love that point. Part of what Khan and the Khan Academy team are really trying to do is enable exactly that type of experimentation you're mentioning. It's one thing to hear about AI, or to have taken a training course on it and be ready to try it; when people actually try it, when that barrier to entry is lowered and the tools and training are available, it's just a totally different world. One of the things that keeps happening in the edtech space is this drumbeat of surveys trying to figure out what percentage of educators and students are actually using AI, and the numbers often don't line up with each other; we've talked about this on the podcast in the past. But across the board, the numbers are starting to go up. There was one real outlier survey that said only, I think, 8% of teachers are using it regularly, but Quizlet just came out with its second annual survey, and it basically says that 82% of the college students they talked to say they've used AI, which is a lot, as have 58% of high school students, with teachers in the 60s. Given that the kind of AI we're talking about is still under two years old, having more than half of people trying it feels very promising. But again, these data come from different places, so you never quite know. Tyton Partners also just released a similar survey finding that 59% of students are regular users, meaning they use AI monthly or more often, so again more than half, and that about 40% of instructors and administrators in their report are regular users of generative AI tools.
They also found that a lot of people reported increased academic workloads as a result of AI, which is maybe counterintuitive, so we should dive into that sometime. But either way, the numbers are going up, and to your point, Claire, about experimentation, we're getting more and more into a world where most people have at least tried this and a good number are using it frequently. I think that's going to start to open doors again. Just like with the original internet, once people know what this technology can do and are comfortable logging on and trying it, it really starts to change people's conception of it and the speed at which it moves.
Claire Zau 53:45
You know, I always think of the analogy Andrew used to describe learning how to use AI: it's like riding a bike; you can only understand it better by doing more reps and playing with it. My biggest thing is not necessarily pushing or forcing anyone to use AI, but rather that I feel it's critical this technology is exposed to the education space, so that AI can be built with educators and students, as opposed to AI happening to this field. If we don't understand its limitations, if we don't even have a grasp of what it's good at and what it's not, it's hard to be part of the conversation and have a seat at the table. What ultimately happens then is big tech builds it for education, as opposed to educators, students, districts, and learners building alongside these big tech providers.
Alexander Sarlin 54:43
Very well put. I think everybody agrees, at least in theory, that AI should be designed with educators at the forefront, co-designed with educators. We did an AI conference last November, and almost every speaker said that if you're not designing with educators in the room, if you're not constantly getting feedback about what they actually use and need and want and are afraid of, then you're lost in this space. Everybody believes in it; whether they're actually doing it depends, like you're saying, on whether educators are actually involved, whether they're jumping in, whether they're making themselves available to these companies, and whether they're embracing it or feeling like it's just another complicated thing on their plate to worry about. We want educators to embrace it, because it will actually make the products and the entire ecosystem work a whole lot better. Speaking of the regulatory front we mentioned earlier, one thing that came out this week, and we're not going to cover it much here because it's brand new and we haven't broken it down much yet, even though we did talk to Jeremy Roschelle of Digital Promise, one of its lead authors, who is fantastic: the US Department of Education issued guidance specifically for AI edtech developers this week. Arguably, this could have been our lead story of the whole week, because it's obviously so relevant to the edtech industry and to AI. The only thing I'd say is that, so far, it's a bit more principles-based than specific. It's about safety and security, about providing evidence (we all care about evidence), about equity and bias, all things we definitely believe in, and I think these are great recommendations.
But I also think the actual rubber meeting the road is still to come. This is guidance about the kinds of things we should be thinking about as we do AI development, and everybody should absolutely read it; we'll put the link to the report in the show notes, and it was also in your newsletter this week, Claire. At the same time, there's still a journey to actually figuring out what these things mean. What does data privacy actually look like? What does earning trust in an AI system actually look like? It's easy to say; how do we do it? I don't know. Did other things jump out at you about this, or should we save it for a future date when we've digested it further?
Claire Zau 56:56
Yeah, I think it was fantastic as a broader push toward shaping the conversation around AI and education. We know so many people who put so much time into this document, and it's just great to see a proactive conversation between developers and educators about what responsible use of AI looks like. I do agree with you that right now most of the guidance is not regulatory, and honestly, I don't think we'll see that for a while. I don't think the government will be able to put into writing what it wants to regulate around AI and education; I don't know if there are even early views on what should be regulated or not. A lot of it also has to move downstream from what gets regulated at the foundation model level, or even, you know, California has another bill out. So a lot of it is still in flux, and it's hard to say what AI in education will look like from a regulatory or legal standpoint. But generally, I think it aligns so well with everything we've discussed, and with what's still top of mind for so many schools: bias, data privacy, building with equity and safety at the forefront.
Alexander Sarlin 58:03
For sure, and I don't mean to pooh-pooh any of those things; they're all incredibly important. In fact, I think they should be systematized as soon as possible. We need tools to measure what trust and safety look like in these models and in these edtech tools. So I'm a fan, and I love Jeremy Roschelle and all the authors of this paper and all the people quoted in it. I could go on and on about this. But I've worn both instructional design hats and product manager hats throughout my career, and they often clash with each other, which is sad but true. When I look at a paper like this, it comes strongly from the principled perspective: we need evidence, we need to follow teaching and learning principles, we need it to be transparent. That's all well and good and incredibly important. But then you have to make actual decisions about how to get something out into the market; you have to figure out what people actually need, and how to actually measure any of these things. I feel like until that other shoe drops, the whole story can't be told. But this is a very important one.
Claire Zau 58:59
I mean, to your point, and to what we discussed way back about evaluations and building models for the education space: think about how this guide translates into people building models or evaluations that test how well a model avoids giving away the answer, or avoids imposing too much cognitive load on a third grader. Hopefully we see that externally, that pipeline from this guide into actionable evaluation benchmarks for the education space.
Alexander Sarlin 59:30
Yes, amen. That is exactly what we need, such a good synthesis. What we should use this really structured and thoughtful advice for is to develop tools that allow edtech companies to move forward without every individual company having to worry about all of these principles on its own; they get baked into the system and we'll all be better off. I love that. And then the last news item, which is also pretty interesting: we talked to Steve Daly, the CEO of Instructure, a couple of weeks ago on the eve of InstructureCon, their big annual event where they talk to the myriad Instructure clients and customers and partners all over the country and world. Instructure is obviously one of the biggest LMS providers in the world, and certainly in the US; it has about a third of the market in both K-12 and higher ed. There are some really interesting, advanced rumors about an Instructure buyout by private equity in the wake of these AI features and InstructureCon. There's a lot of talk about this, and this is sort of your world. I'm curious what you make of some of this buzz.
Claire Zau 1:00:32
Yeah, definitely. So this is still very high level right now in terms of what we know, and not a lot of it is publicized. But it seems from the news that KKR is emerging as a front-runner for a buyout of Instructure, and it might value the company at approximately $4.7 billion including debt, so they're in discussion for an offer around $24 per share. Thoma Bravo, another PE group, already owns about 84% of Instructure, according to the data. So this is still developing, and we don't know what's going to happen. But it is interesting to see this broader trend of big edtech incumbents being taken private. It comes on the back of PowerSchool being acquired by Bain Capital just last month, I think for around $5.6 billion. So just generally, this privatization of big public edtech companies, and we'll see how that translates into the rest of the activity in the market.
Alexander Sarlin 1:01:34
Yeah, very well put. And yes, that's right, they already have a lot of private equity stake. One thing that I think is an interesting potential downstream effect of this kind of thing is that PowerSchool, Instructure, and Google Classroom collectively are about 70-plus percent of the LMS market in K-12. We also have D2L (Desire2Learn) and a couple of other big players in there. But one of the things that was interesting about the conversation with Steve Daly is that Instructure is increasingly cognizant, and I'm sure all the LMSs are cognizant, that they are really the homes where student data, academic data, behavioral data, and student information live. So in a world where AI tools increasingly want to be personalized, want to respond to a student's gaps in knowledge, or how well they did on that last assessment, or what unit they're in, or what their upcoming assignments are, all this context awareness of student data, both about the students themselves and about their curriculum, what they're studying, and the standards, the LMSs really are the repositories of that data. That's really relevant. It means that the entire AI space, at least as of right now, may sort of have to be funneled through learning management systems and student information systems. And Instructure is trying, at least according to their CEO, to make it easier for AI tools to plug into the Instructure system in a way that data can flow safely, to actually create the kind of infrastructure where data can go back and forth, where a tool can say, oh, I know this student is in eighth grade, they're doing a unit on quadratics, and they didn't do so great on their last quiz, so here's how I can help them. That's the future we all want, and the LMSs might actually be a big part of it.
Now, I don't actually know if AI is part of the reasoning behind places like KKR or Bain. But if I were looking for an edtech company to bet on, I think the LMSs are in pole position when it comes to how this AI revolution might unfold.
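As a rough illustration of the context flow described above, here is a minimal sketch in Python. All of the field names and the record shape are hypothetical, not drawn from Instructure's or any real LMS/SIS API; the point is only that an LMS-held student record can be turned into grounding context for a plugged-in AI tutor.

```python
# Hypothetical sketch: an AI tutor assembling a prompt preamble from data
# an LMS might broker. Field names are invented for illustration only.

def build_tutor_context(student_record: dict) -> str:
    """Turn LMS-held student data into a grounding preamble for an AI tutor."""
    lines = [
        f"Student grade level: {student_record['grade_level']}",
        f"Current unit: {student_record['current_unit']}",
        f"Last quiz score: {student_record['last_quiz_score']}%",
    ]
    # A simple, illustrative rule: flag likely remediation needs for the tutor.
    if student_record["last_quiz_score"] < 70:
        lines.append("Note: student likely needs remediation on this unit.")
    return "\n".join(lines)

# Example: the eighth grader working on quadratics from the discussion above.
record = {"grade_level": 8, "current_unit": "Quadratic equations", "last_quiz_score": 62}
print(build_tutor_context(record))
```

In practice the interesting part is not the formatting but the plumbing: the LMS is the one system positioned to hand a tool this record safely, with consent and permissions handled centrally.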
Claire Zau 1:03:32
Yeah, I mean, we always talk about how in this big AI value-creation moment, it's not so much about tech moats, because the tech is evolving on an hourly basis in this era, but about data and distribution moats, about where you can get them. And that's exactly where these big LMSs and SISs sit: they have all that student data. The one thing I always push on with early-stage startups and new entrants in the space is this tension. Say you want to introduce AI lesson plan generation or content creation. Think through the existing data, the goldmine of data that Instructure has, not only on how an educator or a faculty member has historically generated content, but on the students, so that when they do create content, it automatically plugs into the student data that sits within the system. It's about thinking through workflows versus capabilities, and where those data pipelines can flow much more naturally, so that you stay within the incumbent's workflow as opposed to jumping between different AI tools, including ChatGPT. So that's where I think they have a big opportunity to really build on top of their data and distribution.
Alexander Sarlin 1:04:55
Yeah. To make a clumsy metaphor, I keep going back to the internet: arguably these LMSs could serve as the web browsers, the connective tissue, the front door through which a student or an instructor accesses AI. It's not inevitable that they will become that, but they probably have a better case than many others to be the modern Chrome and Firefox rather than the AOL and Prodigy of this era. So I think this is a great time to segue to our amazing guests, who will also talk about this type of integration. We have Chris Hess, who's the Director of AI Product Management at Pearson; he has some really interesting thoughts about how Pearson has advantages from having this huge content library and so much data around it. And Mark Naufel from Axio, formerly known as Primer, which already integrates very smoothly with Google Classroom and Instructure. They're really interesting conversations, and I highly recommend staying on and listening to both of them. I know this has been a long episode, but we are so privileged to have Claire Zau of GSV, newly a Partner. You have a new title, don't you? Partner at GSV, so well deserved. She is sort of the master of all things AI in edtech, as you can tell from listening to this episode. Follow her newsletter and know her work, because she is always at the cutting edge of both AI in edtech and AI in general, which is pretty great. Thanks so much for being here with us, Claire. Really appreciate it. I'm sure our listeners are excited to keep following your work, if they aren't already, which I bet most of them are.
Claire Zau 1:06:32
Thank you so much, Alex. And as I mentioned at the beginning of this podcast, thank you to you and Ben and the entire Edtech Insiders community for the work that you all do. It's so exciting to see this energy in the space, especially in this AI moment, and I'm really appreciative of all the work you do here. As Alex mentioned, if any of you are building in the AI and education space, or just generally want to chat about trends, I'm always looking to talk with others interested in this very dynamic intersection. My email is claire@gsv.com if you want to reach out.
Alexander Sarlin 1:07:08
Amazing. Well, thank you so much for being here. Hopefully you'll get some positive and exciting emails from some of our amazing listeners. We're also starting a book club for edtech and AI with Edtech Insiders, so keep an eye on the newsletter. Our first read is going to be Sal Khan's terrific book about AI and education; we mentioned him and his work a little bit on this podcast. So keep an eye out for that if you're not already on the newsletter. Thank you, Claire, and let's get to our guests. I hope to have you on the pod again soon. Thanks for being here. For our deep dive in this Week in Edtech, we are talking to Chris Hess, the Director of AI Product Management at Pearson (ever heard of it?), the global edtech giant, and they just announced a really exciting set of AI enhancements for instructors. Chris, welcome to the pod.
Chris Hess 1:07:57
Thanks for having me, Alex. I'm really excited about this and getting the message out about all the great work that the teams have done on this.
Alexander Sarlin 1:08:02
Absolutely. So first off, let's get to the headline. Pearson is doing some really interesting AI work inside some of your biggest programs, in the MyLab and Mastering teaching and learning platforms, and in many different titles in Business, Math, Science, and Nursing. What is the product? What have you been doing with AI?
Chris Hess 1:08:22
Yeah, well, we've been doing a whole bunch of stuff, mostly up to this point focused on direct value to students, which trickles down to instructors for sure. But as a former instructor myself, with a wife who is still an instructor using Pearson products, we have not forgotten about those customers. It's a really difficult job. I remember back to when I was teaching, I always felt like I wasn't doing a good enough job in the class, at the research bench, or as a university citizen; at any one time I was slipping on something. So our goal was to think of the most onerous or challenging tasks an instructor has to do and figure out ways to lower the administrative burden, make them easier, so that you can spend more time on the things that matter most. Think about creating the best lecture, or looking at the analytics of how your students are doing and turning that into actionable insights. The courseware that we build is incredibly powerful and really helpful for students; I left a classroom of 75 students to help millions of students, so if we make this incrementally better, it's a huge value. One of the things that is really neat about it is that it has this really robust library of content that a professor in math or science or business can give to their students for additional practice outside of the classroom experience. Especially for things like math or chemistry or physics, you just have to work through these problems and get experience with them. So it's on the instructor to build these assignments and make sure they're tailored specifically to their schedule, their syllabus, and what they think is important. And a random math chapter might have 400 problems in it.
If you went through and wanted to preview every one of those to make sure each was right for your students, that would be a pretty tedious task. This is the first initiative on the instructor side, but think of a lot of different administrative tasks you could make a little bit easier, especially in complicated software. So what we've said is: let's put a natural language chatbot inside of our tool. Instead of having to click through and say, "I want to build a chapter three assignment for my college algebra class," all you do is type into a chat window: "I want a 30-minute assignment for chapter three, with a mix of easy and medium problems." Within a minute, this produces an editable version of that; you can still change whatever you want, but it does all the heavy lifting right away. Even during that minute, my suggestion would be: start the thing going and go do something else for your course. You can be multitasking, and I generally think we're not good at that, but this is an opportunity. Once you're done, you've got your assignment, and you move on to prepping that lecture or grading. At the end of the semester, I always had a guilty conscience about stacks of writing I hadn't given my students feedback on as fast as I wanted to; that is something we're hoping to facilitate. So: supercharging the instructor by taking away some of the things that have gotten in the way of their efficiency and productivity.
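The request Chris describes ("a 30-minute chapter three assignment, easy and medium problems") boils down to a constrained selection over a tagged question bank. Here is a deliberately naive sketch in Python; the field names, time estimates, and greedy packing strategy are all invented for illustration, and a real system would rank candidates rather than take them in order.

```python
# Illustrative sketch (not Pearson's implementation) of the assignment-builder
# idea: filter a question bank by chapter and difficulty, then pack problems
# greedily until a target time budget is filled.

def build_assignment(bank, chapter, difficulties, target_minutes):
    """Select questions matching chapter/difficulty until the time budget fills."""
    picked, total = [], 0
    for q in bank:
        if q["chapter"] == chapter and q["difficulty"] in difficulties:
            if total + q["est_minutes"] <= target_minutes:
                picked.append(q)
                total += q["est_minutes"]
    return picked, total

# A toy question bank; a real chapter might hold ~400 problems.
bank = [
    {"id": 1, "chapter": 3, "difficulty": "easy",   "est_minutes": 5},
    {"id": 2, "chapter": 3, "difficulty": "medium", "est_minutes": 10},
    {"id": 3, "chapter": 3, "difficulty": "hard",   "est_minutes": 15},
    {"id": 4, "chapter": 2, "difficulty": "easy",   "est_minutes": 5},
    {"id": 5, "chapter": 3, "difficulty": "medium", "est_minutes": 12},
]

questions, minutes = build_assignment(bank, chapter=3,
                                      difficulties={"easy", "medium"},
                                      target_minutes=30)
print([q["id"] for q in questions], minutes)  # → [1, 2, 5] 27
```

The chatbot layer's job is then mostly to parse the natural-language request into these structured parameters and to return the result as an editable draft.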
Alexander Sarlin 1:11:35
A Pearson Skills Outlook report recently said that, by 2026, these generative AI tools could save US educators collectively nearly 3 million hours a week, based on exactly the kinds of efficiencies you've just named: being able to generate assignments quickly, pull questions from the textbook, and make sense of the data coming out of a classroom. That is a lot of time, and hopefully instructors at the K-12 and, of course, higher ed level, which is mostly what you're referring to here, should feel very excited by that number. Three million hours a week collectively, and obviously that works out to a number of hours per instructor. What do you think the world will look like two years from now, when instructors have that much time back? They'll be able to enhance their current teaching, but as you say, they'll be able to do all sorts of other things they've never had time for. What do you envision them having time to do?
Chris Hess 1:12:32
I mean, really, whatever they want, right? There are all these visions of having a better work-life balance, and that's one aspect of it, if that's important to you. I love to exercise, I love to play music, and I never feel like I have enough time for that, or for my family, a lot of the time because I'm working. If we can make things a bit more efficient, you hear about four-day workweeks and things like that; these things should be possible if we were more efficient in our day-to-day. But many people will just be motivated to be better at their individual job. Could they handle more students? That's one option, but my feeling would be: be better for the students that you have. Have more time to individually devote to the personalization of the 30 students in your class, or the 400 students in your class, however big it is. Most of the time we don't have the opportunity to think about that, because we're just trying to get the next test written or the next lecture prepared, or you're working on your tenure document or your annual review as a professor. There are a lot of things. As I was telling you before we got on the actual recording, most professors are on the hook for teaching, they're often on the hook for some aspect of research, and they're expected to be good citizens of the department and university. I always felt like I was falling short on at least one of those things at any given time. So the hope is that maybe this makes that one bucket easier, so that you can be better at the other ones. Maybe you don't feel as bad about being on that committee the chair wants you on, or about being involved in a student group or something like that, which are wonderful things you could do.
The idea is not to do anything that replaces instructors; we're trying to superpower instructors, to scale the great teaching. My best interactions with students were in office hours. It didn't happen very often, but when they would come in with a misconception, and I would ask some targeted questions, figure out where their shortcomings were, and help them overcome them, the light bulb would go on. That's a magic moment. Could we now do more of that? Maybe we can, with this. That's the kind of idea.
Alexander Sarlin 1:14:52
Yeah, I think it's inevitable that we can do more of it. I'm really excited about this particular world. One aspect that's so interesting about what you're doing at Pearson: when gen AI first hit the market, I think the first thing people immediately thought was, hey, this can create all of this new content. That's really exciting; content has historically been a difficult thing to create in education and edtech, really good lessons, really good lesson plans, really good assessments. But it quickly became clear that it can also do incredible things with existing content libraries, and that becomes a real differentiator for a company like Pearson, which has 100 years of incredible vetted educational content at its fingertips. That becomes not only a training set; it becomes literally a set of data that can be accessed and synthesized in real time. So when you say, hey, pull out questions, I want easy and medium questions on this particular chemistry subject, there really is a major advantage that publishers like Pearson have in that world. I'd love to hear from you, as somebody who thinks about this from a strategy and product management perspective: what do you see as the advantages of having that long tail of content?
Chris Hess 1:16:07
I think you're 100% right. Whether we're training models or using these as RAG examples that we then use to create new content, it's 100% an advantage. There was an initial feeling that content would become commoditized and wouldn't be as powerful as it has been in the past. But the reality is that even if other people can create some great content with generative AI, it's really about thinking of the learning journey in a longitudinal way: how do you connect the individual pieces of content? And look, we have the most data on this sort of thing, so we can know what has been effective for user X in the past in scenario Y. That means we can either use this to supercharge our content production efficiency, because we know where our gaps are, or we can think about ways to make that learning journey even more efficient. To me, Pearson's biggest opportunity is this: a lot of people are going to build AI-generated tutors and things like that, and if you're in a one-off situation where you're asking about a chemistry question, that's fine. But the important thing is, as Alex completes his homework over the course of a semester, or a college career, how can we make the best recommendations for him as he moves forward? Let's make it really efficient. Because all you hear from students is, "I'm stressed out, I don't know what the professor wants." We have that context too, because we have all the assignments the professor has given, so we can make some really thoughtful recommendations on how students should proceed as they move toward an upcoming test or a final exam.
So I think having that content and that data history helps us make real-time recommendations, but also gives us really keen insight into what we should build going forward. Guesses are good, but everybody loves data. That's how we envision it. I think it will be good for Pearson in a very customer-facing way, but also in a cost-efficiency way, so that we can make sure we're building the right things, because it's data-informed.
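The "what worked for user X in scenario Y" idea is, at its simplest, outcome-weighted ranking of retrieved content. Below is a hedged sketch; the item IDs, the history table, and the smoothing choice are all invented for illustration and are not Pearson's actual method.

```python
# Sketch: rank candidate content items by how often students who practiced
# them went on to pass the related assessment. Purely illustrative data.

def rank_by_effectiveness(candidates, history):
    """Sort content items by observed downstream pass rate among past users."""
    def effectiveness(item):
        stats = history.get(item["id"], {"used": 0, "passed": 0})
        # Laplace smoothing so never-assigned items aren't ranked at zero.
        return (stats["passed"] + 1) / (stats["used"] + 2)
    return sorted(candidates, key=effectiveness, reverse=True)

candidates = [{"id": "q-stoich-12"}, {"id": "q-stoich-40"}, {"id": "q-stoich-07"}]
history = {
    "q-stoich-12": {"used": 200, "passed": 150},  # ~75% downstream pass rate
    "q-stoich-40": {"used": 50,  "passed": 45},   # ~90%
    # q-stoich-07 has never been assigned
}
print([c["id"] for c in rank_by_effectiveness(candidates, history)])
```

This is the structural advantage a large publisher has over a generic chatbot: the retrieval step can be weighted by decades of observed learning outcomes rather than by text similarity alone.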
Alexander Sarlin 1:18:30
Yes. Like you're saying, in an individual session you might ask an AI bot, "Quiz me about molality, something I'm trying to remember from chemistry," and it'll say, "Sure, here are a couple of questions about molality." It can make them in real time; how exciting. At the same time, if you have the chemistry textbook, you have 500 questions about molality, and you know exactly what percentage of students got each one right in the past, what its difficulty is, and what concepts it's tagged to. That's a whole other level of knowledge that you can use in an AI context, right? So it's really exciting to think about it that way. I remember being in school where sometimes they'd give assignments saying, you know, do all the odd problems at the end of the chapter. And I'd go, oh, interesting, all the odd problems must have something in common. After a while I realized, wait a second, this is just a way for the teacher to give a half-as-long assignment without having to think about it at all. Those things shouldn't happen in this age of generative AI, right?
Chris Hess 1:19:30
No. I mean, students should get what they need most, right? Go back to your molality example: we also should be able to know the prerequisites for solving that. Generally it's math, or these general chemistry fundamentals. So we should know your math proficiency, and if you need some help on that, we should be able to make some very specific recommendations. We're moving toward that. Right now, the tutors that exist inside of our courseware, inside of our eText, and in this instructor tool are things that I think are really exciting and are going to add a lot of value and efficiency to the experience. But I really think that in the ideal world, personalized recommendations for students, plus the analytics going back to the instructors to make things even more personalized, have this really cool opportunity to be an efficiency flywheel that makes everything better for learners and instructors alike. So I'm really excited about the opportunities here. The goal is to make teachers better and give students a better use of their time, so they can succeed across their curriculum and maybe have some fun in college too, because it's a fun place.
Alexander Sarlin 1:20:39
It is. You know, efficiency is one side of the coin, and as you say, learner success and impact is the other. One thing that's also really interesting about your approach to product management is that it's learning science informed: the idea that being able to create lessons, assignments, or assessments, whether formative or summative, on the fly, pulling things together from this huge library where you know all the data, could be really, really effective. We know from learning science that assessment is actually one of the best ways to learn. Tell us about how you use learning science in your product design.
Chris Hess 1:21:14
So at this point it's still pretty nascent, but I can give you some of the vision. The book that I've given away the most is Make It Stick; it's a wonderful book with so many great recommendations on how to supercharge learning. So, we talked about the assignment creation piece, "I want a 30-minute assignment." Now imagine the data is informing recommendations; everybody gets these from Amazon, right? And we have the best library for this. So think about spaced repetition. We built this chapter three assignment, and you're getting ready to make chapter four. In the future, the hope would be that we recommend a few of the things from chapter three that students struggled with the most, to give them an opportunity for some repeated practice. This is the core of Pearson's DNA; we think about this a lot, and it's wonderful when instructors have happened upon it themselves in their assignments. But this feels like the most real opportunity to embed those suggestions in a recommendation that's easily actionable by an instructor. In the past, if a sales rep came in and said, "Hey, try spaced repetition," that just didn't hit the mark. But if our bot says, "Here's a recommendation, based on the value of spaced repetition described in Make It Stick," that would have hit the mark with me back in the day when I was in a classroom, and I suspect it will hit the mark with customers. It's really the vision we've always had of how this would work. We do struggle sometimes with the instructor having complete autonomy in how to build these assignments; most of the time they do a great job, but sometimes they don't pick the best content.
So this is an opportunity for us to use our data and our understanding of learning science to make some really specific recommendations up front. Thanks for bringing it up; it's one of the things I'm most excited about. We're not going to be heavy-handed about this with instructors, because it's their course. But if we can make some thoughtful suggestions that drive that efficiency, everybody wins for sure.
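A tiny sketch of that spaced-repetition recommendation: when assembling the chapter four assignment, surface the chapter three items the class scored lowest on. The item IDs and success rates below are made up for illustration, and a real system would presumably work per student rather than class-wide.

```python
# Sketch: pick the prior-chapter items with the lowest class success rate
# to mix into the next chapter's assignment as spaced review.

def spaced_review_picks(prior_chapter_stats, k=3):
    """Return the k prior-chapter item IDs with the lowest class success rate."""
    ranked = sorted(prior_chapter_stats.items(), key=lambda kv: kv[1])
    return [item_id for item_id, _ in ranked[:k]]

# Class-wide fraction of students answering each chapter 3 item correctly.
chapter3_success = {"3.1": 0.92, "3.4": 0.41, "3.7": 0.55, "3.9": 0.63, "3.12": 0.88}
print(spaced_review_picks(chapter3_success, k=2))  # → ['3.4', '3.7']
```

The recommendation then stays advisory: the instructor sees the suggested review items alongside the generated draft and keeps final say, which matches the "not heavy-handed" framing above.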
Alexander Sarlin 1:23:16
Spaced repetition, fighting the forgetting curve, and interleaving, right? Yes.
Chris Hess 1:23:21
Interleaving, that's the other one. Yeah, everybody knows it's the right thing to do, but very few assignments actually adhere to it when we look at the data. So those are the kinds of things I think it's really exciting to think about.
Alexander Sarlin 1:23:33
Absolutely. And even in context, it could be: we just did chapter four, so pull three questions from chapter three that are actually relevant to chapter four, so it doesn't feel like pure review; it feels like you're actually building a corpus of knowledge. There's so much opportunity there; I'm completely with you. And so much of the learning science we know from the studies is so hard to apply in real time, when professors especially have so many competing demands on their time. The idea of going really deep into your teaching, which is just one of your many responsibilities, and incorporating all this research while students are waiting for their next assignment, is a lot to ask, and it feels like AI can really bridge that gap. It's a very exciting vision. Last question for you. You mentioned this in passing, but I think it's so interesting: you are a former biology professor, somebody who has been in academia as an actual faculty member, juggling all of those competing interests in real time. And now you are Director of AI Product Management, doing all this strategy at Pearson. I'm curious what you take from that past role that helps you bring empathy to, and inform, the tools that you make for both faculty and students at Pearson.
Chris Hess 1:24:42
Yeah, it's a great question. That experience serves me well every day. I've been out of the classroom for about 10 years now, but my wife is still a professor, so I hear about it directly from her. Mainly it allows me to have really good conversations with instructors, because I can relate to them. I started off at Pearson working in chemistry, even though I was a biology professor, and there's a bit of a rivalry between chemists and biologists. But the reality is that it was a really good fit, because I could have great conversations and I understood their challenges, while not being seduced by my own opinions about product development, because I understood they're the experts in this field. So there's this mix of understanding and being kind of obsessed with validating everything we do with the user. I know a lot about teaching, but I also know enough to push back when somebody says, "Well, we know what customers want, they've been telling us the same thing for 10 years." I said: don't you think the world has changed in the last few years? Generative AI, COVID, remote learning; all these things are different. So I always tell my team: even though you're very smart and you think you have it figured out for customers, it's not your job to have an opinion; it's your job to understand the opinions of the students and instructors who use these products. They will give you nuggets all the time. So build that network of people you can trust, and make sure that everything we do, and everything we've done here, Alex, gets put in front of instructors and students over and over again, because we really want to make sure we're hitting the mark.
The bottom line is: I taught 75 kids a semester; millions of students use these products. Now I want to help those instructors be the best they can be, because I know how hard it was when I was doing it. It was not easy, and I see it every day with my better half. It really is a cool opportunity to evolve education, because it is a really transformative time. I'm a big believer that every challenge is an opportunity, and this feels like one of the biggest ones we've had in maybe our lifetime.
Alexander Sarlin 1:26:49
That's a fantastic note to end on. I think that is a beautiful product vision: really stay close to the needs. We're not the experts in edtech; it's the users, the customers, the professors, the learners, who are the real experts. Thank you so much for being here with us. This is Chris Hess, Director of AI Product Management at Pearson, just launching some really exciting new instructor tools in their humongously popular MyLab and Mastering solutions, and in 25 titles in Business, Math, Science, and Nursing, coming this fall. Thanks so much for being here with us on Edtech Insiders.
Chris Hess 1:27:26
It was a blast!
Alexander Sarlin 1:27:28
For our deep dive today, we are talking to Mark Naufel, the CEO and founder of Axio, an AI-based education tool that's doing really, really interesting work and has just rebranded. Welcome to the podcast, Mark Naufel of Axio.
Mark Naufel 1:27:46
Alex, thanks so much for having me.
Alexander Sarlin 1:27:48
Yeah, so the big news from you guys this week: you had a company called Primer, named after the Neal Stephenson Diamond Age book, and you've just rebranded as part of your big launch as Axio. Can you tell us a little bit about what you're doing with Axio, and sort of the origins of what this tool is all about?
Mark Naufel 1:28:09
Yeah, I mean, I'll explain the origins, the relationship to the original name, and some of the rationale behind the name change. You know, Primer was the name of the original company because it was so much a part of our founding history. For the last seven or so years, I was running a skunkworks innovation lab at Arizona State University, with a very close relationship with President Michael Crow there.
And the skunkworks lab was a student-driven lab from end to end, and the idea was that we pursue moonshot projects that could change the world. And this idea of Primer, now Axio, was the first moonshot project we ever pursued, back in 2017. It was really aligned with the idea of Neal Stephenson's illustrated primer in that science fiction novel.
And so for those who aren't familiar, it's this idea of a device that gets to know the individual and can teach them what they need to know, when they need to know it, in a way that's tailored exactly to them. And as early as 2017, we've been trying to figure out how to bring that type of concept to life.
And we always felt that AI was going to be at the core of that. And so, you know, fast-forwarding to today, Axio as a company is really an AI learning companion that we deliver to every learner. Sometimes that learner is the teacher; sometimes that learner is the student in the classroom. But what's unique about our system is that the AI companion gets to know the individual over time: their interests, their passions, their career aspirations.
And it persists all this knowledge in order to personalize every future interaction, particularly around their learning. It's something we launched about a year ago. We've been piloting throughout the year, establishing partnerships. And the name change really came at a time when, you know, SEO was going to be hard around Primer.
There are a lot of Primer companies, and, you know, Axio was actually the original project name when we launched it in the student lab. And we just liked it. It was short, sweet. We had bought the domain axio.ai back in, like, 2017. And we just did a few acquisitions that we're about to announce. And we just felt there was no better time to rebrand, especially given we're about to go into
commercialization of the product, and we're really excited for this new phase.
Alex Sarlin 01:30:21
So, the edtech aficionados that listen to this podcast, I'm sure their ears perked up when you mentioned that you were a skunkworks out of Arizona State in close collaboration with President Michael Crow. Michael Crow has been a legendary edtech pioneer for decades, since his time at Columbia, and his work at ASU has taken ASU to, you know, global recognition as a pioneer in edtech.
So the idea of an ASU project, you know, incubated within ASU and meant to sort of change the world, is very exciting. Can you tell us a little bit more about that collaboration and how it came about?
Mark Naufel 01:30:56
Yeah. I mean, my relationship with President Crow, like you said, I've looked up to him
since as early as middle school. I think that's around the time he actually came to Arizona and took the job with Arizona State University. And he's been there over 20 years now.
And my dad used to take me to Founders' Day at ASU, at the Motorola table, back when they were, like, the number one employer in Arizona.
And it was really important to my dad that I watch President Crow give his annual talk and just see the vision he was laying out, the consistency of it. And so from a very early age, I really looked up to what he was saying, what he was trying to do in higher ed in particular. And that was a big reason I went to Arizona State University.
And in my junior year, I became student body president and got to know President Crow through that role. Later, I got appointed to the governing board for the three public universities in Arizona as the student regent, so I did a two-year term. And for me, I mean, I actually was a really big champion for a more relevant, bespoke education.
You know, I'm someone who built my first website in third grade, for my third grade teacher, and I'd been building things ever since. And, you know, I think across all of higher ed, the curriculum's not as relevant as it could be to prepare you for industry. I think a lot of people recognize that, and I say that very constructively, and I always have. And I feel like ASU has been one of those entities at the forefront of maintaining relevancy, and a large part of that is staying ahead on technology. And so that was kind of the core of the relationship. And, you know, I think there was this alignment between President Crow and me since day one: that an infinitely scalable, world-class-quality education tailored to every individual was going to be available. And this isn't something we jumped into, you know, a year or two ago when ChatGPT came out. This is something that started in 2017, really as early as 2016. As a university, as a research lab, we were watching those early papers. I would say probably the first one we started tracking was the paper on generative adversarial networks,
something called GANs. And if your audience is familiar, I mean, that paper really set the foundation for what would become, you know, the generative AI of today. It was really important to us as students and researchers in the lab to continue to track that and figure out: where is the world going to be 10 years from now,
and what does this technology mean? To be honest, it came a little sooner than we had thought. But it's interesting: if your viewers go look up "Axio platform for life" on the internet, you'll find a video from 2018. It's a video of the students in the lab at Arizona State University, and it shows the same design as our product today.
It looks very much like ChatGPT plus a world-class, you know, open-ended learning platform driven by this AI companion. And I love looking back at that video, because a lot of people told us, you know, you're kind of crazy, that'll never be the case, there's no way you'll be able to achieve that. And it was great for me, because I told my students that if people don't think we're crazy, we're not pursuing the right project; it's not a moonshot. And fast forward, you know, seven or so years, now everyone's saying this idea is inevitable, and we're on the forefront of it because of that early start.
Alex Sarlin 01:34:08
It's really interesting. You know, there's only a handful of folks who were really doing things that were very much in this generative AI lane before generative AI sort of broke through and took over.
And I feel like they're looking around now and saying exactly what you said: people thought this was impossible, and now they think it's obvious, it's inevitable. That's the classic trajectory of technology they always talk about, right? First they laugh at you, and then they say it was always going to happen. One of the ideas you're championing here that I think is really interesting, and still actually feels a little bit new, even in this era where everybody's speedily trying to make sense of things, is this idea of an AI companion. And not just an AI companion, but a continuous, lifelong, or, you know, multi-year AI companion that can help learners excel and really get to know them very well, which is part of the vision of the primer in the book, but also a big part of the vision of Axio.
Tell us about that sort of concept of a companion and how you envision it playing out over the next few years as generative AI becomes more and more built into our everyday lives.
Mark Naufel 01:35:16
Yeah, you know, I would say something we're doing a little differently than the rest of the market is that we've focused from day one on providing a world-class tool for the learner, and not so much the educator.
And I say that as an educator myself. As you guys know, I'm a professor at ASU, and I love educators. But, you know, when we started the project, my lab was students. A lot of them were just coming out of K-12, freshmen and sophomores, and they were looking back at the K-12 system they had just gone through and trying to figure out what technology they would have loved to use, both then and throughout their collegiate studies.
And this idea of an AI companion was very compelling. I mean, everyone is so unique; no two learners are the same. I think the challenge statement we're trying to tackle is the one-size-fits-all approach to education that exists from K-12 well into post-secondary. And, you know, everyone felt that if they had this companion they could confide in and use for day-to-day utility, it would get to know them.
You know, there are a lot of things people don't necessarily want to confide in others about. Myself, I had my own struggles with anxiety in high school, even though I was always a very social person. And I never wanted to talk to my parents about it, because, you know, they would stress more than I would.
It's funny: nowadays I'll ask Axio about those things, and it knows how to give me the support I had to learn on my own going through those experiences. And so one of the important parts is that we do lean into it being a day-to-day tool that students can use across everything: their mental health, their productivity, their learning.
And, you know, a lot of people say, oh, why are you doing that? That's not a standard edtech tool. But the reality is we don't want to be a standard edtech tool. We want to be the tool that students actually want to use, and this was the tool that they designed. And so I think how it plays out over the years is in one of our provisional patents that's now getting converted: this idea of a self-adapting knowledge structure of learning scope and sequence.
And all that means is that we can actually quantify what an individual is learning, both through our integration with LMS systems and through self-directed learning. So we actually take a quantified approach to assessing the learner and what they're learning in a very granular way. And so when we go to tutor them, not only do we tutor them based on what we know about them, because, you know, they're in seventh grade, they're in high school, that's a baseline, but we're also able to tutor them to their current level of competence and expertise in that area. And we do that through a really sophisticated graph database that synergizes with LLMs. We were one of those folks who really found out early that you can combine graph databases, which are grounded in semantic language and logic, with LLMs to get a really powerful, efficient tool.
And so that's really the bread and butter of what we do behind the scenes.
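[Editor's note: for readers curious how a competency graph might pair with an LLM in practice, here is a minimal, purely illustrative sketch. The skill names, graph shape, and prompt wording are invented for the example; Axio's actual graph database and models are not public, so this is an assumption-laden sketch of the general technique, not their implementation.]

```python
# Illustrative sketch: a tiny prerequisite graph of skills plus a learner's
# mastery record, used to pick what to tutor next and to compose a
# personalized LLM prompt. All names here are hypothetical.

# Each skill maps to the list of skills that must be mastered first.
PREREQS = {
    "fractions": [],
    "ratios": ["fractions"],
    "linear_equations": ["fractions"],
    "proportional_reasoning": ["ratios", "linear_equations"],
}

def frontier_skills(mastered):
    """Return skills not yet mastered whose prerequisites are all mastered."""
    done = set(mastered)
    return sorted(
        skill
        for skill, prereqs in PREREQS.items()
        if skill not in done and all(p in done for p in prereqs)
    )

def tutoring_prompt(learner_name, interests, mastered):
    """Compose an LLM prompt grounded in the learner's position in the graph."""
    targets = frontier_skills(mastered)
    return (
        f"You are a tutor for {learner_name}, who is interested in "
        f"{', '.join(interests)}. They have mastered: "
        f"{', '.join(sorted(mastered))}. Teach the next skills "
        f"({', '.join(targets)}), drawing examples from their interests."
    )

if __name__ == "__main__":
    # With only fractions mastered, both dependents become teachable.
    print(frontier_skills(["fractions"]))
    print(tutoring_prompt("Sam", ["basketball"], ["fractions"]))
```

A real system would persist the graph in a graph database and update mastery from LMS events and assessments; the frontier computation shown here is the core idea of tutoring "to their current level of competence."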
Alex Sarlin 01:38:03
One of the things that's so interesting about LLMs, and you're mentioning it here in a really exciting way, is that they do things, as you say, behind the scenes. They can make sense of data. They can make sense of very complex graphs.
They can make sense of huge data corpuses, like literature, or the history of, you know, anything, or academic research. So they can do things on the backend, and then they can also, you know, act like a human and speak like a companion or a tutor or a coach or a friend.
And I think what's interesting about what you're doing with Axio is that you're taking advantage of both of those capabilities. You're saying, hey, Axio can go deep and basically model out where an individual student is in their learning, make sense of what they're doing, what their learning preferences are, what they've struggled with, how it connects to their mental health or other aspects of their life outside of academics.
But then it can also talk to you and listen to you and go back and forth. Tell us about that two-sided approach. It's a really interesting way to think about LLMs, and it seems like it comes partially from your academic background, knowing LLMs in that way.
Mark Naufel 01:39:08
Well, I think you're exactly right. I mean, this extraordinary ability to triangulate all this data at once, to take the best content, to take the guardrails of a university or a K-12 school, and then to take the data of a student and bring it all together in real time to personalize each experience, is an extremely powerful thing.
And the cornerstone of all of that is student success, right? When we're doing our pilots in post-secondary education, it really comes down to: can we increase retention? Can we increase engagement? That is really what our pilots are focused on. Obviously you have that in K-12 as well, but in post-secondary there's such a big focus on retention.
And that's so core to what matters to the student. You know, they're paying a lot of money to be there, and K-20 education is a hard system to navigate. I always tell people, I mean, I was so lucky, and I think many of us are, to have just phenomenal parents. You know, I had a mom who was so involved in my schooling, that immigrant Lebanese mother, almost, you know, that helicopter parent, which I probably hated growing up but look back on and love. And that's almost, in some ways, what we're trying to replicate.
We're not trying to replace teachers. We love teachers, and there's such an important role for the teacher. It's really that day-to-day support when a student comes home: do they have that mentor and companion and advocate who knows what's happening in the classroom, knows what they're struggling with, knows what they're aiming for,
and can really help triangulate everything that's going on and give them the next best step to achieve their desires in life? And for us, those desires really map to a career plan. And so one of the cool things that Axio does, and it's pretty new for us, is we actually map students to a future career path.
And we do that at a very young age. So the AI actually makes these suggestions. You can always change it; it's always exploratory. What's fun is, once you pick a career path, the AI will actually build a scope and sequence of what skills you need to learn, and when you need to learn them, to have success in that career.
That's been somewhat of a provocative thing for some schools, you know, since we're selling directly to schools. We had a university that we're piloting with question it a little, right? They said, hey, can you actually remove that feature? And I asked why, and they said, well, we agree with the sequencing and all the things it's saying.
The problem is we don't have all the courses here that could teach those skills, and that makes us nervous. And I said, well, that's the problem with post-secondary right now, right? That is the challenge: we're not properly preparing students for the workforce, and quite frankly, everyone knows it.
These families know it, and these students know it. And the best thing you can do is offer a tool that's going to identify these gaps, and a scalable, 24/7 mentor and teacher that can help fill those gaps where you might not have curriculum or coursework. And they said, you know, you're right.
And we're seeing that as a huge value proposition for what we're offering. So, you know, we're hoping we can be a tool that actually helps the teachers, the learners, and the administrators at a university, and kind of be that first-of-its-kind enterprise product. And at the same time, we're LLM agnostic.
You know, that's not what makes us us, right? So I would say we have a novel architecture, but we can use any LLM. We're mostly aligned with GPT-4o now. But the beauty is, as OpenAI goes to market in education, we're a product layer that can sit on top of that. So it's very exciting for us. We know this future is coming in higher ed and in K-12, and we're hoping to be a part of it.
And I think, for us, we want to be champions of learners, and that means teachers as learners and students as learners. It's a very exciting place to be.
Alex Sarlin 01:42:39
It is. A couple more quick questions for you. One is very logistical and down-to-earth; the other is very speculative. So let's start with logistical.
You've mentioned in passing a couple of times that you're LLM agnostic. You are also LMS agnostic, or I should say you integrate with multiple learning management systems by design: you integrate with Canvas by Instructure, you integrate with Google Classroom. And I imagine, you know, over time you'll continue that type of integration.
That's really interesting. And I'm sure some of the listeners would say, oh, that's a really interesting approach, because by connecting to LMSs, you immediately get access to the coursework, you get access to the student's schedule, you get access to the student's upcoming homework. You know, the student doesn't have to tell Axio, I have an assignment coming up this week.
It absolutely knows, and it can prepare for that. Not that the career path stuff isn't incredible as well, maybe even more incredible, but I love the fact that it integrates with the student's academic schedule. Tell us about the integrations. And for context, you know, part of why I'm asking is that InstructureCon just happened last week.
We got a chance to interview Steve Daly, the CEO of Instructure, and he, from his perspective, was describing the flip side of this: Instructure is incredibly excited about having all this student data that it can, you know, help AI tools integrate with. So there's a really interesting systems approach starting to emerge here.
I'd love to hear how you think about it.
Mark Naufel 01:44:00
Well, we love all these partners. I mean, since day one, one of the most important tenets of the company is that we would be agnostic to these technologies. I would say the secret there, you know, our secret long-term mission that we tend not to say too often, because we know we have to stay focused as an early-stage startup right now, is that we're establishing what will honestly be a personal learning management system for the individual. And if we do what we're doing right, it will be the first time that a student can really own their own system and platform, and start using it in K-12.
It's going to start to learn the individual at a very young age, their passions, their interests, and do that in a way where their data is vaulted and owned by them, specific to them, not by any of these providers. And that might be Google Classroom we're integrating with when they're in K-12, pulling in their curriculum and engaging with them.
But when they go to ASU, or to whatever college they're going to, and it's now Canvas, let's say, they bring their account with them. They bring their lifelong learning platform with them, and they connect to the new LMS, they connect to the school, they get the new guardrails of the institution. And we imagine it will follow them into the workforce.
And we actually are selling Axio into industries for continuous learning. So we're starting to build this foundation that, in the long term, we really foresee as a very powerful, lifelong, personal adaptive learning system. And you can't do that if you're exclusive to one technology. On the LLM side, these LLMs are going to compete and get better.
And every day we see something new. You're going to hear about that from Claire, right? I love her newsletter; it's like something every day. And we want to keep adapting with those great technologies, but by being agnostic, we really do create this ecosystem that can follow a student through life, without our technology having to be the provider of the LMS to all these schools, because they've already chosen and it's hard to change, right? And so now is the time for a new platform. And, you know, a lot of great platforms and technologies have been based on graph technology, right? From Google's search engine to social media.
We're bringing the power of a graph into a learning environment. We're using it to empower students, and we're supercharging it with generative AI. We think we're doing one of the most relevant things in the space right now, and we're really excited to see how it plays out.
Alex Sarlin 01:46:20
It is really exciting. I love that concept of a personal learning management system that can sort of traverse multiple phases of a learner's educational life, and traverse both their academic life and, you know, potentially their personal life or their mental health.
Mark Naufel 01:46:35
That's been the magnum opus of educators for, you know, 100 years now. And finally we have the technology to pull it off. Not to say that it'll happen; you know, we've had technology for a long time. But I think this is finally the time we can see that vision through.
A hundred percent.
Alex Sarlin 01:46:50
So here's my speculative question just to close us out.
'Cause I know it's such an interesting conversation. Everybody should be looking up axio.ai and seeing some of these amazing features: the tailored learning, the career path thing, the integrations. There's a lot of really great stuff there. So here's my question. I've been thinking a little bit recently about AI from a sort of 10,000-foot view, and one of the things that seems like a given with generative AI, because it's so good at simulating human communication, at listening and talking like a person,
is that it feels obvious to many of us that it will end up doing that. But I look back at the early internet days and, you know, remember Ask Jeeves. There was at one point sort of a paradigm of, oh, what people will want from the internet is a person on the other side, something that looks and acts like a person.
And eventually that got superseded by the likes of Google and YouTube and Amazon, and none of those act like a person at all. And I'm curious about your thoughts, as a professor, as somebody who thinks deeply about this space: is there a future in which there's actually a bit of a push and pull between personified AI, something that looks and acts like a human being, versus, you know, very agnostic, sort of flat, you know, material design, like, hey, this just feels like an interface?
It doesn't even try to act like a person. I'm curious how you think about that.
Mark Naufel 01:48:11
Yeah, and I think there will be a push and pull there. You know, I think a lot of people just want the utility of it, right? They want information when they need it. Cut the cuteness and just get me the information. You know, Perplexity AI is doing a phenomenal job at that.
And we're starting to add a lot of those features you see on Perplexity. You'll start to see them integrated in Axio around learning. You're still going to have the chat, like they do, but you're going to get those resources in the sidebar, right? It's what you need. You know, I think the best part about generative AI technology, which you couldn't do with Ask Jeeves and which is available on Axio, is that it gets to know you. It gets to know your personality type.
It gets to know your preferences. If you want to be matter-of-fact and just get information, and you tell your Axio companion that, it will do that, right? It adapts to you. Mine's very quirky, probably a little too quirky, but that's the fun part: depending on who you are, where you live, what your culture is, what you like, it's going to adapt to all of that.
And so here's the thing: before, you always had to design deterministically, right? You could only do a one-size-fits-all approach from a design point of view. This is the first time, with generative AI, that we can actually launch a tool that then adapts to the user. And at the end of the day, what it's trying to optimize is the best possible experience for you as a learner.
And I think that's the cool opportunity. So I think you're right: it is a push and pull, and no two accounts will be the same. And that's the coolest thing about Axio.
Alex Sarlin 01:49:36
Yeah, yeah, it's a really interesting metaphor. You have this Neal Stephenson Diamond Age metaphor here, and one that always comes up for me is Philip Pullman's His Dark Materials books, where every person has, I think they call it a daemon, a little animal spirit that lives with them.
And like you're saying, some people will want that kind of, you know, parrot on their shoulder that talks to them all the time, and others will want something that feels and looks incredibly, you know, straightforward and doesn't have a character at all. Just the information, you know, just the facts, please.
And yeah, it's non-deterministic, because the AI can actually react in real time. So interesting. I'm glad we got to both some of the, you know, nuts-and-bolts logistics of this and the speculative future. Mark Naufel, really interesting work that you're doing. So this is axio.ai, formerly known as Primer, and it is a companion that works across different elements of education.
It goes, you know, K-12 and post-secondary and workforce. There's career pathing, there's LMS integration, and it's based on deep graph technology coming out of ASU. Really, really amazing combination of factors there. Thank you so much for being here with us on this special extra-long AI episode of Week in EdTech from EdTech Insiders.
Alex, thanks so much. Thanks for listening to this episode of Edtech Insiders. If you like the podcast, remember to rate it and share it with others in the Edtech community. For those who want even more Edtech Insiders, subscribe to the free Edtech Insiders newsletter on Substack.