
Edtech Insiders
Week in EdTech 7/23/25: Quizlet AI Usage Hits 85%, AI Outsmarts Math Olympiad, Roblox’s Learning Hub, Pearson’s AI/XR Lab, Federal AI Funding Priorities, and More! Feat. Brad Carson of Americans for Responsible Innovation & Ryan Trattner of StudyFetch
Join hosts Alex Sarlin and Ben Kornell as they break down a pivotal week in EdTech, from AI breakthroughs to Roblox’s education push and the future of personalized learning.
✨ Episode Highlights:
[00:00:33] Quizlet report shows 85% of students and nearly 90% of teachers using AI, with different adoption patterns.
[00:02:24] International Math Olympiad highlights AI’s reasoning advances, earning a gold medal and raising assessment questions.
[00:11:16] OpenAI agents and AI-native browsers signal a major shift in tech workflows and task automation.
[00:16:58] Roblox launches a centralized learning hub featuring educational games from Google, Sesame, and others.
[00:20:55] Pearson unveils an AI and XR innovation lab, sparking debate on whether incumbents can truly innovate.
[00:29:13] U.S. Department of Education outlines new AI funding priorities for instruction, tutoring, and career navigation.
[00:36:12] Preply challenges Duolingo with “Better Duo” campaign, framing human vs. AI tutoring as a key market battle.
[00:37:31] McGraw Hill IPO and new funding rounds for Honor Education and Galaxy Education mark a busy week in EdTech finance.
Plus, special guests:
[00:39:50] Brad Carson, President of Americans for Responsible Innovation on AI policy and its impact on education.
[01:04:44] Ryan Trattner, CTO and Co-Founder of StudyFetch on personalized learning tools and their rapid user growth.
😎 Stay updated with Edtech Insiders!
- Follow our Podcast on:
- Sign up for the Edtech Insiders newsletter.
- Follow Edtech Insiders on LinkedIn!
🎉 Presenting Sponsor/s:
This season of Edtech Insiders is brought to you by Starbridge. Every year, K-12 districts and higher ed institutions spend over half a trillion dollars—but most sales teams miss the signals. Starbridge tracks early signs like board minutes, budget drafts, and strategic plans, then helps you turn them into personalized outreach—fast. Win the deal before it hits the RFP stage. That’s how top edtech teams stay ahead.
This season of Edtech Insiders is once again brought to you by Tuck Advisors, the M&A firm for EdTech companies. Run by serial entrepreneurs with over 25 years of experience founding, investing in, and selling companies, Tuck believes you deserve M&A advisors who work as hard as you do.
[00:00:00] Alex Sarlin: Google is saying, this thing is so big that we're actually willing to take our core search product, which is still our core product in almost every way, and actually just throw things right into it. I saw a LinkedIn comment about how Notebook LM has now been added to the core little menu on the main Google site.
When you sort of can pull up all the different tools that Google does, Notebook LM is just added to that, and they're just obviously moving as fast as possible to pull AI into everything. That's a good thing, but it's also, they spent a lot of time optimizing the simplicity of all their tools, and I think there's gonna be a flip side of that too.
It's a crazy moment
[00:00:33] Ben Kornell: from an EdTech perspective too. Reading the Quizlet report, it made me wonder, wow, generalized versus specialized. How is Quizlet gonna handle this when a lot of their data shows that people are getting test prep just through ChatGPT? That's a very big competitor, and when you are actually looking at these behaviors, the fact that lesson plan generation is low, and for specialized educational things, the use cases are relatively low compared to the generalized use cases. It feels like the big tech companies are winning here, and, like, specialized ed tech companies aren't.
[00:01:17] Alex Sarlin: Welcome to EdTech Insiders, the top podcast covering the education technology industry, from funding rounds to impact to AI developments across early childhood, K-12, higher ed, and work. You'll find it all here at EdTech Insiders. Remember to subscribe to the pod, check out our newsletter and our event calendar, and to go deeper, check out EdTech Insiders Plus, where you can get premium content, access to our WhatsApp channel, early access to events, and back channel insights from Alex and Ben.
Hope you enjoyed today's pod.
[00:01:57] Ben Kornell: Hello EdTech Insiders. It's another week of Week in EdTech. I'm Ben Kornell alongside Alex Sarlin. How's it going, Alex? Doing all right. I'm about
[00:02:06] Alex Sarlin: to take off on a vacation. I'm so excited. We're going up to the Poconos from our home in South Carolina, and I'm looking forward to being off the grid for a little while.
So not gonna be here next week for the week at EdTech, but I am gonna be following all the news and I'll be excited to go and I'll be excited to get back as well. But feeling good. How about you? You just got back from Australia?
[00:02:24] Ben Kornell: Yeah, I'm just getting back. It wasn't vacation, it was a work trip, but I was at the International Math Olympiad.
You know our audience, you may have heard of it. Many of the AI models, OpenAI, Google have been touting their performance on the test. I will say they did get a gold medal, but they did not get a perfect score. There were a couple students that got perfect scores. There were several students that got question number six, which was the one that stumped AI.
So there's still hope for humans yet. One thing that's interesting about the math Olympiad is it's two days with just six questions. So imagine two entire days just trying to solve six questions and it's proof-based math where you're kind of laying out your logic, and I think it really gave me a window into what the future of assessment is likely to look like.
Rather than smaller incremental questions with high velocity answering. So, you know, the classic state test will have 40 to 60 questions per segment, and they would do multiple days. This idea of deeper thinking with more complex structured questions, and then the ability to grade it not on a zero one binary, but on a spectrum, on a rubric that basically understands their level of problem solving.
It really was a powerful experience for the students in terms of thinking deeply and laying out their logic and being metacognitive. We've had Snorkl on the pod before, like, this is where the space is heading, but also in terms of the grading, the ability for them to collect everything digitally, grade it, and turn it around really quickly.
It was all human based, but I see lots of opportunity for efficiency and AI, like first pass grading in these types of models. And then there was kind of a protest or conflict resolution process where the organizers would grade everything and then the teams would grade everything. And if they got the same score, it was locked in.
But if they got a different score, they would resolve it. And so when you've got these, like, high-stakes hundred-question tests per segment, you're never gonna get that. But if you imagine that things actually reduce down to deeper thinking, and then AI or some system first-pass grades everything and kind of spits out a score, but you're able to contest it.
That also made me think about, oh, this might be where we're going in the future. And so there was that going on. And then second, how they pick the questions. They all vote on what the best questions are. 130 countries. All the coaches vote on what the questions should be. But AI, even though it wasn't allowed, it wasn't part of the test, it was in the room.
'cause they were thinking, well what is a problem that's elegant that will be challenging for the kids, but requires the kind of creativity that AI just can't solve? And that's how they got to question six. So I thought quite, quite interesting. Before we humans celebrate though, I should note that last year AI got a silver medal and they had to build a specialized model just to get a silver medal that was trained on years of math data.
This year they got a gold medal and they literally just went into ChatGPT and gave the model no training, no special prompting. The Google model that solved it was a specialized model for, like, logic and reasoning, but according to the OpenAI people, we have no validation, they just put it into ChatGPT's, like, deep research mode and it, like, figured it out.
So. We humans have the lead for another year, but let's see what happens going
[00:06:10] Alex Sarlin: forward. It also makes me wonder about that whole paper a little while ago, that Apple paper about how these LLMs can't actually do real reasoning and it's all blah, blah. They can't do the Tower of Hanoi kind of problems. And I'm like, okay, okay, but here's a counterpoint to that.
If it can handle the international Math Olympiad, even questions that are sort of designed to be somewhat AI proof and still do really as well as the best students in the world get a gold medal. Not perfect, but these technologies are very powerful and I feel like that's, I don't know. You feel like it's inevitable?
I'm pretty sure it's inevitable. I mean, the thing is, it's one of these things where, like, even if there are pieces of thinking, pieces of reasoning that AI is not that good at, AI is learning from everything all the time. It's learning from these teens who are doing their proofs, right? I mean, if you use AI to grade their solutions, well, suddenly those solutions could become part of a training set.
If you are working with a thousand physics PhD students all around the world who are trying to figure out complex pieces of physics, it's learning from them as well. So the idea of thinking that this thing is fixed, it's moving faster than anybody. So I don't know, I think it's inevitable. I'm not sure I personally believe in the sort of concept of AGI, I don't actually really understand how that's even measured, but I do think that we're very, very close, if not already, at a place where AI can get better answers in almost any domain than humans.
And this math is just another step in that direction. And I don't think we should be afraid of that personally. I'm not afraid of that future. I think that could advance what we do with humanity really, really quickly.
[00:07:43] Ben Kornell: One more thing on that. One is I do feel like the news coming out this week for OpenAI, where you've got agents.
It is showing that when we talk about AI, I think we're using this generalized term, but actually each model is better at some things and worse at other things. And as we get to agentic specialization, the idea that maybe, so kind of your point is, what is AGI? AGI, I think for a human, we conceptualize it as one brain, one body, all of this stuff.
But it's probably much more likely that it's a hive concept or like ant intelligence, where you basically have specialized players that can do things. And this is where on the math side of things, of the logic and reasoning side, the fact that the generalized model could do it tells you that likely what's happening behind the scenes in OpenAI is it's fielding the question and then it is farming it out to the specialized components and then bringing it back.
And Google was a little bit more transparent with that. So eventually, I think, and Ethan Mollick was the first one to suggest this, but maybe ChatGPT 5 isn't some singular brand-new model. It's actually a conglomeration of all the models they have, but it's much more of the kind of quarterback or the head coach figuring out who's gonna do what parts.
That's what executive function
[00:09:13] Alex Sarlin: is, right? Deciding what to focus on and how to handle different kinds of,
[00:09:17] Ben Kornell: and that's probably what AGI is, is executive, I think. I think
[00:09:20] Alex Sarlin: so. And what's amazing about that is then it benefits from many different pieces, 'cause any deep research, agentic AI, AI that's specialized for particular data sets, right?
Like Google's been working on this medical LLM, really, really focusing on deep, deep understanding of medicine. It's like, well, if you ask a question that has anything to do with anything medical, or even might have analogic thinking to medicine, where, like, something in medicine might be useful for thinking about it.
And it goes, got it. I'm gonna go tap the medical brain and make sure that that's weighing in on this problem. And then combine it with the math brain and combine it with the writing and combine it with the reasoning. I mean, there's no limit to, and that's how our brain actually works. Our brain has all these networks.
The networks are interconnected. They're trying to bring the right capabilities to bear. So I'm excited about this future, but I know a lot of people find it very scary. I think it's really exciting. I just wanted to mention there's a couple of really fun interviews coming up on the podcast before we get into the meat of the episode. We talked to Katy Knight and Dr. Allison Scott. They are the heads of the Siegel Family Endowment and the Kapor Foundation, about all sorts of things about AI and math and the future of education. Really interesting conversation. We talked to Amir Nathoo from Outschool, along with Justin Dent, who's the head of Outschool.org. Really interesting conversation. Amir is a tough cookie and he just has a lot to say about the future of education. And we talked to Julia Stiglitz, our mutual friend from Uplimit, and those are sort of the next three weeks of interviews. So those are all killer interviews. I highly recommend all of them. And then at the end of August, we're doing a webinar about AI for all, disability and neurodivergence in the next wave of EdTech.
Very cool topic. We're in the middle of pulling together a world-class panel on that, so keep an eye out for it. Meanwhile, Ben, what else has grabbed your attention this week? In terms of AI, in terms of education, we saw some stuff from Quizlet. We saw some stuff from Pearson. We saw Preply declare a little bit of, like, a war on Duolingo, but we also saw tons of stuff happening in the AI space.
Where do you wanna start?
[00:11:16] Ben Kornell: Yeah, let's start with the technology. I think the biggest release, and I'm surprised at how little play it's getting, is OpenAI's agents. Basically what you do is you prompt the LLM, then it opens up a screen or a desktop, and it is essentially like a browser inside GPT. And then it will do things for you.
And I will say it is not the most efficient, quick thing today, but you can totally see where it's going, and it's incredibly powerful. I had it go into my email and create email summaries of all my unread emails while I was out in Australia. I had it go into my LinkedIn and view all of the connection requests and accept all the connection requests, over 20 connections.
It just is amazing how it's doing, and I think that ultimately next year we will all have a few use cases where we rely on agents.
[00:12:22] Alex Sarlin: When you say it's sort of like a browser, I think it's worth mentioning that OpenAI is now creating a browser. Perplexity is creating a browser, and I think there's this feeling of people not only sort of trying to tackle Google search and web search head-on as a way of getting information.
They're starting to challenge Chrome and Safari and Firefox as browsers, because they're starting to say, well, there's a huge benefit in having an AI-native browser, and there are a few out there and there are more coming from big players. And the reason for that is exactly what you're saying, because if AI is built into everything you do, and if it's agentic, it can do all sorts of different, put the pieces together of your mail, of your calendar, of your searching, of your research, of your Google Drive, of all your tools.
And it can do things like make reservations places, or book your travel, or buy gifts for people. It's pretty easy to see the benefit of this in all sorts of ways, including educational. It's funny, I still think of Apple. Like, when I say what I just said, I picture Steve Jobs or one of these Apple leads, you know, Tim Cook, standing up there talking about, hey, this is the future of technology.
And I feel like Apple has really not been a big part of the conversation, but I think this idea of sort of a one stop hub for all the different pieces and agentic workflows, being at the heart of that is definitely where we're going. So I'm excited.
[00:13:42] Ben Kornell: And it is so crazy because Google is building all their AI features into their Android.
So they're pushing forward with this full-stack AI-everywhere strategy. And ChatGPT is creating destination AI, where you go to it and that's where your AI happens. And Anthropic has been quite successful in integrating into work-related B2B workflows. So it is a striking absence. I will also say that I've been trying Perplexity's Comet browser, and there's a way in which Google search has become so AI-enabled at the top that it's hard sometimes to actually find what you want, 'cause it misinterprets it.
So they're in this weird spot where they're watering down their, like, pure search, which has, like, links, and, like, you can find what you want. So for example, we were looking for Quizlet's report. I put in Quizlet's report, it gives me an AI summary, which was half inaccurate, and then I looked down the line, and in the links, Quizlet appears nowhere, because it's all based on traffic and so on.
Whereas in Perplexity and Comet, it gives me on one side the summary, but there are the links. So I think there's basically, I'd like to sum up what you were saying about Apple and what I'm saying about Google: there's a big shake-up in tech workflows, and I'm not talking about tech tech, I'm talking about consumers, and their workflows are getting totally shaken up, and it's a jump ball that hasn't been a jump ball for, like, 20 years.
Exactly.
[00:15:24] Alex Sarlin: And they know it. They all know it. That's why OpenAI is creating a browser, 'cause they're like, there's actually a there there. There may be room to go beyond destination AI and actually be the first thing you open when you open your computer in the morning. Right? They're like, there's actually a there there.
And then Google is saying, this thing is so big that we're actually willing to take our core search product, which is still our core product in almost every way, and actually just throw things right into it. I saw a LinkedIn comment about how Notebook LM has now been added to the core little menu on the main Google site. When you sort of can pull up all the different tools that Google does, Notebook LM was just added to that, and they're just obviously moving as fast as possible to pull AI into everything. That's a good thing, but it's also, they spent a lot of time optimizing the simplicity of all their tools, and I think there's gonna be a flip side of that too. It's a crazy moment. So speaking of Google, one interesting headline that jumped into my view this week, and we should talk about the Quizlet report too: Roblox. We've talked to the head of Roblox education, Rebecca Kantar, on the pod before. She's from Imbellus, she's an ed tech person. They just announced a learning hub inside Roblox, and it's basically a hub of educational games. At launch, they're gonna include Google's games, games by Mrs. Wordsmith, a math tower race, but it's supposed to be a centralized gateway to educational games.
I'm excited about that. That's awesome. Roblox, as we know, is one of the very few absolutely ubiquitous global gaming properties, and they've had an education piece for a while, but they have Sesame Workshop and the BBC as partners. They're clearly trying to actually lean into the educational gaming space.
What do you make of that?
[00:16:58] Ben Kornell: I mean, look, Roblox has stalled out a little bit, and part of their challenge has been they had massive growth, and sometimes you're the victim of your own success, and you go public and it's hard to eke out 20%, 40% growth when your customer base is so large. And then they also have these controversies around the quality and safety of the platform.
So I think this is a turn-the-corner moment for Roblox. I think I'm really excited. You've highlighted the intersection of learning and games quite a bit. If you think about, like, who has real currency, and I'm saying that tongue in cheek, 'cause Robux, like, the currency of Roblox, from a child standpoint, is the closest thing we actually have to, like, Bitcoin for kids.
And so I'm quite excited by this development and I do think that if you think of them as a platform. Or as an ecosystem that gets really exciting with what you could do.
[00:18:06] Alex Sarlin: I mean, 200 million monthly actives, 79 million daily active users, over 8 billion registered users. That is, yes, that is more than the population of Earth. That is the number of registered users. This is a huge platform. But yes, your point is, it's also good that, you know, as they've gone public, it's gone from something that is just endlessly hockey stick to, there's a little bit of a plateauing happening there. But given how much time and how many users are on Roblox, even if a tiny percentage of them go to this learning hub and start using educational games, especially ones from Google and Sesame and BBC, that could be a real sea change in the accessibility of educational gaming. And these games can potentially, I don't know if they're doing this yet, but potentially there may be user-created content in the games, or users can create their own games that maybe could be graduated in there.
That creates a really interesting potential feedback loop. I don't know if that's part of the plan right now, but I hear you. Just 'cause the company's really big doesn't mean they will always be there, or that it's just endless growth. But I'm always excited about Roblox, and I think, Ben, something that you and I have talked about a lot over the last few years is the sort of benefit of being able to use partnerships to quickly expand your distribution as an ed tech company.
The fact that Roblox is having this learning hub means that I think anybody who's doing educational gaming might wanna look at what that looks like. Obviously these first set of partners are pretty big name partners. They're very established. But the idea of getting in front of the Roblox audience versus having to build from the ground up is pretty appealing.
[00:19:30] Ben Kornell: And like we've said, channel or distribution is one of the fundamental challenges here. Yes. So I'm excited that Roblox is thinking about themselves as a platform and a conduit. Basically, it takes many mindset shifts to partner effectively with people, and I think this is a great evolution.
[00:19:52] Alex Sarlin: I do too.
I'm really looking forward to seeing what happens there and if some of these games start to really get some traction. So let's talk about the Quizlet report that you mentioned. Quizlet obviously has become an ed tech incumbent over the years. They do a lot of things at K-12 schools. They're also still enormous among colleges and still enormous among students.
They put out a report this week called How America Learns, and basically it's all about AI. It's about, yeah, what is happening with AI, what does growth look like? And they found some pretty significant increases in usage. So they found that 85% of respondents, basically ages 14 to 22, are using AI.
Last year when they did this report, it was 66%. So we're going from majority 66 to vast majority, 85% of students using AI and teachers are outpacing the students in AI adoption. They're growing even faster, which was different than last year. So this is not something that's a total surprise. We have seen growth happen really quickly in AI, but 85% of students in this survey using AI, pretty impressive.
[00:20:55] Ben Kornell: Yeah. I mean, the percentage growth among teachers is higher, but still, you know, students are using it at a disproportionate level to teachers, and in our arms race between the educators who want productive struggle and the learners who may have near-term incentives to get the answer, we're still seeing an imbalance there.
One thing around teachers that I found interesting is over half use it for research. So I think basically 45% are using it to generate classroom materials like tests and assignments, and that's probably the use case we hear about most in EdTech. But it's actually the researching or summarizing or synthesizing of information, basically the gathering of information.
And when you think about a teacher's workflow, when they're starting a unit or they're thinking about, okay, what am I going to do? Or maybe they're even thinking about pedagogically, what are some practices I could use? These are really common like workflows for any professional, and this is particularly helpful.
So what I worry about is, are they getting accurate research? Is there risk of hallucination? All of those things. But also, is there an ability for them to create repositories of the research that they've done and share them? So, you know, what was it, like a year and a half ago, we were talking with OpenAI about their Gems. Uh, actually they were called GPTs; Gems is the version at Google. And I think there's a real opportunity for community building around these shared teacher repositories of research. And we've yet to see that kind of catalyze from an EdTech perspective too. Reading the Quizlet report, it made me wonder, wow, generalized versus specialized.
How is Quizlet gonna handle this when a lot of their data shows that people are getting test prep just through ChatGPT? That's a very big competitor. And when you are actually looking at these behaviors, the fact that lesson plan generation is low, and for specialized educational things, the use cases are relatively low compared to the generalized use cases.
It feels like the big tech companies are winning here. And like specialized ed tech companies aren't.
[00:23:15] Alex Sarlin: Yeah, it's an interesting point. I think that there is definitely a wrestling. I mean, we saw that when Google dropped all of those features after ISTE, there was a lot of commentary in the ed tech communities about what does this mean for the MagicSchools and Brisks and SchoolAIs.
We just talked this week in this episode with StudyFetch co-founder Ryan Trattner, and they just celebrated 5 million students using StudyFetch, which is obviously an education-specific tool. I think so far, for now, when you have numbers this big, right, you have 85% of students, and I think this says even up to 90% of teachers in one way or another.
That seems like a high number, but it says nearly 90% of teachers currently use AI technologies for school, at least in some capacity. If you have that many students and teachers starting to use the platform, there is room, I think, for both types of tools: the general-purpose tools, the ChatGPTs, which I still think, for a while, is gonna be the number one tool.
And some of the specialized tools. That includes homework helpers, that includes, you know, study planners, that includes writing coaches and things like that, and AI tutors. So I think it's becoming a big enough space that I think there's room for both, but I'm not sure that will be true forever. And I think any ed tech company has to consider ChatGPT a core competitor, and Claude and Claude's learning projects, or just Claude generally, and Google as competitors, which are pretty scary competitors to have if you're an EdTech company. But so far I think the pie is big enough for all of us. We'll see if that changes. A couple of other EdTech headlines made news this week and we wanna make sure to talk about them.
We saw some interesting headlines from Pearson. So Pearson announced a new innovation lab opening in London, in its London headquarters, basically to explore emerging learning technologies. It's about AI, of course, and it's about immersive learning, so that's XR, AR, VR. Pearson has, you know, pivoted over the years to be really focused as much as it can on lifelong learning, on higher ed and workforce learning, and sort of making the connections between them.
So this is supposed to be a place for prototypes, it's a place for customer research, and they're working with people like Meta for Education for XR. They're working with Google's Android XR platform to think about how that might work. They announced a partnership recently, Pearson and Google, and they're using Google Cloud as well.
There's an ambition here, and Pearson is, you know, one of the only public ed tech companies that has consistently really continued to maintain its position for quite a while. They're looking for what's next with this sort of internal lab. What did you make of this? Do you think Pearson's gonna come up with some new ideas out of this?
[00:25:40] Ben Kornell: Well, we've never lost money betting on the slowness of these large companies to innovate. But here's the case that could be made that this could be a breakthrough or a major development. In the world of AI, you actually don't need specialized skills as much, because the AI is creating a lot more velocity on its own for product output. And it can also be horizontal in terms of features. So distribution is actually, like, the key differentiator, and Pearson's very strong on that. And then on the training data side, they've got very strong data sets because of their large history. And then they've got a large content pool.
And remember we were talking long ago about ChatGPT wrappers. What's emerging is there are some companies that have features wrapped around ChatGPT, and then there's other companies that take content and wrap GPT around it. And Pearson could be really well positioned there. So in terms of, like, a longtime EdTech skeptic, one thing about the dinosaurs here is that they are not extinct in our space. But in terms of counting on them to be the breakthrough in innovation, it's always been a good bet to bet against them. But I think there are some new dynamics that could potentially position Pearson to break through. And if you look at Amplify and you look at Curriculum Associates and what they've been doing, I think they have also shown a playbook where innovation pays off and has an ROI.
[00:27:23] Alex Sarlin: Yeah. Balancing the sort of dinosaur-ness, which means, you know, you have a big install base, you have a lot of data, you have a lot of existing customers who have been using you for a long time, you have a lot of people hired and a lot of ability to acquire and things like that. Balancing it with the ability to sort of continue to innovate and stay nimble while these technologies move so quickly.
It's sort of the order of the day for any incumbent. I mean, this is what we're talking about with Google as well. Google users are less likely to click on links when an AI summary appears in the results, according to a Pew study that came out just yesterday. That's exactly the kind of tension that somebody like Pearson is messing with, right?
Or is dealing with. They're like, new stuff is coming, but if we incorporate it too quickly and disrupt ourselves, we're disrupting our core products. And do you do it anyway, because you sort of believe that VR/XR or AI is so powerful that you've gotta double down on it? Or do you try to block out the noise and say some of these things are faddish, or we could acquire our way through this?
Every incumbent, I think, in almost any industry right now is dealing with this in various ways. But I think Pearson and Curriculum Associates and Amplify, and some of the bigger players in this space, Renaissance Learning to some extent, are all trying to find that middle ground where they can sort of continue to leverage and take advantage of their incumbency, while also not closing their ears too much and missing all the things that are happening, and the fact that, you know, big tech products, but also nimble startups, are starting to nibble around their user base.
There's no easy answer to it, because this is very complicated; this is disruption at play. I think it's interesting to point out that Pearson is very clearly in this space signaling, we are not going to lean back on our haunches here and stay put. We're gonna spring forward and try to be really innovative and keep moving.
And then of course, being in London right next to DeepMind could be really interesting. I think London is becoming more of an AI hub.
[00:29:13] Ben Kornell: Yeah, I agree with that. I mean, the investor perspective on this is that it just adds even more caution to which AI startups you can invest in. As the competitive space gets crowded with bigger and bigger players, Google is going to be a player in K-12 education, OpenAI is gonna be a player.
All the content companies are now, like, stepping up their AI game. And you realize that the kind of core challenge of any AI-based company is its defensive moat, you know, its competitive moat. Any of these companies can just fast follow any of the features that are innovative from the startups.
So this is all maybe long-term good for the EdTech sector, because we've had a really hard time demonstrating that our most successful companies can be financially successful long-term. But I think it's a challenging environment to raise in right now. Alberto from Transcend just had a post around seed strapping, and there was a little back and forth with Jennifer Carolan around the investments that she's made.
But I think, all in all, it's a tough time to raise massive amounts of funding, and the seed strapping route is getting more common. Adding to the uncertainty is the US Department of Education and, overall, what's happening with the government. Just as we're seeing the industry shake-up happening, we're also seeing a funding shake-up with the US government. For those of you who are abroad, I'm hearing this in Australia too: government funding for education overall is getting turned upside down across a lot of countries.
Broadly, as this kind of AI universe is unfolding, some countries are feeling like, oh, AI is our solution, we don't need to invest in new schools and educators anymore. Ironically, the US Department of Education, which is basically in the process of getting, you know, torn apart, is also releasing new federal funding for AI and education. They've released new guidance, and basically the focus is on high-quality instructional materials.
It's both AI as subject as well as AI underneath, which I think is a little bit confusing. The three areas are AI-based high-quality instructional materials, AI-enhanced high-impact tutoring, and AI for college and career navigation. Now, we do have several friends in these buckets, and I think it could be exciting for people to build in that space, but it also is tricky to rely on any federal funds.
It wasn't exactly clear what the application process would be for this guidance, but it does create some sense of prioritization, and basically, if funds are already being deployed, these will be viewed with favorability versus others. So, you know, also in the background, there was a new AI report around, basically, the US's AI strategy.
I think what we can see is that the idea that AI is gonna get regulated by the government is going away and the idea that it's gonna get promoted by the government is rising. So I'm sure the AI companies are happy. This just means the stuff we're talking about is just gonna accelerate.
[00:32:49] Alex Sarlin: Yeah. Well the federal government definitely is gonna be pro AI.
We talked to Brad Carson, a former congressman, former university president, and head of a nonprofit. He really said that one thing you can basically count on here is that the federal government is very close to a lot of VCs. We know that the Vice President is an ex-VC. It's very close to a lot of tech-first thinkers.
They're not gonna get in the way of AI. That said, the states did retain the ability to regulate AI, and we may see some, at least basic, regulations from different states around privacy, around data sharing, around model building, and maybe even, I don't know, maybe even some age regulations just around how young is too young to interact with AI bots.
I'm not sure if that'll happen or not, but it might. So I think it creates a really uncertain atmosphere, 'cause you have this sort of unabashedly pro-AI federal government, so much so that even the Education Department, which is basically like a kamikaze Education Department that's trying to dismantle itself, is still putting things out saying, yeah, well, AI has gotta be at the center of this and we're gonna focus there.
And then you have the potential for any given state government to start to be reactive. I mean, you know, I've said this for years now, but there will be disasters that come from AI. We still haven't seen very many of them yet, but they're gonna happen. There will be suicides, there will be all sorts of things that will happen, because this is incredibly powerful tech.
And I think the real litmus test for this is when that stuff starts happening and when you start seeing, you know, lots and lots of negative headlines about certain things that are either happening or possibly happening with AI. We saw a little glimpse of that this week with these AI companion reports that came out from Common Sense and Internet Matters, where, you know, kids are using AI companions a lot, partially because they don't have anyone else to talk to.
You know, if you start seeing these really intense, sad, scary, and, you know, problematic uses of AI, that's when the states will have pressure to regulate, and that's when I think we'll have to see what this means in educational environments, or even just environments for young people in general.
How will people overreact to AI usage for young people when some of the negative implications become more concrete? Because they will. It's a weird moment. I don't know. I think, Ben, you were saying, what are you supposed to do if you're an EdTech founder right now that's doing something, you know, deeply in AI? Should you lean into this federal guidance and say, hey, AI high-quality materials, that's just what we do, let's go, you know, stamp that on all of our sales materials?
Or do you try to sort of balance and say, well, depending on what state or what district we're selling to, let's make sure we understand how they think about it, not just the federal government.
[00:35:27] Ben Kornell: As a startup CEO, you've gotta live in three different futures: the down case, the mid case, and the upside case.
If you are dependent on federal funding, that should only be an upside case scenario, and you should still have a plan for the down case scenario. If your down case scenario is dependent on federal funds, or really governmental funds of any kind, that's not a great downside scenario to plan for. And I do think that the reaction that most people are having is to place a probability on these funds.
You know, before we wrap, we should probably just go down like any quick hits that we have in, in, you know, EdTech deals or EdTech news, anything you've got here.
[00:36:12] Alex Sarlin: The one I wanted to bring up, and I know we're short on time here, is just, I thought it was an interesting thing today. You know, Duolingo a few months ago sort of announced that they were going to an AI first model, and we know that Duolingo has a very tech first approach to things and they're saying, we're gonna really lean into AI.
We saw Preply, a competitor to Duolingo, this week basically come out with a whole campaign saying, we're the "Better Duo." And what they mean by that is, yes, we're better than Duolingo, but they also mean the duo is a tutor and their tutee, the relationship between language tutors and their students. That's what they mean by duo.
That's smart. It is smart. And I think it's an interesting sort of canary in the coal mine, or like a harbinger of a lot of debates we're gonna start seeing in the next few months to years with AI, which is: what exactly is the value of human relationship? You know, they have this quote here from the chief brand officer:
"A human tutor offers empathy, trust, and personalization, things no algorithm can truly replicate." So you're gonna have some folks leaning all the way into the human side of learning and others leaning all the way into the efficiency and the scale and the omniscience of AI. And I think, you know, ideally, you and I, Ben, I know, think there's value in both of those things.
You don't have to choose, but I think there may be this sort of dichotomy created, especially in branding and marketing, about: do you want a real person or do you want an AI?
[00:37:31] Ben Kornell: That's a really interesting insight. We'll follow up on that one. On my side, on funding and M&A, I saw McGraw Hill is planning to IPO at a $4.2 billion target valuation.
Word on the street is that that valuation is a little shaky right now, so we'll probably find out more in October at New York EdTech Week. I know McGraw Hill will be there, and by that point there should be real clarity around what their target price per share is. That has a cascading effect on how people price or value equity in basically every EdTech company.
So that's important to watch. And Honor Education, we were basically saying it's hard to raise money in this environment, raised $38 million for their digital learning platform. I just talked to their lead marketing person. They're growing like crazy. It's an AI-native LMS, essentially. And, you know, one of the things they emphasize is engagement, and this idea that engagement is the new currency in education.
Which I think connects with the research we've heard from Lawrence Holt and others. And so it's interesting to follow that. We also had a couple of others: Galaxy Education raised $10 million for AI-powered English education in Vietnam. We're seeing Southeast Asia is really having an EdTech boom right now.
[00:38:56] Alex Sarlin: Yeah. Hey, which is another Vietnamese EdTech, just raised money as well.
[00:39:00] Ben Kornell: Yeah. Well, we've gotta go to our interviews. Thanks so much for joining us here at Week in EdTech. Of course, if it happens in EdTech, you'll hear about it here on Week in EdTech. Thanks so much, Alex, and have a great vacation. We will see you in a couple weeks.
[00:39:16] Alex Sarlin: Thanks, Ben. Thanks, everybody. Have a great time for the next couple weeks. If it happens in EdTech, you'll hear about it here on Week in EdTech, at EdTech Insiders. Thanks so much. Bye-bye. We are here with Brad Carson. He is the President of Americans for Responsible Innovation, otherwise known as ARI, a nonprofit promoting safe, pro-innovation AI policy.
He served as President of the University of Tulsa from 2021 to 2025, was a US Congressman from Oklahoma, and held senior roles in the Army and the Department of Defense. Brad Carson, welcome to EdTech Insiders.
[00:39:50] Brad Carson: Alex, it's great to be with you.
[00:39:52] Alex Sarlin: Great to be with you as well. So first off, tell us about what Americans for Responsible Innovation is as an organization and how you are responding to the AI age.
[00:40:02] Brad Carson: So ARI is about a year and a half old. We now have more than 30 people working for us, so we're really the largest group in DC focusing on reasonable AI policy. And our ambition is to say to people that this technology is gonna be transformative and has many benefits to society, and we want to see it go well.
But that probably means you need some guardrails around it to ensure it's not misused, that people don't lose faith in what AI can do, and that it's used for its most beneficial purposes. So I spend my days working with legislators here on Capitol Hill and with the executive branch to talk about what that kind of reasonable guardrail might look like.
[00:40:40] Alex Sarlin: Yeah, and there's been lots of movement in that space over the last few months. There was regulation that was in this giant bill that got pulled out. Tell us about just the state of right now, of those guardrails. How are people thinking about how to keep the momentum really robust for AI growth while putting guardrails in place that keep irresponsible use from really taking over?
[00:41:01] Brad Carson: Well, the good news is people are thinking about it more and more. You know, I think AI is a new issue to Capitol Hill, and so people are just kind of understanding what it is actually as well as some of the policy options around it. So I've seen amazing growth in just the last year, people talking about artificial general intelligence, talking about misalignment risks, talking about bio and chemical weapons that could be developed with AI and asking about what that guardrail might look like.
And we saw this really kind of come to a head in the last couple of weeks with a debate over a moratorium on state laws, where the Senate rejected it 99 to one, 'cause they realize that there are going to have to be some kind of guardrails around AI if it's really gonna work for people. So people are just kind of getting up to speed on the issue.
But I think you're gonna see a lot more action from Capitol Hill over the next couple of years.
[00:41:45] Alex Sarlin: Yeah, which is healthy. It is a very, very powerful technology in all sorts of ways. Some of the things you're mentioning here: the artificial general intelligence, the biochemical warfare, the deepfakes.
Some of the issues that people are concerned about for the future of AI are generalist issues. Others are specific to the education sector. And when you talk about AI in education, and especially right now in higher education, there are a lot of news articles, a lot of sort of thrash around AI for cheating, students using it to write their essays, and maybe professors using it for grading, and a lot of sort of Sturm und Drang about how AI is undermining the educational experience.
I'm curious how that dovetails with some of the things you are hearing from Capitol Hill and from the law. Like are we stuck in this debate about AI as a cheating tool, or are we starting to think more broadly about what AI is going to do for education in general and both the opportunities and risks?
[00:42:40] Brad Carson: I don't think there's gonna probably be a lot of law or regulation around higher education or K through 12 use of AI. It's incumbent upon those sectors to get it right themselves and see how AI can be used for good purposes, or to prohibit it in certain cases as well. You do see, I think, some work from Capitol Hill to enable AI to be distributed into the K through 12 system, right?
Grants to help teachers learn how to use it correctly, and maybe for schools to access the technology or upgrade their infrastructure, right, to be able to access it. So you're gonna see that kind of thing, but I think, you know, how the professions use it at every level of education is gonna be left to the profession itself.
And obviously there's a raging debate about that in higher education, where as the president of a university I dealt with this day to day, but K through 12 is having a similar debate about it.
[00:43:27] Alex Sarlin: How do you think this will play out with either the federal regulations or the state regulations, which are now unfettered, that the states will be able to pass?
I'm curious if you're an EdTech founder, which is a lot of the listeners to this podcast, and you are building something with AI baked in, which most of them are, you definitely want to keep an eye on that regulatory landscape. It sounds like you're saying that they shouldn't be desperately afraid that there's gonna be some giant state sweeping law saying AI can't be used in education at all, or it can't be used unless it meets these very high levels of criteria.
It sounds like, I don't wanna put words in your mouth, but I'm curious how you would approach this moment if you are an EdTech entrepreneur and you're trying to sort of get ahead of the regulatory landscape.
[00:44:04] Brad Carson: I think if you're an ed tech entrepreneur, you're worried about the legal prohibitions or rules around you, as well as the cultural norms that might inhibit people adopting your technology.
I think on the legal front, there will be some rules, probably at the state level, about student privacy, for example, about what you can do with uploaded student materials. These are real concerns people have about cybersecurity and the privacy issues that most EdTech founders are probably quite acquainted with, because they exist in the analog sector already.
You know about what you can say about students and things like that, so I think that is a big issue. I think the cultural norms are probably going to be a more serious problem for that ed tech founder in that there are a sizable number of educators who believe that AI is impossible to reconcile with quality education, and that is a norm I see every day on my university campus.
People try to prohibit its use. Many syllabi say, if you use this, you're in big trouble. People do it anyway. In fact, beyond writing your student essay, more often it's, you know, a lot of faculty love to have daily reflections or this week's reading reflections, and you'll post that into the learning management system.
What you thought about the book or the article, AI does that beautifully, right? You could just, like, upload the article, it'll spit out a paragraph, and, you know, probably most faculty just kind of casually grade those anyway, like, okay, Brad participated in the activity. And so this is an incredible use case for it.
Not necessarily a healthy one, but this is being used, and maybe the most common, probably the most obvious use case for the typical American today is having it write documents for you. And if you're a student, it's obviously everywhere.
[00:45:41] Alex Sarlin: Right. So there's been some really interesting sort of series of articles over the last six months or so about: will AI undermine the act of writing, you know, the teaching of writing for college faculty?
Will it undermine the college essay? And you can't trust anything students write. Are students going to sort of optimize their way through college by using AI for everything they can possibly use it for, whether or not it's technically allowed in the syllabus? And then there's even been the flip side of students suing their universities because they're saying that the professors have been using AI too much for grading, or for doing the types of posts you're mentioning here. I'm curious, when you look at this landscape, I feel like we're in the very early innings of AI. You mentioned ARI is a year and a half old, created in response to this AI era.
I'm sort of looking forward to the moment where we start to get past some of these, I think, very early techno-moral panics, as I sort of call them, where it's like, oh, this thing is coming, how is it going to upend our lives and make everything worse and destroy education? But what might that look like?
I'm curious, from your perspective, talking within the government and as a president of a university, do you see a moment coming where we're gonna start to say: the first thing we used to think when we heard AI was cheating and integrity; now we hear AI and we think, oh, this amazing thing, students are inventing their own solutions to things. They're creating their own companies. They're doing this amazing innovation, or they're making movies on their own when they're freshmen. What do you think will be the next chapter of AI that hopefully will sort of turn around some of the fears?
[00:47:09] Brad Carson: Well, I fear in the near term, the concerns about AI use are going to be the ones that most educators talk about, because I do think you should assume any work outside of class is being done by an AI, and that's not only writing, of course.
But if you're doing physics and math, the AI solves problem sets at the university level that might be intended to take you six to 12 hours. It solves them in 30 seconds, and it solves them usually perfectly. And I see a lot of students who I talk to do this, my own son, who's a college student. And, you know, I ask them how people are using it, and they'll, like, have the problem sets.
Ideally they do them themselves; minimally, they'll have them checked by AI, and then they'll have them checked by a second and third different company's AI, in case, right, a mistake was made. And by the time you do that, they're perfect. So I think you should assume any work outside of class is probably going to be AI-enabled, and it's important for your listeners to know:
You really can't detect that. You know, there are programs you can run it through that actually are more accurate than not. They pick up some of the tics or the common phrases, but they have too many false positives to actually be used by administrators. And so you can accuse someone of using AI, but if they deny it, you have to just accept their word.
So basically you should assume anything outside of class is done by AI. So what has to happen, actually, is obviously more in-class work, more in-class writing. Some people at the University of Tulsa are going to oral examinations, like a viva, kind of an old-school kind of thing. Or you use AI for beneficial purposes, right?
You can, like, integrate it: okay, you should use AI, and we know you're going to do it, you know, so why not? Let's be open about it. And then you can do some really remarkable things. I mean, I use it, for example, to improve my own writing. I'll often, like, write an essay, say an opinion piece for a journal or for public consumption, and I'll have the AI evaluate it.
I'm like, what do you think of this? And it's like, you know, you make a logical leap between paragraphs two and three; you probably should add another paragraph, right, to bridge that. And it actually helps, and it thinks about it a lot, not about wordsmithing, but, like, the structure and whether the argument is sound or not. Or, you know, you ask it, like, hey, I'm writing this opinion piece, rebut it.
Imagine you're a deep critic and you disagree with everything in this piece, right? Write the strongest rebuttal to it you have. And then, like, I read that and go, oh, those are actually a couple of good points, I should think about those, right, and amend my own piece to incorporate them. So there's lots of things you can do.
And I'll give you another last example, how I hope it could be used. So this is kind of a personal project of mine. You know, I've studied philosophy a lot in my life. I have a degree in it from my undergraduate days. I occasionally go back to it and a lot of times, you know, I'll tell myself I never really understood what Descartes meant.
I mean, I knew what he said and I read it. Cogito ergo sum. And we all know this. Like, why should I care? Why did he care? You know, why did people fight over these issues for centuries? It obviously mattered. But I never really understood what the stakes were, or, like, who he was addressing. You know, who were his critics?
So I had this little project, and I was actually starting with, like, the medieval era. And so I was reading the Roman philosopher Boethius, who is still an important medieval thinker. And I was like, okay, we read Boethius, why should I care? And I had probably three hours of discussion with Claude, which is Anthropic's product, where I would keep asking it questions and it would say things, you know, like, it's smarter than any human is because it's read the entire internet.
And so you could ask questions like, okay, you know, Boethius said this, that seems, like, weird to me. He's debating substance versus accidents, which seems kind of esoteric, why should I care? And it said, well, in this era, this was a huge debate, and these five people really cared, and it defined the entire metaphysics.
I'm like, well, who were these other people, and what was their argument against it? And then it tells you all those kinds of things. And, like, how did this inform, like, what political issues or theological issues did this really matter for? Like, well, the Catholic Church weighed in on this question, and this set the Catholic Church on this, like, centuries-long path. So you have this, like, incredible interrogation of it, where it would say something like, this line came from Aristotle.
I'm like, well, tell me what Aristotle text that is, and tell me what year this was actually translated into Latin from the Arabic, or what was the source of this? How did they find this book? You know, 'cause in the medieval period, most of these texts were lost. That's an example of, like, I was a lot smarter about why Boethius mattered, right?
As a result of this, even after I've had many classes along the way that discussed him, it was always kinda like, hey, that's a good book, you know, and, like, very influential, but, like, why should I care? And AI used in that way is like an Oxford tutor, but 20x better: like, one-on-one instruction from this basically omniscient, well, not perfect, but basically omniscient-level tutor.
It's incredible.
[00:52:04] Alex Sarlin: Yeah, I love that example. And I feel like there's an interesting through line that I'm hearing in what you're saying, which is that there's the accountability piece of education. You know, what can you write without AI? You're saying anything you're writing outside of a classroom, you should probably assume that students are using one or more AIs.
So you get back to oral exams and blue books and some of the accountability measures we're used to. But that's the baseline of sort of accountability in education. Then there's this other line of what is the potential here? You know, that motivational piece that you're mentioning where a student can say, why should I care about this?
Why does this matter to me? Why is this relevant to today? Is it only something that was true, you know, thousands of years ago, or is it relevant to anything today, anything in my life, or anything in my aspirations? So there's the baseline accountability, which we all have to figure out in the AI era, and people are trying to with some of the methods you just named, but also this second level of: how can we take education and give everybody, you know, an Oxford tutor that's available all the time, that knows the entire internet, that can explain anything, answer any question in any format, in any language? I think, you know, on this podcast we talk to a lot of EdTech entrepreneurs and founders and investors, and there are these sort of two levels at the same time that you have to split your mind into: doing this really amazing, complex interrogation of Boethius on one level, and then, you know, raising the floor and saying, how do we make sure students don't get through college without literally ever writing a word or making sense of anything?
That's sort of my interpretation of what you're saying, but I'm curious how you'd respond, as a university president: how can higher education keep both of those levels in mind? How can they raise the ceiling and not just concentrate on the floor?
[00:53:44] Brad Carson: Here's a question that I often ask provocatively to people, like, why are people here at a university?
Like, what are they looking for? You know, I mean, to use kind of a crass term in academia, hey, what are we selling here? And, you know, the economists will tell you that higher education really has three values. One, and this is, I think, most commonly believed, is the human capital value, right?
You show up, you learn new skills, and you go off and have a great life. The second is the signaling value, right? If I have a Harvard diploma on the wall, and I graduated Phi Beta Kappa from Harvard, people know a lot about me, right? They know I'm smart, I got admitted to Harvard. They know I ground my way through a lot of stuff that was kind of, like, mind-numbing.
So I'm super conscientious. And I'm probably quite ambitious, and a bit conformist even, too, right? I played the game at Harvard, and that Phi Beta Kappa signals, right, that I played the game at the top level. And if I'm an employer or a grad school, like, okay, those things mean something to me, right?
Conscientiousness, conformity, you know, human capital, that means something. The third reason is kind of the consumption value of higher education. For a lot of people, not at Harvard so much, but if you go to, like, Louisiana State, right, there's a consumption value. You know, I love being in the Greek system.
I love floating down the lazy river. I love these four years of, like, personal growth and being around other people your own age.
[00:55:08] Alex Sarlin: Yeah, it would.
[00:55:10] Brad Carson: I mean, yeah, well, you know, I loved it. It was great. I loved my own college experience. It was like a consumption value and that's why people have the climbing walls and the lazy rivers and stuff.
So those are the three reasons. I often tell people, if you're a trustee or an academic or most kinds of casual observers of education, you say, well, human capital is the reason you go to college, right? And the answer is, probably not. Signaling is probably the major value of college; consumption is important.
And actually, the evidence from social science tells us that while human capital is our desire, if you're running a university, we don't do that great a job at it. You know, when they test people, like, did you learn a lot in college, the social science says, for most people, the answer is no. And so it comes back to this question of, like, why did MOOCs not take off?
Or why does MIT get by with, like, posting all of their courses online? Doesn't that, like, cannibalize the students who want to attend MIT? The answer is no, because MIT has a huge signaling value. Finishing the OCW coursework is a human capital investment, and a great one, but it has no signaling value. So coming back to what AI is going to do for us: on that human capital front, AI can develop it immensely for the ambitious student and the faculty who know how to use it.
You still have the signaling value, right? Maybe that's diminished a bit, because Phi Beta Kappa is easier with AI, right, if I'm willing to cheat all the time versus people who are not cheating. Right? My problem sets didn't take me six hours, they took me 30 minutes, you know, once I proofed them. And so maybe the signaling value declines a little bit in that respect, but the consumption value is still there.
And that's why MOOCs didn't displace colleges. OCW didn't replace MIT. And even if, like, MIT or Harvard goes to a fully AI-enabled curriculum, the signaling value and maybe the consumption value are still there, and the human capital will be there too. So it's a long way of saying there's gonna be a strong role for higher education for a long time to come, because of the signaling and the consumption value. The human capital aspect we actually don't do a great job at now, the evidence says, and that's hard to say, but maybe AI can actually improve that in some pretty radical ways, for the ambitious student at least, right? If you want to do a deep dive into Boethius and spend two hours understanding why this really matters, you now can do it.
And again, I've had a lot of great faculty members over the years; very few of them spent two hours with me personally, saying, like, I'll go through every question, I know every answer, and I'll just, like, go through every, you know, super rarefied question you have, right, and give you an authoritative answer to it.
That's amazing.
[00:57:40] Alex Sarlin: That's a fantastic answer, and I think that human capital value of higher education is also in a transitional phase in a lot of different ways. It could be raised within the university system because suddenly students and faculty have access to these incredible technological powers. It also could be distributed outside of the university, like, you know, you mentioned, Hey, it's like having an Oxford tutor.
in your pocket, and it's like, it sure is, it's like having a Harvard professor or an Oxford tutor in your pocket. Well, if everybody has, you know, an Oxford tutor or a Harvard professor in their pocket, that itself may change the human capital value of college. If you don't have to actually, you know, go to Boston or go to, you know, London or Oxford to be able to get access to somebody who knows everything about Boethius, who could tell you exactly when it was translated into Latin, you know, 20 years ago, that was not possible.
The only people who knew that were on those campuses, and now it's in everybody's pocket. So I think that idea of the human capital, the signaling, and the consumption is a really great way to break down the value prop of higher ed. And potentially the human capital piece, the learning, the sort of, you know, idea of trying to make sense of this complex world and think of new ideas and synthesize existing ideas, suddenly that becomes something that's not monopolized by higher education.
It's not only something that happens on campuses, it's something that can happen for anybody at any time, if that person is, as you mentioned, motivated, right? If that person knows enough to ask questions like, why does Boethius matter? What does Boethius have to do with Aristotle? There's a sort of baseline that is necessary, I think, to be able to even ask the kind of questions that get into these deep ideas.
But if you have that, suddenly there's a major shift in the access to the type of very high-level human capital. And of course, it's not just in philosophy. You could ask physics questions. You could ask biology questions. You could ask questions about religion. You could go, you know, any way you'd like.
Every person has this incredible access to education, you know, informal education, which is really interesting. The signaling piece is obviously important to that as well. I wanted to ask you about the Trump administration. I think just today, as we're recording this, there's going to be a signature on part of an AI executive order, but there was a big executive order on AI in education recently from the federal government, and I know this is something you think a lot about.
Your organization, Americans for Responsible Innovation, responded to that. Can you tell us a little bit about your take on the executive order, how the federal government is thinking about AI, and what you think they're getting right or wrong about it?
[01:00:05] Brad Carson: So the federal government's dipping their toe into the water a bit on AI, and as we speak today, recording this, they're issuing their AI action plan, which is kind of this national effort about disseminating AI through the economy, but also putting some reasonable guardrails around it.
And it's actually been a pleasant surprise to read the kind of thinking of the administration on this issue. But earlier on, right, they were looking at it kind of sector by sector, and they realized AI has a powerful role in education. And so they issued a kind of AI education executive order, mostly focused on K through 12 students, and the idea of, like, we want to have grants to states to help train teachers to use AI better, and we want to ensure the technology is fairly distributed across the country to all regions. And we want to also realize that there's gonna be a lot more effort now on workplace training that could use AI.
There may be more need for, um, internships and programs like that, so they're asking the Department of Labor and some of the other agencies to work on those projects, which seems like a very good thing. You know, K through 12 education's gonna be driven by the local and the state governments more so than the now-defunct Department of Education here in Washington.
But it seems like a good step. Again, it's gonna be up to the industry, to the sector itself to get it right, but it is good to see that this administration is pushing it and trying to help states get it right.
[01:01:21] Alex Sarlin: Yeah, it's been interesting to watch the sort of parallel policies from the federal government here, where the Department of Education has been under fire and is basically being emptied out from the inside.
There's also been a lot of funding that's been held up, especially at the K-12 level, but also at the higher education level. And there have been some colleges that have been under, you know, direct fire and attack by the federal government. So on one hand there's this, you know, clawing back; PBS and NPR just got billions of dollars pulled out of their budget as well, or a billion dollars at least.
So there's this sort of pulling back of money, but at the same time, some of the policy around AI and education is very ambitious, and they're saying this is exciting, people should lean into it, there's a lot of opportunity here. Um, how do you square the funding piece and the visionary piece, you know, the piece about how important it is and how we have to make all these changes?
[01:02:07] Brad Carson: Well, I think one thing that we can say about the people around the president on these questions is, like, they believe in AI, and if anything, they want it to be unfettered and widely distributed across every sector of the economy, including education. And that's why, with the AI action plan today, I mean, they want to make it a more important part of government.
And so, you know, they want to try to use AI to improve government services. So they're very eager to see AI proliferate, and so it's not surprising that they would support it in education. You know, because they see it as disruptive, right? A lot of these guys are venture capitalists, you know, they're tech types.
You know, they may have investments in EdTech or adjacent to EdTech, and most of them probably have the view, like a lot of Silicon Valley does: hey, you know, the American public school system is failing a bit, you know, and the numbers aren't that great. It's ripe for disruption. We tried charter schools and that was, you know, mixed results. Maybe AI, right?
Maybe this will, like, truly be that disruptive innovation. So I'm not surprised that they're big on AI in education, and it's to their credit that they want to get it out where everybody has access to it and has the training to use it.
[01:03:11] Alex Sarlin: Yep. This has been fascinating. I wish we had more time. I'm definitely gonna be following your work at Americans for Responsible Innovation, and I think, you know, we are at such an inflection point for AI in every industry, but especially in education. You have so many different players, from school districts to universities, to ed tech companies, to big tech companies, to the federal government, to state governments, all looking at the education space. And I think, you know, your work is very important in sort of finding the right balance of innovation moving forward, and finding those guardrails so that we don't, you know, cause some of the chaos that we've caused with technology in the past, even in the past 25 years.
It's really great to talk to you. This is Brad Carson. He's the President of Americans for Responsible Innovation, a nonprofit promoting safe, pro-innovation AI policy. Thanks for being here with us on EdTech Insiders. Alex, it's been my honor. We are here with Ryan Trattner, who is the CTO and Co-Founder of StudyFetch.
So StudyFetch was founded in 2023, when Ryan and his co-founder were just out of college, but the platform already counts more than 5 million users across K-12 and universities and has a number of pilot programs in the works around the country. It can also be credited with substantial improvements in student performance on key metrics.
And a few months back we talked to Ryan's colleague Sam Whitaker, which was a really great conversation. StudyFetch is a fascinating platform. Welcome, Ryan, to EdTech Insiders. Thank you so much for having me. Perfect. So before we even start, what is the elevator pitch for StudyFetch? What do you do, and why do you think you already have 5 million users within two years?
[01:04:44] Ryan Trattner: Yeah, so the elevator pitch for StudyFetch is honestly a pretty simple concept at heart, and then, you know, we've taken a lot of user feedback and improved the product since then. The core thing was, Esan and I basically interviewed a hundred people. We sat down with a hundred students and educators the summer before we released the product, and we actually didn't even start coding the product until, like, right before the school year, like I think the first two weeks. We spent all this time figuring out, like, okay, what should we actually build that people will like and will find useful? And the core thing is basically students were using AI, but it wasn't tailored towards what they were actually learning in class.
So most students are not learning just to learn. They're learning because they have some type of, you know, class that they're in, whether that's high school or college. So the core concept of StudyFetch is simple. It's basically where either an educator can curate the content that they have in class.
This could be PowerPoints, this could be PDFs, their lectures. Or a student can just drag all that information in, literally anything they have about the class, any context that's helpful. And basically StudyFetch lays out a full study plan for that student. So that's very similar to, like, Duolingo, how there's a path that you go down.
StudyFetch kind of does the exact same thing. So it gives students an A-to-Z place to go. Students can also, like, take notes in the platform, they can record lectures live, just a ton of ways to get information into it. And then StudyFetch will create study tools for those students as they go down this path.
So say they have a first path and they're learning about viruses, and then there's something on, I don't know, like, are they DNA or RNA, all that type of stuff, right? That would be a section. They click play, and then they get a basic selection, like, how do you wanna learn this? You can have flashcards, you can have a practice test, you can watch a video explainer that we generate.
You can have a one-on-one session with our AI tutor, which you can talk back and forth with and which can kind of present you the lecture slides. There's just a ton of different modalities for every student. And then slowly but surely, we've been adding insights and progress that stack, basically tracking students all the way through their usage of this platform on every single feature, and can basically tailor towards learning styles and strengths and weaknesses that students are having.
And then as an educator, of course, you get insights into how those students are doing. So you could ask, like, hey, what should I go over in class? What are a lot of my students struggling with right now? And if ten of them didn't know who George Washington was or something, then you as an educator would get that insight and maybe bring that up in class.
[01:07:01] Alex Sarlin: Yeah, you mentioned Duolingo, having a sort of personalized pathway created for the student in relationship to particular materials that are coming from their classes, and I think that sounds like a really rich consumer product. But you sort of play on both sides: you mentioned you have consumers, many individual students who use StudyFetch for their own studies and make their own personalized platforms, and you have an educator side where you can work within institutions.
Tell us about how those play together in StudyFetch and how you sort of manage both sides of that platform.
[01:07:33] Ryan Trattner: Yeah, we get that a lot, basically, like, how are you juggling two sides? I mean, on our end it's a lot of work, but I think we've set up teams that are capable on both ends, and so we're able to achieve it.
But the core thing at heart is the students, right? So the platform is built for students. The goal is to get those students better grades, to make them less stressed, less anxious, to save them time as they go through school. And I think both students and educators benefit from that. From a student angle, like, of course they're gonna benefit.
They come, they purchase the platform, and we give them all the tools necessary to pass their exam and to feel confident. From an educator's perspective, I think you want the same thing for your students. You want them to feel confident going into the exam, you want them to get better grades, and so StudyFetch kind of gives that at-home support for the students.
I think it bridges the gap between students who have, like, direct, really one-on-one tutoring and students who don't. I think it answers that question at three in the morning when there's no one there. There are so many different learning styles for students, and there's no way an educator, if they have a hundred people in the lecture hall, or 70 people, can individually tailor stuff to each student or answer every question possible.
Right. And I think StudyFetch allows them to do that. And then I think the big part, and something we're really, really focused on is insights and analytics on those students. So I think educators will be able to get a lot of information in class, like if a student takes a quiz or a test or completes an assignment.
But most of that learning happens at home, right? And it happens away from what educators are able to see. And so we're really trying to bring that view to educators, where they can now see what students are struggling with without students even having to ask. And even, going forward, creating plans to find students at risk before they even take the exam, and then setting them up with study plans or extra material or, like, help, and sending it directly to those students, making sure that before they even take the exam, before they're stressed out, before they get a bad grade, you know, you can help them.
And when they come to exam time, they're confident and they pass.
[01:09:31] Alex Sarlin: Yeah, that's really interesting. So I'm hearing you say that you're focusing on the needs of students during their study, and their study mostly happens outside of the classroom. And for those study periods outside of the classroom, most educators don't have access to that.
They don't know how students are studying, they don't know what kind of materials they're using. They're basically hoping and wanting the students to study enough, to put in the time, to put in the energy to make sense of the materials that they are laying out in the curriculum and the assignments. But StudyFetch sort of bridges the gap and makes all of that visible to the educator side and allows 'em to see who's studying, how they're studying, what they might be struggling with.
I think that's a huge power in AI. So you started this very young; as I said in the bio, you and your co-founder were just out of college yourselves. I'm curious if you brought your own insights about your own college experience and sort of combined them with the hundred users that you interviewed early on to figure out what the use would be like.
Is this a product that you wish you had had in school?
[01:10:28] Ryan Trattner: Yeah. Yeah. I mean, it's so funny, I dropped out a lot earlier than my co-founder Esan, who also left college early to finish this product. But I think, I mean, that was the core thing that really brought it to life at the start, right? It was this really deep connection with what students were struggling with at the time.
And we had the ability to talk to a hundred students who would be open with us, and they would tell us exactly what was going on, exactly what they needed to get through classes. And I think if we didn't have that, I don't know if the product would be the same as it is today. I dunno if it would be trusted by students and purchased by students.
Right? Maybe if we were building for educators, students wouldn't like it as much, right? But I think it's important, when we go sell to schools, that the product that you're giving students is something they would go get themselves if you didn't give it to them, right? Like, that's how powerful it would be.
And so for us, that's a great pitch: like, look, we have these 5 million people who are using it and going after it and downloading it themselves. And so when you give it in class, we actually have pretty good adoption rates among schools, right? We've never forced anyone to use StudyFetch in, like, a pilot or anything.
We haven't needed to, because it's just been a valuable asset to them. But I think what's extremely important to understand, outside of anything else, is, like, if you have a bunch of students: what do you need? What are you struggling with? And it also helps with growth, right? You build something, and the people are already there to get it the second it comes out.
And so I think it helped with distribution a lot. We grew very organically. We had, I think, a billion views on social media in our first year. And so we were able to build a bunch of stuff that people were gonna share with their friends right away, instead of us having to, like, force it on them. And then, as they go through the product, it's something that they come back to and use every day because they find it valuable.
I think we're still doing that, where we're trying to make sure we have a ton of student feedback from current students, both in high school, college, and post-grad, who are constantly going after different things in the platform and figuring out what is best. We also, I think, have noticed that as we get farther away, maybe, you know, a few years down the line, we need to make sure that our audiences actually understand that we will take their feedback, and so we have, like, a big feedback button ready for students on the site.
We listen and we respond, and the power users understand that if they submit feedback, even if we don't respond in two months, the feature that they requested, or the bug that they had, or the little tiny button they wanted improved, something gets added. They keep asking 'cause they know that we'll take it to heart, and I think that's the type of community that we're trying to build in terms of our users as well, to kind of keep that going farther and farther and farther.
[01:13:08] Alex Sarlin: Yeah, that's really interesting. Forgive me if this question, I'm formulating it in real time, I'm not sure it'll make total sense, so stop me if it doesn't. But one thing that I think is really interesting about the B2C landscape with AI products is a lot of it is at the point of use, like homework help apps. There's a whole series of many, many very popular homework help applications where it's like, you're wrestling with this particular assignment, we can help you.
You take a picture of it, or you explain it, or you upload something, and we'll sort of walk you through it. Well, ideally it'll help you learn it, not just do it for you, but it's a lot of sort of help at the point of use. What I think is a little different about StudyFetch is that it's really designed to create an ongoing study plan and a place you come back to, as you mentioned, every day to continue to learn what you're learning within the structure of a class or multiple classes.
It's not really about any one assignment as much as sort of being your partner alongside your learning journey. And that, I think, is really powerful. It also, I think, splits the difference a little bit between the sort of pure optimizing behavior that you see students doing, being like, I just wanna get it done, I just need the grade, and the studying behavior: you know, I want to actually absorb some of this material and make sense of it and incorporate it into my worldview and be able to learn it. And I think you sort of split the difference in a really interesting way. You also mentioned that you, yourself and your co-founder, dropped out of college because StudyFetch was doing so well and you wanted to go do this fast-growth startup, which I think is a common dream for many college students.
How do you square, and this is the question part, tell me if it makes sense, that idea of helping students get through their assignments, get through their classes, get to the other side of it, with the sort of real learning, the sort of engaging with the actual material? You mentioned the DNA-or-RNA virus example.
It's like, if a student wants to be a doctor or wants to do anything in medicine, they don't just wanna get an A on their exam, they also wanna absorb it. I know you think a lot about this, and you really try to make the product make sense for student needs. How do you square that "I just wanna get through it" feeling with "I wanna actually absorb and learn"?
[01:15:14] Ryan Trattner: Yeah, that's a really good question. I think, and it's difficult, especially in consumer, to make sure that you're on the right side of that line, right? I mean, if you go, you know, on TikTok and you tell a bunch of students that this will do their homework for them and they'll be done in 20 minutes, you'll get people to come to your platform and they will buy it.
And a lot of companies have been extremely successful with that,
[01:15:33] Alex Sarlin: right?
[01:15:33] Ryan Trattner: For us, that's, you know, something we deal with all the time. We see a video, it goes viral, it has 50 million views, and I'm like, we can't do that. It's just not happening. And so I think a lot of it has to do with why people are trying to do something really fast.
I think there's a few reasons. One of them is they don't feel confident that they would be able to do it themselves in time, right? And that's something we can go and solve for. The other one is that they don't have a plan to get it done, right? Like, they don't know where to start. I think that's something that StudyFetch, especially in our updates this summer, has really, really gone forward on, and we're really starting to push study plans more than we were before and trying to make them the core concept of the site, where you do follow that path the whole way down.
Because we noticed that was something where students don't know where to start. So even if you give them all the tools, right, there's nowhere to start; it's still, like, chaos. I think if you solve for, basically, you know, the anxiety and the stress around, like, completing an assignment, and you make a student feel confident that they could just get it done, and they know it themselves and they know it for the exam, then there's no reason to cheat, or there's no reason to, you know, get something to do it for you.
That's like the core thing. If you go into an assignment and you're like, this is easy, it's a piece of cake, like I can do this, you know, in 30 minutes because you know it. I think it's a lot easier for someone to go and just do that because they know it rather than have a tool do it for them. And I think it makes people feel more confident to do that.
I mean, as well, like, if you're having, you know, AI do it for you, when you turn it in, it doesn't feel like it's yours, basically. That's just one thing that we need to teach. I think that's something that we're stressing a lot, especially when these college institutions and K-12 institutions, like, buy AI products: you should not buy the productivity tool. Like, you know, maybe for the teachers; I think teachers having productivity tools is extremely important, right? Teachers are extremely overworked, so having those tools is necessary. But I think, in terms of students, basically taking those tools and making it all about understanding the material and feeling more confident going into the assignment, the exam, can kind of take away that need to get it done really fast.
And when we're designing products, we have to think all the time about, you know, look, do we want this user to succeed? I think, you know, from a business perspective as well, it's helpful, because, you know, a user who's actually actively learning in the platform will come back, right? They won't, you know, churn, and their retention will be good, but, you know, you're not gonna get that money quick, upfront, and you're not gonna get the influx of massive amounts of users coming to just get something done.
And so there's pros and cons to it, but I think we've consistently been saying no to features that are just gonna do it for you, and there's a lot of stuff that you can do fast. Like, for example, I think our Arcade games feature is a really good example of this, where we wanted to make something that was fun and exciting, but we wanted to make sure that it wasn't just students playing a game to play a game, that there was actually some type of learning going on there. And so everything we designed was around, like, okay, to get through this game, you're answering these questions. It's more so just a visually appealing, fun, and engaging way to do that.
And I think that's kind of the way you have to design stuff. It's just, like, what's the end goal? Are they gonna come back to the platform? And there's a ton of good business reasons to do it as well. Like, are they gonna come back to the platform because they got a good grade, they feel happy, they like the site, you know, and it helped them?
Then they'll come back, rather than coming back because they know it can do their homework in five minutes. Right. And the big thing is that learning does not happen in five minutes. Like, as much as people want to say, you can't digest something in a massively short period of time. We have some interesting, actually, like,
we've done a few, and I think they'll come out soon, but we've done a few, basically, not studies, but experiments around quick learning with StudyFetch. I think you can get a lot of value after sitting with our Tutor Me feature for eight hours a day, but then again, that's also, like, eight hours a day for two weeks or something, right? It's not like you're gonna go do it all at once. So with a short amount of time with one-on-one tutoring, I think you can go a long way. I think people have, you know, proven consistently that you can go a long way if you do have that one-on-one tutor, where you're talking back and forth around a topic, but the student's not gonna learn if they're doing it in five minutes.
And I don't think we're ever gonna push students to do that. And we've consistently, like, you know, built all of our AI around not letting them do that, too, and just designed features that we know will help them study. But if you put students in that environment and you give them all the tools, then there's no need to cheat, because they're confident and they understand the material.
They can go into it and they can do it. That's our big thing. If we can make students confident that they know the material, then there's no need to cheat.
[01:20:08] Alex Sarlin: Yeah, I think that's a really interesting and very nuanced answer. I appreciate it a lot. You guys are, I think, at the front lines of some of the most ferocious and complex debates happening in AI right now, and I love when you say, we focus on student success. What does it mean for a student to succeed?
That feels like the core question for me, right? What does success look like for a modern college student or a modern high school student who has AI tools? Does success mean to them that they got the good grades and got through it and did the least work possible, or does it mean that they actually engaged?
And I think your answer is really nuanced and makes sense, which is that the anxiety, the stress of not knowing where to start, the feeling of not having a plan or not knowing how to make sense of the curriculum and all the assignments you're dealing with, gets in the way of engaging. It gets in the way of making sense of the material and actually trying to absorb it.
It pushes people into a transactional relationship, because they're scared of the deadlines or the tests or things like that. I think that's a really interesting and nuanced point. And of course, because you work both with consumers, individual students, and with educational institutions, you have this modern bottoms-up approach where you could go to a school and say, you know, we have 5 million students, and 10,000 students at your university already use us.
You have to be really thoughtful about that type of feature, the homework helper, the solver, the snap-and-solve, I sometimes call it, right? The snap-and-solve type of feature, people are really afraid of it at the university and institutional level, and some students are really excited about it. But I think, you know, you're defining success in this interesting way: of course you wanna get good grades on the exam, and you wanna get your degree or get through school, but it's about the way you wanna get through it.
Success also means you actually enjoyed learning, you actually retained some of the material, you made sense of it, you were able to have fun with it and go deep with it. I like your answer, and I think it's one that the whole ed tech field is wrestling with, especially anybody that is consumer-facing in the AI world, because it's so complicated.
So you are growing very quickly. Uh, we've mentioned this a couple of times, but 5 million students in two years is a fast-growth product. And I'm curious, you know, as you've grown your user base this quickly, what has changed internally? You mentioned you still try to be very active in looking at feedback from your users.
You think a lot about how to make sure that you're building features, and focusing on features, that contribute to what your users actually want and define as success. But what has changed between, you know, the first six months, when you were just getting off the ground and starting to use social media to get noticed, and being, you know, 5 million students deep, having, I'm sure, a lot of ambassadors and enthusiasts and people who are posting about you organically on TikTok? What is different, and what would you recommend for other EdTech founders who are hoping to have that kind of growth trajectory?
[01:22:56] Ryan Trattner: It's an interesting question. I think a lot is still very similar. Up until around like a few months ago we were still like a small core team, like very, very small.
And I think it was purposeful, so that we could stay profitable and, you know, move extremely fast. There was no overhead; decisions could just be made. People would just get stuff done. I think, you know, at the start we were building a lot for virality, and I think what was good is that there wasn't a lot around that could do the functionality that we could do.
So whenever we did release something, we just posted about it and it went viral. And, you know, a lot of it was basically designed in collaboration with the marketing team: we'd talk to a ton of students and ask, what is the biggest problem they're having right now?
Because if we post about the biggest problem, it'll go viral without a doubt. And I think a lot has changed since then. Now, especially in the market we're currently in, which is these consumer ed tech AI products, there are so many, and there are just so many products doing the exact same thing from a marketing perspective.
And so I think in terms of the product trajectory, it's changed more into the depth of the specific features and the depth of the platform, rather than these kind of surface-level tools. We talk about it all the time: how we initially built the platform was to not basically just be one-shot tools that go from point A to point B, right? We actually wanted to capture the whole experience for the students, and wanted everything that we built to be integrated together, to be tracked together, to chart progress together, to feed into insights together. We're even going as far right now, because now we have two years of students, as to go back and start comparing: hey, remember that class from last year where your teacher said this? It's very similar to what you're learning right now. It helps jog students' memories; that's some of the product we're delivering shortly, right? I think we can look more into what long-term users look like on the platform
and how we can help them. And I think it helps us, too, if we're working with educational institutions. On that front, we can say, hey, look: there are very few AI products that have had a massive trajectory in two years in EdTech like us, right? There are very few that have that huge number of users, especially in higher ed, that are working in a positive manner.
There are a ton that are working, I would say, in a negative manner toward the student's benefit, but there are very few that can do that, right? And I think that's very helpful to us, where we can look at, long term, how students have been successful. We can perform research. You know, I think you talked with Sam about our research study last time.
You know, we can perform research like that, like how does this behavior change over time. We can take a large number of conversations, like a million conversations, because we have millions of users who can give us that data. And so I think that's the trajectory of the business: impact. We're looking more and more into what this looks like as a long-term product that actually helps students. I think what's great is we can now be much more mission-oriented as well. We've had this mission since the start and we haven't strayed from it, but now we can look into it a little bit deeper, because, you know, we don't have to be on every single thing.
Like, okay, we need this or we die. Whether that's a positive or a negative, I think we're still growing at a rate where there are a lot of decisions like that where, okay, they'll come soon; we'll get to it soon, but we can't do it yet. But there's a lot where we can ask: how does this impact actual students?
How are we actually being helpful? I think from the team perspective, we've done a lot with the product to make sure that we're working on very similar things throughout basically the entire experience, to make sure the beneficiary of the product is the student, and also that the educator is, you know, a curator of the material.
They can watch insights, they can watch analytics, but at the end of the day, the student is the one who needs to take that and get the best out of the same technology applied.
So I think we're starting to implement those things as we go forward too. But yeah, the team is small. We still listen to users every day. My co-founder and I are still involved in the product and the feedback and the support, consistently making sure everything goes well. It becomes harder; you can't listen to everybody when you have that many users, but you can try.
[01:27:17] Alex Sarlin: That was a great answer. Really interesting. I mean, when I see your trajectory and see how you're thinking about things, I get whiffs of some other really successful EdTech companies, the Quizlets of the world, the Duolingos of the world. It feels like you're doing something very, very interesting that is obviously catching on in the market: thinking about becoming a sort of hub for AI learning on the student side, rather than a transactional support system where you're just there to optimize, just there to get through this particular assignment. And I think that's a very smart play, and it's probably earning you not only lots of growth but probably lots of retention as well. I'm sure your students use the platform and then stay with it because they realize it's giving them the plans.
You can use it for multiple classes; maybe someday you'll go into workforce training and things like that. Ryan Trattner is the CTO and Co-Founder of StudyFetch. If you haven't checked it out, check it out. See some of these features he's mentioning in action: the gaming features, the Pathways, the study plans. And they are just celebrating 5 million students.
Thanks so much for being here with us on EdTech Insiders. Thank you so much. Thanks for listening to this episode of EdTech Insiders. If you like the podcast, remember to rate it and share it with others in the EdTech community. For those who want even more EdTech Insiders, subscribe to the free EdTech Insiders Newsletter on Substack.