Edtech Insiders

Week in Edtech 8/27/25: Teen Suicide Linked to ChatGPT, Bill Ackman’s Alpha School Debate, OpenAI Expands in India and Eyes UK Deal, Anthropic’s Higher Ed Report on Augmentation vs. Automation, and More!

Alex Sarlin and Ben Kornell Season 10


Join hosts Alex Sarlin and Ben Kornell as they navigate a week of heavy headlines in education technology—from AI risks and teen safety to global expansion moves by OpenAI and new research from Anthropic.

Episode Highlights:

[00:02:20] AI panic in the headlines, with concerns about teen mental health, suicide, and youth dependency
[00:06:33] AI’s impact on job opportunities for new college graduates
[00:08:00] Comparing AI anxieties with past moral panics about video games, music, and social platforms
[00:14:14] Why AI guardrails in school tools may be the edtech industry’s biggest value-add
[00:18:54] Bill Ackman’s New York Alpha School fuels debate over AI-driven education models
[00:22:20] The risk of Alpha School becoming the “face” of AI schooling, for better or worse
[00:25:28] OpenAI expands globally with a new Head of Education in India and a potential UK-wide ChatGPT deal
[00:27:26] Anthropic’s higher ed report shows educators using AI more to augment than automate

😎 Stay updated with Edtech Insiders! 

🎉 Presenting Sponsor/s:

Innovation in preK to gray learning is powered by exceptional people. For over 15 years, EdTech companies of all sizes and stages have trusted HireEducation to find the talent that drives impact. When specific skills and experiences are mission-critical, HireEducation is a partner that delivers. Offering permanent, fractional, and executive recruitment, HireEducation knows the go-to-market talent you need. Learn more at HireEdu.com.

Every year, K-12 districts and higher ed institutions spend over half a trillion dollars—but most sales teams miss the signals. Starbridge tracks early signs like board minutes, budget drafts, and strategic plans, then helps you turn them into personalized outreach—fast. Win the deal before it hits the RFP stage. That’s how top edtech teams stay ahead.

As a tech-first company, Tuck Advisors has developed a suite of proprietary tools to serve its clients better. Tuck was the first firm in the world to launch a custom GPT around M&A.

If you haven’t already, try our proprietary M&A Analyzer, which assesses fit between your company and a specific buyer.

To explore this free tool and the rest of our technology, visit tuckadvisors.com.

[00:00:00] Alex Sarlin: We're so new to AI that we just throw these general purpose, you know, ChatGPT and Claude and Gemini are these all-purpose apps, and they're just in front of everybody. And of course you're gonna see this happen. And of course, to your point, you're gonna see exactly that kind of thing where, yes, they knew, meaning, like, X number of the 50 million conversations happening in ChatGPT every day were happening in this way by users under this age. And yes, on some level they have to be responsible for that. On another level, we can mature the products themselves and the availability to get more of a spectrum.

[00:00:36] Ben Kornell: I think this is showing that that's far more value-added than maybe the people who criticize ChatGPT wrappers would've thought. So, great point. It's a doom and gloom episode, but out of every crisis comes opportunity. And if you go to schools and say, look, this is the reality of what kids are gonna be doing with AI at home if they're not given a tool that is responsible and productive.

[00:01:08] Alex Sarlin: Welcome to EdTech Insiders, the top podcast covering the education technology industry, from funding rounds to impact to AI developments, across early childhood, K-12, higher ed, and work. You'll find it all here at EdTech Insiders.

[00:01:24] Ben Kornell: Remember to subscribe to the pod, check out our newsletter, and also our event calendar.

And to go deeper, check out EdTech Insiders Plus, where you can get premium content, access to our WhatsApp channel, early access to events, and back-channel insights from Alex and Ben. Hope you enjoy today's pod.

Hello, EdTech Insider listeners. It's another week in EdTech. I'm here with Alex Sarlin, and we've got a bummer of an episode for you today. You guys know us for our optimism and our excitement, but man, the headlines have been a little bit of a drumbeat of downer news. And of course, if it happens in EdTech, you'll hear about it here on EdTech Insiders.

So we're gonna cover it. And the overall arc is still optimistic about transforming education, yes, for sure. But man, it's been a tough week, Alex. How are you hanging in there?

[00:02:20] Alex Sarlin: I'm okay. I just get worried when the number of headlines about AI, or AI and education, starts to feel moral-panicky, like people are looking for ways that things are going off the rails.

And that's what journalists, and that's what everybody, is sort of hungry for. That just makes me unhappy. It makes me feel like we're sort of missing the forest for the trees, looking for the short-term splashy headline rather than the longer-term arcs. And I feel like that happened a lot this week. But I think it is worth talking about some of these headlines and some of these things that are going on, just 'cause they're part of the zeitgeist, and I think they're part of what everybody is navigating right now, including everybody in education.

Where should we start, Ben? 

[00:02:57] Ben Kornell: I mean, I think the alternative frame is that AI could be for good, but man, is it gonna have a lot of negative externalities. Yeah, and I think the main headlines are: this is gonna negatively affect jobs for new college grad job seekers. So there's a bunch of headlines around that.

Second, this is going to dramatically affect mental health, and as much as we've covered some of the positive use cases and some of the research, there was a teen suicide that was related to ChatGPT messaging, and so there's a lot of dialogue on that. And then you also have: who gets control of AI, and how does this work?

A story came out around Meta helping build China's DeepSeek; a whistleblower has talked about Llama being kind of a foundational piece of DeepSeek. And so the net-net is, like, at a societal level, are we gonna have oppressive regimes fueled by AI, with kids who are struggling with mental health, who can't get jobs because AI has taken them?

Right. We can dive into each headline, but that's the counter-narrative to "AI is progress."

[00:04:07] Alex Sarlin: Yes. And I would add to some of this. Actually, let's talk about that first. There's news about the Alpha School, the New York Alpha School being opened by Bill Ackman, who's somebody who's, like, a hedge fund finance guy who's been very political.

I think that's another aspect of this: the feeling of the politicization of AI, and people starting to feel like believing in AI has a political bent. This has happened with cryptocurrency and Web3 as well, where the technology takes on a political tinge, and I think that concerns me as well. But let's talk about the suicide and the companionship, and the idea that people are starting to really raise the alarm about AI being a dependency for students, or something that sort of replaces human connection or can lead to really scary outcomes like teen suicide. I mean, you remember, Ben, early on in this podcast, I kept saying, hey, one of the main reasons why we in the EdTech community have to keep sounding the positive alarm about the positive players, about all the cool things that can happen with AI, is that there are going to be negative things.

There are going to be things that happen that people can track back to AI, because it's such a powerful technology. And I mentioned suicides even back then as one that's gonna happen. And now we're seeing this in real time. You're seeing at least one teenager, and we've seen a little bit of a drip of this over time, commit suicide, and then people look and say, oh, this teenager has been talking to ChatGPT and confessing to it and having these deep conversations. And it makes people jump to that sort of causal conclusion of, oh, ChatGPT is causing teen suicide and causing this kind of dystopian future that you just named: no jobs, isolated, no friends, feeling dependent and miserable. I just don't buy it, for a lot of reasons, but I get why that's scary.

I mean, of course it's scary. And we all have lived through this social media revolution, and we've lived through the internet and cell phones, and basically seen technology, no matter how many people say it's this wonderful thing that's gonna connect the world, having all these sort of dark undercurrents that then take over parts of society.

So I get the pattern-matching, and I get why people are very nervous about this. That said, I just really think it's such a mistake to start looking at this technology so early, when we don't even have the core tools yet, frankly, that are gonna be used, and to just jump to the biggest negative conclusions.

I don't buy it, but, I don't know, I get it too. What do you think? Like, when "OpenAI says it's gonna make ChatGPT safer after parents sue" after the suicide is a story this week, what do you think this is gonna lead to?

[00:06:33] Ben Kornell: This actually is an age-old debate. Like, is it technology or is it human beings?

Like, is the tool itself good or bad, or are the humans that use it using it for good or bad? And where's the accountability, and where are the safeguards, and so on. And I think we saw, with all of the depression, teen suicide, and mental health issues with social media over the last decade, that's been litigated in, like, public forums.

[00:07:03] Alex Sarlin: Yeah. 

[00:07:04] Ben Kornell: Where the social media companies have in general said, it's not our fault, like, we are just the tech, and what happens with the tech isn't our responsibility. But there's been some legislation and some gains in terms of people having more moderation and more flags. But I would say the social media experience hasn't given a lot of people confidence that this is going to be handled well.

And anytime the AI companies have attempted to crack down on what the AI can say, it's been met with a lot of user backlash around the restrictions for these safety protocols actually holding back other use cases. So I think this is going to be a theme and a through line forever with tech, and AI just happens to be the latest tech. Exactly. And it happens to be incredibly powerful tech.

[00:08:00] Alex Sarlin: I mean, you look back at the famous Bill Clinton moment blaming Sister Souljah, at 2 Live Crew, you know, there was a long backlash against music influencing the youth. There was a backlash against video games influencing the youth, against Beavis and Butt-Head.

Like, this is just such a perennial, like you're saying, it's such a perennial issue, where people want to point a finger at something in culture and say, this is what's causing the problems. Yeah. And the issue is that social media has passed that bar, right? That was not a false flag. That was not, like, a few people said, hey, it's the phones and the social media sites that are doing it, and then it went away. That one actually stuck. It's real. There's a lot of research behind it now. And I think you see the pattern of the sort of silly things, where people just blame whatever is in the news for what's causing the problems with their kids, and then you see this one big recent one that has been so influential and mattered so much.

It's documented by the Jonathan Haidt type of research: it's caused systemic anxiety, depression, suicide, bullying, isolation. And it's like, well, if that one is real, people are just trying to figure out where this one fits in, don't you think?

[00:09:07] Ben Kornell: Yeah, I do. And by the way, for those playing at home with your bingo card, if you've been waiting to play the Beavis and Butt-Head square, this is your episode.

It's true. I never expected that to be one of our references on EdTech Insiders, but I love it. Yeah, you're good. I mean, look, this is Ben talking: I think the ultimate answer is that we're gonna get to a place where we just have to have guidelines that kids should wait a little bit longer than they would want to before using technology.

There's just, like, a brain development issue here. And on cell phones, there's this big Wait Until 8th movement, you know. Basically, the number one tactic that society and families can use to best support the good use cases of any technology, AI, social media, and so on, is just delay. And once you've gotten to a better cognitive development state, good luck to you. And I think, misinformation, there's a lot of ills in the 18-and-above range that I think we still have a lot of work to do on, but I think kids are particularly vulnerable. And the European Union, I think, is kind of on it when it comes to safeguarding kids, and the US has been a lot more Wild West-ish, and so we'll have to see how this comes out. But I expect a lot more exposés where they say, OpenAI knew that kids were using it for this, or Anthropic knew that kids were, and it's like, the data is so immense. Yeah. And the user stories are so diverse that, of course, if you want to tell the story that kids are having sex using AI, or kids are having suicidal thoughts, like, they're having those things independently, and there is a reasonable argument that AI could be amplifying those things, just as social media has.

And so I just think it's gonna be very hard to litigate this as a technology problem. It needs more of a societal solution.

[00:11:16] Alex Sarlin: There's also a case to be made that "AI", quote unquote, I'm doing air quotes here, covers a lot of different types of interactions. So, like, I think of the edtech companies that do, like, personalized decodables for kids learning to read.

That's a fantastic use for AI, and certainly for kids under eighth grade, under 13, that is a different use case than sitting there with an open-ended chatbot that says, hey, what do you wanna say to me? You're 12 years old. Anything you say, I'm gonna respond to, and we can get into a relationship or conversation or sext each other. Like, that is a totally different thing than so many different AI use cases that are controlled. And I know you're saying, Ben, that the freedom to do whatever you want in AI is sort of balanced with the affordances, right? If you limit it and say, oh, you can pick a character and a setting and a topic and we'll make you a really fun story or a storybook, that's a totally different level of freedom.

But I think that's where we're gonna have to go. And I don't think that's the end of the world. I mean, that's happened with lots of different media. On YouTube, you have horrific things, and then you have YouTube for kids. You know, movies have ratings. Like, we learn with any new media. There are books that have mature labels on them.

Like, with any medium or type of interaction, you can do a huge amount with it. And then people start to realize there are different categories and genres and age ranges and ways to handle it, so that you actually make it work in society. We're so new to AI. We just throw these general purpose, you know, ChatGPT and Claude and Gemini are these all-purpose apps, and they're just in front of everybody. And of course you're gonna see this happen. And of course, to your point, you're gonna see exactly that kind of thing where, yes, they knew, meaning, like, X number of the 50 million conversations happening in ChatGPT every day were happening in this way by users under this age.

And yes, on some level they have to be responsible for that. On another level, we can mature the products themselves and the availability to get more of a spectrum. I think we are at the forefront of that, right? EdTech is about that. EdTech has been about making the internet safe for classrooms, right?

Or being able to use video in classrooms. And it doesn't mean just any video should be played in a classroom. I mean, you get educational video, you get appropriate video. So I think we have a lot to say about that. And I just worry, I don't know, I'm beating a dead horse here a little bit, but I really worry when you get to a sense, like, a quote really stood out to me.

There was a really good Ezra Klein op-ed this week about AI, about ChatGPT-5 specifically, about how he was like, people are sort of saying that this is a bad rollout, but, wow, I think this is an incredible product and something that really pushes AI forward. But he mentions how, like, in certain areas of the punditry or the intelligentsia, it's just, like, out of fashion to be pro-AI right now.

And that really scares me as somebody who says, like, we've hardly scratched the surface of what AI can do for education. And if it's just, like, uncool to think that AI has any good in it, we're in trouble, right? We just need to keep the narrative

[00:14:14] Ben Kornell: balanced. Yeah. For our audience here, this should be a wedge to say: we, the edtech people, are the ones that are purpose-built for schools and learners, right? I think, one, I agree with you on the broader narrative. We're swinging too far one way or the other, and we're not capable of nuanced views anymore, across basically every front. Yeah. But if I am sitting in the seat of an edtech person trying to make the case to use my tool versus just a generalized OpenAI or Anthropic, this is making my argument for me.

Yes. And I actually think there's a powerful pedagogical reason to institute AI use cases, and anybody who wants to check out our library of use cases can check it out on edtechinsiders.ai. But you can basically point to all these productive uses of AI without any of the downside of freeform chat. And my hope is that this overall narrative doesn't scare people away and make AI a dirty word, like personalized learning became a dirty word.

Or like whatever the fad words are that you have to reframe all the time. Yeah. This is not the moment to throw out AI as a concept. Kids and educators need productive access to AI as part of their workflows, but not unfettered access with a generalized model. Whether that's all the way true or whether it's the sales pitch depends on your product.

And by the way, the people that I know who are building some of the best products, their underpinning is these large language models. So they're getting all the benefit, but then they're putting all the guardrails on top of it. Exactly. And I think this is showing that that's far more value-added than maybe the people who criticize ChatGPT wrappers would've thought.

So, great point. It's a doom and gloom episode, but out of every crisis comes opportunity. And if you go to schools and say, look, this is the reality of what kids are gonna be doing with AI at home if they're not given a tool that is responsible and productive.

[00:16:27] Alex Sarlin: Right? Because if the reaction to open, unfettered access to these consumer tools like ChatGPT and Claude is that parents get afraid and say, no, I don't want you to do it until eighth grade, or until, you know, a particular age, then you're taking the student out of that AI world at a time when we all know that these AI tools are incredibly powerful and really important. So where can you get AI in a safe, usable, appropriate, private way?

I think that's such a good point, that those guardrails are exactly what the EdTech ecosystem can really do. And you know, it's funny, like, of all the different types of technologies that have come along, AI is actually, I think, potentially one of the ones where it could be easier to create those guardrails.

Because if you're creating, you know, a tutor chatbot on the side of your Canvas instance, you just set those rules really hard. You say, hey, if the kid ever says anything that has anything remotely to do with X, Y, Z, sex, violence, bullying, blah, blah, blah, you just shut it down, you escalate, you do the thing.

Like, those rules can be really hardcore. They just can't do that in ChatGPT, right? ChatGPT is not gonna say, oh my gosh, you said something that evokes sexuality in the slightest way, I'm shutting off your account. Right? They can't. That's just not what you can do if you're a general-purpose tool. But you can do that, you a hundred percent can do that, if you're a school tool. And you should, you're expected to do that if you're a school tool. So I love your point. I think this is a real value-add for the EdTech industry. And I think the other value-add for the EdTech industry is that we can still keep shouting from the rooftops that for every teen who committed suicide, because of, I don't think it is even because of ChatGPT, but committed suicide after talking to ChatGPT, there's a student who created, you know, an app that allows cars to see deer and avoid them on their own, at 15 years old. Like, that story is also true. That's also part of the AI story, and I think those things just do not get covered.
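[Editor's note: below is a minimal sketch of the hard-guardrail pattern Alex describes, screening every student message before it reaches the model and escalating to a human instead of engaging. Every name in it (flag_topics, notify_counselor, call_llm) is a hypothetical placeholder, not a real API; real school tools use trained safety classifiers and human review rather than keyword lists.]

```python
# Toy illustration of "hard guardrails" in a school-facing chatbot:
# screen every student message before it reaches the model, and
# escalate to a human instead of engaging with a flagged topic.

BLOCKED_TOPICS = {"self harm", "violence", "bullying"}  # illustrative list only


def flag_topics(message: str) -> set[str]:
    """Naive keyword screen; a stand-in for a real safety classifier."""
    text = message.lower()
    return {topic for topic in BLOCKED_TOPICS if topic in text}


def notify_counselor(student_id: str, topics: set[str]) -> None:
    """Hypothetical escalation hook; a real product would alert a human."""
    print(f"ALERT: student {student_id} flagged for {sorted(topics)}")


def call_llm(system_prompt: str, user_message: str) -> str:
    """Placeholder for whatever LLM backend the edtech product wraps."""
    return f"(model answer to: {user_message!r})"


def tutor_reply(student_id: str, message: str) -> str:
    """Only safe, on-topic messages ever reach the general-purpose model."""
    flagged = flag_topics(message)
    if flagged:
        notify_counselor(student_id, flagged)  # shut it down, escalate
        return ("I can't help with that here, but a trusted adult at your "
                "school has been notified and can talk with you.")
    return call_llm("You are a math tutor. Stay on topic.", message)
```

[The point is the asymmetry Alex names: a school tool can afford to over-block and escalate to a human, while a general-purpose consumer app cannot.]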

[00:18:17] Ben Kornell: You and I are probably among the few people who actually read all of the details on this. Like, there were countless times where ChatGPT said, you should go get help.

[00:18:28] Alex Sarlin: Yeah. 

[00:18:28] Ben Kornell: And the student, the teenager, intentionally jailbroke it, or, like, basically used a fictional frame: this is just for a story, this is just for creative writing. I also think we need to figure out what the thresholds are. But based on what you're saying about your enthusiasm for AI use cases, you must be a huge fan of Alpha School and everything they're doing.

Tell us more about your incredible fandom of Alpha School.

[00:18:54] Alex Sarlin: This is why this is a little bit of a downer episode, one of the reasons it's a downer episode for me. Like, we interviewed MacKenzie Price on the podcast and she was terrific. I really liked her. She's doing this 2 Hour Learning, and she's doing Alpha Schools. But it was announced this week that the New York Alpha School is gonna be opened by Bill Ackman, who's this extremely polarizing political figure right now, really one of the most. And I'm getting a little nervous that what's starting to happen is that this Alpha School model is starting to get headlines because it's so catchy. It's such an extreme version of what school could look like. Their whole message is two hours of, like, formal learning, basically led by AI, and the rest of the day is these beautiful projects and this collaborative learning. And it's a polarizing vision.

It's exciting for a lot of people. I think there's something really exciting about opening up school in that way. It's scary for other people. It's catchy, is the point, right? It's something you can't ignore. It puts a very clear definition on what AI school looks like. And after hundreds of interviews on this podcast, like, there is no one definition of what AI school looks like. You know, the whole EdTech ecosystem, hundreds and hundreds and hundreds of startups all over the world, are trying to, you know, figure out what this would look like. And they're like, we have a model.

And it reminds me a lot of some of the schooling models of the past. You said personalized learning became a dirty word. Well, part of why personalized learning became a dirty word is because it was sort of co-opted entirely by certain players in the ecosystem who made it their watchword and said, we own personalized learning, and this is what it looks like.

And if you do that, and it doesn't work, or it doesn't, like, very obviously work, then you are sinking the term. You're sinking the whole concept. And so I'm worried that this Alpha School model is going to become synonymous with AI schooling, and also start to carry this sort of libertarian, somewhat right-wing, "this is what we can do to escape public schooling" narrative. We'll create these private schools that are really AI-focused. And it's just a narrative that feels really creepy, and one that, I think, could really sully the entire concept of what could be true about AI school, which is, like, incredible. I mean, you know, we spend all our time here thinking about how amazing it could be.

And if they're like, no, it's exactly this, then, oh man, it could just go off the rails. I'm worried about Alpha Schools, which is why your intro was ironic.

[00:21:17] Ben Kornell: Yeah, I mean, great points that you're making. And, you know, this is where I think you and I feel a sense of stewardship for the edtech space, where we see the potential and we also don't want it to go off the rails.

And your point about synecdoche, it's one of those, like, literary terms where the part stands for the whole, right? And, like we said, we're incapable of nuance, and basically every era of education innovation has these, like, singular organizations or figures that stand in for the moment. And they both represent the full potential, but they also, like, absorb the full criticism.

And I think Sal Khan, really, like, you know, a year and a half, two years ago, was our singular figure here. And now Alpha School is stepping in. Yes. To become the school model figure. Yep. And I'd say Diane Tavenner and Summit was, like, a stand-in before, or, you know, the Zuckerberg-backed model. Because that cut through on a media standpoint, right?

[00:22:20] Alex Sarlin: Amplify, Joel Klein and the Amplify model. Yeah,

[00:22:23] Ben Kornell: totally. Now, here's what I would say in Alpha School's defense. Number one, I think all of the focus has been on the learner experience, with this kind of, like, go to the gym for two hours with the AI and then do freeform project-based learning and competency-based learning in the afternoon. And that kind of combo, when you genericize it at that level, that's what I'm seeing a lot of micro schools doing. KaiPod is doing it, Prenda is doing it, Acton schools are doing it. And Mike Yates had this article, "Ask me about Alpha School, but don't just ask me about Alpha School." Right? 'Cause I think there's a bigger movement here, and it's not all around the specifics of one particular model, but around this idea of, like, when is AI good, and when does AI free up your time?

Second point on Alpha School: the intentionality around the educator, I think, is the story that people are missing, which is that educator time is wasted on these, like, super mechanical tasks, like drill-and-kill math or basic reading fluency, and where educators shine and where they have passion is around these really deep projects that are about building skills and collaboration. And right now our teachers are burning out 'cause they have to do all of the above. There's so much of that in Alpha School's model. I've met with Joe Liemandt a couple of times. They realized, we can't scale superhero teachers, because as we grow new locations, we burn out the ones we had before.

So that was a driving force. And, you know, this was before AI happened. They were kind of like, how do we productize the skill-building part, and how do we give the creative educator space? And then, I think, they also went with a tuition model where they're not operating under huge constraints, so it's 40 to $60,000.

This is not something that your school down the road can just pull off instantly, right? This is really, really well resourced. And so that, to me, feels like, what an opportunity to shift the conversation to what teaching looks like in the future. And I actually think we'd get a lot more agreement, yes, on that. And Alpha School could be a wedge. But of course, you know, the kind of ESA movement, the politicization of education, the Republican Party being in control in a lot of states, and, to be frank, overall dissatisfaction with the average level of public education. Yes. You saw it in the Bellwether report.

Yes. I mean, these things all fuel the, quote unquote, narrative that I think sets them up for one of those... We have the inverse of a hero's journey in education. We rise to the apex, and then there's the catastrophic fall afterwards. And they're being set up for that, whether they want it or not.

Big time. 

[00:25:28] Alex Sarlin: Big time. And we've seen that a lot with tech figures as well. That was really well said. And I think this feels like the first chapter of that build-them-up-and-tear-them-down narrative, where you make somebody the face of a movement and say, this is what this is, and then you tear it down and thereby sort of try to tear down the entire movement, tear down the entire concept.

And meanwhile, so many educators, so many EdTech companies, are out there trying to figure out how to make this work in really nuanced ways. And I think MacKenzie Price is too. This isn't about them personally. It's just about the concept of, like, let's put these people up as the avatar of AI schooling.

It's very dangerous when you have so many really interesting people doing really interesting things. A couple of other interesting things happened in the AI, and partially education, space this week. I mean, we saw OpenAI name a new head of education initiatives in India. And this is actually another ex-Coursera person, somebody else I worked with in my time at Coursera. I'm sure they came in through Belsky, who's the head of education at OpenAI, and Kevin Mills.

Raghav Gupta is now gonna be the executive, as OpenAI's head of education in India. And, you know, India is an enormous country, still with a very young population, very, very education-obsessed, with not nearly enough good educational infrastructure at any level, higher ed or K-12. And there are humongous opportunities there for educational improvement.

So we should definitely keep an eye on the space as OpenAI goes there. Meanwhile, we saw a related headline this week from The Guardian, saying that there is potentially a deal to get ChatGPT Plus for the entire United Kingdom. This is obviously early, early, this is not pinned down at all.

But when you look at these two headlines in parallel, the idea is that OpenAI is looking to get bigger and bigger. They're already working with Estonia, right? They're working with bigger and bigger countries, and they're also laying groundwork in India, especially around education. You can see that they have, you know, true global aspirations. They're taking over the world. What do you make of these headlines about India and the UK for OpenAI?

[00:27:26] Ben Kornell: I mean, I think there's lots of opportunity broadly. And, you know, I'm more optimistic about educational transformation happening in the developing world first, before the US, because when you're competing against, you know, brick-and-mortar schools that are as well resourced as they are in the US, as much as we literally just talked in the prior segment about how sentiment is down on them, it's hard to outperform that. But it's kind of the innovator's dilemma, where you get something that is inferior to the current product but is on a growth trajectory that's better. And I think that there's a real opportunity there.

I thought Anthropic's moves were really interesting this week. Drew Bent, who you interviewed, and what an amazing interview, you should stop listening to us right now and just go listen to that interview, 'cause it's really great stuff. But the question that Drew poses is, like, how are educators using this, and what are the modes? And I really appreciated the framing of, kind of, are we using it to automate, or are we using it to augment? And so much of the data shows that educators are using AI to augment. And so this narrative of replacing teachers versus teachers using it as a toolkit, I find the data really fascinating.

Most of his data comes from higher education, so it does seem to me like the adoption on the augmentation side is strong in higher ed. I don't have a strong sense of the contrast here. I think it's really exciting to see how they're doing it. And, you know, it does feel like Anthropic has done a savvy play with their B2B of, let's not make the headline, let's just do really good work embedded as tooling. They've done a really savvy job with that.

[00:29:15] Alex Sarlin: Yeah. This is, you know, a new education report. It just came out a few days ago. It's basically 74,000 anonymized conversations from higher ed professionals around the world, as well as a partnership with Northeastern University, which they had announced.

And some of the headlines, I agree with you, it's not the most headline-grabbing, catchy, like, hey, we just blew the lid off something with this particular stat. But it's pretty deep thinking. Basically, the top three use cases are developing curricula, conducting academic research, and assessing student performance.

And then they look at each of those. They have common requests within each of those buckets, and then, within those, they sort of determine what percentage is an augmenting use case, you know, actually doing more than you could do before, versus automating what you could already do. It's a pretty detailed framework about how to start thinking about education in the classroom.

They talk about some of the things that educators create. They talk about different tasks, like managing finances and fundraising activities; that's the one that is sort of the most automated, this sort of financial structuring. But then they go all the way to teaching.

And teaching is one that's almost entirely augmentative and not automated. They're not automating the teaching with AI. Mm-hmm. And then there's everything in between. It is really interesting, and Drew is really fascinating in the way they think about this. I think Anthropic is doing a really good job of framing itself, and being, sort of, the responsible frontier model provider.

[00:30:36] Ben Kornell: For a downer episode, I think this report is a good note to end on, because it points out exactly what you were saying. The overall narrative, we're kind of losing the thread here, but the Anthropic drumbeat is: practical, integrated, like, enabling, not threatening, replacing, or disrupting in negative ways.
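[Editor's note: a rough sketch of the augment-versus-automate tally described in Anthropic's report, determining what share of conversations in each task bucket augment rather than automate. The records and counts below are invented for illustration only; they are not figures from the report.]

```python
from collections import Counter

# Each anonymized conversation gets a task bucket and a mode label:
# "augment" = doing more than you could before; "automate" = offloading
# work you could already do. These records are invented placeholders.
conversations = [
    {"task": "developing curricula", "mode": "augment"},
    {"task": "developing curricula", "mode": "augment"},
    {"task": "assessing student performance", "mode": "automate"},
    {"task": "managing finances", "mode": "automate"},
    {"task": "teaching", "mode": "augment"},
]

# Tally augment vs. automate within each task bucket.
by_task: dict[str, Counter] = {}
for convo in conversations:
    by_task.setdefault(convo["task"], Counter())[convo["mode"]] += 1

# Report the augmentation share per task, the report's key framing.
for task, modes in sorted(by_task.items()):
    total = sum(modes.values())
    augment_share = 100 * modes["augment"] / total
    print(f"{task}: {augment_share:.0f}% augmentation across {total} conversations")
```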

For those of you who come to our events, we have a San Francisco event coming up on Tuesday, September 16th. We hope to see you all there, and then we'll be at New York EdTech Week in mid-October. Please reach out and find us, and also check out our sponsor links. They help keep the lights on and pay for the incredible staff who make all of this happen behind the scenes.

And if it happens in EdTech, you'll hear about it here on EdTech Insiders. Thank you all for listening.

[00:31:26] Alex Sarlin: Thanks for listening to this episode of EdTech Insiders. If you liked the podcast, remember to rate it and share it with others in the EdTech community. For those who want even more EdTech Insiders, subscribe to the free EdTech Insiders newsletter on Substack.
