Edtech Insiders

Week in EdTech 8/13/25: Back-to-School Uncertainty, B2B vs B2C Learning, GPT-5 Backlash, School Choice Surge, Google’s Gemini Power Play, and More! Feat. Evan Harris of Pathos Consulting Group, Becky Keene of AI Optimism & Max Spero of Pangram Labs

Alex Sarlin and Ben Kornell Season 10


Join hosts Alex Sarlin, Ben Kornell, and guest co-host Matt Tower from Whiteboard Advisors for a back-to-school edition of Week in EdTech, covering market shifts, Big Tech’s push into education, the GPT-5 rollout, and the rising challenge of deepfake abuse in schools.

✨ Episode Highlights:

[00:03:40] Back-to-school funding uncertainty and “back-to-basics” mentality
[00:06:40] B2C vs. B2B: consumer learning grows while institutions tighten
[00:09:31] Big Tech moves from infrastructure to competing in applications
[00:13:40] GPT-5 rollout backlash and prioritization of B2C users
[00:17:34] Rethinking schools’ role: raising the floor vs. raising the ceiling
[00:19:52] School choice and flexible student pathways through ESAs
[00:20:58] Proposal for an AI user Bill of Rights
[00:27:36] Media panic: critiques of AI in education from major outlets
[00:34:19] SoftBank executive buys stake in UK university
[00:38:10] Workforce training gap: Google and Microsoft invest billions
[00:39:12] Google’s Gemini leads in image and video generation 

Plus, special guests:
[00:49:18] Evan Harris, President of Pathos Consulting Group, on deepfake abuse in schools and crisis response
[00:56:02] Becky Keene, author of AI Optimism, on AI literacy for teachers and classroom integration
[01:03:47] Max Spero, Co-Founder and CEO of Pangram Labs, on building future-ready schools with emerging tech

😎 Stay updated with Edtech Insiders! 

🎉 Presenting Sponsors:

Innovation in preK to gray learning is powered by exceptional people. For over 15 years, EdTech companies of all sizes and stages have trusted HireEducation to find the talent that drives impact. When specific skills and experiences are mission-critical, HireEducation is a partner that delivers. Offering permanent, fractional, and executive recruitment, HireEducation knows the go-to-market talent you need. Learn more at HireEdu.com.

Every year, K-12 districts and higher ed institutions spend over half a trillion dollars—but most sales teams miss the signals. Starbridge tracks early signs like board minutes, budget drafts, and strategic plans, then helps you turn them into personalized outreach—fast. Win the deal before it hits the RFP stage. That’s how top edtech teams stay ahead.

Tuck Advisors is the M&A firm for EdTech companies. Run by serial entrepreneurs with over 25 years of experience founding, investing in, and selling companies, Tuck believes you deserve M&A advisors who work as hard as you do.

[00:00:00] Alex Sarlin: They look at what people are coming to their platform to do, to the general purpose platform. And a huge number of people are coming to the platform to do education. Or, to your point, Ben, learning: they're trying to learn or study something. So they're like, there's a huge opportunity, a product opportunity here to create a purpose-tailored solution for learners.

But that doesn't mean that they're actually doing education. 

[00:00:22] Ben Kornell: But I think for the user community, where they're like, wait a second, I was building this relationship with this model and now I've got a new one. This idea of more control from users and more ability to exercise their choice, I think, is gonna be a real tension.

[00:00:39] Matt Tower: The way that an EdTech company can differentiate right now, that you actually do have the power to control, is your customer service motion.

[00:00:51] Alex Sarlin: Welcome to EdTech Insiders, the top podcast covering the education technology industry, from funding rounds to impact to AI developments across early childhood, K-12, higher ed, and work. You'll find it all here at EdTech Insiders.

[00:01:06] Ben Kornell: Remember to subscribe to the pod, check out our newsletter, and also our event calendar.

And to go deeper, check out EdTech Insiders Plus where you can get premium content access to our WhatsApp channel, early access to events and back channel insights from Alex and Ben. Hope you enjoyed today's pod.

Hello, EdTech Insider listeners. We're back with another edition of Week in EdTech, and we have our longtime friend, OG of EdTech Insiders, Matt Tower from Whiteboard Advisors. Welcome Matt and welcome listeners. Thanks Ben. I'm glad to be here. We have a ton to cover. It's kind of our back to school edition, so we're gonna tick through a bunch of stuff, but before we do that, what's going on with the pod, Alex?

[00:01:58] Alex Sarlin: Yeah, we have some amazing podcast episodes coming out in August. We talked to Amir Nathoo from Outschool. We talked to Julia Stiglitz from Uplimit, GSV, and Coursera. We just interviewed Drew Bent, who is the head of education at Anthropic. It was such an interesting interview, and that'll be coming out in just a couple of weeks.

We have Thunk, a fascinating company, and we had the pleasure this week of also talking to one of the PMs for Google's Guided Learning. We're writing that up right now, and I don't think that's gonna be a podcast interview, but it was a really interesting insight into some of their thinking. So keep an eye out for that as well.

And then at the end of this month, we have a webinar called AI for All: Disability and Neurodivergence in the Next Wave of EdTech, with a few really awesome guests. So if you haven't seen that, sign up for it. Check it out on the EdTech Insiders LinkedIn page. That's about it, but really awesome stuff.

[00:02:49] Ben Kornell: Yeah. On the event side, we have our back to school happy hour in the Bay Area, September 16th. Thank you to Owl Ventures for sponsoring with us. And then we're also looking forward to an event coming up in October, where we'll have a happy hour that we're supporting, and we hope to see all of our friends in New York on October 21st.

So lots going on with EdTech Insiders, lots going on in the pod. And then we've got back to school, which normally feels like a wake-up from summer, like, oh my God, it's back to school. But there's been a drumbeat all through the summer. What are the main headlines that are standing out to the two of you as we're heading into this back to school season?

[00:03:40] Alex Sarlin: Yeah, I'll just give you a little bit of a vibe check, but Matt, I'd love to hear how you're seeing this, 'cause I think you have an up-close view of it. It feels like a little bit of a dour vibe this year when it comes to back to school and EdTech funding and the procurement landscape. I've talked to a lot of companies over the last few weeks, and funders and all sorts of observers of the space, who are basically saying all the uncertainty at the federal level, both for higher ed and for K-12,

all of the confusion about whether the money is going to be there for various projects, and the sort of back and forth that we've seen with funding cuts, is just making everybody feel a little nervous. And that uncertainty: somebody told me the other day that uncertainty was like the key word of ISTE.

Everybody coming out of ISTE was like, uncertainty, that's what we're getting from this. And that leads to a pretty confusing landscape when it comes to back to school, when people are trying to get their tech stacks in place. I saw a great LinkedIn post from a school coordinator basically saying, if you're gonna try to get 15 minutes with me to pitch your tool, don't, because I have a million things I have to do to get our current stack to work.

I thought it was a really interesting encapsulation of the feeling on the ground right now. Matt, what are you seeing?

[00:04:54] Matt Tower: Yeah, I think I'm feeling a similar vibe. I think it's almost the lack of headlines that has thrown me off, right? There's been very few funding announcements. There's been, you know, a handful of feature releases, but it feels more defensive this year than in past years. And one of the themes that I've been chewing on the past couple of months, which has to do with both the policy side and frankly the large language model providers, is that a lot of EdTech companies are downstream of major, major decisions, right?

When the One Big Beautiful Bill gets passed, that affects hundreds if not thousands of EdTech companies, in a way that they really didn't have a lot of control over. And similarly, and we'll talk about some of the features that the big model providers have shipped in the past couple of weeks, the companies are downstream of those updates.

Right. And sometimes they get a little bit of early access, but mostly they don't. And so you're probably feeling a little squished if you're a company leader right now: I still have some agency in terms of how I communicate with my schools and students and users and whatnot, but a lot of these big macro decisions I have no control over, and I don't know how to go on offense with those big variables just outstanding.

[00:06:18] Alex Sarlin: The tectonic plates are sort of shifting under your feet when it comes to how schools work, how they're funded, what universities are thinking about, and then what the frontier providers are doing. And yeah, I agree. When the tectonic plates are shifting, you sort of wanna freeze. You don't wanna run in any one direction, because it feels like anything can affect you.

I agree. Ben, what do you think about back to school? How does it feel to you this year?

[00:06:40] Ben Kornell: Yeah, I think that the uncertainty that you mentioned at the beginning has led into a back-to-basics kind of mentality, and so there's a reductionist view of the role of technology in the classroom. Which I think comes up against the idea of the study modes. And what I think we're finding is that there's education and there's learning.

And learning is a B2C proposition, and education is an institutional, organizational proposition. And given the macro conditions and institutional pressures, the zone of innovation is closing, or the aperture is closing, right now. Whereas on the B2C side, the aperture is wider. Now the challenge is that over the last decade, SaaS and B2B models have been the far more reliable revenue paths to growth.

And customer acquisition in education is incredibly hard either way. But on B2C, it's super, super hard because it costs so much to acquire users, and then you have to have a very high lifetime value. So I think there is this big reorganizing of learning as delivered through educational institutions versus delivered through whatever your consumer platform might be.

And I don't think that's actually clarifying for most consumers. I think it's a big jumble of confusion right now, but that seems to be the distinct separation. By the way, I think the policy makers are actually pushing towards a more B2C delivery mode, even with public dollars behind it, due to ESAs and other things.

So I think that's the real story of the fall. And then the learning modes: as much as I love the fanfare, and it feels great to have big tech name learning as one of their prime use cases, I also think that in the trenches, when you're actually doing the learning, the models still don't stack up to what we need them to be to actually drive learning.

And I think there's also a way in which their entry into that space has actually depressed really strong pedagogical AI models' ability to grow, because it's sucking the oxygen out of that space. So that's my take: there's what's going on in the education space, consumer learning versus education institutions, and then there's big tech sucking the energy out of EdTech while not providing a learning experience that actually flies.

[00:09:31] Matt Tower: I wanna just put a really fine point on the sucking-the-energy-out piece and why this feels so different. If you look over the past decade at the SaaS era that you just talked about, Ben, almost every big tech provider, with the exception of Google with their Classroom products specifically, sat back and provided infrastructure and let others build on top of them.

And yes, all the big models have APIs for consumption today. But to your point, learning mode, study mode, you know, all the modes are very much consumer products. And that is, I cannot emphasize enough, a really big change: they're competing at the application layer rather than the infrastructure layer.

And I would be scared if I was a company and I saw that happening too. Not to be too much of a doomsday person, but it's a really big change.

[00:10:23] Alex Sarlin: What you're both saying is really enlightening. I love putting it all together in that way. You know, we've interviewed a number of founders over the last few months who are really dedicated to the B2C model, and they look at Duolingo as their primary standard bearer in the space, and they think about, okay, what is our customer acquisition cost versus lifetime value?

How do we get sticky? How do we convince parents, who in many cases are the decision makers, depending on the age of the learner? How do we make a product that is gonna truly convert? They're thinking much more in that B2C mindset, and you can't blame them, right? Because the B2C mindset has a lot of advantages to it, especially right now as the institutions are all trying to figure out how to mold themselves to this AI era.

And then to your point, Matt, about the application layer, I was just thinking about this a lot this week. Yes, they have APIs, right? I mean, Google and Anthropic and OpenAI have APIs. Not only do they have APIs, their APIs are the foundation of the entire AI EdTech sector, completely. And you've mentioned on this podcast many times how it's very unlikely that many EdTech companies are gonna truly try to create their own models, because it's just increasingly, maniacally expensive.

So everybody's reliant on them at an infrastructure layer, as sort of the foundation. When they start creating these application layers, it can become competitive. But to your point, Ben, it becomes competitive with the B2C space, right? If you're somebody who's been trying to sell an AI tutor to adults,

well then you should be very afraid of Anthropic's learning mode and Google Guided Learning and OpenAI's study mode, because they're in direct competition. But one thing that none of them are doing quite yet, and I think this is gonna be an interesting space to watch, is bridging the gap between that B2C use case, where you have a motivated user who knows how to prompt, who knows when to prompt, who actually cares to go in and say, hey, teach me about something,

I'm gonna make some time to do it, versus the education use case, which is much more structured. It has many different stakeholders in play. It has adults in the room who are assigning things. There are privacy complexity issues. There are compliance issues. I think that these applications are gonna be very competitive with B2C models in EdTech, but they're not that competitive with people who are selling to schools, at least not yet, because there are so many things that EdTech companies do to make implementation possible that are just not of interest to the big providers.

And I think the use cases that Google Guided Learning is chasing, for example, I mean, this is what the PM mentioned, and Drew Bent from Anthropic too: they look at what people are coming to their platform to do, to the general purpose platform. And a huge number of people are coming to the platform to do education, or, to your point, Ben, learning; they're trying to learn or study something. So they're like, there's a huge opportunity, a product opportunity here to create a purpose-tailored solution for learners. But that doesn't mean that they're actually doing education.

So it's a weird moment where you have this B2C competition, including these big dogs, and then this B2B competition where the underlying landscape keeps changing: the policies keep changing, schools' budgets keep changing. So a wacky, wacky time. I'm trying to put all the pieces together. I don't know if I'm doing a great job, but there's a lot of stuff happening here.
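
To make the infrastructure-versus-application distinction concrete: a typical AI tutor feature in an EdTech product is a thin application layer over a frontier model API. The following is a minimal sketch of that dependence, assuming the OpenAI Python SDK; the model name, prompt, and function are illustrative assumptions, not anything described on the show:

```python
# A typical "AI tutor" feature: a thin application layer over a frontier
# model API (the infrastructure layer discussed in the episode).
# The model name and prompt are illustrative assumptions.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

SYSTEM_PROMPT = (
    "You are a patient tutor. Do not give final answers outright; "
    "ask one guiding question at a time and check understanding."
)

def tutor_reply(student_message: str) -> str:
    """One Socratic tutoring turn, delegated entirely to the provider's API."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # the provider's model is the app's foundation
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": student_message},
        ],
    )
    return response.choices[0].message.content
```

When a provider ships its own study or learning mode, it is competing directly with wrappers like this one, while still owning the API underneath them.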

[00:13:40] Matt Tower: Maybe, and I know GPT-5 was one of our talking points.

Yeah, 

[00:13:44] Alex Sarlin: let's 

[00:13:45] Ben Kornell: do it 

[00:13:45] Matt Tower: As a bridge between the two, I think it's interesting looking at Sam Altman's tweets here. The stack-rank prioritization of how they're rolling out GPT-5 access is: one, current paying users of ChatGPT, so B2C users of ChatGPT; two, API demand, up to what we have previously committed to; three, free users of ChatGPT, B2C; four, new API demand.

[00:14:14] Ben Kornell: Hmm. 

[00:14:15] Matt Tower: So I think that's really telling of how they are thinking about the prioritization of their own models and the rollout of their own models, and it's consistent with what we're talking about here.

[00:14:24] Alex Sarlin: Yeah. They wanna own it a little bit, right? What I'm hearing you say is that they're not saying, GPT-5 is out now, open the floodgates for new API calls.

That's actually the lowest on the list. Exactly. Exactly. 

[00:14:36] Ben Kornell: As I think about the educator side of this, I feel like the missing piece of the story here is that schools have become a point of delivery for a lot of social services: food, childcare, after-school programming, social-emotional support, mental health, et cetera, et cetera.

And so there's the pace of change, and the way the learning modes end up coming across is pretty vacuous; as an educator, once you get into it, you're like, I can't teach with this. You're also getting the side channels of cheating, rampant cheating, right? It feels like a rigged game for schools to try to compete with the B2C modes here.

And my view on that is, you know what, schools should embrace their role. The thing I've been thinking about is: is the role of schools to raise the ceiling or to raise the floor? And one of the challenges I've had is that, as a parent, I felt sometimes that schools were actually lowering the ceiling for my kids because they were so focused on the floor.

But now that you have this bevy of consumer items, some of which are free, is there an opportunity to say: you know what, schools are gonna provide this bar of reading, writing, math, and childcare; we're gonna lean in around social-emotional expertise and human connection; and then we're gonna make available to our kids all of these free tools, and inform our parents of all these free or low-cost tools, that allow them to pursue their passions and raise the ceiling and be spiky?

And some of our EdTech Insiders WhatsApp group are chatting a lot about Alpha Schools.

[00:16:38] Alex Sarlin: Yep. 

[00:16:39] Ben Kornell: While I think there's all this catchiness around AI schools, what they're actually doing is saying, you know what? Let's condense the raise-the-floor period, and then let's just be open-ended on the raise-the-ceiling period.

I think that might actually be a more sustainable strategy for schools. And then we just have to recognize that the school day isn't sufficient for every child to reach their full potential. You know, we're gonna have to have out-of-school or adjacent-to-school supplements and models, which, by the way, in many other countries is totally the norm.

But I think we've kind of combined all of these things, expecting a teacher to be an expert pedagogue as well as a great classroom manager as well as a great

[00:17:28] Alex Sarlin: social curator of educational technology and third-party tooling and all of that. Exactly. Exactly.

[00:17:34] Ben Kornell: So that's not where we are right now. I don't think schools have narrowed the focus to say, we just exist to raise the floor.

But I think that's what they're doing. And if schools could lean into that role, I think it could take some pressure off.

[00:17:53] Alex Sarlin: Yeah, I mean, I really agree. I think that's a really fascinating insight about the Alpha School model, or other innovative school models: that in some ways what they're doing is changing the ratio of time spent on raising the floor or meeting compliance requirements.

We've seen that with a number of the different innovative school models. But I think what's different about this moment is that the power of those out-of-school tools is just incredible, right? I mean, the coding tools are incredible. The video creation tools are incredible. The tools to create your own games are getting incredible very quickly.

Audio is incredible. The tutoring sector is starting to get really good on the B2C side. So there's something very compelling to me personally about starting to loosen the distinction between B2C and B2B. We talked to Prodigy Learning recently in an interview, and they have this amazing model where, you know, it's free for teachers and can be used in schools endlessly, but it's paid for parents for home access.

And then the students have this sort of seamless experience between school and home, and I can imagine a lot more of that coming. There's a way for students to be introduced to really powerful tools in a school environment, but ideally in a relatively low-touch way that doesn't involve multi-year contracts and a huge amount of procurement.

And, you know, maybe it could go through an individual educator, and then you have this handoff to the home environment and to B2C. And I think you said it really well, Ben: you can go deep on your passions. You can go deep on the subjects that your family feels are important.

We've seen all these language learning partner apps that I think are ideal for that, right? You could do language learning in school, but if you have a 24/7 language partner, or all of these AI apps for practicing language, you should clearly be doing that, not just in your school day but all the time. It's gonna accelerate your learning immensely.

So there's an interesting blurring here between B2B and B2C, and I wonder if there's, you know, a future there. It's interesting to think about.

[00:19:52] Matt Tower: Yeah, I mean, I've been spending a lot of time recently talking about school choice. It's been a topic for 20 years, but with essentially the One Big Beautiful Bill, I think the school choice era is really starting here, where you now have a transferable subsidy that follows the student nationally with the tax credit scholarship.

And, you know, more and more states are doing that. And I think there are some short-term things that I'm really worried about, frankly, stresses that it could place on the system. But longer term, I think it allows a much more flexible student journey throughout the K-12 system, where you can decide, oh yeah, my son or daughter is really passionate about chess, so we're gonna over-index on chess and math for a couple of years.

And then they can follow their passion in a different direction, and they'll still get their floor, to Ben's earlier point. But the journey above the floor can be a lot more flexible, with flexible funding attached to it. That would be the bull case for the school choice movement, with the ESAs and whatnot.

Yeah. 

[00:20:58] Alex Sarlin: You mentioned the GPT-5 rollout. Let's just briefly talk about it; there was some interesting back and forth this week about GPT-5. It does this model routing, where they've tried to remove past models and instead say, hey, GPT-5 is the one model to rule them all, and we'll route you to reasoning models or to efficient models.

And there was a big pushback on that. Matt, you know a lot about this. Do you wanna give a little bit of the narrative of that? And Ben, I'd love to hear your take as well, of course.

[00:21:24] Matt Tower: Yeah. So the root of it is, as ChatGPT has acquired more models, built more models, they introduced this model picker, which a lot of people actually, as of two weeks ago, thought was really confusing.

It's like, well, do I use 4o, or do I use 4o mini, or do I use deep reasoning, or do I use 3? Like, why is 3 still here? I thought we were done with 3. This whole kerfuffle was rooted in customer confusion, and OpenAI took an actually fairly reasonable step of saying, well, we're gonna try and get rid of that, and the model will pick which model is appropriate for the question.

Sort of ironically, people got mad when they eliminated choice. They were like, but I liked the 4o mini model; that was my favorite model; it was so nice to me. And Sam Altman did actually talk about the emotional attachment that users had to specific models, which I think is fascinating.

We could probably spend a whole hour on that. But that was the root of the issue. And really, I think the problem was that they didn't think about the change management involved. And, you know, it's a reminder that they're a startup, a 10, 15, 20 billion dollar a year revenue startup, that hasn't quite figured out how to nail product updates.

And I think it speaks to the fact that even if something is, they would argue, objectively better, there's a lot of subjective emotion attached to a tool you use every day. And it speaks to some of what Ben was saying earlier: the way that an EdTech company can differentiate right now, that you actually do have the power to control, is your customer service motion and your ability to work with your schools and your student partners to deliver a good experience with humans.

And I think OpenAI didn't nail it on this rollout. You know, hopefully they learned something and the next one will be better. But it's a great microcosm of the push and pull of AI and user experience right now. Yeah.

[00:23:22] Alex Sarlin: It's also relevant to the idea of updating models, even within an EdTech context, right?

GPT-5 is designed to be, or at least is trying to be, less sycophantic, less flattering of the user. So you get to this moment where some people maybe loved the sycophancy, or loved the emotional connection they had with certain types of behavior. So updating it, and then disallowing the users from saying, no, I prefer the old you, really seems like the problem here, because I've read that if they had given it as a choice right off the bat, most people would have chosen the routing and triage types of options.

But by doing it by fiat, it sort of triggered the loss aversion, and people felt, oh no, I'm losing access to something I know. And when we talk about AI tutors or AI characters or educational relationships that we're trying to build, we're gonna see similar dynamics when we have to update them in various ways.

Ben, what do you think? What did you make of the GPT-5 rollout?

[00:24:18] Ben Kornell: I mean, it's making me feel like there needs to be a user AI bill of rights, whether as a legal framework or more of a philosophical framework. We need to understand what controls and transparency users should have a right to, and what not.

And it includes things like how my data will be used for training, and what is remembered versus not remembered. And what can be a feature can also be a bug: I remember more about you, but I also know everything about you, right? And I think what's been startling is, where's the black box in all of this?

So let's recall that the first GPT itself was a black box even to its own creators. We did not fully understand at that time, and this is basically three years ago, what exactly was happening that was generating the output. We understood the processes, but it was an era when fine-tuned prompting was supposedly gonna be the job of the future, because figuring out how to get this black box to do what you want is really challenging.

We've made successive leaps forward, and now your AI experience is really a blend of different models and things under the surface that you may know about, may control, or may not. And to the degree that the company knows or doesn't know, I think the GPT-5 rollout showed that OpenAI is not super credible or reliable in controlling that mix. The worry is that when you give it a question and it's now giving inferior answers, that means it's not actually calling the right model for you.

And so for our audience on the EdTech side, the more the APIs can still allow a company to tune what their model mix is, I think that's gonna be really important, and I expect OpenAI will backtrack a little bit on this. And then for the user community, where they're like, wait a second, I was building this relationship with this model and now I've got a new one: this idea of more control from users and more ability to exercise their choice, I think, is gonna be a real tension point, because from a scalability standpoint, having everybody creating different protocols for which model they use and how they call on it can be really, really hard to manage.

I don't know. I'm a little bit skeptical about the regulatory ability here to harness what's going on with AI, just because it's moving so fast. But some kind of core principles, or if one of these platforms could publish a bill of rights for their users, I think that could be a differentiator for them, and I think it also could coalesce the AI community around some basic principles that we're gonna operate with in the coming five to ten years.
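
One practical reading of Ben's point about tuning your own model mix: companies building on these APIs can pin explicit model versions and do their own routing, so a model change only happens when they change it, not when a provider-side auto-router does. A minimal sketch, where the model names and the routing heuristic are illustrative assumptions rather than anything recommended on the show:

```python
# Pinning model versions and routing in-house, instead of relying on a
# provider-side auto-router that may change underneath you.
# Model names and the routing heuristic are illustrative assumptions.
from openai import OpenAI

client = OpenAI()

FAST_MODEL = "gpt-4o-mini"   # cheap, low-latency everyday turns
REASONING_MODEL = "o3-mini"  # slower, for multi-step problems

def pick_model(question: str) -> str:
    """Crude in-house router: long or proof-style questions get the reasoning model."""
    needs_reasoning = len(question) > 400 or "prove" in question.lower()
    return REASONING_MODEL if needs_reasoning else FAST_MODEL

def answer(question: str) -> str:
    response = client.chat.completions.create(
        model=pick_model(question),  # a swap happens only when *we* change this
        messages=[{"role": "user", "content": question}],
    )
    return response.choices[0].message.content
```

The trade-off Ben names is real: per-customer protocols like this are harder for a platform to manage at scale than one shared router, which is exactly the tension between user control and provider simplicity.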

[00:27:36] Alex Sarlin: Yeah, it's a really interesting concept, this sort of AI user bill of rights. I think it's ahead of its time, and I think it's something that people should be thinking about. Especially as, you know, we just saw these results a few weeks ago from Common Sense Media and from Internet Matters in the UK showing that young people are speaking to AI like friends and therapists a lot, and spending a lot of time with it, for various reasons.

And if you have a world in which a technology is playing the role of a tutor or a teacher or a friend or a confidant, then that puts a lot of power in the AI model provider's hands. And if they roll something out that wipes the memory, or that changes unexpectedly, you could have actual emotional harm, pretty serious harm.

So I think it's a really interesting idea. And yes, to your point, I think there was a headline here about them already backtracking a little and saying they're not gonna remove old models without warning, because I think they learned a lesson here. Give me a point of personal privilege here; I just wanna do a tiny little angry rant.

I don't usually do this, but I have been reading so many articles over the last few weeks. There was a whole range of articles from creative writing professors about how AI was messing up writing instruction, messing up creative writing for students. And there were a handful of articles just this week that I recommend people read,

even though I just couldn't disagree more with them. One is from the Atlantic, by Lila Shroff, who was saying the AI takeover of education is just getting started. "Was your kid's report card written by a chatbot?" is the subtitle there. And, as listeners will know, and Ben, you know really well, I'm so afraid of the sort of moral panic backlash to AI, especially when you have people in the New Yorker. Hua Hsu, a terrific author, wrote a really intense piece for the New Yorker a few weeks ago, basically a profile of college students who were cheating their way through college, who couldn't even name the thesis of their own papers.

They were handing them in and still getting good grades. And then in the Times this week, Tressie McMillan Cottom, who is a fantastic writer about higher ed, and Jessica Grose have a piece called What AI Really Means for Learning. And they both give AI a two out of ten, where one is apocalyptic and ten is utopian.

They're like, we're at a two. And this stuff drives me absolutely nuts. I feel like the sort of progressive intelligentsia, the pundits, the New York Times and New Yorker crowd, are starting to form this view without using AI very frequently. Sometimes they go visit classrooms and talk to professors,

but it's mostly professors. When they talk to students, it's usually to sort of catch them doing things. There's starting to be this coalescing vision of AI as literally taking over education, or destroying it, or all of these things. To your point earlier, Ben, about institutional education versus informal learning, I am really worried that some of these narratives are going to make university presidents who read the New Yorker and the New York Times, make superintendents, make people who have the power to think about AI policy, start to say AI is just not cool anymore in education.

It's something that people think is destroying education, and the cheating is the big deal. Or this concept of, was your kid's report card written by a chatbot? What frigging clickbait idiocy is that from the Atlantic? I'm like, come on. So I'm just scared of this, and that's my rant. I'll be done. But I'm curious if either of you have thoughts.

Am I in panic mode as usual when it comes to this stuff?

[00:30:57] Matt Tower: I don't know. I get the concern. I'm glad other people are concerned; I'm not. I think what I would push on, and one of my favorites, to go back and do a 2021 or 2017 or 2013 throwback: my favorite pieces of blockchain art are actually the generative AI pieces.

So there's one called the Squiggles, where the whole concept was: I have created the parameters that will put something on the blockchain as an NFT, and I as the artist have to figure out how to do that. I have to figure out the guardrails such that it creates something interesting, but I don't actually know what it will create.

And that's the art, right? There's some connection that happens between me and the computer that creates something that I couldn't have done on my own, because fundamentally, I'd know what I was creating if I was doing it myself. I'm so excited for the first piece of creative writing that is generated by AI and that everybody holds up in high regard as a beautiful piece of writing.

[00:32:05] Alex Sarlin: Best seller. It's gonna be, 

[00:32:08] Matt Tower: yeah. Yeah. It'll be because the author, quote unquote, put in the time and effort to figure out how to create the right parameters for the AI to make something beautiful. That intersection is so fascinating to me, but it makes me really sad that other people are like, no, I want nothing.

I don't want anything to do with it, I won't touch it. But it has this potential to create new and interesting things that we could never have done on our own.

[00:32:33] Alex Sarlin: Exactly. And people don't even address that side of it. It's just all about how this is messing with the status quo of education, which is universally reviled.

That's the thing, right? There are very few hardcore fans of the educational status quo, yet people react so harshly to anything that threatens it. I just find it so strange.

[00:32:53] Ben Kornell: Yeah. I think this is one of those, too, where this dichotomy between learning and education may become more pronounced before the education side moves.

If kids move on to all these other tools to learn, then what do they need educational institutions for? The homeschool movement, the alternative school movement, is partly fueled by this reluctance to innovate. I'm not always convinced that the alternatives people are cobbling together are meaningfully better, but they're much more dynamic and responsive.

And, you know, if you look at who the customer of public schools is today: is it really the child? Is it the parent or family? Or is it actually labor and the political institutions? Every educator I know, and frankly every superintendent I know, wants to serve families. But when I look at the organization of their time and the organization of their priorities, it's often overweighted toward adults and adult politics rather than kids and kids' learning.

[00:34:05] Alex Sarlin: Matt, you caught an interesting headline this week about a SoftBank executive buying a stake in a UK university. And I know it feels like a bit of an in-the-weeds headline, but it's pretty interesting. You wanna tell us a little bit about why it caught your eye?

[00:34:19] Matt Tower: Yeah. So the base level of it is that the guy who used to run SoftBank's venture fund, Marcelo Claure, is now at a new private equity firm.

This is one of his first deals: a 50% stake in Ien University in the UK. What's interesting about it to me, and I think it's consistent with the rest of our conversation, is this tension between B2C and B2B, and what remains fundamentally valuable in a world where the major model providers are investing significant time and energy in education.

And in large part, I think it's regulation. Being on the right side of regulation, I think, is a differentiator and a source of alpha, to be financial about it. If you can control something that is a degree-granting institution, which is generally sanctioned by a government, that's something that, at least to date, none of the major LLM providers have done, and it provides you a source of differentiation.

So I suspect that is the thesis for the acquisition. In his quotes about it, he basically says, yeah, I'm gonna redo as much of the processes and infrastructure of this university as possible with AI. But what I am buying is access to, you know, a government subsidy and government sanction.

This is something that I can then confer to others, and they'll value it. I wouldn't be surprised to see more of that. This one is in the UK; would somebody do this in the US, and how would that look different? I think we've got some startups in the US that are making that play.

Will this happen in Australia? Will it happen in China, India, all these different geographies where a degree is really, really important? Can you rip out most of the innards and replace them with AI without upsetting the compliance side of things? So I thought of it as just an interesting thesis for AI and education.

[00:36:18] Alex Sarlin: Yeah. And he talks about translating the curriculum into all these languages and then recruiting foreign students and offering them visas. There's this whole university-as-disruptor play; I think of ASU and Southern New Hampshire and Purdue, some of the models like that we've talked about a lot here, which have a similar take.

It's like taking a university shape but then injecting technology into it to differentiate from other universities, while using the university structure so you're not just out on your own, having to figure out certification and meaning. That's a really interesting insight. And Ben, you mentioned Claure's brother as well; we don't have to talk about that, but that was interesting too.

We don't have to talk about that, but that was interesting too. 

[00:37:03] Ben Kornell: Yeah. I'm surprised that we're not seeing more workforce acceleration right now. It's a warning sign that EdTech and workforce hasn't quite landed yet. And where I feel like the universities need to go is finding workforce partnerships that sit at the intersection of the private sector and the government sector.

And part of this is that federal funding, and globally the government funding, for workforce initiatives just feels really, really uncertain. Generally speaking, it's a political win to fund those types of programs. And if we're looking at trends of lower employment for recent college grads, or a need for retraining, there's a lot of opportunity there.

I do feel like the university sector continues to struggle to make that case on their own, and EdTech companies have a challenge of how to get to a meaningful scale with workforce pathways. Yeah.

[00:38:10] Alex Sarlin: We saw Google commit a billion dollars to AI training in higher ed, and Microsoft pledge 4 billion a few months ago.

And so I think that's exactly what you're seeing in action there. The big tech companies are seeing the gap in workforce training and throwing money at it, for lack of a better word, and throwing credits, AI credits.

[00:38:30] Ben Kornell: Yeah. Yeah. I mean, when we were at the Google Summit last year, remember, Alex, we were talking about the training program that Google ran through Coursera. I think they're starting to realize they don't need Coursera.

Yeah. 

[00:38:43] Matt Tower: Don't tell Coursera; that program's worth like 50 to a hundred million bucks a year to them. Yeah,

[00:38:47] Ben Kornell: I know. I know. I'd be worried if I was holding the stock. 

[00:38:52] Alex Sarlin: I have lots of thoughts on that as ex-Coursera. I was at Coursera when that whole deal was being worked out by Kevin Mills, who is now the head of education at OpenAI.

And it was a fascinating deal. I think Google has lots of channels, but they did just double down on the Coursera piece as well; they committed lots of access to their courses as well.

[00:39:12] Ben Kornell: Google's strategy is to be everywhere, and it's working. We've talked about the model level too, and the Gemini model is kicking butt right now.

And, you know, if you look at image generation and video generation, they're a step ahead of OpenAI. And certainly it's interesting to think about front-runners versus who's gonna win the marathon here, but it would be a bad bet to bet against Google.

[00:39:46] Matt Tower: The Star Wars videos are my favorite videos of the year.

The Stormtroopers, a day in the life of a Stormtrooper. It just makes me giggle every time I see them.

[00:39:56] Alex Sarlin: I've been waiting to get access to the video overviews in NotebookLM, because I'm really excited about them. They're rolling out slowly.

[00:40:03] Matt Tower: I don't even think I'd heard of that.

[00:40:04] Alex Sarlin: Oh yeah, they announced it at ISTE and it's coming, but it's just not quite out yet.

But yeah, they have a demo, they have a blog post about it, and anything you upload, they'll turn into a video, a learning video. Wild.

[00:40:17] Ben Kornell: I've been taking photos of my grandparents, animating them using AI, and then sending them to my dad and his brother and so on. I just think Gemini's figured this out. And we talked about this when we were interviewing Steven with NotebookLM.

The UI is actually quite important in terms of how people are using these models. One of the reasons ChatGPT is so successful is that the UI is really intuitive, and everybody else has repeated that. But with the idea of videos, and more time to produce something with this kind of high-quality output,

Google's got a couple of surfaces that are really, really great for that.

[00:41:02] Alex Sarlin: The Genie stuff is mind-blowing as well. It's pretty exciting. We have to wrap there; we have some great guests that we're speaking to in this episode. Always a pleasure, and I feel like we covered a lot of ground. Thanks so much to Matt Tower, our special guest host.

He's always amazing. And of course, Ben Kornell and Alex Sarlin. Ben, you wanna take us out? 

[00:41:20] Ben Kornell: Yeah, well, we hope to hear you all or see you all next week, because if it happens in EdTech, you'll hear about it here on EdTech Insiders. Thanks.

[00:41:29] Alex Sarlin: For our deep dive in Week in EdTech this week, we are here with Evan Harris, a national expert on emerging AI risks in schools, with a focus on deepfake abuse and digital safety.

Very important issue. Evan has advised the Office of the First Lady, the Texas State Senate, and the General Counsel of the National Association of Independent Schools, where he co-authored their legal guide on deepfake sexual abuse. He is a former teacher and administrator with a decade of experience in independent schools.

Evan holds a master's in private school leadership from Teachers College and was a technology ethics fellow at Stanford's Human-Centered AI Institute. Today, Evan serves as the president of Pathos Consulting Group and as a crisis communications consultant for the Jane Group. Evan Harris, welcome to EdTech Insiders.

Dude, thanks for having me. I'm excited. I'm excited to talk to you. You are an expert on something that we almost never talk about on this podcast, even though it is incredibly important and a major part of AI fears in schools: deepfakes, and specifically deepfakes used for sexual abuse and sexual bullying.

Before we even get into the details, give us the overview, because this is a topic that is relatively new for me and certainly for a lot of our listeners, but a big one, and you know a ton about it. Tell us how deepfake sexual abuse occurs in schools.

[00:42:51] Evan Harris: Sure. So I think it makes sense to start sort of with the elephant in the room, which is that the best indications we have are that about 90% of the perpetrators are boys.

About 90% of the victims are girls. And what typically happens is students will take clothed images pulled from social media, usually of their classmates, and run them through AI-powered applications that create synthetic images of those classmates undressed. And then usually those images get circulated in group chats, on things like Snapchat, WhatsApp, or iMessage, or in some cases, unfortunately, are used to extort or harass the victim.

One of the things that makes this type of sexual abuse different, and we'll get into this pretty deeply, is the scale of it. So at one school in Pennsylvania, for example, they basically had 50 victims overnight. So you're talking about kind of pouring gas on a fire, right?

We've seen sexual abuse cases before where maybe someone was abusing over decades, and there are many, many victims that come forward. With this, you can have a scope and scale that we've never seen before, because the technology didn't exist. And so that makes it a fundamentally different challenge, one that schools have to give a dedicated effort toward.

[00:44:05] Alex Sarlin: Oh, big time. And you know, we always talk on this podcast about how AI gives everybody these incredible superpowers, these creative superpowers, and especially young people and students, and that in many ways it's a positive thing. They can create their own movies, they can create their own novels, they can create all sorts of really sophisticated work.

This is the downside of it, and it's a very, very tempting downside, something that's very easily available to young people right now, which is how you get so many people doing this at such speed and scale. You've spent a lot of time thinking about this; tell us about the tools out there that make this very easy for young people to do.

It's not something they have to know a lot to do. The apps are basically out there, downloadable, right there for the taking. What role does that play in deepfakes?

[00:44:51] Evan Harris: Yeah, and I think it actually gets to the heart of one of the main misconceptions I hear from school leaders a lot, which is: how common is this?

Like, we don't think it's happened here, so is it really gonna happen to us? There's a bit of convincing the toddler that the stove is hot; they don't actually have to touch it to find out. But you're right, we're talking about really sophisticated technology proliferated to non-technical people. So we've got some data from organizations like Thorn, Save the Children, and the Center for Democracy and Technology.

They're all pointing at about the same types of numbers: something like 15 to 20% of high school students are aware of a deepfake non-consensual intimate image targeting a classmate in the last year. A lot of that data is like a year old, and we know it's getting worse.

[00:45:33] Ben Kornell: Yep. 

[00:45:33] Evan Harris: And so one of the things I tell school leaders is, even if you don't buy that data, just as a kind of thought experiment: you've got every student in America with a button they can push that undresses their classmates.

Is it really that big a leap in logic to think that some irreducible percentage of them are gonna press the button? We know what teenagers are like, right? They lack a lot of those executive functioning skills to really recognize the damage they're doing, especially when they're not face-to-face with their victim.

Sometimes they're even less thoughtful about what they're doing. That's the main misconception. The other one is this idea that because they've dealt with sexual abuse in the past, they know how to handle this, right? But from evidence handling, to the types of supports that victims need, to considerations for crisis communications, to insurance,

I mean, there are just ten different ways in which this is a distinct type of threat that they have to practice for and get crisis-ready for. You can have a binder as thick as the phone book for every possible risk in your school; ultimately, the schools that practice these incidents and simulate them and work as a team to build that muscle memory are gonna be way better prepared.

[00:46:39] Alex Sarlin: Yeah, and you've literally written the playbook on how schools can react to this type of incident when it happens, and it does happen pretty frequently. Tell us a little bit about what that playbook looks like. What are some of the things schools should be doing when it's revealed to them that there is a deepfake abuse situation in their school environment?

[00:47:00] Evan Harris: Yeah, it's funny. One of the things I tell schools is kind of good news, bad news. The good news: you do have a playbook for this. The bad news: it is dispersed across disciplines that have never been integrated before. Again, things like crisis management, school governance, the science of misinformation and how it spreads,

trauma-informed responses to sexual abuse, crisis communications best practices. Schools tend to not be very good at crisis communications; some of them have marketing communications teams that tend to be a lot more marketing than communications. Generally, they've never had to put all these puzzle pieces together before.

Right. So concretely, I think early in a crisis there are a few things to be aware of. Number one, the Take It Down Act has passed; it's been enacted as of this past spring. And so that means it's a felony in all 50 states to distribute, or importantly to threaten to distribute, and that's the extortion piece we were talking about before.

That's now illegal in all 50 states to do to a minor, for both real and deepfake non-consensual intimate images. And so this is mandatory reporting, right? Even though schools do mandatory reporting training once a year, or once every two years, there's often a misconception of, well, we're gonna do our internal investigation, and then if it turns out we know what happened, we'll do our reporting.

You only need reasonable suspicion. As soon as you have reasonable suspicion, you should immediately do your reporting. I think victim notification is another area where schools tend to stumble: making sure you call the parents first, because they may want to be there when the child comes in. One school in New Jersey, we talked about this earlier, read the kid's name out over the intercom to come to the principal's office.

And at that point, the rumors were already swirling. So you can see how, from the very beginning of a crisis, you could either be relieving the harm or exacerbating it with the choices that you make, because we're dealing with a really vulnerable population at a really vulnerable moment. The last point I'll make is about the crisis communication side, which is that if your school is already swirling with rumors, and the community's already talking about this, especially talking about it online,

a lot of schools think that if they are silent, that keeps them safe. It's actually just the opposite. They need to establish themselves as the source of reliable information at that moment, and think: okay, what would different stakeholders in our community reasonably expect of us in this moment?

It's that we know what's going on, that we're gathering facts, here's where to go for updates, here's what's gonna happen next. Because if you say nothing, it looks like you're either obfuscating or you're negligent, and neither of those is good. So you want to lead in that moment with empathy and with clear language, not legalese.

[00:49:31] Alex Sarlin: Yeah. You mentioned that the Take It Down Act, which is a federal act, now makes it a felony to distribute or threaten to distribute this type of imagery. Obviously, in school environments, and this is both high school and college, there's always this strange, blurry line between things that are problems within the school environment and things that are crimes.

And I'm curious where this lands. If you have a 16-year-old who downloads one of these apps, manipulates pictures of a classmate, and shares them, and the school has to get involved, does that also mean that law enforcement is immediately involved? Or is this something that would happen within the school environment, or both?

[00:50:09] Evan Harris: So I always recommend to schools that they report to both CPS and local law enforcement, in part because the evidence handling part of this can get really tricky for schools, because obviously they don't want to get themselves into a situation where they get charged with child exploitation because they've mishandled evidence

in a case like this; that's happened to an administrator in Colorado. The Take It Down Act does have a provision that allows for good-faith disclosures of the evidence to law enforcement. But it's still such a new area and such a tricky area that I still recommend that schools be in touch with law enforcement, and I actually would say it's a good idea to get in touch with law enforcement before an incident occurs, to ask them some basic questions and find out what they think about issues like this.

Some law enforcement agencies are doing specialized training on this issue. But to your point about discipline within the school versus outside-of-school consequences, I would say they're running on parallel tracks, right? So once you do your reporting to CPS and local law enforcement,

they'll obviously have you do an oral report, and they might come back for a written report, but that exists sort of in a vacuum outside of the school; they're doing that on a parallel track, and then you can do your disciplinary due diligence within the school. And one thing I would say, certainly for codes of conduct and handbook policies, is that it's important to reserve the right to suspend students who are perpetrators.

Because if you think about a student having to sit in class with someone who sexually abused them earlier that week, mm-hmm, that's a student who can't reasonably be expected to learn, and that's a student who can't reasonably be expected to feel safe. That safety promise is so foundational to what schools do that if we feel like we're failing on it, something is really going wrong.

[00:51:45] Alex Sarlin: Yeah. So I would imagine that one major corollary to this moment is that schools have to be ready for this to happen: ready for crisis communications, crisis counseling, and dealing with this really complicated and fraught situation, because it is going to occur very frequently.

But one other aspect of it is that this is now a crime, and it's a crime that I think very few people know is a crime. This is a new law, this is a new technology. The fact that people can go online and find apps that do nothing but this probably creates a sort of implicit permission structure, where they say, well, if this app just does this, how could that be a crime?

It's not like you can get an app that does other crimes every day. It feels like schools suddenly also have this additional responsibility to inform their entire community, especially, as you mentioned, the parents and the boys who are often the perpetrators of this type of abuse, that this is a crime. They might think they're having fun when they take a picture and send it to friends.

But that is going to have unbelievably serious consequences for them and for their victims and for the school environment. How do you see schools playing that role of helping the entire community get their heads around what's happening in this part of the world?

[00:52:59] Evan Harris: You're exactly right. It's such a difficult issue for schools to confront, so complicated and so new, that one of the things I do when I'm working with a new client school, on that first phone call, is break it up into three pieces.

We say we're gonna work on policy, we're gonna work on crisis readiness, and we're gonna work on preventative education. 

[00:53:16] Ben Kornell: Mm-hmm. 

[00:53:16] Evan Harris: And that kind of clarifies it: we're gonna put this in three buckets. And I think what you're getting at is the preventative piece. The preventative, exactly. And the good news, I think, is that a lot of schools across the country over the last 10 years have done pretty good work on things like media literacy and consent training.

You can see how this would slot into curricula that already exist. Yeah. And I think that's important, 'cause as anyone who's worked in schools knows, the question is always, with what time are we expected to do this? When is this happening? Things only get added to our plate; nothing ever comes off.

When are we supposed to do this training? 

[00:53:50] Alex Sarlin: Yep. 

[00:53:51] Evan Harris: There are these different stakeholder groups that have to be educated on this. So: the parents, absolutely. And with parents, I would say too that they need to be engaged as a partner, in a dialogue. It can't just be, here's a one-sheet on this, good luck.

Right. There has to be some back and forth communication that ends up being really important after a crisis, because it turns out like the number one predictor of crisis resiliency is trust within the community before the crisis. 

[00:54:17] Ben Kornell: Hmm. 

[00:54:17] Evan Harris: And so you're actually gonna wanna leverage parents as good disseminators of good information after a crisis, and you start making deposits of goodwill before the crisis.

So those relationships really, really matter post-crisis. Yeah. With students, we can look at it through a media literacy lens, a consent education lens. And getting back to that original point about the 90% boy-perpetrator figure, which again is a rough estimate: clearly there are some bigger issues that live upstream of this for boys.

We've all seen the data on what's going on with Gen Z boys: their attitudes towards women, their attitudes towards feminism, the favorable attitudes they have towards guys like Andrew Tate. I cannot at this point prove that these things are all connected, but sexual abuse is about sexual violence.

It's about power. And so I have a hard time imagining that the uptick we're seeing in the youth-to-youth sexual abuse category in schools is totally unrelated, mm-hmm, to some of this other stuff that lives upstream. And so I think it would be disingenuous to try to tackle this without getting to some of those root causes as well.

Yeah. And the last piece is just that the mandatory reporting training for teachers has got to get better.

[00:55:23] Alex Sarlin: Yeah. I mean, it feels like the preventative piece would be ideal, as with many things. If you can get ahead of it in a way that doesn't inform the whole world that this stuff is possible and make it worse, but in a way that really helps people understand what this technology does, what it's capable of, and why it's such a third rail for so many different reasons, it feels like it could be a way to get ahead of it.

So that's just my personal take. One question I have for you, and this is maybe a little bit of a personal issue: we, on this podcast, talk about the potential of AI in school all the time. And I've been saying since the very beginning, this technology is incredibly powerful.

It can do so many different things. Some are not good, and some are not gonna go over well. And we have to make sure that the good is out there and visible, because as the bad starts to happen, there's a risk that people start to just say, oh my gosh, this is so scary that this can happen. No AI.

No phones, no AI, no tech. Let's just shut it all down because we're so scared of this. And I'm curious, when you talk to schools about this type of risk, which they probably haven't thought or heard that much about until it's a problem, do you sense that there may be that type of blanket reaction of saying, oh my gosh, this is what AI can do?

We already knew it could help students cheat; now it can help students abuse each other. Screw this AI, I don't want any more AI in the school. Do you see that reaction, or do people see the nuance and say, yes, this is one potential application of AI, it's very specific and it's very harmful,

but there are so many other ones that could be really beneficial?

[00:56:54] Evan Harris: I think I'm on pretty safe ground to say that if this were happening in, like, 2023, we'd get more of that knee-jerk reaction. Yeah. Yep. But in 2025, I think schools have just seen too much firsthand of the potential of the tech. They've thought less and less about AI purely through the lens of academic integrity and more about, okay, so what are the dos?

Not just the don'ts. Right, exactly. What are the core competencies? Yes. They've seen too many narratives within the school of staff, faculty, and students doing things that just wouldn't be possible without AI.

[00:57:27] Ben Kornell: Yep. 

[00:57:27] Evan Harris: And so I have not seen a knee-jerk reaction to this. I think schools are nuanced enough and smart enough to understand that, like any tool, fire, for example, yes, it can be good or bad.

Right. And so I'm certainly not an AI doomer. I'm automating workflows on n8n. I'll try any cool new toy that comes out just to see if it works. And I think, on the whole, there's a lot more to be excited about than there is to be fearful of, but I don't think we get to safety by burying our heads in the sand.

Right. You can turn off the school wifi and cell phones and never talk about this. It will not keep kids safer. To me, it's almost analogous to an abstinence-only approach to sex ed. We know it doesn't work. To your point about educating kids on this without tipping them off that it exists, I feel like the horse has left the barn.

Yeah. On this, kids are always ahead of us.

[00:58:23] Alex Sarlin: Yep. 

[00:58:23] Evan Harris: On tech, I agree. They just are. And so I just don't think that pretending this doesn't exist keeps anyone safer.

[00:58:28] Alex Sarlin: Yeah. You mentioned that deepfake sexual abuse is one form of the deepfake problem, but obviously the ability to fictionalize and create untrue video and untrue audio has a lot of potential consequences even outside of anything sexual.

Can you tell us a little bit about other types of deepfake manipulation that are happening in academic environments?

[00:58:49] Evan Harris: Yeah, I'm such an enormous bummer of a guest. I'm sorry, I've got more bad news, which is that I think the deepfake sexual abuse issue is just the tip of the iceberg for what's coming.

I say that because I think we are heading into a synthetic media threat environment that schools have not had the time or understanding yet to start to grapple with. Yeah.

[00:59:13] Alex Sarlin: Nobody has. Nobody has.

[00:59:15] Evan Harris: Yeah. But if you're looking at a video of Will Smith eating spaghetti and thinking, I can tell that's fake,

enjoy that, 'cause we're probably in the last year or two of that being right. You have to take a moment to consider all the implications of that. I'll give you just a couple of examples of things that in some cases have already happened. In Baltimore, there was a head of school, a principal, who had his voice cloned, I think by the athletic director at the school, and the FBI gets involved.

They have him saying all kinds of off-color remarks. Huge reputational damage for him and for the school, which they can't fully come back from. Right. You can imagine what this would do to the disciplinary process at a school, where a photo or a video seems to show that someone has done something they haven't actually done.

Or maybe it seems to exonerate somebody, but the school can't tell if it's real or not, 

[00:59:59] Ben Kornell: right? 

[00:59:59] Evan Harris: You can imagine all kinds of financial and data attacks. Maybe someone in the business office of the school gets a call from "the principal" saying, hey, we've got this invoice to a new vendor, this is urgent.

It's past due. That's already happened to businesses as well. The last point I'll make: when I help schools with their policy on deepfakes, one of the things they typically do is say, okay, we're just gonna say you can't make deepfakes of anybody in the community, their voice, their likeness, and that will keep us safe.

Or that's at least a good start. And that's true, but if you look, for example, at what happened in the floods in Asheville, the Asheville area in North Carolina, a while back, there were all kinds of images of flooding that were fake, and it really confused first responders. And so you have to think, too, about

the idea that you can create a deepfake photo of a school where maybe there's not a community member in the image, but it makes it look like a building's on fire or there are gunshots ringing out. You could imagine that pretty easily happening, and maybe a parent sees it and calls the police or the fire department.

Yeah. Protecting school properties, school events. You can see how that would be an important part of student safety, even when a specific community member is not targeted. 

[01:01:09] Alex Sarlin: Yeah. I mean, this is probably an inappropriate take, I hope it's not too inappropriate, but I think back to the old schemes and pranks that used to be played 50 years ago in schools, where the kid would change the F on their paper to an A with a red marker, or they'd call their parent and pretend to be the principal and say school is canceled today, things like that.

You flash to the modern era, and those same types of behaviors are now so sophisticated that they cannot be determined to be unreal. Blaming students, of course, is a silly thing, but I'm just saying that if anybody wants to be malfeasant or do anything a little bit scheming, suddenly they have this unbelievable power to do it.

They can say, oh, the gym is on fire, I just sent a video to everybody, therefore school should be canceled and we should all go home. And suddenly, yeah, we don't have to take exams. Exactly.

[01:02:03] Evan Harris: It's the digital version of pulling the fire alarm.

[01:02:05] Alex Sarlin: That's exactly what I'm trying to get at.

But I don't think anybody is prepared for a world in which that much power of subterfuge is in the hands of anybody: an athletic director of a school, a student, an angry parent, anyone. What do you tell schools when you lay out some of these threats and they go, oh my God, what do we do about this?

[01:02:27] Evan Harris: Well, again, I try to break it down into those three pieces: the policy side, the crisis readiness side, and the preventative education side. One point I will make is that there is a lot of good science about combating misinformation; that is not a new pursuit. Actually, a lot of that goes back even to the 1960s.

We've had studies about how to combat false beliefs and misinformation in people. One of the things that's maybe kind of counterintuitive, but I think is really interesting, and the science behind it is super robust, is something called inoculation theory. Mm. Which is this idea, again counterintuitive, that as a school you want to introduce deepfakes into the community yourself, but do it with a warning.

You're giving people a toned-down, lesser dose of what they may eventually see. Right. And so if I were in charge of a school right now, I would clone my own voice saying something amazing about our biggest football rival and how they're so much better than we are.

And I'd play it at an assembly and be like, this is the type of thing you might hear. It might sound really realistic, increasingly so.

[01:03:28] Ben Kornell: Yeah. 

[01:03:28] Evan Harris: But we wanna use critical thinking: are they appealing to your sense of urgency, your sense of trust or authority, or your emotions? When we're in a heightened emotional state, we're not making great decisions or using our best critical thinking. There are tons of studies about how to combat misinformation.

The last thing I'll say about inoculation theory that's kind of cool is that it creates at least a short-term umbrella effect that protects us from threats that weren't explicitly part of the warning. So even though, in this case, I'm just creating a vocal clone of the principal, it inoculates us to some degree against other types of threats that are similar.

And it's not like the flu vaccine every year, where if you didn't get that particular strain, you're screwed. It's not like that. It gives you, again, this kind of umbrella protection, and it's just really worth doing. I understand why schools would be hesitant to introduce deepfakes on purpose, but you do it with this kind of warning, and you do it with, again, a toned-down version, 'cause you don't want it to be so realistic that people throw up their hands and say, well, there's no way I'd ever know that's fake.

Right, right. You've gotta tone it down a bit so that people can get their arms around it. But you want to be the person who's at the edge of this, introducing it before it gets introduced by someone with nefarious intent.

[01:04:41] Alex Sarlin: Yeah, that's fascinating. It's a really interesting way to look at inoculation theory: you can create situations that showcase what's possible without removing the danger, giving people a chance to experiment and understand what's possible and how to avoid it in the future.

I have one final question for you; this is such an interesting topic. When you talk about deepfakes, and sexual deepfakes, what always comes to mind for me, and this is my go-to metaphor for the AI era, is that it reminds me of the beginning of the internet era, when schools suddenly had to deal with this changed digital environment where

every student, and this was before phones, could go on a school computer and access anything in the world. And there was a lot of Sturm und Drang, and a lot of policy, and a lot of thoughtful technology created to make sure that what was happening on a school wifi or in a school environment was safe: that people weren't going to pornography, which is obviously parallel to the deepfake sexual stuff, or going to sites to learn how to make Molotov cocktails, or X, Y, Z.

But we're now 20, 25 years into schools knowing how to handle the internet. I'm curious if you think that this time around it's gonna be an expedited version, that we'll be quicker to get on top of some of the threats of AI, some of the negative aspects of it, and embrace it the same way.

Obviously now the average district uses, I think, around 1,300 tools, and they're all internet-based, so ed tech has gotten onto the internet and the internet has gotten into schools. Is the same thing gonna happen with AI, or do you still feel like we're on a knife's edge, and some of these negative things are gonna be really problematic for schools and for ed tech companies trying to get AI into the school environment?

[01:06:22] Evan Harris: I wish I had a clear answer. I think the only answer I can give is that I just don't know. I mean, part of me thinks that we really may be at the beginning of an exponential curve here.

[01:06:33] Alex Sarlin: Yeah. 

[01:06:33] Evan Harris: It's difficult sometimes for me to parse what is just the Sam Altmans of the world hyping their tech and pumping their stock price and justifying the investments they've made,

and what of it is real: that we really are at the beginning of this kind of exponential curve, and that things are about to shift more in the next year or two than they have in the last decade. Both of those scenarios seem plausible. And part of me, too, looks at the way that schools handled cell phones, for example.

[01:07:01] Ben Kornell: Yeah. 

[01:07:01] Evan Harris: We were struggling with cell phones for well over a decade. Yeah. And it's just recently that we figured out, hey, what if we just said no cell phones? Why did that take so long? And so I do think we're gonna be faster, I've already seen that. I think we're gonna be faster to adapt to this.

The question in my mind is, will it be fast enough? Yeah. Because I tend to think that if you were to bring me back in 18 months, I might have some new emerging threats that we were just not aware of at this point, because the tech wasn't there yet. And so I think that being really nimble, practicing these crisis readiness skills, which I think do transfer to different types of crises,

that's gonna be extremely important for schools in the future. That kind of agility to create new policy, create new preventative education programs, and create new crisis readiness protocols, and do it fast. Yeah. That is gonna be the thing. Yeah. Again, we're just heading into this environment that is too

[01:08:03] Alex Sarlin: uncertain.

Yeah. Things keep changing. So interesting. Evan Harris is the president of the Pathos Consulting Group, working on deepfakes in schools, including deepfake sexual abuse, and it is a very intense and very important topic. Thank you so much for being here with us on EdTech Insiders. Oh, thanks, Alex.

Appreciate it. For our deep dive on EdTech Insiders' Week in EdTech, this week we're talking to Becky Keene, the author of AI Optimism, which is a really exciting book about AI optimism specifically for education. Becky Keene is an educator, author, and speaker focused on innovative teaching and learning. She specializes in instructional coaching and integrating AI, speaks globally on AI, and has spent over 20 years designing professional learning experiences.

She's a National Board Certified Teacher, a LinkedIn Learning instructor, and most recently the author of the book AI Optimism, and she holds a master's in education in early literacy. Welcome to the podcast, Becky.

[01:09:02] Becky Keene: Thanks, Alex. It's nice to be here. 

[01:09:04] Alex Sarlin: Yeah. Well, this book was a breath of fresh air for me.

I am also an AI optimist. I'm really excited about the potential. It just feels sometimes like the zeitgeist on AI in education is very iffy, and a lot of people don't feel that optimism. So tell us what led you to write this book and how you define AI optimism.

[01:09:25] Becky Keene: Well, let's start with the definition.

So I define AI optimism as choosing to focus on potential more than problems. An AI optimist doesn't ignore the challenges that come with AI. We don't throw out important issues like bias, equity, and student privacy. But we choose to use the tools that are available based on what we believe is best practice and what we think can have the most positive impact in our classrooms and systems around the world.

And then the reason I wrote the book is because I started feeling concerned a couple of years ago about the overemphasis on just using AI. I would see it in a lot of social media posts and Facebook groups I'm a part of; I'd hear it at conferences. Teachers would say, I wanna use AI with my students, I wanna use AI to write my lessons.

How do I get started? And I felt like we rewound 25 years to when teachers would say, I wanna use laptops, or I wanna use iPads, or I wanna use Minecraft, or I wanna use pick a tool. 

[01:10:30] Ben Kornell: Yep. 

[01:10:31] Becky Keene: I love that enthusiasm to get started, but I really, really feel with my background as an instructional coach, that that is the wrong approach.

[01:10:39] Ben Kornell: Yep. 

[01:10:39] Becky Keene: I feel like we should be focused on what we want to achieve with our students and the vision that we have for them and for their futures and partnering with them and helping, you know, manage student expression and access and all these other really important things. And then we say, okay, what tools fit that model?

What tools fit what I'm hoping to achieve? And right now AI definitely delivers in many of those places. And so I started with that, which is where the AI optimism framework came from, and then I realized that. The SAMR model really felt like a good fit. And so the title of the book actually came last. I had written the entire thing, and it had been through developmental editing and, and I just really couldn't land on what I was trying to communicate.

I had a whole mental block there, and my editor finally said, okay, you really need a title. We need to make a cover; this is happening. And so I talked to a couple of trusted friends and the editor, and we went back and forth, and finally someone said to me, are you just trying to help people be more optimistic?

You should call it that. Yeah. It's like, oh. So we did, and the title has gotten such positive, amazing feedback, which is great. 

[01:12:00] Alex Sarlin: I think it's specifically really helpful to have that optimistic viewpoint in light of so many of these glass-half-empty takes that you read in the papers or in magazines right now.

I think there are a lot of concerns, and as you said, the concerns are real. Nobody is trying to ignore bias or hallucination or cognitive offloading or all of the things that are potential risks with AI. But we have hardly begun to embrace all the potential, and I think that's what's so exciting about the book.

Let me quote you to yourself right here, 'cause I love this quote from the book. You said AI optimism is an approach grounded in possibility rather than panic, agency rather than inevitability. And that feels dead on: the possibilities of AI are so exciting. And I think jumping right to the panic, or, to your point, right to the "how do I use AI,"

really misses the point. It's, what do you want to accomplish in the classroom, and how might AI actually get you there in a way that you never imagined possible? So you mentioned the SAMR model: substitution, augmentation, modification, and redefinition. It's a model for integrating technology into education, and it's how the book is organized.

Tell us about how you think about that model and why it's such a useful fit for thinking about using AI in all of these different types of educational tasks. 

[01:13:23] Becky Keene: Well, I think it just makes sense as a self-reflection tool and also a planning tool. Let me explain that. Say I look at the AI optimism framework and decide that I am trying to support, that's one of the six tasks in the framework, and I want to do something to better support my learners.

Maybe I'm focused on UDL, or maybe I just realize they need help. And from that point, I can make a decision about how much I want to include technology, in this case AI. How much do I want it to help me? How much do I want it to be involved in transforming the education process or the learning experience for my students?

I can choose to involve AI at a very substitution level: I can use it to very quickly do something that I could have done with my students if I could multiply myself 30 times, right? Or I could choose to do something that adds functionality or completely changes the task at hand. And I think that's empowering.

I hope that's empowering for educators, to realize I get to choose how much I allow these tools to inform and be a participant in the learning experience, based on my comfort level, my expertise, the tools available in my classroom, and what I'm trying to do. But ultimately, if I have this desire to use AI in very powerful ways, then I have a roadmap to that.

I have a pathway of kind of where to get started and how to incrementally innovate along the way. 

[01:15:00] Alex Sarlin: Yeah. And you mentioned that you have these six educational tasks in the framework: design learning experiences, create engaging content, support diverse learners, analyze data, evaluate outcomes, and manage operations.

And then the book is organized around, given each of these tasks, how might you use AI down the SAMR stack: to substitute, to multiply yourself, to augment, and then to modify. And the last chapter of the book is all about re-imagination and redefinition, what is possible if you really let your imagination fly and try things in a completely different way.

And tell us about some of the things you put in that redefinition category. Because I think those are some of the most inspiring ways that we can see AI playing out in the next few years. 

[01:15:42] Becky Keene: I agree wholeheartedly, and that's why it's its own chapter, because I realized I couldn't just integrate it into the rest of the book, because the R level is not teacher-centric at all.

The R level is very much: students are the creators. Students are the full, active participants in the entire learning process. It's that passion-based learning, project-based learning thing, where kids are the drivers, they have lots of choice and voice, all the things we love about great education experiences. And it had to be separate.

And so one of the things I talk about in there is students as game creators. I'm headed to Serious Play, the conference, in just a couple days here, talking about game-based learning. It's something I am passionate about. And what I think is amazing about AI, again, to your point at the beginning of the show, love it or hate it:

kids have the ability now to create code, even if they don't have a background in coding. Yeah. Now, does that make them an expert in Unreal Engine or whatever coding platform they're using? No, but it does open that gateway. And we call it invisible industries, where kids may not have realized they were interested in being a game director, game producer, game developer, or game tester until they had a chance to do it without the barrier of learning code.

Exactly. It's incredibly exciting to think about that lift for kids: to create a mini game for someone else to play, based on their ideas, without being held back by the technical components. And I think that's an excellent example of redefinition.

[01:17:28] Alex Sarlin: A hundred percent. And I completely agree that one of the most underestimated changes AI is going to bring to education systems globally is that it puts students in the driver's seat for almost any type of task.

They can do almost professional-level work right outta the gate. And that can create motivation and excitement and shareable artifacts, instead of working from hello world on up, having to start with your tic-tac-toe and all the exercises that people do in early coding.

You can go right to your idea. We interviewed the founder of a company called Funk a few weeks ago, who does mobile app development in classrooms. And he said, the moment when students realize they can actually make an app and put it on someone else's phone that they can open and use, you just see their minds

change. They feel so empowered, and they can do more; they just see themselves in a totally different way. And that's what I'm hearing you say there. You also talk about AI redefining assessment, which I found was a really interesting part of the book. Let me quote you to yourself one more time here, because I thought this was a great springboard.

You say, and it's relevant to the creativity as well: picture a history assessment where, instead of writing a summary of the American Civil War, which AI can do quite well, students analyze AI-generated narratives from different perspectives, identifying biases, questioning assumptions, and creating their own multimedia exhibit that presents the conflict through multiple lenses, demonstrating not just knowledge of facts but deep historical thinking.

And what excites me so much about this is the idea of raising the bar of what you can expect students to do, mm-hmm, so much that you just have a totally different vision of what education looks like. Tell us how you think AI is gonna change assessment in the future.

[01:19:10] Becky Keene: Well, I'm not a futurist, but I have hope, because I'm an optimist, that we will see a shift.

I've been in education long enough to have had this exact same conversation during the internet revolution 25 years ago. 

[01:19:24] Ben Kornell: Yep.

[01:19:25] Becky Keene: Where we all thought, oh, it's all gonna change, because now kids have the internet. And what we saw was not entirely that. I mean, we certainly have educators doing amazing things that are completely unique and different than what we had 25 years ago.

But holistically, as a system, I think we are still a little bit trapped in an era where students are consumers, students are focused on knowledge checks, they're given tasks that AI can do, and can do really, really well, in many, many cases. And so I hope that assessments will shift into an environment where students are truly using critical thinking skills.

We can level that up. We don't have to be in this space where we're doing these multiple-choice assessments and matching quizzes. We can move past that. And when I hear teachers complain a little bit, like, oh man, these kids just took my assessment that I created with AI, and they used AI to complete it,

I think, well, yeah, that's what I would be doing as an adult. I don't wanna make light of students cheating and the teacher frustration around that, and I don't wanna make light of kids, quote, skipping important critical thinking steps. But I do think we have this great opportunity to personalize, which is a term that's getting overused right now, but it's the term that works,

and to give students an opportunity to do something different. And I hope that it's something that really captivates their interest and causes them to realize they can't GPT their way out of it, something that they have to actually dig into because they care and because they have an interest in the topic or the content.

We can't do that with everything, but I think we can get there a lot. 

[01:21:12] Alex Sarlin: Yeah, and it has this sort of dual benefit of giving them hands-on experience with these really powerful tools and then asking them to level up their thinking and do more with them than they were previously asked to do in other types of assessments.

In the Civil War example you just gave, to do that well, students have to understand how to use AI tools, which is very valuable, and they have to engage much more deeply with the material than they might in a more fact-based assignment. That's a really exciting future.

[01:21:41] Becky Keene: Yeah, I completely agree.

And another point I make in the book is having conversations with the students on, does this use of AI limit my learning? If students are at all interested in what they're learning, or they understand the purpose or the thinking skills that a task is building, they're going to be more likely to invest a little bit of energy in the thinking process than if we're just handing out assignments because they fit the curriculum.

I think there are really good conversations to be had here around this whole topic, and it lends itself really well to exciting new experiences.

[01:22:18] Alex Sarlin: Yeah. Let me ask you one more question. I highly recommend the book; you got into so many different aspects of this. You mentioned the creativity piece, and I feel like that is one aspect of it that is so thrilling.

You mentioned the Serious Games conference, a great conference, but you also mentioned lots of different types of creative outputs that are enabled with AI, right? There's video, there's storytelling, there's storyboarding, there's imagery, there's audio podcasts. I'm curious how you see the ability for students to create games, of course, as you mentioned, and incredibly high-quality multimedia artifacts.

It feels like that is another piece of the AI story that is not often told when people worry and complain about AI in education. I think they're thinking about the current assignments, and the arms race you mentioned where the teachers and the students are both using AI, but they're not thinking about this incredible opportunity for students to create

anything they can think of. Mm-hmm. Tell us more about how you address it in the book, and what you think teachers should be doing in their classrooms to enable that kind of creativity.

[01:23:23] Becky Keene: Well, in the book I try my best to give many, many different examples, I think there's over a hundred, I haven't actually counted, of specific learning tasks and experiences that can be supported by AI.

So in the create chapter, for example, there are different ways at each SAMR level, and those ways are not just dedicated to teachers. The book talks about student use cases for almost every single task that's in there: how are students engaging with the AI itself? So the create chapter is probably a great one to go to for that.

[01:23:58] Ben Kornell: Yep. 

[01:23:59] Becky Keene: And getting out of this kind of text-generation mindset, where kids are generating paragraphs. That's great, but just today, as we're recording, Copilot launched 3D imagery, 3D objects that are created by AI based on an image. So I can now create these 3D artifacts that can go into a virtual space; PowerPoint, for example, will take a virtual 3D object.

And there are other platforms as well, of course. But it's about being able to think outside of what I've been doing for my entire career, and I'm speaking in first person here, because the technology is new. It really lends itself nicely to students doing things they quite literally couldn't do before.

[01:24:47] Alex Sarlin: Yeah, yeah.

[01:24:47] Becky Keene: And students as creators is the most empowering thing you can do for your students. So it's not only about videos and music and imagery and games and 3D objects, and now we have Google's NotebookLM putting out video overviews and podcasts, and Word Online, and all these things we can do now. But for students to just be generating those without purpose

is where I wanna be careful, because yes, we can generate all of those things, but is it part of a learning experience? I wanna make sure that comes out loud and clear, because even though those things are exciting, let's not get into the pathway that we started this conversation with, which is, well, I want my kids to create a video with AI.

Like, that's not the point. 

[01:25:34] Alex Sarlin: Exactly. 

[01:25:35] Becky Keene: The point is, what are we trying to learn and teach them how to do, creatively and critically, with computational thinking and collaboration and communication, all these great skills, self-regulation? And then, oh, if the end product happens to be that they make a video with AI,

Awesome. 

[01:25:51] Alex Sarlin: Exactly: start with the educational use case. Imagine what, in your wildest dreams, you would be assigning to your students, or allowing and empowering them to do, and then see how AI can actually make those dreams more realistic. It's a really exciting way to look at things. Do you have any final notes? This is a podcast primarily for the education technology industry itself, although we certainly have educators as listeners.

For those of us in the ed tech field who are building these tools, what would you recommend to help spread that gospel of AI optimism, a balanced optimism that allows people to see the possibility without losing sight of the risks?

[01:26:35] Becky Keene: I would say, for the EdTech app industry, which I see a lot of at conferences, and I have done a lot of work with EdTech apps over the years:

student privacy is first and foremost, always, especially in K-12. We really have to see that loud and clear in all of your websites and communication. That should be the first thing you're focusing on. The second thing would be best practice. I'm so disappointed, over and over, when I see ed tech apps just go for the easy button, promoting, look, you can make a quiz.

Look, you can make a worksheet. Look, you can make a lesson plan. Because they think that's what teachers want. Maybe someone in marketing said, you know what you should do, tell 'em they can take a video and make it into a slideshow. And that's moving backwards in pedagogical practice, not forward.

So: having educators on your team, having educators as consultants or advisors, really talking with educators about what you're trying to achieve with students and what their pie-in-the-sky idea is, and then helping them reach that. And really targeting: we care about kids and learning first, instead of, we care about making everything easy. Because I think a lot of teachers see through that as kind of a fleece, and ultimately they're in teaching because they care about kids and learning.

So focusing on that, and building products that really support kids and learning, is going to pay off, I think, in the long run.

[01:28:00] Alex Sarlin: Fantastic. Thank you so much. So this is Becky Keene. She is the author of AI Optimism and an educator, author, and speaker with over 20 years designing professional learning experiences.

Becky, thanks so much for being here with us on EdTech Insiders. 

[01:28:14] Becky Keene: Thanks for having me. Have a great recording and we'll chat soon. 

[01:28:18] Alex Sarlin: We have a special guest on today's Week in EdTech. This is Max Spero. He's the co-founder and CEO of Pangram, also known as Pangram Labs. He's a seasoned machine learning engineer with a background deploying AI/ML models at companies like Google.

Max holds a bachelor's in theoretical computer science and a master's in artificial intelligence from Stanford University. Max Spero, welcome to EdTech Insiders. Thanks for having me, Alex. So tell us what you're doing at Pangram, because I think you are doing something that is so necessary and a really hot topic in AI and education, and you're doing it from a very technical perspective.

Tell us about Pangram and how it works. 

[01:28:59] Max Spero: Yeah, so Pangram is a tool, it's machine learning based. We're building technology to detect AI-generated content. So basically we can tell you if an essay was written by ChatGPT or if it was fully human-written. And how we do it is we're actually training a large neural network on millions of documents: both fully human-written documents from the pre-ChatGPT era, as well as documents generated by all of the latest frontier models, GPT-4, GPT-5, Claude, Llama, Mistral, Meta AI, et cetera. We catch 'em all. We understand the stylistic quirks of each different model, and then we're able to tell you: is this writing making the same stylistic choices that are consistent with AI writing, or does it fall outside of that distribution? Is it human-written?
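To make that classification framing concrete, here is a minimal toy sketch in Python. It is emphatically not Pangram's system, which Max describes as a large neural network trained on millions of documents; this sketch uses TF-IDF features and logistic regression purely to illustrate the idea of learning stylistic signals from labeled human and AI text. The texts and labels are invented for illustration.

```python
# Toy AI-text detector: learn stylistic signals from labeled examples.
# A minimal sketch only -- nothing like Pangram's production-scale model.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical training data: label 1 = AI-written, 0 = human-written.
train_texts = [
    "The tapestry of history weaves together countless threads of endeavor.",
    "ok so the civil war basically started because of a bunch of stuff",
    "In conclusion, it is evident that multiple factors shaped the outcome.",
    "my dog chewed my first draft so this version is shorter, sorry",
]
train_labels = [1, 0, 1, 0]

detector = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),  # word and word-pair "style" features
    LogisticRegression(),
)
detector.fit(train_texts, train_labels)

# Estimated probability that a new document is AI-generated.
doc = ["It is important to note that several key factors contributed here."]
print(detector.predict_proba(doc)[0][1])
```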

[01:29:51] Alex Sarlin: Yeah. And that AI detection space is such an interesting one.

You're up against some incumbent competitors, like Turnitin and Grammarly and Honorlock, and a whole bunch of interesting companies that have been trying to figure out how to crack AI detection, and honestly, it's been a hard nut to crack. So I'm curious what you feel is your edge. You obviously have an amazing background in machine learning.

What is your advantage as a startup, that you can move fast, that you're in the AI-native era? How do you plan to make incredibly powerful technology that competes with some of these incumbent companies?

[01:30:23] Max Spero: Yeah, so first off, I think we have a really strong team. Like the whole team has a strong machine learning background.

We've been training large AI models for our entire careers. But not only that, I think we have a really good data set. We have this prompt library, so we're able to prompt these different AI models to produce outputs in a way that's actually very similar to how a student, or any sort of bad actor on the internet who's trying to spam AI reviews, something like that, would.

They're using advanced prompts. They're not just saying, hey, write me a review about blank. They're saying, write it in this style. Mm-hmm. Start with these words, keep it short, keep it concise, write about this topic. And we're able to do the same thing, so that our model is able to learn more than just level one, which is catching "generate an essay on X," and instead learn level two: catching essays that are written in a certain style, things like that.
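As a rough sketch of what a prompt library in the spirit Max describes might look like, here is a small Python example. The topics, styles, and constraints below are invented placeholders; in a real pipeline, each prompt would be sent to a frontier model and the output labeled as AI-written training data. None of this is Pangram's actual code.

```python
# Sketch of a prompt library: combine topic, style, and constraint so the
# generated training data mirrors how real users actually prompt models,
# not just flat "write an essay on X" requests.
import itertools
import random

topics = ["a review of a wireless keyboard", "an essay on the causes of WWI"]
styles = ["in a casual first-person voice", "in a formal academic tone"]
constraints = ["keep it under 150 words", "start with a rhetorical question"]

def build_prompts():
    """Yield every topic/style/constraint combination as a full prompt."""
    for topic, style, constraint in itertools.product(topics, styles, constraints):
        yield f"Write {topic} {style}; {constraint}."

prompts = list(build_prompts())
print(len(prompts))            # 8 distinct prompts from this tiny library
print(random.choice(prompts))  # one randomly chosen generation request
```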

[01:31:21] Alex Sarlin: Gotcha. And you're out in the field working with universities and various school customers. What is the zeitgeist right now? We read a lot of articles about how people are worried about AI generation and AI cheating;

that is pretty much a constant thread in the mainstream press. I'm curious what it looks like for you on the ground when you talk to potential customers for Pangram. How do they express their concerns and worries, how are they dealing with AI cheating, and what do they want from an AI detection tool?

[01:31:54] Max Spero: Well, yeah, first off, educators are desperate for a tool that works. 

[01:31:58] Alex Sarlin: Mm. 

[01:31:58] Max Spero: There's so many things out there that either don't work. They have so many students that are using ai. There's a lot of schoolwide policies which are very unclear about like, what, right. You know, they can and can't do. Can they give us student a zero or like refer them to the academic charity office, like depending on.

just based on suspicion of using AI? So I think what they're really looking for is clarity and transparency. And I think we're able to provide that, because we are very open and transparent about our methods. We're very open about our accuracy rates and false positive rates. One of the biggest fears here is, hey, how can I make sure I'm not falsely accusing a student of using AI?

[01:32:40] Alex Sarlin: Yeah. Let's talk about that false positive issue, 'cause I feel like this is one that maybe is underappreciated. It's not covered as much in the press, but it is a huge deal within any school and educational environment: the false positive, meaning an AI detection tool flags a student's work as AI-generated when it may not be.

That would be a false positive. Mm-hmm. Tell us how you deal with false positives, what you mean when you talk about your false positive rate, and how you handle it in the product to make sure that you're not creating an environment of false accusations.

[01:33:12] Max Spero: For sure. Yeah, I think first and foremost we handle this by just training a really good model that doesn't have false positives on the same scale that others do.

There are a lot of AI detectors out there that have a one to two percent false positive rate. That's one in 50 or one in a hundred. If you use a tool like that enough, you're bound to have a false positive. With Pangram, we have a one in 10,000 false positive rate. This is a number from third-party studies.

These are university researchers looking at the accuracy rates of different AI detectors. And one in 10,000 is, I think, much closer to a level you could trust. Obviously it's not zero, but it's low enough that you could look at a result and say, okay, I would need significant evidence on the other side to be convinced that it was truly a false positive.

And so I think that's the best way we can provide this trust and make sure we don't have false accusations: just not having false positives in the first place.
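A quick back-of-the-envelope comparison, using only the rates Max quotes, shows why that difference matters in practice. The essay count below is an arbitrary assumption for illustration:

```python
# Expected false accusations when screening genuinely human-written essays,
# at the false-positive rates quoted in the conversation.
essays = 1_000  # hypothetical essays screened by one school in a year

rates = {
    "typical detector (1.5%)": 0.015,   # midpoint of the "one to two percent" range
    "Pangram (1 in 10,000)": 0.0001,
}
for name, fpr in rates.items():
    print(f"{name}: ~{essays * fpr:.1f} expected false accusations")
# Prints ~15.0 vs ~0.1: routine false accusations versus a rare event.
```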

[01:34:08] Alex Sarlin: Yeah. It's great if you can make it work; it takes a lot of technical prowess. One thing I wanted to ask you about as well is that we saw GPTZero come out as a detection tool

early on. I think it was created by a Princeton student, and it grew very quickly. It was one of the first generation of detection models, and it's funded, it's still going. But what I'm hearing you say, and I'm really interested in it, is that

if the technology is used in the service of detection, you can get better statistics, you can go further, and people are desperate for something that truly works. And if there is a bit of an arms race between the two sides, students trying to get their work through and educators trying to identify who's using AI, it feels like you're offering a step change in the ability for educators to actually detect accurately.

Is that what I'm hearing? And I'd love to hear you talk about why the machine learning technology supports that type of use case.

[01:35:06] Max Spero: Oh, a hundred percent. Yeah, I totally agree here. I think we are a big step change. We're a new generation of AI detection technology. In contrast with some of the other free ones,

we were not available within weeks or months after ChatGPT came out. It took us longer to produce our technology, but we also didn't want to launch with something that could be falsely accusing students. So we waited to launch until we had a really good product, and now we're out there, and yeah, we are significantly better than everything else.

And besides our prompt library, which I talked about, and our much larger data set, one of the other things is we're just training a much larger model. Yep. Our models are 10 to 20 to 50 times larger than most of the other AI detection models out there. That's more parameters, and what it means is our model is better able to understand the nuance involved in a piece of writing.

We're not just counting em dashes and looking for key phrases like "tapestry" and other things that AI overuses. Instead, we're holistically taking in the entire document and the choices that AI makes across the full document. This is a step-change difference. This is like the difference between GPT-3 and GPT-4, except for AI detection.

[01:36:22] Alex Sarlin: Yeah. And that's such an exciting vision. My personal bias here is I just hate the narrative of AI cheating. It makes me so sad, on so many levels, that that's what so many people focus on when they think of AI and education. But I think a solution like yours has the potential to take us to the next chapter, which is that, okay, if we can get a handle on what AI detection looks like, then instead of all of this hand-wringing and desperation among educators to get on top of it, we could start thinking about all the things that AI can do that are not just about bypassing assignments.

I'm curious if you see it that way. Do you see part of your role at Pangram as moving the entire AI and education ecosystem to the next chapter, where it's not all about this sort of conflict?

[01:37:08] Max Spero: Oh, absolutely. Like all of this back and forth about like, were they cheating, were they not?

How can I tell? All of this is because we don't have the tools for transparency necessary to just move on to the next step. So what we're seeing is that once people are able to introduce Pangram into their workflow and into the classroom, it's getting a lot easier to actually integrate AI into the classroom as well.

Now that we have these guardrails in place: we know that AI is a powerful cheating tool, but it's also a powerful learning tool. And so if we can set this guardrail and say, hey, no, we're not gonna let you use AI to fully generate your entire essay from an assignment prompt, then that unlocks the next step, which is, okay, how can we use AI to be more productive, to learn faster,

to go more in depth on the material?

[01:37:58] Alex Sarlin: Exactly. It's a really exciting vision and I'm rooting for you in exactly that way. If you can create that step change, be a new generation of AI detection, that sort of takes this detection battle outta the front lines of AI and education makes it sort of a solved problem and allows us to move to all of these other amazing use cases for ai, I think you'll, you'll have done.

one of the most important things possible in the AI landscape in 2025. I wish you the best of luck with the new generation of AI detection at Pangram. Max Spero is the co-founder and CEO of Pangram. He's worked on AI/ML models at Google and at Stanford. Thanks so much for being here with us. I'm gonna follow your trajectory, and maybe you can come on for a longer interview at another time.

It's really great to meet you, Max. Thanks so much, Alex. Really appreciate it. Thanks for listening to this episode of EdTech Insiders. If you like the podcast, remember to rate it and share it with others in the EdTech community. For those who want even more EdTech Insiders, subscribe to the free EdTech Insiders newsletter on Substack.
