Edtech Insiders
Week in Edtech 12/10/25: OpenAI’s Teacher Certifications, Kids’ Online Safety Bills, Early Literacy Declines, College Admissions Shakeups, and More! Feat. Maya Bialik of QuestionWell, Peter Nilsson of Athena Lab & Emily Gill of LEVRA
Join hosts Alex Sarlin, Ben Wallerstein, and Matt Tower for Week in Edtech, exploring OpenAI’s teacher certifications, kids’ online safety legislation, early literacy declines, college admissions pressures, and what remains irreplaceable in education as AI advances.
✨ Episode Highlights
[00:03:00] OpenAI launches teacher certifications, expanding into K–12
[00:05:00] Big Tech credentials raise control and gatekeeping concerns
[00:07:55] Doubts emerge around certifying AI pedagogy
[00:12:40] Google and OpenAI intensify competition for schools
[00:14:15] Congress advances online safety bills affecting edtech
[00:19:15] COPPA changes threaten AI personalization
[00:24:05] Parent reading declines deepen literacy gaps
[00:26:45] Early childhood remains underfunded despite high impact
[00:31:15] College admissions lean further into yield management
[00:33:10] AI reshapes admissions essay review
[00:36:20] Trust in higher education continues to fall
[00:40:40] Education systems face pressure to adapt
Plus, special guests:
[00:45:30] Maya Bialik, Founder of QuestionWell, and Peter Nilsson, Founder of Athena Lab, on their book Irreplaceable: How AI Changes Everything (and Nothing) in Teaching and Learning
[01:11:16] Emily Gill, Co-Founder & COO of LEVRA, on AI simulations for human skills development
😎 Stay updated with Edtech Insiders!
Follow us on our podcast, newsletter & LinkedIn here.
🎉 Presenting Sponsor/s:
Every year, K-12 districts and higher ed institutions spend over half a trillion dollars—but most sales teams miss the signals. Starbridge tracks early signs like board minutes, budget drafts, and strategic plans, then helps you turn them into personalized outreach.
This season of Edtech Insiders is brought to you by Cooley LLP. Cooley is the go-to law firm for education and edtech innovators, offering industry-informed counsel across the 'pre-K to gray' spectrum. With a multidisciplinary approach and a powerful edtech ecosystem, Cooley helps shape the future of education.
Innovation in preK to gray learning is powered by exceptional people. For over 15 years, EdTech companies of all sizes and stages have trusted HireEducation to find the talent that drives impact. When specific skills and experiences are mission-critical, HireEducation is a partner that delivers. Offering permanent, fractional, and executive recruitment, HireEducation knows the go-to-market talent you need. Learn more at HireEdu.com.
As a tech-first company, Tuck Advisors has developed a suite of proprietary tools to serve its clients better. Tuck was the first firm in the world to launch a custom GPT around M&A. If you haven’t already, try our proprietary M&A Analyzer, which assesses fit between your company and a specific buyer. To explore this free tool and the rest of our technology, visit tuckadvisors.com.
[00:00:00] Ben Wallerstein: My home state Senator Katie Britt talks a lot about the fact that, like, this is the first generation of parents that are raising kids that have the front-facing camera on a device, you know, and the implications of that,
[00:00:11] Alex Sarlin: the risk here, risk slash, I mean, maybe risk isn't even the right word, but the thing that could come out of this moment is if the people who are very worried about social media start pushing back and start ending dark UX.
You know, if they start pushing back on addictive patterns or things that are baked into social media and gaming and video apps, that might have downstream effects, pretty major downstream effects, depending on how strong they are, on the EdTech space.
[00:00:39] Matt Tower: I think what's sad about reading these headlines, which I think we see a couple times a year, right, this sort of decline in engagement on the early childhood front, is we know how much of a compound effect it has on the lives of those kids, right?
[00:01:00] Alex Sarlin: Welcome to EdTech Insiders, the top podcast covering the education technology industry, from funding rounds to impact to AI developments, across early childhood, K-12, higher ed, and work. You'll find it all here at EdTech Insiders. Remember to subscribe to the pod, check out our newsletter, and also our event calendar.
And to go deeper, check out EdTech Insiders Plus where you can get premium content access to our WhatsApp channel, early access to events and back channel insights from Alex and Ben. Hope you enjoy today's pod.
Welcome to this week in EdTech from EdTech Insiders. We are here with two very special guest hosts. We have Matt Tower from Whiteboard Advisors and Ben Wallerstein, founder of Whiteboard Advisors. Thanks so much for being here, both of you today. Great to be here with you. Yeah, thanks Alex. This is another big week for EdTech on the podcast.
In the next couple of episodes, we'll talk to Rene Kizilcec. He's the head of the National Tutoring Observatory at the Cornell Future of Learning Lab. We talked to Nolan Bushnell, literally a legend, the inventor of Atari and Chuck E. Cheese and several EdTech companies, who's doing really interesting things in VR.
And at the end of this year, we're doing our end-of-year episode with reflections on 2025 and predictions for 2026, with a whole slew of EdTech thought leaders across the space. But for now, let's focus on some of the news of the day. So our first headline today is that OpenAI came out just a couple days ago with their first certifications, a ChatGPT Foundation certification and a Foundations for Teachers certification, sort of part of their education and EdTech initiative.
You can get these certificates from within ChatGPT, and there's also a Coursera connection as well, because OpenAI and Coursera have been doing all sorts of partnerships. Ben, what did you make of this announcement, and what does it say about OpenAI's commitment to sort of really entering K-12? Yeah.
[00:03:11] Ben Wallerstein: You know, it's interesting.
I would love to put this one back to you and to Matt a little bit. Like, I don't know quite what to make of it. I do wonder, just bigger picture, like, strategy-wise, what's the goal, right? And I think a lot of times when I see education initiatives from big tech, they're driven by driving adoption, and there's a focus on sort of platform-specific skills.
So I haven't had the chance yet to dig in and understand what these certs really look like, like what do you learn, and the extent to which they're platform-specific. But I always think about, you know, I'm not a biomedical engineer or a doctor, right? But I've been on enough med school campuses, and Matt, you've spent probably a little bit more time there more recently, six years of your life certainly, than I have. But big pharma, Stryker, we have loads of big businesses that are incorporating hardware, software, various strategies and techniques into the practice in ways that drive adoption. So I don't say that with a degree of skepticism or to suggest that it's bad in any way, but I'm curious, what's the content?
Maybe you two have looked at it more.
[00:04:14] Matt Tower: Yeah, I think what's also interesting to me is it's sort of a reminder that the people behind the company are what drive a lot of the strategy. And in this case, Leah Belsky, former Chief Revenue Officer of Coursera, is driving the education strategy. She's the GM of Education at OpenAI.
So, you know, I think it's helpful, as we think about what the big tech companies are doing, to understand who's behind them, right? And so it makes sense that Leah would start with certs and partner with her former employer as sort of a safe place to be experimenting. To Ben's point, I think we don't really know how to train people for AI.
So like, why not start with the organization she knows best and dip her toes in the water?
[00:05:01] Alex Sarlin: I agree with that. There are actually several ex-Coursera people at OpenAI in the education and government world, so they definitely have a deep connection there. I also think it's building on OpenAI's strategy of the way that they are looking to get into K-12.
And to your point, Ben, yes, I do think some of the reason for that is just mindshare for the next generation of kids, just sort of being the go-to, you know, people say the Kleenex of AI, right? The default brand in AI. They want to establish that for young people everywhere. But I also think they do truly believe that AI and OpenAI and ChatGPT
have the potential to be educationally powerful tools, like their learning mode. They think that there's a real there there, especially if it's done in conjunction with a really well-trained and thoughtful educator. So they put up a lot of money, last year, or I guess earlier this year, into training over 400,000 teachers.
They have opened this big center. They're working with the unions, with the AFT and UFT. And I think this certification is a move to say, okay, we're gonna put a little bit of a benchmark on this. We're gonna say, okay, let's actually create some asset that teachers can earn, that they can go out there and say, I officially know how to use ChatGPT in a pedagogically sound way.
We hope that that is the core of this, whatever that means. Right, exactly. Whatever.
[00:06:20] Ben Wallerstein: Like, you think about what you said, like, officially, I can officially use ChatGPT, which again, I don't know what the contours or parameters of that are, but I'm very much on the side of, like,
who knows? Like, yeah, profound educational implications, you know, all sorts of efficiencies to be created, but it's not clear to me that anyone has a really good sense of how you would certify such a thing.
[00:06:43] Alex Sarlin: That's a great point. We had Justin Reich on the podcast a few weeks ago, and he is a very deep skeptic on exactly this, on the idea of, like, we are trying to train people in
AI pedagogy, AI literacy, all these things that we do not yet understand as a society, right? We don't know how it's gonna affect education. So what does it mean to certify somebody, or to say, okay, you went through all the frameworks and trainings? We don't know whether those trainings work. We haven't gone down the line and seen whether they actually work.
So that is a very valid point. But I think the flip side of that argument, which is one that I would think the OpenAIs and Googles and Anthropics of the world might make, is that this stuff moves so quickly. If we wait five years, if we wait till all of the validated studies come out about exactly what use of AI is truly gonna drive the most educational gains,
we'll not only have missed the moment, but we'll be chasing the next generation and the next generation. The speed at which these things change is so much faster than what we can know. And I think they believe that the benefit outweighs the risks at this point, especially since, I'm sure, part of this training is ethics, integrity, you know, how to design assignments that get ahead of some of the obvious downsides.
There may be a lot of hidden downsides. We don't know that. Yeah,
[00:07:57] Ben Wallerstein: no, I think that makes sense. And you know, again, I haven't seen it, so I don't know what's in there in terms of just understanding and anticipating, based on the information the models are using, how that gets interpreted. I mean, I actually don't know how public this all is yet, but we're privy to some really interesting information from CodePath.org, which was looking at sort of hiring practices and the extent to which they're impacted.
Like, these are really hardcore engineering folks talking to CodePath.org about the ways in which AI was shifting or evolving their expectations for entry-level roles. And it was the evaluation of AI-generated content, right? Thinking critically about the output and how it can be used, interpreted, applied.
That was sort of the number one skillset, which makes tons of sense, right? I think the same holds in the digital marketing context or others. And so to the extent that understanding how this stuff works is useful in terms of inculcating a set of heuristics that allow people to do a better job of thinking critically and interpreting that stuff,
I think it's really exciting.
[00:08:58] Alex Sarlin: Yeah, we'll look back, we say this all the time on the podcast. We'll look back at this moment as this hugely transitional moment where this arrival technology, as they say, you know, landed on everybody and we are all trying to decide how to use it, when to use it, when not to use it, how to put together all the pieces.
And I think that I'm still bullish on it. I think the potential outweighs the risks in most cases, but it is easy and not unreasonable to be skeptical about the motivations of some of these.
[00:09:24] Matt Tower: Yeah, just on that note, I think it is interesting that they decided to do this directly, right? Like, yeah, they're offered on Coursera, but they say in the announcement, we're planning to do this in-app, right?
And there is also a strategic choice to do this on their own platform versus picking a credential provider to do it in conjunction with. Again, it's not necessarily good or bad, but it is a very specific strategic choice.
[00:09:54] Ben Wallerstein: By the way, I'm not even one ounce skeptical of it or critical of it at all.
I just think the thing that's interesting to me is taking something that's very, very complex and unknowable. I feel the same way about, you know, the regulatory environment. I have more of a sort of laissez-faire philosophy on that. I don't know that everyone would agree with that.
And I think there are certainly risks in a number of contexts. I think we've seen in the past administration more of a prescriptive or muscular federal role vis-a-vis AI, and this is a big tension. But I think there's a risk in, and again, I haven't even seen it, so anybody who's listening to this will say, well, you haven't even seen it,
and I'm acknowledging that, but you're asking for a reaction. So I'm saying, I think the distillation of very complex things into something that maybe is suggestive of a level of understanding of things that maybe we can't possibly understand yet, like, is notable.
[00:10:46] Alex Sarlin: Yeah.
And yeah, I think there is gonna be a movement from some folks about, you know, what it looks like to do AI safely and pedagogically in schools. I agree with what you're saying, Matt, that I don't think OpenAI itself particularly has the trust of the education environment to be the certifiers of that. I mean, we saw Common Sense make some moves on privacy.
We're seeing the TeachAI movement think about certifications. I think it's a moving target, because it's a vacuum, frankly, because nobody knows what it looks like. It's just as reasonable, I think, for OpenAI to do it themselves as to do a complex third-party thing, or create an amalgam of really trusted nonprofits to create a thing.
I mean, they got it out very quickly, let's put it that way. Yeah. It would have taken a lot longer the other way. Yeah.
[00:11:33] Ben Wallerstein: Yeah. And again, there are just all sorts of interesting questions and layers of transparency. You know, if you look at even just the incorporation of AI features into, like, a Google Doc.
To what extent are educators aware of that? Do they understand the utility associated with that, and, you know, from a pedagogical standpoint with respect to writing? So again, I haven't seen it, but if it was, like, we are going to help you understand how AI has been incorporated into Google Docs, and even some of the basic functionality around correcting grammatical errors or suggesting words or phrases or things like that.
How does that work? How could that conceivably be applied? What might be the pedagogical relevance to you as an eighth grade English teacher, as a 10th grade teacher, whatever? I mean, I think those kinds of things are interesting. And then there's this other piece, which is, you know, what we're really solving for in terms of understanding these days.
Like, what do interactions with AI look like? How and where do those manifest in a whole range of different contexts? And just trying to create some additional clarity there, where there's a different sort of baseline level of understanding. So anyway.
[00:12:41] Alex Sarlin: Well, I just have to jump in there, because there's also context. You name-checked Google in there about working in Google Docs.
Google has a very robust ecosystem of certifications for teachers. Yeah. Has had for a long time. So that's another reason, I think, that this is happening: OpenAI knows they're in competition with Google for the K-12 teacher market space. And yeah, teachers have had a lot of Google certifications, including AI certificates, which have also launched very recently,
for exactly the kind of, you know, eighth grade Google Docs use case that you're talking about. Google also made some news this week for a couple of other things that I think are worth mentioning in passing, because it just shows the sort of scale of this land grab, you know, these two 800-pound gorillas going at each other.
Google sold to the Pentagon this week, right? The Pentagon said that officially Google AI is gonna be its AI provider; that's millions of employees. BNY, the big bank, is teaming up with Google to do AI. And we had Google prototyping their smart glasses, sort of announcing their first AI glasses that are coming out, which of course Meta has been doing for a while.
Google Glass was the famously failed experiment. But you just see the size of the deals. You see OpenAI make deals with the country of Greece, and I believe Estonia. It really feels like these two giants, I don't know, they're taking over huge swaths of the business world, of the education world. And I think if you get inside each of these companies, I'm sure they're very aware of what each other is doing.
And my guess is that the certificates are a little bit of a response to the robust Google certification system as well. Matt, anything to add? We should move on to some of our education stories. Yeah,
[00:14:17] Matt Tower: well, I was gonna say, you talk about the competition between OpenAI and Google. I think it's also sort of a competition,
and this is one of our other stories, with regulators, right? The House was debating how to protect kids online. And you know, I think a lot of the certifications are maybe less preemptive regulatory than they are just a good thing. But I think a lot of these companies are taking sort of preemptive action to get out ahead of regulatory action,
whether that's saying, look, we are training people to be safe users of ChatGPT, or what have you. So I think that's worth noting. And to Ben's point, I assume part of the ChatGPT cert is related to how to use it responsibly and safely. Sure. But to me, that's also, you've got, yes, market competition, but also regulatory competition to keep in the back of your mind.
[00:15:11] Alex Sarlin: So let's talk about that House subcommittee story, because I think we saw Australia basically try to ban social media outright over the last few weeks for children under 16, and we're now seeing the US House subcommittee debate a number of different bills about how to protect children online, in this environment where I think people are truly getting afraid of this.
This is a special, you know, area of regulation, of student safety, that I think both of you think a lot about. What do you make of these debates? There are 19 bills that have been introduced to safeguard minors on the internet and specifically social media. I think you could call this sort of the Jonathan Haidt effect.
What do you make of it? This one's tricky.
[00:15:53] Ben Wallerstein: I think my home state senator, Katie Britt, talks a lot about the fact that this is the first generation of parents that are raising kids that have the front-facing camera on a device, you know, and the implications of that. I've got a 14-year-old and a 16-year-old, so we're experiencing and navigating this in real time.
I wrote about this in Whiteboard Notes a few weeks ago, or a couple months ago maybe. You know, I came home one day and my son was bouncing a lacrosse ball, working on his lefty, working on his righty, listening to a book on tape at, like, 2x speed, you know, for a test on Monday. It was a content-driven test.
It wasn't necessarily a test of deep thinking, you know; he wasn't necessarily poring over text, annotating the text. So, you know, that maybe serves a certain purpose. But I think there's this question of productive struggle versus efficiency, which I think we all encounter.
We do, even in the context of our work at Whiteboard. Mm-hmm. Obviously there are broader psychosocial implications, so it's really hard to solve for. I think it's a very hard thing to solve for outside of the context of very basic safeguards.
[00:17:05] Matt Tower: Mm-hmm.
[00:17:06] Ben Wallerstein: From a federal standpoint. As a parent, I have a 16-year-old that just got her driver's license.
The rule in our state is it's a provisional license until you turn 17. So it's basically one family member plus one other person in the vehicle at any given time, which is a rule that, I've observed, most people don't follow. So the question is, as parents, right, setting aside the broader set of challenges we have in terms of the adequacy of supervision, and just sort of the multi-generational challenges that we often have in the educational context,
it's useful to have some cover and some guidance. But again, these are things that aren't necessarily well understood. On the other hand, they're things that I think, on a superficial level, seem really kind of commonsensical too. But it's hard to figure out from a regulatory standpoint.
And I think, you know, we have this sort of weird gray area right now, where we have a bunch of rules that aren't enforced and where we don't have adequate mechanisms to enforce the rules that we already have. Yep. So that's, you know, that's another layer. So there's my useless answer.
[00:18:07] Matt Tower: I mean, I think what I struggle with philosophically is, I spend, I don't know, 10 to 12 hours a day staring at screens.
That is my working life, for better or for worse, mostly for better. So I have to think about, okay, when we get off this podcast, do I take a two-minute Instagram break, or do I go to email, or do I try to see what's going on on Bloomberg? I have to make that decision super consciously.
And at what point does my 2-year-old have to start figuring out how to make those trade-offs? I'm pretty sure it's not two, but somebody explained this to me the other day: what's the difference between two and five and 10 and 16 and 20? The action is still the same, but the person behind it has developed in different ways.
So I don't have a good answer. Obviously it would be nice to rest on some research, but the reality, as we just talked about with the OpenAI certs, is that there really isn't much to go on here, and given the amount of time it would take to develop it, the point is sort of moot.
[00:19:19] Alex Sarlin: Yeah, and when you look at some of what's in the bills that are being debated right now, there's this Kids Online Safety Act,
which does have, to your point, Ben, some of the sort of baseline provisions, right? It's trying to keep kids from being exposed to ads for drugs and alcohol, or to illegal content, or things that you would think there would already be great laws against, but there actually kind of aren't, and the enforcement is also very weak.
I mean, the FTC, which would be enforcing this, is not that well, you know, structured or funded right now. But I think the one that's more relevant for the EdTech world, and also this weird place where, to your point, Matt, the philosophy and the complexity really start to come in, is this COPPA 2.0, right?
A lot of us in the EdTech world know about COPPA, the Children's Online Privacy Protection Act. I believe I got that exactly right, I'm not sure. But yeah, the Children's Online Privacy Protection Act from 1998. They're sort of updating it, and, you know, things have changed so much since 1998, right? And to your point, all adults spend so much of our time online.
We all have to think about media literacy and our media diets and all these things. And meanwhile, front-facing cameras and social media have really created a sense of, I think, real panic among a lot of parents. The thing that's the risk here, and I don't think this is a controversial take, is that you have these debates happening in Congress.
Some people are sort of looking at it through more of a tech lens: how do you make sure that you're not hamstringing tech companies? Some people are looking at it more through a protection-of-students lens: let's make sure we're not allowing too much data to come out of students' accounts in various ways.
And every combination in between. But all of us in EdTech know that technology does rely on a certain understanding of student data, on certain monitoring of behavior, on getting students' attention in technical tools, right? I mean, it's obvious stuff. The risk here, risk slash, maybe risk isn't even the right word, but the thing that could come out of this moment is if the people who are very worried about social media start pushing back and start ending dark UX, you know, if they start pushing back on addictive patterns or things that are baked into social media and gaming and video apps, that might have downstream effects, pretty major downstream effects, depending on how strong they are, on the EdTech space.
Because if children's privacy just keeps rising in the government's mind, like, we have to protect children's privacy, that's the number one thing, then so much of what we try to think about when we think about personalization, or a good AI tutor, or any of these things, starts to get less and less likely to happen.
And I'm not taking a stance here. I'm not saying, hey, there should be no regulation; I don't think that at all. But I do think the effects on EdTech are a byproduct of these debates, which at heart, I think, are about the negative effects of social media. And it's really hard, I think, for people to disambiguate technology used for things like social media from technology used for really positive use cases like education technology.
So we're all obviously gonna watch what happens with this COPPA 2.0 bill, and everybody in EdTech sort of needs to know what's gonna happen with it. But I think, to all of your points, Ben, it's a moment where nobody feels that confident that they understand, or that the answer is very clear about, how much regulation makes sense right now.
It's very ideological and it's very murky, but whatever they do land on, we all have to know it. There are what they call data minimization rules that prohibit excessive collection of minors' personal data, and that is almost by definition going to affect EdTech vendors.
Even though there are different rules, it's gonna be interesting. FERPA and COPPA compliance are a huge part of anybody working in schools, and as COPPA changes, it's gonna continue to be so. One other article that stuck out for me this week, I thought, you know, it's not EdTech directly, but I think it has ramifications for a lot of what we do in EdTech.
There was basically a report earlier this year from HarperCollins saying that there's been a steep decline in the number of caregivers who read to young children, including parents. They basically said that less than half of children between the ages of zero and four were read to every day,
and that's a decline of almost 10 to 15 percent from 2012. So you're just seeing the amount of out-loud reading going down. We know from many, many studies that reading out loud is hugely beneficial for literacy and vocabulary, but it's going down. And this is one of those things where it's not directly part of the education system, but it's something that we know is hugely impactful, maybe even more impactful than what's happening in school, on outcomes in literacy.
So the fact that it's going down in such a quantifiable way matters for the EdTech space. What did you make of that one, Matt?
[00:24:05] Matt Tower: Yeah, I mean, I think what's sad about reading these headlines, which I think we see a couple times a year, right, this sort of decline in engagement on the early childhood front, is we know how much of a compound effect it has on the lives of those kids, right?
And, you know, if you talk about trying to improve economic mobility and provide access to more opportunities, one of the clearest and most obvious ways you can get a return on investment is by serving this population specifically, right? It's an order of magnitude larger vocabulary by kindergarten.
I forget if it was this article or another that said kids have 200,000 more words if they're read to on a daily basis than kids who don't have that opportunity, and that sets you up for life. So, I don't know, it's sort of a personal pet peeve of mine that we are so in the weeds on workforce development and higher ed and K-12, and we have pretty strict sets of regulations and guidelines for how to do this stuff well, and some are more effective than others, but early childhood is just sort of a wild west. And yet we know that's where you can make the biggest difference in a kid's life.
[00:25:18] Alex Sarlin: Yeah, the study cited in this article actually says 300,000 words. I knew it was a couple hundred thousand. Yeah, it's amazing. A five-year-old who is read to daily would be exposed to nearly 300,000 more words than one who isn't. I mean, that is a huge difference, and the compound effect is very real. This is an article from The 74 sort of calling this out, and one thing that stood out for me here, also speaking of compound effects:
there's a quote, for many new parents, a dislike of reading stems from their own classroom experiences in the early two thousands that emphasized reading as a skill for testing. So you could argue this article is sort of implying that the accountability measures of the early two thousands, the No Child Left Behind era, might have created such a negative connotation with reading for parents that they are not even interested in reading to their kids.
Talk about a generational compound effect. I mean, that is truly depressing. It also creates, I mean, I shouldn't say an opportunity exactly, but it creates a mandate for anyone in EdTech who's trying to support early reading and trying to do it, especially B2C, right? If you're doing B2C, if you're trying to create decodables for kids that they can read on their own, or learning games or learning practice or anything that can actually help
support these families where the parents aren't doing the reading, it makes your moral mandate even more obvious. But I'm curious what you make of that connection to the NCLB era. I don't know if there's any proof of that, but it's a very interesting claim.
[00:26:48] Matt Tower: Yeah, I mean, I would be shocked if anybody was able to prove that.
I think it's a fun headline, 'cause, you know, everybody loves taking pot shots at No Child Left Behind. I suspect these parents are just themselves wrestling with whether to go to Instagram, you know, before bed versus reading a book. And I get that; I wrestle with those same things, and it's hard to choose the book. But I think it's different to understand what is personal leisure time and what is trying to give your child the supports they need to develop well, right?
And I'm not saying it's easy, but I am saying it's important.
[00:27:30] Alex Sarlin: And I think you're making a really interesting distinction there because in the past it has felt like reading a story to a kid didn't always feel like teaching, right? It felt like, oh, having a fun moment, reading something together, following a story, looking at the pictures, the learning was sort of considered a byproduct.
But in this current world where so few parents read for fun, I don't know if they see the reading as fun. They see it maybe only as teaching. Yeah. And they have to put down all the devices and be like, well, I should teach my kids something before bed. That's a very different kind of thinking.
[00:28:04] Matt Tower: And to be clear, like I would prefer that the parent treat the exercise as fun. Of course.
[00:28:13] Alex Sarlin: And how fun is it? I mean, that would be great. Exactly.
[00:28:14] Matt Tower: Like, I would prefer that, but even if it's not, it's still important. And, you know, maybe you can bridge some of the gap through curriculum for preschool and daycare, et cetera.
But I don't know. There are certain things as a parent that you have to do that are not particularly fun; we don't need to get too graphic about it. So I hope this is a fun thing for many parents, but even if it's not, you know, it is still really important.
[00:28:42] Alex Sarlin: Yeah. Also, I'm sure there are people listening to this who run EdTech companies for early childhood that do fun versions of reading and games and all sorts of things, and they're like, no, that's our product.
We wanna make reading fun.
[00:28:56] Matt Tower: Yeah, I will say, when I was investing, I always wanted to believe in parent education apps that helped parents understand, and ideally sort of enjoy, that educational portion of working with their kid on these types of things. I never saw one take off. Like, I wanted it to work so badly as an investor, but the reality was it was just a really tough nut to crack, trying to educate both the parent and the child.
And, I don't know, I hope somebody figures it out, but it was a real tough sell.
[00:29:30] Alex Sarlin: And
[00:29:30] Matt Tower: the ones that I viewed...
[00:29:31] Alex Sarlin: It can be done. I love Springboard. We've talked to Alejandro Gac-Artigas on this show a number of times. I think that is one that
[00:29:38] Matt Tower: Sure. Yeah.
[00:29:39] Alex Sarlin: has as good a chance as any of actually doing that. They sort of leverage parent-teacher conferences and create all these activities for parents to do.
They ask 15 minutes a day from parents and just say, if you do these 15 minutes a day, and we tell you exactly what to do and give you all the readings, you're gonna see huge gains. It's a good approach. But that said, I agree with you, it's a really hard nut to crack. I think it's a very personal decision whether parents see themselves as sort of core educators for their kids.
I think some just inherently see that. They say, of course I'm the core educator; yes, my kids are gonna go to school, but really it's on me to make sure they know what's going on. And others are the exact opposite. They say, it's not my job at all. I do so many other things. I work, I provide support. Teaching is not my role.
And, you know, the fact that that is so different between families, it's really a cultural and family decision, basically a personal decision. It creates this really bifurcated society in lots of ways, where you have some incredibly intense academic families with a million extracurriculars and all sorts of things going on, especially if there's a lot of privilege and money, and then others where you just have very little happening at home, and those kids fall way behind.
And we've all seen those studies about, you know, the more books a kid has in their house, the more likely they are to succeed in all these different ways. It's something that I think EdTech could really affect, but it's a little hard to draw the line. Before we go to our guest segment, there's one more topic that I thought would be interesting to talk about today, Matt. I think we've both seen a number of different headlines about how the college admissions process is sort of under the spotlight right now for a variety of different reasons.
I'm curious, what headlines stood out to you?
[00:31:16] Matt Tower: Yeah. Well, I think it's important to point out the seasonality, right? Right now you are in the throes of high school seniors applying to college, right? So it's sort of 'tis the season to question college admissions, and then come early January, when early decision and early application decisions start coming out, it'll be the season to talk about, you know, the kid who got into every Ivy League school, right?
And then chose to do a Thiel Fellowship instead. So I think it's important to keep seasonality in mind, as with all things in edtech. I think what is sort of most interesting to me from a market analyst perspective is you're seeing a lot more sort of real talk around how sophisticated the admissions process has gotten,
[00:32:02] Alex Sarlin: Yeah.
[00:32:03] Matt Tower: really over the past 10, 15 years, right? And so it went from, you know, at the top end you obviously had some national players, but college admissions was still fundamentally a regional game, where a lot of schools just sort of accepted most people, right? If you had a certain set of criteria, you knew sort of what your chances were.
It wasn't that complicated. And now, you know, yield management is a term of art for college admissions, which is sort of wild, right? We think of that as, like, ag farming techniques, yield management. But it really is for college admissions now too. And there was an article about the Millionaire Masters of Early Decision, the people for whom this is their job, and they get paid a lot of money 'cause they drive a lot of revenue to the schools.
So I think the term Millionaire Master was used in a pejorative way, I think because, you know, society is maybe behind the admissions offices in terms of understanding that this is a complex business now, and not the mail-a-letter-and-go-have-coffee-with-the-admissions-officer world it might have been 20, 30 years ago.
[00:33:12] Alex Sarlin: Yeah, and a related article that is relevant to our audience as well: there was one in the Associated Press, basically about how a number of different colleges are starting to use AI tools to help with the admissions process. They talk about how Virginia Tech received 57,000 applications last year.
They couldn't keep up even with 200 essay readers, so they've used AI tools and saved at least 8,000 hours. They talk about how UNC has used it and actually gotten some pushback for using it. Caltech is launching an AI tool to look for authenticity in student applications. I mean, to your point, the sophistication of the admissions process, the sophistication of trying to get the most yield, trying to sort of get the right class and figure out the selectivity and figure out all the different pieces that go into this: AI is starting to be one of the pieces in it.
There was also a really good editorial, I thought, in the New York Times today, basically trying to say, hey, early decision basically should be illegal. It's hugely beneficial to colleges and not very beneficial to students in enough ways,
and it's really a way to maximize yield, the early decision with binding decisions. So basically you apply to college early, and if you get in, you're bound to go there and they can set the price; only if it's a very unwieldy price can you sort of get out of it. I mean, some really strange dynamics, if you think about the admissions process as sort of an economic dynamic between two different parties.
Even though colleges struggle with admissions in a lot of ways, the selective colleges are on the other side of that, and they get crazy numbers of applications, some of them through the Common App. They have all of this choice about how to handle them, and they are increasingly pushing students into early decision.
It also talked about how a lot of state schools are now doing early decision, which they hadn't done in the past. So this is a technique that has been used by a lot of very selective schools for a long time, but now it's actually spreading, even though there's, I think, a reasonable argument that it is not good for students.
So it's an interesting moment. College admissions is always an opaque and sort of painful thing to think about, especially in the modern era. I think everybody sort of hates thinking about it, because the power imbalance is just so stark. The selective schools have such low acceptance rates, and it actually benefits them to be more selective.
So they want to be able to say no. At the same time, they want to be able to have enough students that they can guarantee are going to maximize their yield. So it becomes this complex sort of transactional math problem. And as we all know, the implications for individual families and students are massive.
It has a real effect on people's lives and their prospects. Jeff Selingo would say, yeah, not as much as you might think, but it does have some. It's just such a weird moment. And you know, there are a lot of EdTech tools that help on the student side, or the parent side, or even on the guidance counselor side, to support students in being able to apply to their schools and have more likelihood of getting in.
But it is interesting how every year at this time there's sort of a feeling of, why are we in this crazy system where colleges get to sort of look down and decide everybody's lives based on these business metrics? It's pretty weird.
[00:36:24] Matt Tower: Yeah, I mean, I'm pretty sure I disagree on the early-decision-is-bad perspective.
I would disassociate that from the societal pressure to go to a quote-unquote good college at the age of 18; I think that is actually fundamentally unhealthy. Mm-hmm. It's a really competitive marketplace. We've got 4,500 accredited institutions in the US, and like in any other business, offering an incentive to have a consumer commit to you earlier in a process is a pretty well accepted
[00:36:58] Alex Sarlin: practice.
The binding, even though you can't set the price, that's pretty unusual.
[00:37:03] Matt Tower: Yeah, and I think I would advocate for even more. Like, we're getting better on the pricing front in terms of showing students what the net cost should be. But there was an article this week about how Instacart is doing this too.
They're sort of testing, on an individual user basis, how much they can charge for a banana, you know? And fortunately, I think people were like, whoa, that's sort of nuts. We do that with airplane tickets too; you know, if you buy an airplane ticket three months in advance, you get the best price.
Obviously college is a bigger-ticket item. But again, I would posit that the problem is more the assumption that you have to go to a great college at the age of 18 to be successful in life, and there's more and more information coming out about how, actually, it's a lot easier to get into Harvard if you apply as a transfer student.
That's a fact you can verify. And so I would wanna be talking to 18-year-olds about, hey, try going to work for a year. Try going to a community college and doing some interesting things on the side. There are a lot of different pathways to getting to those elite schools, and what's actually problematic is telling everybody that the age of 18 is deterministic for the rest of your life.
[00:38:19] Alex Sarlin: That's true, that's true. I think there are multiple different pieces of this equation that are strange, and they sort of all pile on top of each other to create a really intense and very, very broken system.
[00:38:31] Matt Tower: Yeah. It's super stressful, to be clear. Like, it's super stressful. I don't envy today's 18-year-olds.
It's so tough.
[00:38:38] Alex Sarlin: Yeah. One of the cases made in this editorial that I had not thought about, and I think it's worth bringing up here, is that it's sort of a way to get around need-blind admission, because people who apply early tend to be of higher socioeconomic class. That's just how it sort of pans out.
So basically, by accepting more of their class through early decision, they can fill it with students paying full cost in advance and then have fewer seats left for everyone else. There's some odd dynamics at play in here. Yeah.
[00:39:08] Matt Tower: Well, and with the state schools, it's like, what is the purpose of a state school?
Is it to train the state's workforce, or is it to make money? Right? I think that's a super important question to ask. Yes. And you know, there was a thing about the SEC a couple weeks ago, and how the SEC schools are recruiting, you know, all the kids from New Jersey that used to go to New England prep schools or New England liberal arts schools, whatever you wanna call it.
And it's a super interesting question. I don't have a good answer. Some of that is a function of state funding. Texas, North Carolina, and California have set standards of, yeah, the point of your school is to train our workforce, and you can accept some out-of-state students, but not a lot. Should other schools adopt that?
Maybe,
[00:39:48] Alex Sarlin: I don't know. I think the theme of the entire era right now, for higher ed especially, is just, what really are you optimizing for? And it has just been so messy. You know, even following the higher ed space has been stressful; I think being in it must be even more stressful. But there's this feeling that all the different potential goals of higher ed, many of them are in conflict with one another.
Many of them are not decided by an individual school; they're sort of inheriting all these things from the policies around them. But it's been messy. And, I mean, just to bring it back to EdTech for a moment before we get to the guests, my last thought is just that I really feel like this is a pretty unique moment in our lifetime, in that the faith that these education systems are sort of working as expected, that they have the right incentives, that they are doing what you would hope they would do,
Oh, yeah.
[00:40:41] Matt Tower: Yeah.
[00:40:42] Alex Sarlin: is, I think, at an all-time low among almost everybody in the education space. Very few people are into the status quo or trying to maintain it, and I think that creates a really interesting opportunity for new models. What do you think, Matt?
[00:40:56] Matt Tower: I think it is true that many people, I mean, there are surveys that are, like, people questioning the value of college.
I also think, like, my favorite quote is, change is good, you go first. And so when you see new models, people are so freaked out, like, they hate change, right? They're like, you know, it uses technology, ergo it's bad, right? And it's like, come on. The existing system is fine, but it's certainly not great, and we should be pushing the envelope, you know, with constraints around making sure that you're not screwing over a child, or, you know, an under-18-year-old, for life.
It's similarly high stakes, maybe not quite as high stakes as healthcare, but close. We should question the status quo, and we should be willing to try new things, while also recognizing that we live in an incredibly complex society, and a high school equivalency is not even close to sufficient to succeed in today's workforce.
You can question the value of college and also believe that it is fundamentally important for pretty much everybody to go through if they want to have any sort of economic mobility and potential for high wages down the line. And I'm comfortable holding both of those things true.
[00:42:17] Alex Sarlin: Yeah. And a survey just came out last month:
63% of US adults say that a four-year degree is not worth the cost, right? 63% right now. That was 40% ten years ago. So, I mean, you're definitely seeing people really questioning it. But you're right, at the same time there is still a huge wage premium for college.
[00:42:38] Matt Tower: Right. What I hate about that survey is the implication that college is therefore not important.
And I think that is the absolute wrong takeaway. The takeaway should be, we need to make college better, not, we need to not go to college. I think that's getting lost in the mess of the narrative around questioning the value of college.
[00:42:59] Alex Sarlin: I think you're gonna see a lot of change on the right wing around college, because that number is 74% for Republicans.
That was one of my predictions a while back, that you're gonna start to see alternative colleges designed specifically for right-wing families that are really scared. And we've seen a little bit of it. We saw the school in Texas.
[00:43:18] Matt Tower: Yeah. I don't think that's a right-wing school.
There's a difference between traditionally conservative thought and right wing. But, you know, I do think schools that serve very specific populations, you know, it's a point of differentiation, it's a way to stick out. Whether or not they agree with our ideological perspective is sort of a separate topic.
[00:43:42] Alex Sarlin: It is separate, but this is a political divide. The belief in college is quite political right now. Twice as many Democrats believe that four-year college is worth the cost as Republicans: 22% versus 47%. That's a lot. Anyway, let's go to our guest segments. This has been so much fun.
Any of these topics we could dive into for a very long time. There's a lot here, but I appreciate you being here, Matt Tower, and I appreciate Ben Wallerstein coming by and giving us some of his thoughts as well from Whiteboard Advisors. We will see you all soon. You know, if it happens in EdTech, you'll hear about it here on EdTech Insiders.
Let's go to our guest segments. Thanks, Alex. We have a really exciting discussion today. We are talking to two EdTech entrepreneurs who just wrote an incredibly exciting new book, all about AI and education. We are talking to Maya Bialik and Peter Nilsson, the authors of Irreplaceable: How AI Changes Everything (and Nothing) in Teaching and Learning.
Let me introduce each of them, and then we'll jump right in. Maya Bialik is the founder of QuestionWell, an AI platform used by half a million teachers to create research-aligned tools for teaching and learning. She's a former middle school science teacher, and her focus is on how AI can improve teacher working conditions, make creative work easier, and enhance the quality of classroom materials.
She's currently pursuing her PhD at Boston University. Peter Nilsson is the editor of the Educator's Notebook, which is definitely worth reading, and founder of Athena Lab. An experienced educator, he recently served as head of school at King's Academy in Jordan. He's on the advisory boards for South by Southwest EDU, the Center for Curriculum Redesign, and the Middle States Association's responsible AI and learning endorsement.
He's also a musician and is working on a musical about schools. Welcome to the podcast, Maya Bialik and Peter Nilsson. Welcome to EdTech Insiders.
[00:45:38] Maya Bialik: Thanks for having us.
[00:45:39] Alex Sarlin: So good to be here, Alex. So as we can tell from your intros, you both do a lot. You are both multihyphenates: you are authors, you are entrepreneurs, you are advisors.
Tell us how this shaped your work on the book.
[00:45:53] Peter Nilsson: Well, I think the key for us at this point is that we come at this conversation from multiple perspectives, as founders or as teachers or as school leaders or as researchers or as artists. And in doing so, that puts us in a position, we hope, to be able to close the gap in the conversation between education and technology,
and also between the promise of AI and the practice of AI in schools, which I think is something that we're all feeling at this particular time. AI holds such promise, and how can we translate that into practice? And so in the book, yes, Maya?
[00:46:25] Maya Bialik: Oh yeah, I was just gonna jump in and say, I think I always find myself at sort of interdisciplinary fault lines.
It might be because I'm an immigrant, and I moved here when I was five, at a formative time. And so I'm constantly seeing what is an assumption and what blind spots does that lead to. And so I really always find myself drawn to the interdisciplinary and the transdisciplinary, for the kinds of lessons that we can bring from one arena to another.
And also the kinds of blind spots that we could potentially close from the perspective of another.
[00:46:58] Peter Nilsson: And I think that that perspective that Maya describes is really key for helping close that gap. Being able to translate one sector to another, being able to translate one discipline to another is key. And so in the book, one of the things that we do is we close each chapter of the book with next steps for teachers, next steps for school leaders and next steps for technologists, including technology designers.
And in doing that, we hope that what we're doing is not looking only at what the tech can do, but also how our tech tools affect the human experience in teaching and learning. So we're not coming at that from just one perspective and want to help facilitate the conversation between different stakeholders, which is a big part of what the book aims to do.
[00:47:37] Maya Bialik: And the last thing I'll say is the most unusual identity, probably, for both of us in this space is that we're also artists to various degrees. So I do improv and musical improv, and Peter does music more broadly and is writing a musical. I think you mentioned that. I'm just gonna plug it again, and I think we'll be at South by
[00:47:56] Peter Nilsson: Southwest EDU this March, actually.
Wonderful. Yeah,
[00:47:59] Maya Bialik: for both of us. I think the art side of things really informs how we view creativity and how we view navigating uncertain and ambiguous landscapes and doing so together collaboratively. So that adds like this whole other layer of not just like cognitively how we see things, but also the how of how we navigate things together.
[00:48:21] Alex Sarlin: Yeah, it's a really rich perspective, especially around ai. You've both been educators, you both do art and and creative work, which is a big part of what AI is doing right now. You're both entrepreneurs, so you know what it's like designing technology for schools. I mean, that combination of perspectives is pretty unusual, and I think it does, as you say, allow you to sort of bridge gaps and help people understand each other's perspectives at this really crucial time.
So in the introduction of your book, you introduce four teaching philosophies. The book is not quite out yet, although it is available for pre-order, and I am on that pre-order list. But you lay out these four teaching philosophies in the introduction as important for this moment, and they really form the core of the book.
Walk us through these four teaching philosophies that are obviously connected to AI and how they're important in this particular moment.
[00:49:11] Peter Nilsson: Thanks, Alex. Yeah, so this research actually goes back 40 or 50 years. Historically, it's come under different names: meta-orientations toward curriculum, or conceptions of curriculum.
And what the research has shown over these past decades is that people hold different perspectives on what the purpose of education, the purpose of curriculum, is. We simplify it by referring to these as teaching philosophies to make the idea more accessible. And the research identifies four different teaching philosophies.
One is academic, which emphasizes rigorous intellectual training. Another is humanist, which prioritizes student-centered growth. A third is social, which seeks to address social issues through education. The fourth is pragmatic, which focuses on workforce preparation and measurable outcomes. And what's key in this conversation is that each one of these philosophies has thoughts about the others: if you have an academic mindset, you might have thoughts about people who are coming at it with a pragmatic perspective.
If you hold a humanist or a social philosophy predominantly, you might come at another one and have thoughts like, well, that academic focus is too elitist, or this other focus is too much of something else. And from the beginning, articulating these philosophies can be very helpful.
Now, what the research shows is that we're not all just one of these philosophies. We actually hold varying degrees of each of them, and at different moments we might lean more into one or another. But the reason we bring this up at the beginning of the book is that what we start describing is how we see these different philosophies mapping onto perspectives on artificial intelligence.
If you're coming into the conversation on AI from an academic perspective, you may see certain opportunities and you may see certain risks. If you're coming at it from a pragmatic perspective, you may see certain opportunities and certain risks. People with a social perspective might observe how a risk is greater for those marginalized community members who may be more susceptible to bias.
But the people from the social perspective may also see that AI provides greater mobility for those who can harness the tools. From an academic perspective, you might see a risk to academic integrity, like cheating, but you might also see an opportunity for greater self-direction in learning. And so the reason why we think this is so important, and why we put this in the introduction, is that so much of the conversation we're having right now in education around artificial intelligence is talking past each other,
because we're implicitly holding different values. And if we can identify these values and philosophies and map them, then in our conversations with other people we can acknowledge those perspectives, understand why they come from those places, and then think intentionally about when and why we might accentuate the different philosophies.
And as a former school leader, I can say this is often a challenge, where the institution has its own perspective and its own value structures, and we might have to say: well, part of our job is preparing students for the workforce, even while part of what we're doing is teaching academic skills. And we need to be able to communicate that at this particular time we are prioritizing this philosophy, because that's what we need to engage right now.
That helps us close the gap in this conversation between different stakeholders within the education community. Yeah, Maya.
[00:52:32] Maya Bialik: Yeah, and I was just gonna add that a lot of times in our current world, people feel very drawn to a 'which side are you on' framing of debates. And especially with AI, there's a lot of side-taking.
And so the reason I think this framing of the four teaching philosophies is so important is that it helps people untether a little bit from having to choose a side and just experiment in the world of ideas for a minute. Because we all acknowledge the value of all four, and sometimes there are just tensions, and sometimes there are trade-offs, and sometimes we have to make choices where there is no perfect choice.
Whenever we present this, there's a real qualitative shift in the room when we talk about it, and people relax a little bit because they feel heard, they feel seen. Their philosophy is represented up there; we haven't forgotten about it, and we also value it. It helps especially when we put up a table of each philosophy's critiques of the others.
People feel really seen, 'cause then their critiques are also seen and heard and validated. It's just that there are these fundamental tensions with anything in education,
[00:53:48] Peter Nilsson: and part of why it's so important that we put this in the introduction is that as we work through different chapters of the book, we explore complicated issues like writing with AI.
And at those moments, instead of making one specific case for or against a particular topic, we'll say: from a humanist perspective, the case might be made this way; from a pragmatic perspective, the case might be made that way. And this empowers schools and teachers and leaders and technologists to have the language to hold a conversation more productively instead of talking past each other.
[00:54:19] Alex Sarlin: It's a really powerful framework, and I feel my blood pressure going down, as you say, when I think about it this way, because I think it clarifies a lot of the debates, a lot of the complication and confusion around AI. Obviously it's related to the integrity issues, you know? What do you think is the purpose of education?
If you think it's academic skills and being able to do specific things, then academic integrity becomes paramount. Or I think about the skills-based approach that so many people are running towards, which is sort of a way to try to bridge the academic and the pragmatic,
'cause you have academic skills and you have pragmatic skills, and when you use the word skills, everybody sounds happy. But I think your approach is so much better: let's actually peel back, let's get to the heart of what we all care about, get it out in the open, and then not talk past each other but really discuss things from the different perspectives.
[00:55:06] Maya Bialik: That's the goal.
[00:55:07] Alex Sarlin: Yeah. And once
[00:55:08] Maya Bialik: we, once we put that in the introduction, we started realizing just how much of a theme it was throughout the book. And then we went back actually and realized, oh, here we're actually talking from this perspective, and somebody from that perspective might disagree this way.
And so then we've inserted it throughout the book to try to represent it. Of course, the book would be, it's already really kind of long, and so we had to cut a lot of that stuff, a lot of the stuff where we would want to go in every single direction all the time. But hopefully we've supported our readers to doing that for themselves.
[00:55:41] Alex Sarlin: There's a lot to say about this topic. I mean, it's a really rich topic and the title of the book is Irreplaceable, how AI Changes Everything and Nothing in Teaching and Learning. So what is Irreplaceable according to both of you? You know, what is Irreplaceable in education and must be attended to and is maybe at risk of being lost or overlooked with ai?
What can't be replaced?
[00:56:05] Maya Bialik: Yeah, I think this question is so important, and I like to place it in the broader context of technologies in general. When we started with the internet, people were pretty focused on the technical aspects, like, okay, you can link a page to another page. And now, decades later, we're obviously not talking about that.
We're talking about social media and how it has changed the fabric of society. So the effects of a technology play out on a much greater timescale, and they are ultimately much less technical and much more about human issues, intrapersonal and interpersonal, at different layers, about how they change society.
Take even cars, not information technology, just technology in general. We're not talking about the horsepower of the car. We're talking, oh, now we live really far apart from each other, and how is this changing the structure of the family? And so I think that is why it's so important to talk about what's irreplaceable in education, because those are the things that will ultimately be affected.
And I think they can be affected for the better and also for the worse. In particular, I think the quality of the instructional materials and the quality of an education can be affected for the better or for the worse. With AI, you can have slop, or you can have this really all-knowing, very contextual, personalized approach.
Agency, I think, is incredibly important, and it could be eroded with AI or it could be supported with AI. So that's another thing that is not going anywhere but could be really fundamentally changed, and it's really important to be intentional about how we approach it. For example, from a QuestionWell perspective, all the time I'm thinking, oh, I can just offer the user a suggestion and do it for them.
But I have to pause myself and think, well, what is the goal here? I actually wanna support teacher agency, so I'm gonna offer multiple options and give their rationales, to actually support rather than erode teacher agency. And then the last one is human connection: students connecting to other students, teachers connecting to other teachers, and teachers connecting to students, not to mention all the other stakeholders in the school system.
That human connection is absolutely irreplaceable. And again, AI can work to connect us further or disconnect us further, in a lot of different ways for each of those relationships. We just need to be intentional about those things that are at the core and therefore most important and irreplaceable, but still very much able to be affected by AI.
[00:58:44] Peter Nilsson: So what we do in the book is take these deep human themes that Maya's talking about, themes that have carried across history and across different technologies, and look very practically at how they play out, and therefore very practically and tactically at how we can be sure that we are protecting these things.
And from the beginning, the most simple and obvious are things like the core skills of a powerful education that help lead to quality, agency, and connection, like critical thinking and creativity. Those persist. Before the internet, as Maya was saying, we were critically analyzing text sources on paper.
Then the internet came along, and we were critically analyzing sources online. Similarly with artificial intelligence, we are critically analyzing the responses we get from our artificial intelligence tools. So most simply, these human skills that empower us with agency and help us build connection carry on.
But more structurally, we can even think about things like the structure of school, which is also what leads to the structure of the book. School has three contexts, if you think about it. There's teachers outside of class, alone and with each other, planning for what's gonna happen in school.
There's students outside of class, doing the work alone or with peers. And then there's teachers and students together in school. Those three contexts will persist. Of course there are going to be exceptions; we hear on your podcast about homeschools and about different kinds of ways in which some people are learning. But generally, for the majority of cases, those three contexts of teachers alone, students alone, and teachers and students together will persist.
And so what we write about is the way that AI plays a role across all three of those contexts. For teachers alone, we look at AI as a research assistant, AI as a planning assistant, and AI as a feedback assistant. For students alone, we look at AI as a learning assistant and AI as a doing assistant. And then for teachers and students together in the classroom, we look at AI as an administrative assistant and AI as a teaching assistant.
And critically, we aim to frame each of those chapters around a novel idea, a novel frame for how AI changes the functions in those different spaces. So for example, chapter two, which is about AI as a planning assistant: our theme there is that teachers are the most creative people in the world.
We explore why, we break down the creative process, and we look at how AI supports teachers in their planning process to enhance and develop their creativity. Later, chapter four is an update and forecast on the long-sought and frequently failed AI tutor. We've seen efforts over the years; Justin Reich, who was recently on your show, shows in his book Failure to Disrupt how our efforts with technology have often struggled to create this ideal. But we also explore not just why it has failed, but what's different now.
And we look speculatively into how the technology is developing and what's to come, and offer some tactical steps for how we might advance toward that. Chapter five, which is about AI as an administrative assistant, has as its core idea that teaching is more complex than rocket science, and I think we actually back that up pretty well in that chapter.
And we look at how AI can support us there. So this general structure of school is something that is irreplaceable. And then we ask: how can we advance those human characteristics of agency, of quality learning, and of connection in all three of those spaces, across those seven chapters?
Yeah, Maya?
[01:02:08] Maya Bialik: Yeah. The structure of school is irreplaceable, and therefore the needs of each of those spaces are irreplaceable. That's why we frame each chapter as AI as a blank assistant. We recognize there are trade-offs in framing AI as an assistant. On the one hand, it's personifying it, which can have some downsides; on the other hand, it really frames the needs
that are irreplaceable and how AI could help support those needs. Teachers will always need to do research. Teachers will always need to do planning. Teachers will always need to give students feedback. Students will always need both to learn and to apply that learning. And then, together, there are always gonna be administrative tasks, and there's always gonna be the need for teachers to keep improving themselves.
So there are structures, and there are also, more abstractly, needs.
[01:02:56] Peter Nilsson: I think maybe one last idea in response to this question about what is irreplaceable, ultimately at the human, personal, individual level for students and teachers: the science of how we teach and learn is something that is deeply human and unchanging.
Human learning and human experience generally have limits against which this sort of dream of Matrix-like instant learning will collide. It will always take time for individual human beings to accumulate wisdom. Students will always come into class with a half-formed question and will struggle to form the right question.
We don't learn at the speed of silicon; we learn at the speed of humans, and this is something that will persist. And so in each chapter we dive into the learning science, we dive into pedagogy, and we offer examples of how AI not only can but in many cases already does support these timeless biological truths about who we are as humans, and we look forward to ways in which AI is starting to, and will continue to be able to, access that in a deeply human way.
[01:04:01] Alex Sarlin: I love how you have all of these really compelling frameworks for splitting up the different ideas: all the different types of assistants, the teacher versus student versus teacher-and-student contexts, the four teaching philosophies you just mentioned. As somebody who's done a lot of interviews with a lot of people, I can say this is a very complex space.
Everybody comes at it from a slightly different angle, and you're taking something so complex and really making sense of it in a way that I think the entire field desperately needs right now. So the natural follow-up question: if some of the things you've just mentioned are irreplaceable, human connection, agency, the science of teaching and learning, your title also talks about the everything that is changing, right?
There's the nothing that is changing and the everything that is changing. What is the everything that is changing with AI?
[01:04:50] Peter Nilsson: So the things that Maya said at the beginning are what is ultimately irreplaceable and deeply human: we need quality learning experiences that advance human agency, student and teacher agency, and that build connection. That is timeless.
That is not changing. But what is changing is that in our increasingly digital world, the context for our learning and the tools for our learning are dramatically changing. What we've all experienced pretty viscerally in schools is what Jesse Dukes and Justin Reich have articulated in their paper on arrival technologies:
AI has come unbidden, it has arrived, the context has changed. That's the everything that is changing; we can't ignore that. The things we're preparing students for, the way we hope they will live their lives in this world, are transforming, and we're reacting to that. And as a result, the narrative has been in large part about risk and damage to learning.
But one of the things we think about in this time, when so much is changing in the context and the tools, is that we don't get the future we want by saying what we don't want. We get it by imagining and then building the positive alternative future that we do want. And that's really the core of what this book aims to do: to provide the principles and practices to accelerate our movement toward that positive alternative future we're all trying to build.
[01:06:08] Maya Bialik: Yeah, and we also try to identify some of the most promising things AI could do to help support these things. I would say one of the most promising avenues is probably feedback. We know from research that feedback is absolutely crucial to student learning, and we know from practice that it's not possible to provide feedback to a hundred students every single day.
Even if all you ask them to write is a sentence, that's hours of your day. It's simply not possible. So that's somewhere AI stands to make a really big impact. Now, exactly how we set up those systems, the technology of it, the organization of it, the UX of it, and how we incorporate them into our schools, is all up for us to invent and experiment with and decide and figure out for ourselves.
But we try to identify the technical leverage points where AI stands to really help, and that's one of the big ones. In each need, in each structure, in each organization: what are the big leverage points that AI is really poised to turbocharge?
[01:07:18] Peter Nilsson: And I just wanna briefly credit both EdTech Insiders and my co-author Maya for this next extended science metaphor.
I believe it was an EdTech Insiders post, maybe a year or a year and a half ago, that described the Cambrian explosion of technology ventures leading to the Darwinian natural selection of which of those ventures will survive. And now I wanna credit my co-author, Maya, who as a science teacher pointed out that in evolution there's what's called punctuated equilibrium.
Punctuated equilibrium is when the context of life changes suddenly and rapidly, and therefore the equilibrium of what life does switches in a very rapid moment. And in this moment of punctuated equilibrium that we are all living in right now, the single largest factor shaping whatever the new version of life, the new version of teaching and learning, looks like may, we think, be feedback.
The availability of feedback for every person at any time is among the things with the most significant potential to change learning.
[01:08:24] Alex Sarlin: We've seen a huge explosion of language practice partner apps, which I think speak to that exactly.
[01:08:30] Maya Bialik: To me, feedback is the most imminent form of the way that AI is gonna transform teaching and learning.
But I think there are other ways that are gonna take a little bit more time for us to noodle on, experiment with, publish about, and develop frameworks for, and then they're gonna be really, really transformative. Something like what I was saying at the beginning about creating high-quality instructional experiences.
We know a lot about what makes high-quality instructional experiences, and there's a lot that we don't know. And we can create systems to try out different things, to surface the decisions and have people use their agency in ways they couldn't before, and to see the effects of that. Maybe we can even speed up learning from the teaching process, meaning the research process of learning from how you're teaching and seeing the effect on student learning and student engagement.
And so I'm just saying that to say feedback is, I think, the most imminent form, but I also think that AI will transform things in a really fundamental way, just over a longer period of time.
[01:09:33] Peter Nilsson: We talked about this, I think a long time ago, Alex, or maybe in the chat: Amara's Law. Roy Amara, the former president of the Institute for the Future, said that we overestimate the impact of technology in the short run and we underestimate the impact of technology in the long run.
And I think, as Maya's pointing out, there may be other things that change learning more significantly in the long run. You just opened that door in a comment you made earlier, which we explore in the book: AI companions and the social-emotional, affective layer of learning, and the role of AI in engaging that, which has extraordinary potential and extraordinary risk.
And that's probably a conversation for a future call because it's its own sort of really rich vein of discussion.
[01:10:18] Alex Sarlin: I'm working on a piece for the newsletter right now about exactly that. And your point earlier about how human connection is irreplaceable, I totally agree with that, but I also think AI can do some really interesting things to actually enhance human connection.
I think we think about it as a potential replacement, but that, as you say, is for another time. We are definitely gonna have you both back on when the book comes out for a full hour-long discussion, because there is so much to unpack here, and it is hyper-relevant to everybody in the EdTech Insiders community, as well as, I think, everybody in education writ large. Maya Bialik is the founder of QuestionWell, an AI platform used by half a million teachers.
Peter Nilsson is the editor of the Educator's Notebook and the founder of Athena Lab, and together they have written the forthcoming book Irreplaceable: How AI Changes Everything (and Nothing) in Teaching and Learning. Thank you both so much for being here with us on EdTech Insiders.
[01:11:13] Maya Bialik: Thank you so much for having us.
[01:11:14] Peter Nilsson: What a pleasure, Alex. Thank you.
[01:11:16] Alex Sarlin: We have a fantastic interview for our deep dive. Today we're speaking with Emily Gill. Emily Gill is the co-founder and COO of LEVRA, an award-winning edtech company using psychometrics and AI to measure and develop human skills such as communication, teamwork, and EQ in Generation Z.
A former Clifford Chance finance lawyer, Emily is also a certified coach and a chair of school governors, and she did her MBA at the University of Oxford. Emily Gill, welcome to EdTech Insiders.
[01:11:47] Emily Gill: Thank you so much, Alex. Really excited to be here with you.
[01:11:50] Alex Sarlin: I am so excited to speak with you as well. So we mentioned Generation Z and I think many companies just don't understand Generation Z that well.
They don't understand their approach to work and learning. How do you think about it and how do you inject this concept of human skills into the mix?
[01:12:07] Emily Gill: Yeah, great question. And I think there is so much loaded terminology around the word Gen Z. That generation is often thought about in a negative way, because people think, oh well, they're lazy and they don't really care about work.
And what we actually know is that they just have a different way of seeing work, and they value different things. So whereas I was a classic millennial and wanted to be promoted and progress in my career, Gen Z care less about the job title or description; they really wanna find work that gives them a sense of meaning, and they wanna be valued.
And so I think the real shift we need to make when we are thinking about that generation is actually how we can provide them with a sense of meaning, because their why really matters. And to follow on to the second part of your question about human skills: these skills are actually something that Gen Z need help cultivating.
They grew up in a different era; they were the generation that went through COVID. And so we need to realize that they may actually need more leadership support to help them develop some of those human skills.
[01:13:24] Alex Sarlin: Yeah, you touched on human skills, and this is a concept that has different names in different parts of the education and EdTech ecosystem.
Some say soft skills or durable skills, or we used to say 21st century skills, all these different names. You prefer the term human skills, and I'd love to hear you unpack what you consider human skills and how they create these kinds of gaps in the workplace.
[01:13:47] Emily Gill: Yeah, so whilst we were at Oxford, we did a lot of research around this topic, and that's really where LEVRA was founded.
We spoke to over 200 organizations, really to understand what the words soft skills or power skills or human skills mean to those organizations. And what we have done as a company is distill the key human skills into a framework based on 31 human skills. So we see it as an array, or a toolbox, of these crucial human skills.
And just remind me, Alex, what was the second part of your question?
[01:14:26] Alex Sarlin: Oh, I was just asking how they show up as gaps in today's workforce.
[01:14:30] Emily Gill: Yeah, listen, I think the world of work has changed dramatically. As everyone can acknowledge, we are living through an era of rapid change: the methods, the number of ways you can communicate, the number of generations in the workplace.
There are five generations in the workplace. The way people learn has totally changed. The amount of time your manager or your leader actually invests in you has diminished significantly. And that's all to say that the younger adults joining our workplaces aren't given enough time to learn these crucial skills by osmosis.
When I was training to be a lawyer, probably over a decade ago now, I was in the office 70 hours a week, sitting in a glass box with a partner, and I was mentored. I was brought to every meeting, I listened to every phone call, and I learned a lot of the nuances of these soft skills from being in person.
And now we all know that people are not in the office five days a week. They're not learning through in-person meetings. There's this hybrid working world, there are multi-generational workplaces, and so the way that you learn these skills has completely changed. We are looking to supplement existing ways of learning in order to give our young adults the best chance they need to succeed.
[01:15:51] Alex Sarlin: So many great points here. The multi-generational workforce is a really interesting point: five generations at the same time. And the idea that Gen Z needs purpose and meaning to really feel compelled, but they're not getting a lot of face time or a lot of mentoring, and now that entry-level jobs are changing, that may go down even more.
I think one of the things that's so complex but really fascinating about human skills, or power skills, or durable skills, is that we all acknowledge they're so important, but they've been really hard to measure, and that has been the big gap there. But LEVRA uses psychometrics, it uses AI, and it actually assesses skills like empathy and teamwork in ways that we haven't really seen before.
How do you translate these things that were formerly so human, sitting in a glass box watching how a partner does teamwork and then learning from it, into something measurable, actionable, and accessible?
[01:16:40] Emily Gill: Great question, and that was the big challenge that we heard from outsiders. Both my co-founder and I come from professional services backgrounds.
We are a lawyer and an accountant, so we thought: if you are wearing a wearable like your Apple Watch or your Fitbit and you are tracking your hormones and your O2 levels, why can't you track your human skills? Yes, it is hard, but human skills can be distilled and broken down into key analytical components.
So we've worked with world-leading psychometricians over the past couple of years to actually break down and understand these 31 human skills that I've mentioned, and really to understand what qualities or what behaviors you need to demonstrate in order to be able to ask a good question, or demonstrate resilience, or show emotional intelligence.
And I think because we haven't brought that data and research approach to human skills in the past, they have seemed fluffy and maybe, you know, secondary. But if you start breaking down these key skills and you bring the data and the analytics to them, you are able to quantify them.
[01:17:54] Alex Sarlin: That quantification feels like a huge unlock in terms of turning human skills into something that can be assessed and trained for at the same level as some of the other things we do in education that have traditionally been easier to assess.
I think that's a really exciting vision. One of the things people also talk about with Gen Z is that it's a generation that's been steeped in technology, in social media, in games, in phones, and a lot of people claim that that has actually meant a decrease in human skills and in social skills and teamwork.
But you claim the opposite: that technology, if used correctly, can actually enhance human skills rather than replace them. What does that look like in practice?
[01:18:37] Emily Gill: Yeah. We use workplace simulations in order to give young adults a safe place to practice these skills. We know human skills are like a language or a sport or a musical instrument:
to get better at them, you need to practice. You go to the gym, you lift weights, you go on the running machine in order to improve your fitness levels, and the same can be said for human skills. If you are not giving your young adults a safe opportunity to practice, to fail, to make a mistake, to get a question wrong, then how can you expect them to develop?
So we deliver AI simulations where an individual can actually have a role-play conversation with their manager about asking for feedback or dealing with negative feedback, and this gives them an extra environment where they can practice. And that's not to say we are looking to supplant humans here, because I fundamentally believe human interaction and connection is crucial.
We are just saying these young adults aren't being afforded the same opportunities to develop and cultivate these skills as they were 10 years ago.
[01:19:51] Alex Sarlin: I totally agree. We're getting to this really strange moment where some of the experiences you mentioned, like getting a mentor at work or having your workplace invest time in you, train you, and expect you to be there for a number of years, are sort of becoming historic.
And I think the idea of using simulation to bring back that workplace experience, that mentoring experience, that practice, is a really valuable way to use technology. And I imagine AI is at the heart of it.
[01:20:21] Emily Gill: It definitely is. It definitely offers this opportunity for repetitive practice and also to get instant feedback.
You know what we know the generation, or Gen Z as we've referred to them, loves? Feedback. That's why they're on TikTok, that's why they're on Snapchat: that instant sense of gratification is something they crave. So if they can have this conversation and be provided instant feedback, like, hey Alex, you could have improved by asking more questions, or by thinking actively about how to focus the attention on another subject,
that's something the generation wants. And so if used in the right way, and as part of a broader learning experience program, I think some of these tools can actually help to elevate learning experiences.
[01:21:11] Alex Sarlin: Yeah, the feedback aspect is becoming, I think, the part of what AI can deliver that's starting to be universally accepted as one of the superpowers of the AI world that we're all in.
The idea of getting constant feedback, as much feedback as you'd like. You can spend, as you mentioned, 10 hours in a workplace simulation, trying something again and again until you have it exactly right before your first day on the job. That's just a volume and specificity of feedback we've never been able to have before without incredible human experts.
So it's just been really exciting to see. You've been a school governor, you're a COO at an edtech company, you went to the University of Oxford for your MBA; you've looked at this space very carefully. How do you see not just workforce education and workforce training, but the education system at large?
How might it have to change if we really get to a world where these human skills, as you define them, become paramount and quantifiable?
[01:22:09] Emily Gill: Yeah, you know what? I was at an educational government round table earlier this week here in the UK, and we had this very discussion about what actually needs to change within school systems in order to help individuals become more equipped for the workforce.
So how are we actually gonna change the structures, the curricula, what is considered important at schools at the moment? Because a lot of what I see is that the focus is very much on revising for exams and preparing for the university and college admissions process. And that's important, right? I understand that learning your quadratic equations and your photosynthesis, or whatever it is, is crucial. But how are we also changing schools so that they realize a lot of those low-value memorization and data-analysis roles are gonna be removed by AI and technology and automation?
So where are we as humans gonna stand out? It's going to be in our ability to communicate, to empathize, to respond, to connect with other people. And so school systems also need to be able to adjust to the realities of what a 16-year-old needs to be learning and developing before they start their first job, or before they feel equipped to go to work.
And so, looping back to your question, I do think a shift needs to take place in actually understanding what skills we need our young adults to possess so that they are ready, so that they're resilient, so that they're adaptable when they do start in the working world. And I don't think enough has been done to really think about why we are sending children to school.
What do we want them to come away with in terms of the ability to take on the world? And how should subjects, and how should a school education, change as a result of this fast-paced world that we're living in?
[01:24:12] Alex Sarlin: Do you envision LEVRA being used at different levels of education, where people can do workforce simulations from university, maybe even from K-12, and get that type of feedback and a little bit of a taste of the human skills they're gonna need in the workforce?
[01:24:28] Emily Gill: I would love that at some stage. At the moment we're B2B; we sell primarily to organizations like Google and Deloitte, and we do work with the University of Oxford. So there's been a little bit of movement towards the young adults, but obviously, as all the experts out there know, when you start delivering to K-12 and schoolchildren, there's a different dynamic.
In the ideal world, we learn all of these skills through humans. But again, young adults are not picking up enough of the human skills in that way. So I would love it to be a supplement to the in-person communication and connection, which is fundamental to how children grow up. I don't think it should replace anything that's already happening.
But again, it goes back to the point: can we provide young adults a safe place to practice, to learn, to prepare for their first-ever interview? Because maybe walking into a room with a manager or an interviewer for the first time is really intimidating, of course. And so how do you actually practice getting the question wrong, or improving?
And I think that's where LEVRA could be a really valuable tool.
[01:25:37] Alex Sarlin: A hundred percent. Rehearsal and practice and feedback are such keys to learning, and we often don't get to practice and rehearse some of the most important moments in our lives before they happen. I think it's a really interesting observation.
I really think you're doing something fascinating in the skill development space, and I think everybody should keep an eye on LEVRA. You've mentioned you're working with Google and Deloitte and other big businesses, and you're thinking about quantifying human skills in ways that I think not many in the world are so far.
It's really exciting. Emily Gill is the co-founder and COO of LEVRA, L-E-V-R-A, an award-winning edtech company that uses psychometrics and AI to measure and develop human skills, communication, teamwork, EQ, and others, 31 of them in all, in Generation Z. Thank you so much for being here with us on EdTech Insiders.
[01:26:27] Emily Gill: Thank you, Alex. Always a pleasure chatting to you.
[01:26:30] Alex Sarlin: Thanks for listening to this episode of EdTech Insiders. If you like the podcast, remember to rate it and share it with others in the EdTech community. For those who want even more EdTech Insiders, subscribe to the free EdTech Insiders newsletter on Substack.