Edtech Insiders

Week in Edtech 9/17/25: OpenAI Study Shows Teaching is ChatGPT’s Top Use, Google Launches “Learn Your Way”, Indian EdTech Funding Rebounds Post-BYJU’s, and More! Feat. Christine Cruzvergara of Handshake

Alex Sarlin and Ben Kornell Season 10


Join hosts Alex Sarlin and Ben Kornell as they dive into the biggest stories shaping education technology this week:

✨ Episode Highlights:
[00:04:32] OpenAI study shows teaching and tutoring are ChatGPT’s top global use cases
[00:10:50] Parents testify in Congress about risks of unsafe AI chatbots for kids
[00:19:38] Google announces “Learn Your Way” and AI video generation for YouTube
[00:22:47] Shift from SEO to AEO as answer engines reshape discovery
[00:26:39] UK secures $40B in AI investment and Indian edtech funding rebounds post-BYJU’S
[00:29:23] Babbel launches AI voice trainer and McGraw Hill adds AI to ALEKS calculus
[00:31:09] Superintendent turnover rises while principals gain influence in EdTech decisions

Plus, special guest:
[00:34:29] Christine Cruzvergara, Chief Education Strategy Officer at Handshake, on redefining entry-level jobs in the AI era and launching the Handshake AI Fellowship 

😎 Stay updated with Edtech Insiders! 

🎉 Presenting Sponsor/s:

Innovation in preK to gray learning is powered by exceptional people. For over 15 years, EdTech companies of all sizes and stages have trusted HireEducation to find the talent that drives impact. When specific skills and experiences are mission-critical, HireEducation is a partner that delivers. Offering permanent, fractional, and executive recruitment, HireEducation knows the go-to-market talent you need. Learn more at HireEdu.com.

Every year, K-12 districts and higher ed institutions spend over half a trillion dollars—but most sales teams miss the signals. Starbridge tracks early signs like board minutes, budget drafts, and strategic plans, then helps you turn them into personalized outreach—fast. Win the deal before it hits the RFP stage. That’s how top edtech teams stay ahead.

As a tech-first company, Tuck Advisors has developed a suite of proprietary tools to serve its clients better. Tuck was the first firm in the world to launch a custom GPT around M&A.

If you haven’t already, try our proprietary M&A Analyzer, which assesses fit between your company and a specific buyer.

To explore this free tool and the rest of our technology, visit tuckadvisors.com.

[00:00:00] Ben Kornell: And then on the learning and tutoring, I really wonder, is this an answer bot, where it's like, I'm just trying to get an answer for something? Or is it really tutoring and learning? You and I, when we talk about tutoring and learning, we're thinking about pedagogy, we're thinking about growing my insights. When a lot of other people talk about that use case, it's like, give me the answer. That's true. And I really do worry that that conflation is actually one of the biggest risks for AI in education.

[00:00:29] Alex Sarlin: AI operates in a very different way. The content is coming from the company itself. It means it can be tracked back to them, right? They can be held liable.

We could easily make a law that says, if an AI companion says something that could be interpreted as encouraging violence or self-harm, that company is liable, to the nines, to the bone. Right? And if you do that, well, that incentivizes those companies to take that incredibly seriously. Right? And that's great.

That's exactly how it should be. That's different than letting all the teenagers in the world onto your platform to bully each other, which is where we've been with TikTok and Instagram. It's just a very different situation. I don't want us to overly pattern match.

Welcome to EdTech Insiders, the top podcast covering the education technology industry, from funding rounds to impact to AI developments, across early childhood, K-12, higher ed, and work. You'll find it all here at EdTech Insiders.

[00:01:30] Ben Kornell: Remember to subscribe to the pod. Check out our newsletter and also our event calendar.

And to go deeper, check out EdTech Insiders Plus, where you can get premium content, access to our WhatsApp channel, early access to events, and back-channel insights from Alex and Ben. Hope you enjoy today's pod!

EdTech Insiders listeners, we are back with another Week in EdTech. Alex Sarlin is here. I'm Ben Kornell. We have so much great news to cover, but first a shout-out to everyone who turned out to our happy hour yesterday in San Francisco. It was so great to see friends old and new, and our community is strong: 250 people at a happy hour.

And Alex, they are not big drinkers, so it was still affordable. I'm really happy to host more parties with our very, very engaged EdTech Insiders crowd that is also healthy and not heavy drinking. So thank you all for coming out. And then, uh, awesome post on Substack, basically characters as the new interface for AI education.

The world is abuzz with your Substack, Alex. It's great to see the conversation spiraling forward, driven by some of your eloquent thoughts there. So thanks. Lots going on. In terms of the pod, what do we have coming up?

[00:02:52] Alex Sarlin: I mean, we have a murderers' row coming up. We have some absolutely incredible guests. So, okay, over the next two weeks on the podcast: the new head of the GED, talking about how the GED is changing and what they're doing for the age of AI. Andrew Grauer, CEO of Learneo, the umbrella company with Course Hero and QuillBot and all these amazing companies. Dave Messer, the product manager at Google behind Guided Learning, an amazing conversation with him.

Naria Santa Lucia from Microsoft Elevate. They're giving billions of dollars to AI education. It was an amazing conversation. And then finally, Jean-Claude Brizard, CEO of Digital Promise. All of those interviews are in the next two weeks on this podcast. So, I mean, wow. Incredible people. Such big-thinking people, just people who really think sociologically about how the world is changing, how we have to adjust, and all the things happening in tech. Amazing conversations. So keep listening here.

[00:03:48] Ben Kornell: I feel like the world of edtech is happening on the ground in real time, and these interviews are actually bringing in so many of the insights and use cases. In our pod we often talk about these big-picture things, but for those of you who just listened to Week in EdTech, really take a second to dive into those interviews. I think our theory is that use cases are really where the action is for AI, rather than these broad themes.

There's just so many great insights and takeaways. The EdTech Insiders happy hour was abuzz, just people talking about your interview with Arun at Thunkable. So much interesting stuff. With that in mind, let's start big picture with around the world in AI. What are the stories that are top of your inbox?

[00:04:32] Alex Sarlin: There were a lot of things happening this week, but the big one for me was that OpenAI put out what I believe is the biggest study ever of how people are actually using ChatGPT, drawing on an enormous number of real conversations. And what they came out with is that the number one use case for ChatGPT around the world is teaching and tutoring, far outpacing coding. That is big, big, big, big news. And it begins to answer some of the questions we've had for quite a long time: these big frontier models are taking education very seriously, and that's because they're looking at their internal data, and that's what people are using it for.

That's incredible. The number of conversations they analyzed is immense, and there's so much usage on a platform like ChatGPT. But within all that usage, teaching, tutoring, and self-improvement is the number one use case. That is big news. What did you make of that, Ben?

[00:05:30] Ben Kornell: I think one of the interesting things about the study is that a key takeaway from a lot of experts is that people aren't using ChatGPT for work, and the narrative is, oh, this may not actually be replacing workers and so on. But if you actually read the fine print of the study, it's only covering consumer use cases for ChatGPT. Mm-hmm. If you have a work-based account, that's not included in the study. And of course, if companies are paying for it, they probably don't want ChatGPT utilization analyzed and shared with the public. So I think there is this question of, in the consumer universe, what is it used for, and in the working world, what is it used for?

That's a great point. I think the coding element of this, if you included the work world, would be off the charts; it would dwarf everything else. And one of the people, Drew Bent, had this great post around how the new computer science is actually reading, editing, and evaluating code rather than writing it.

I think that's the world coding has entered into, and so then there are real questions around what the utility of AI is in these other use cases, where at work it's gotta pass a quality bar, whereas in consumer it may or may not. And then on the learning and tutoring, I really wonder, is this an answer bot, where it's like, I'm just trying to get an answer for something?

Or is it really tutoring and learning? You and I, when we talk about tutoring and learning, we're thinking about pedagogy, we're thinking about growing my insights. When a lot of other people talk about that use case, it's like, give me the answer. And I really do worry that that conflation is actually one of the biggest risks for AI in education.

[00:07:14] Alex Sarlin: Yeah, and the way the study works, just to get into some of the details: the most common use case is what they call practical guidance, which includes tutoring, teaching, and how-to advice, so asking how to do something, which is specifically asking for an answer, but sort of a process answer, and coming up with creative ideas.

So that sort of creative brainstorming, they also tuck in there. So that is part of it, and it is based on a million and a half messages sent between May 2024 and 2025 by over a hundred thousand users. The other thing that stood out, and made a lot of headlines, is that the demographics changed a lot: at the beginning, ChatGPT was 80% male users, but within this study it was actually 48% male, so actually majority female users. So that was interesting. I think, to your point about whether people are asking it for answers or asking it for teaching, it's a blurry line when you have somebody just talking to a bot by themselves with no oversight. And as you say, not in a work context, with no auditability in a standard sense. I'm sure there are lots of people asking for answers, the same way there are lots of people who use web search to find a single direct answer to something they need an answer to right now.

How to cook a souffle, or what the capital of this place is, or what a certain acronym stands for. But I think it's a blurry line when you talk about AI, and I think part of why they've all introduced these learning modes is because they understand that it's a blurry line, and they know that people may be looking to actually learn something and be able to retain it and synthesize it and make sense of it and make meaning out of it, the actual learning.

Or they may just be looking for something to put into a worksheet right now. It's a blurry line, but by offering different kinds of options, you get both. And when we talked to Dave Messer from Google, he explicitly talked about how, when they did all these focus groups with students, the students wanted both modes.

They wanted Guided Learning, which would actually teach you something, but they also still wanted to maintain access to regular Gemini answers so that they could get answers. So it's blurry. It's not that everybody goes one way or the other. The same person might be using it in different ways within the same session, depending on what they're actually doing.

But it's so powerful, and I think it should still feel very validating for all of us in edtech that for this technology that's changing the world right now, one of the absolute core primary use cases, for all different kinds of users, is education, or at least informal education.

[00:09:30] Ben Kornell: Mm-hmm. I mean, one thing I would love to do is compare it with YouTube usage.

And I think there is this question of, are people navigating away from YouTube into ChatGPT, or is it going to be a combo? I think there's a lot of combination utilization, but imagine: eventually you should have your ChatGPT queries paired with not only written answers, but also video links and so on.

And actually, one of my headlines is really around AI engine optimization and how the new search is happening through the AI window. But it did remind me very much of the kind of data I was seeing about YouTube.

[00:10:11] Alex Sarlin: Yeah, and Google is very explicitly putting YouTube results in its answers, definitely within Guided Learning, but I think it can do it within Gemini as well.

And basically their internal logic asks: what is this person asking for? What are they trying to learn? Is it something that might be better learned through something visual, through video, watching a process or seeing something like that? If so, let's start pulling videos in. And they do that to try to simulate how a teacher might think. Right? If it's something like, how does an atom jump from one quantum state to the other?

You could describe that all day, but if you show an interesting video of what it would look like, it can click really fast. And they want the AI to think like that as well.

[00:10:50] Ben Kornell: Yeah. I mean, as this phenomenon of ChatGPT goes from nascent to ubiquitous, I think the other headline that caught me was, uh, parent testimony in Congress about AI chatbots and kids. You were talking a little bit in your Substack about characters as the interface, and this was really a glimpse into the dark side. As a parent, hearing from parents who lost a child or whose kids were exposed to sexually graphic interactions, it was really troubling. And the big targets were Meta and Character AI. And there was a sense that the more open source the models are, or the more open the idea is, the more rife it is with bad things for kids. And I think what you're going to see from that kind of testimony is this idea of trust becoming way, way more important.

I feel like Anthropic has been banging this drum since they started, really, since the team spun out of OpenAI. It's this idea of trust and of safeguarding, you know, for kids, but also for work. Ultimately, safeguarding is really the currency of the realm for companies to work with AI providers, and more and more, I think, for consumers.

And so this makes me downgrade my optimism about open-source AI models, just because, while they might be great for specialized use cases, for the average consumer or average business, open source just has too much risk associated with it.

[00:12:34] Alex Sarlin: I always go to the same sort of paradigm when we talk about this, and maybe I'll try to expand on it a little this time. I think we are coming right out of an era where, from the mid-2000s till the mid-2010s, social media became absolutely ubiquitous.

It was the growth of Instagram, the growth of Snapchat, the growth of TikTok, and then we all sort of realized collectively as a society that it was doing enormous harm to its users, and it was acknowledged by almost everybody at the same time. There was a tipping point after a long time, and there were a lot of congressional hearings and all sorts of things, and now I feel like it's become basically common sense, common knowledge, the common belief that social media is more harmful than positive.

And I think we're all coming out of that experience, and nobody wants to be fooled again, right? Nobody wants to feel like, oh, another new technology everybody's excited about, everybody's using, and all of a sudden we're gonna assume that the people behind it are taking it seriously and thinking about safety, and then someday we'll look at the research and it's gonna be horrible, or we're gonna have horrible outcomes for our kids.

So, I mean, this is definitely why the Senate is jumping on this so quickly. It's definitely why OpenAI is already rolling out these safety measures for users. They see the writing on the wall very clearly. This is not something they wanna hide from or try to whitewash. At the same time, and I'm gonna keep saying "at the same time":

we are very early in this world, and I don't think there's good evidence yet on any side. I don't think we can do the same kind of obvious pattern matching and say, well, social media was bad, so this is clearly gonna be bad. I don't feel like that makes any more sense than saying this is clearly gonna be good.

There's a lot to figure out there, and I don't wanna bury my head in the sand. I don't think parents should. I don't think schools should. We've interviewed some people on the EdTech Insiders long-form podcast about deepfakes, about some of the risks, and there are definitely real risks associated with this stuff.

At the same time, I just think we are in the very, very, very early innings of this, and what I don't want is for the backlash to be so reactionary, so fierce, and so hyper-regulatory that we don't even get to experiment with some of the capabilities of what AI can do, because we nipped it in the bud that early.

For one thing, I think the rest of the world will not do that. Europe might, but I don't think anyone else will. So I actually think we're gonna do ourselves a disservice competitively, but I also just think we're gonna lose out on so much innovation, so many interesting use cases, so much that could happen.

So it's a little bit of a forced naivete, and listeners of the podcast will know: I don't want suicides, I don't want mental health distress or bullying, I don't want any of that to happen. At the same time, I don't think we're even close to the final form of AI. I think we are in the very early innings, and we're talking about, I mean, the kind of stats that are in this hearing, Ben, about huge percentages of teenagers already using ChatGPT as a companion, or already using AI companions in different ways.

These are terrible AI companions, they're not very good, and people are already using them in huge numbers. If we had good AI companions, they'd be even more ubiquitous. And I know that's part of the fear, but what does good look like? Well, that's up to us to define. I don't think we should just throw the baby out with the bathwater and say, AI companions, that's too scary.

We don't want that to happen for our kids. I just don't think that's the right move.

[00:16:01] Ben Kornell: Yeah, abstinence has never really worked for any tech thing. But where there is this controversy, there's a window for coming into the safe zone. You know, one of our biggest fears was that the under-13 experience in AI would scare away all the big tech players.

Right. But it's just become clear that it's so integral to their future that they're willing to take that risk. Yeah, and I think in a school context, what an AI can or should be allowed to do is quite restrictive, but we are not necessarily seeing the guardrails totally working in all of those situations.

So that's what feels like the Wild West to me. But it is very sobering to hear real families talk about the real impacts on their lives. And maybe this is some sort of side effect that we as a society are willing to accept. I don't know that we ever consciously made that trade-off with social media, right? We just ended up there.

[00:17:00] Alex Sarlin: I mean, I wouldn't frame it as a side effect we're willing to accept so much as we have to understand what the risks are, and then steer the ship away from them. And by the ship, I mean the frontier models, the education applications, the entertainment applications, all the virtual boyfriend and girlfriend apps. Like, we have to do something.

You can't just let it completely grow unfettered with no regulation. I'm not advocating that at all. I just want us to be careful about not overreacting and shutting down the whole enterprise based on a bunch of assumptions that we are making about a totally different technology, which is what I kind of think is happening here.

And frankly, a technology that has many fewer safeguards. I mean, I think some of the surprise about social media was that when kids would get bullied in school on social media, or they'd be exposed to things that were making them feel terrible about themselves, the companies started more and more to just throw up their hands and say, it's too much to moderate, or, we can't get ahead of it, or, yes, you can flag the content and then our moderation team will find it within 72 hours. And people are like, that is such an insufficient answer to this type of problem. But that's actually very different than AI. AI operates in a very different way. The content is coming from the company itself. It means it can be tracked back to them, right?

They can be held liable. We could easily make a law that says, if an AI companion says something that could be interpreted as encouraging violence or self-harm, that company is liable, to the nines, to the bone. Right? And if you do that, well, that incentivizes those companies to take that incredibly seriously.

Right. And that's great. That's exactly how it should be. That's different than letting all the teenagers in the world onto your platform to bully each other, which is where we've been with TikTok and Instagram. It's just a very different situation. I don't want us to overly pattern match.

[00:18:49] Ben Kornell: Yeah. On where this all heads, I do feel like the big investments matter. Set aside OpenAI and Anthropic, which clearly have to have AI work to be successful; Microsoft and Google are very large companies with very large market caps. They're going to find a way, I think, to bring the AI risk level within their own tolerance, so that makes me long-term optimistic, and I think the shut-it-all-down move is not the right play.

I am curious, as you think about Google, and you know, they have another drumbeat of announcements, what's coming up for you on the Google AI front?

[00:19:38] Alex Sarlin: I've just been so pleasantly surprised at the amount of releases and ideas coming out of Google that are directly relevant to education. It's just a drumbeat, and the drumbeat has not stopped.

So this week we saw a new product they announced on LinkedIn called Learn Your Way, coming out of their research labs, an AI that tries to transform static content into interactive, personalized lessons pitched at the age and sophistication level of the students.

They are interactive, they actually have game elements in them. I mean, every time you turn around, they're pushing the medium more and releasing something new in education, and they're putting it through their research, getting human evaluators to evaluate the output to see how it works. And it's looking good so far.

But as soon as you get your head around one thing, like the Video Overviews in Google NotebookLM, which are out now and are really cool, they announce something that goes even further. So that's amazing. We are also seeing them put their AI video generator, the Veo engine, which has gotten a lot of attention, into YouTube.

So you were mentioning YouTube just a few minutes ago. It's a big learning tool, it's a hugely popular entertainment tool, and they are trying to tie AI video generation into YouTube. When I talked to the PM at Google, we talked a little bit about this interesting opportunity where, if somebody asks to learn something in Google Guided Learning, you can either pull up an existing video from the YouTube library, which is massive, with huge amounts of video uploaded every second, or you can make your own, something new that's specifically for that ask in a video format. And that could then become an asset that goes back to YouTube and is shared with the next person who asks that kind of question. The combination of user-generated content and AI-generated content is very powerful, and they're putting their AI video generator into YouTube Shorts.

There was also at least a moment this week in which Google Gemini topped Apple's App Store, a spot that has been owned by ChatGPT for quite a long time. So you can sort of feel the battle, the punches of the giants in AI, happening at the top of the App Store. And then there's the flip side of the Google news, which I think we've talked about a few times, but it's really huge here.

You see headlines almost every week now about how Google web search is really starting to take a hit, and how, downstream, people who are trying to get found through search engine optimization are starting to take major hits because of AI Overviews, because of ChatGPT and other systems that give answers directly.

And there is starting to be this real pulse around this new idea of AEO, answer engine optimization. And you know, Google is the biggest beneficiary of SEO as a concept; they're the number one search engine by far, and this is search engine optimization, right? If SEO is giving way to AEO, as a lot of tech folks think is coming, I think you're getting close to a point of no return for Google, where they have to say, web search is not our future anymore.

It's not the thing that's gonna make all of our revenue. We have to find other revenue streams, and it's gotta be through AI. So it's a really interesting moment. What do you make of any of this Google news?

[00:22:47] Ben Kornell: It does tend to reinforce this idea that the era of search may be over and the era of AI assistance is beginning, and they see that play. I also think that Google is the type of company where they don't make big moves for small bets. They're just not going to do it. So that really reinforces the idea that there are billions, if not trillions, of dollars of opportunity here, which makes me excited for edtech.

For all of you out there: let's say Google goes for a trillion on AI and learning; there's gotta be a billion for some of us too, right? But it does feel like a landscape shift. I will also say kudos to the Google for Education team. They've created their AI literacy hub, that's right, where you can get access to lesson plans and resources.

And the way Google for Education has always been on decade-long timelines, thinking about, okay, what's the world gonna look like, how do we support that, gives me more confidence. So, you know, something to watch out for if you are trying to create a game changer in the space: these big tech companies are coming in, and coming in hard.

Yeah. But one thing that I was also looking at is, in response to all of this, whenever Google announces something, it feels like OpenAI comes back and announces something, and there's this back and forth. So my sense is the AI wars have shifted to AI learning wars. This is really good for us.

This is good for our space, 'cause they're pushing each other in a way that will really advance the field.

[00:24:32] Alex Sarlin: I totally agree. And if you haven't used any of these learning tools or any of the new Google tools, I really recommend just getting your hands on them and trying them, because these are big companies and they're moving fast.

These are less polished features than you would've expected a few years ago from the likes of Google or Microsoft, or OpenAI, which wasn't around a few years ago. But, you know, they're moving fast, so some of the things are released and they're not perfect at all, but they are pretty powerful. And you can sort of see that especially with a trained educator behind it, which is how a lot of them are being used in the K-12 space.

You can do incredible things, and even with some practice as a student or as a novice user, you can do some unbelievable stuff with these tools. If you haven't tried putting Veo and Flow together, you can generate videos and then put them into a timeline and actually make a full video that's all AI-generated, but very consistent.

You can use Video Overviews from Google to make things there, or, they just released the ability to create blogs or sort of debate material, double-sided arguments, I believe, in Overviews. It's just amazing stuff happening in the space, and it's not the kind of feature war we're used to.

There's also really interesting stuff happening around the world in this space. Did you notice these headlines, Ben, about the UK? A whole bunch of huge US companies have pledged over $40 billion of new AI investments in England, and that includes Microsoft, that includes Nvidia, and that includes, I believe, Anthropic. I should check on that one, but definitely those first two. Yep. And that's exciting. And then we also saw, just in the edtech space, an interesting headline about how Indian edtech funding has rebounded in a huge way in the first half of 2025. Indian edtech has had huge peaks and valleys over the last few years, but it was pretty dismal recently.

And now there are a bunch of, you know, $60 million rounds going into different companies doing different things in Indian edtech, and there's a lot of excitement in that space. So we're a little US-centered here, but those are both interesting headlines in AI and edtech.

[00:26:39] Ben Kornell: Yeah. You know, Joel Hellermark of Sana Labs, from Sweden, just sold his learning company to Workday.

That's a big win for edtech and workforce learning, so huge kudos to Joel. I remember meeting him in Beijing in 2019, and he was like, AI is gonna change everything, and we're gonna be able to create repositories of learning across companies where, you know, a new employee can tap into the collective knowledge.

And I'm like, that sounds super futuristic. And now, really building that into a platform with distribution like Workday is exciting. That's awesome. So that's on the European front. And then on the Indian edtech front, just seeing a lot of the investment spiking tells me that the BYJU'S hangover is over, and that there are a lot of companies there that have real revenue and real growth trajectories.

I don't know if we covered this, but the fire sale of all of the BYJU'S component parts is basically complete, and it's shocking and sad. What's left of the companies is selling for pennies on the dollar. But it does feel like, you know, there's a pause in US funding and US edtech markets due to the B2B market being really, really tough right now.

But elsewhere, where B2C is thriving or where governments are making real investments in AI for learning, that's really peaking.

[00:28:12] Alex Sarlin: Yeah, the BYJU'S hangover, that's a great way to put it. It's been such a dark cloud over Indian edtech, and we know that our GSV friends have been doubling down on Indian edtech for a long time, making some really smart, very sophisticated bets and holding an edtech conference in India.

And BYJU'S is such an outlier, such a crazy story, that I think it just made everybody nervous about Indian edtech. But now we have companies like Stim ($4 million round), Han ($4 million round), CCO ($28 million round), BorderPlus ($7 million), Verity ($5 million), and Leap ($65 million from Owl Ventures, Jungle Ventures, and others).

You're starting to see that excitement come back in, and it's for the reasons India's always been exciting: it's an enormous country, very dedicated to education, with a growing middle class and all sorts of infrastructure in place. It's always made sense, but it got thrown off by the incredibly strange story of BYJU'S.

Someday we'll get to talk to Byju himself on this podcast. It's gonna be, uh, quite a day, don't you think, Ben?

[00:29:14] Ben Kornell: We've been chasing that one down for a while, but I don't think his lawyers will let us yet. So, in terms of wrapping up, any other edtech news that's hit your radar?

[00:29:23] Alex Sarlin: Yeah, so speaking of that Sana acquisition, that's a great callout.

A couple of interesting launches from big edtech companies. We saw Babbel this week launch Babbel Speak, an AI-powered voice trainer to help people build confidence, and I think this is one of many interesting moves in the language learning space. You know, we've covered Duolingo and how Google entered this space, but there's actually a whole bunch of really interesting moves people are making in AI for language learning. Babbel, of course, is one of the biggest companies in this space, but there are a whole lot of small to mid-size companies ferociously entering language learning, because now you can have conversations, do accent reduction or translation, get feedback in all these different ways, build confidence.

So I thought it was interesting to see Babbel jumping into that. And we also saw McGraw Hill launch a new AI version of their ALEKS suite. ALEKS, A-L-E-K-S, has been an adaptive learning solution for many years that McGraw Hill bought and has scaled, and it has really, really good effects.

It's actually one of the edtech tools that has true efficacy research behind it. And they just launched a sort of ALEKS for calculus with personalized support. You can imagine what that looks like, but it's neat to see. We're updating our market map for education, and part of what we're doing is actually showing what the incumbents are doing.

You know, what is Pearson doing? What is McGraw Hill doing? What is ClassDojo doing? What is BrainPOP doing? And it's really been interesting to see this drumbeat of AI features from the big companies. They don't get as much attention because they're known names and they're not fully crazy new ideas, but, um, they're interesting.

So it is cool to see ALEKS, a very proven solution, incorporate AI, and Babbel, a very well-known and very successful language learning company, do it as well.

[00:31:09] Ben Kornell: Yeah, I think it's fair to say that language learning is in a whole new era now too. It's probably been our most successful vertical in edtech from an IPO and monetization kind of pathway.

So that'll be a really interesting one to watch, and Babbel's a very, very large player. The other thing I'm watching is on the educator front. You know, school budgets have been really tight, there is a surge in homeschooling, and there's also an uptick in superintendent turnover. One of our articles this week was really highlighting the fact that tenures in some states are now under two years.

Oh my God. And so you're just getting to a place where, how do you have stability in schools and school districts? I talked to a principal the other day who was basically saying this has become so normal that the principals of schools are far more important to stability than superintendents anymore, because of that cycle of turnover and the regulatory restrictions. Like, what is a new superintendent really gonna be able to change or do in a short period of time?

Not very much. And so it does make me wonder whether the centrality of principals in leading schools is actually resurging, which maybe could be a good thing. So I don't know what to make of it, but if principals have decision-making power for edtech purchases, that's key, right? Yeah. And I think we always talk about top-down versus bottoms-up business models and so on.

But, um, the principal's in a really nice spot, in that they're close enough to the work, close enough to the staff, and close enough to the families that they might be a great arbiter of that balance. I just think the general feeling is that principals are overwhelmed or overworked in a lot of situations, so that can be tough. But I'm watching that. That feels like a headline to watch throughout this fall and into the spring.

[00:33:06] Alex Sarlin: Yeah, it's back-to-school season. Thanks so much. This has been really fun. Keep tuned in here for some of these long-form interviews. They are really, really interesting.

The GED one was fascinating, and of course what is happening at Microsoft is fascinating. They're doing AI literacy all over the world in some really interesting ways. And what's happening at Google is fascinating; we talk about it all the time. Ben, wanna take us out?

[00:33:28] Ben Kornell: Yeah. Well, thank you all for tuning in.

If it happens in edtech, you'll hear about it here on the Week in EdTech with EdTech Insiders, and you can find out more at our Substack, edtechinsiders.substack.com, or you can check us out on any of the major podcast platforms. A special thank you to our sponsors, without whom none of this would be possible.

Thank you all so much for supporting our work. 

[00:33:51] Alex Sarlin: Oh, and leave us a review if you haven't yet; it really helps. Thanks so much, everybody. Bye, everyone. We have got a special guest for this week's deep dive on this Week in EdTech. We are talking to Christine Cruzvergara. She's the Chief Education Strategy Officer at Handshake, where she leads partnerships with over 1,500 universities.

Before Handshake, Christine spent over a decade in higher ed, working at institutions up and down the East Coast, including all of the Georges in DC: that's Georgetown, George Washington University, and George Mason University. Christine, welcome to EdTech Insiders.

[00:34:29] Christine Cruzvergara: Thanks for having me, Alex. 

[00:34:30] Alex Sarlin: So first off, everybody in EdTech knows Handshake, but you have been making lots of moves in the last few years.

For those who may not have heard, tell us a little bit about what you're doing in terms of job placement and job boards within universities: tell us what Handshake has been, and how you're starting to evolve in the AI era.

[00:34:48] Christine Cruzvergara: Sure. So Handshake has always been the largest early career talent network, connecting universities and all of their students with employers and full-time jobs, internships, all that good stuff.

Where we are evolving to is really making sure that we are the platform for the next generation of talent for the AI economy. We recognize that things are changing very rapidly. Gen AI is obviously changing the way work looks, and we wanna make sure that we are equipping all of our universities, and most importantly all of our students, to be ready for the next generation of jobs that are coming down the pike.

And so we've started partnering with some of the Frontier AI labs to do some of that work to really help our students build the skillset that they will need for those jobs. 

[00:35:33] Alex Sarlin: I love how you mentioned you're sort of standing between the universities and the students, and neither side is super prepared at this exact moment to understand what an entry-level job looks like in the AI era.

So tell us, from all your insight and perspective across universities, what do you think AI is doing to reshape the definition of an entry-level job right now? And what should universities and students be doing to ride that wave and come out the other side intact, or even thriving?

[00:36:01] Christine Cruzvergara: I'm really glad that you actually used the word reshape for the entry-level job.

I think right now in popular media, you're hearing a lot of people be, quite frankly, pretty hyperbolic around the elimination of all entry-level jobs, or that 50% of all white-collar jobs will be gone. I believe more that AI will reshape and redefine what entry level looks like, and primarily it will be around productivity.

So I think a lot of employers have an expectation that entry-level workers, regardless of what role or even sector you're going into, will have a certain AI fluency that allows them to pair AI with their actual knowledge and expertise. And in doing so, they will be more productive. Maybe it's 3x more productive, maybe it's 10x more productive; we don't know yet.

But productivity and efficiency will very much be the markers of what I think a lot of employers are looking at. And so for students who are listening right now, your ability to have that skillset, to know how to use AI in an effective or efficient way, and to be able to showcase that for the particular types of roles you wanna go into, will be ever more important in the next 18 months to two years.

[00:37:08] Alex Sarlin: It's such a good point. And this skillset you're mentioning, what the competencies are to succeed, is such a nebulous idea, but I think it's starting to take a little shape. And it makes me think of, this may be a dated reference or a silly one, but at one point entry-level jobs in technology tended to be things like data entry or creating PowerPoint presentations.

And then tools started to evolve, and it started to be like, well, it's not really data entry anymore, but you do have to know how to operate a spreadsheet and do all sorts of formulas and be an Excel master, and you do have to know how to consult and make sense of strategy. But in the AI era, even those things feel like they're becoming baseline, and you can go further.

It's not just about knowing your Excel formulas anymore; it's being able to actually tell an AI to analyze a huge data set and turn it into a report and do all these different calculations with it. If you're a student right now, any age student, traditional or non-traditional, how are you thinking about how to uplevel your thinking as you come out of school to be able to really, really succeed in this economy?

[00:38:08] Christine Cruzvergara: I think there are a few things that I would point out. So one is, in order to actually know if the AI is doing a good job at the task that you've just prompted it to do, mm-hmm, you still have to know what it is you're asking it to do. Otherwise there's no way for you to check it, right? So I often say you still have to have good enough expertise and knowledge in your actual functional area.

You also have to have good enough critical thinking to be able to take the output and actually assess: is this a good output or not? Or maybe some parts of it were good, but some parts could have been better. And that's where the human still comes into all of this. The next part is, I think as a student, there are certain skill sets that still have to be paired with AI knowledge or with tech knowledge.

So things like critical thinking, conflict management, strong communication, being able to pull a team together, being politically savvy in the workplace: these are all still skills that are incredibly important. I would've argued 10 years ago that this is what separates a good from a great employee, and often what separates someone who ends up rising in the ranks and becoming a leader within an organization. And I think now with AI, some of these more intangible core skills are gonna be even more critical for employers, because they'll have lots of people that might be able to use AI, but not necessarily a lot of people that can coalesce a team or multiple teams and move them in the same direction when everybody kind of has their own agenda, right? Which we know happens in organizations. So I think these are the types of things that have to be paired together.

I think as a student there are a couple of different things that you can do. One of the things that we're actually going to be partnering with our universities on in the coming weeks is launching an undergraduate AI course that will help students learn more about what an LLM is. We use the acronym all the time, people talk about large language models, but what are they really, and how do they work? How does pre-training and post-training work? There's a lot of work right now, a lot of gig work and project work, available for master's and PhD students around post-training. And so we're actually partnering with a lot of labs to provide those types of high-skill, high-expertise opportunities to students, where we can also pay them very well per hour to do post-training for some of these LLMs. Being able to break a model actually allows a student to show how well they know how to use AI and how they might pair that with their work moving forward, because when you know the limitations, you also know the capabilities.

[00:40:46] Alex Sarlin: There's so much great stuff to unpack in there. I mean, first, I'm hearing you say that some of the intangible skills, call them durable skills, are going to remain durable.

Right? Leadership, being able to collaborate, critical thinking, you mentioned a few in there. The idea of being politically savvy in the workplace, these are still gonna be differentiators even in the AI era. And I think that's news to a lot of people. I think it's a bold claim, and it makes a lot of sense, but people don't know what's gonna still be around.

And then I think this idea of collaborating with frontier AI labs, actually using your particular expertise if you're a master's student or a PhD student in a particular area, as a job, using it in collaboration with AI to break models, to train models, to evaluate the output of models: that's really exciting. And I think that's the type of job that is being created in this moment. You know, when they talk about reshaping the economy, those are jobs that are being created right now. If you're a sociology major, that was my major, so it comes to mind, and you're gonna work with the most cutting-edge models to try to figure out what they can and can't do, what they're getting right and wrong.

That's really powerful. Tell us what these collaborations with frontier AI labs actually look like. How are you placing students and connecting the student population with these frontier labs?

[00:41:59] Christine Cruzvergara: We've been really fortunate that all of the frontier labs have wanted to work with us to get high-skill, high-expertise students who can help with their post-training.

So over the course of the past eight months, we have been able to offer opportunities to PhD-level and master's-level students where they are essentially helping to break the model or stump the model. And as part of that, they are using their domain expertise. So we have people from music theory, education, biochem, all sorts of different majors, using their deep domain expertise to essentially teach the model more of that area, which allows that domain to also further use AI in its own research. Right? We've heard from a lot of fellows that they are so excited to actually be part of the frontier in building some of this, so that their own domain area will be able to advance faster in some of the research they do down the road, because of the work they're putting in at the moment.

And I wanna be really clear: there are lots of gig opportunities at the moment, project-based work around this, and I fully recognize that a lot of students don't wanna do this full time, or they're not interested in doing post-training as their career. They wanna do this as a side job.

They wanna do this to help supplement their schoolwork, or to make extra money on the side, or maybe while they're searching for a job or in between jobs. And by getting paid to do this work, they're also gaining a skillset that helps them be a more competitive teacher candidate, or a more competitive music teacher candidate, or a more competitive biologist or chemist, whatever domain area they're actually interested in going into.

This work helps them. And we actually have stories of some fellows who have had a lot of luck getting job offers after going through our Handshake AI Fellowship program, because they were able to stand out among other candidates because of the skillset they had developed with us.

[00:43:59] Alex Sarlin: It makes a ton of sense, because if the future of almost every field is going to be collaborating with AI to get to the top of the field, doing cutting-edge research or doing really advanced work, then doing that type of work right now is a way to both advance in your particular domain, whether it's education or music or anything, and also to advance within the AI world. And as you say, maybe for many of them post-training is not their aspired-to career, but at the same time, it is a career, and it is actually something that is emerging right now. It's becoming a career path, I imagine, for many people, even though we don't hear about it quite yet. It makes perfect sense in terms of where AI is, so you can level up in both your domain and in AI skills at the same time.

Right? Is that sort of the nature of the fellowship?

[00:44:44] Christine Cruzvergara: That's exactly right. It's a win-win for all parties. You're getting paid good money, and just to give you a sense, we pay all of our fellows between $75 and $150 an hour. So you're making good pay using your expertise in your domain, it's a win-win for everybody, and you're gaining skills that you can use later on.

[00:45:02] Alex Sarlin: It's a really interesting model, and I will say it's a really clever move for a place like Handshake. Because Handshake, you mentioned, is a talent network: it's already in a position where it has incredible insight into what people are majoring in, what kinds of jobs they're looking for, and where they are within over 1,500 universities at different levels, undergraduate, graduate, and PhD-level students.

And then you have the companies on one side, so you know who's hiring, and you have the universities, so you know what people are majoring in and training for. That's a data set, and a position to be in, that probably gives you enormous insight into where everything's moving. Do you feel like Handshake has an advantage in that, and is it part of what's helped it pivot and evolve in this really interesting moment?

[00:45:46] Christine Cruzvergara: I do, and our mission has always been to democratize access to opportunity for students. And so we're always paying attention to where the opportunity is. This is a tough job market right now; anyone who's paying attention to how many jobs and how many applicants there are knows that this is a tough time for job applicants. So we're always looking for where the opportunity exists.

And we wanna make sure that we pivot our business in such a way that we're able to continue helping students pivot themselves, so that they can be the most competitive possible. I think, by all accounts, Handshake has the trust and the partnership of over 1,500 universities and colleges, and the students have grown up with Handshake.

They're looking for full-time jobs on Handshake, while many of them are also taking advantage of these part-time opportunities through the Handshake AI Fellowship. And so it becomes, as I mentioned before, a win-win situation for them, to be able to both build and learn while also applying it to something where they can earn money.

Yeah. And I think those are really important components moving forward. We wanna be at that intersection. And the other component that I haven't really talked about is the employers. We know that employers are looking for talent that has some of this skillset, so they know that they can rely on Handshake to continue to produce and provide great talent from these great institutions, talent that is actually getting trained up in some of this AI fluency in real time with real labs, not just in the classroom.

[00:47:24] Alex Sarlin: You know, as I hear you describe the system, it reminds me a little of one of the tropes in the bootcamp days: oh, it's all these people who were English majors or philosophy majors, and now they're in the job market and they need to learn to code, or they need to learn data science or certain types of skills, because it's gonna be a sort of translation of what they do to employability.

But one aspect of it was that the bootcamps didn't really care if you were an English major or a philosophy major. You didn't use any of that skillset, any of that domain knowledge, when you were sitting there learning your Python. This is very different in that particular way. And I think it's actually really interesting, because the students in your fellowship are using their domain expertise.

They're not saying, oh, I was a philosophy person, a philosophy PhD, but I gotta forget all that because I gotta train for the AI era, which is a pretty depressing way to look at the world. Instead, it's: I know so much about philosophy, I can actually use this to the height of my knowledge with some of the most advanced intelligences in the world.

That's pretty different. I'd love to hear you talk about that.

[00:48:26] Christine Cruzvergara: I think one of the things that is so special about the fellowship is that it's really specific to the student's domain or expertise area. So we're not looking for people who just know about AI. We're not looking for people who just have done post-training in the past.

We're definitely not looking for data labelers. We are looking for people who actually know something about chemistry, or really know something deep about music theory, or really know something about English literature, and they're taking that knowledge and that domain and helping to create training materials for these LLMs. And I think for them it feels like a true intellectual challenge; that's probably the most succinct way to put the type of feedback that we've heard from fellows. They feel like they're actually using their expertise. They're using what they went to school for and actually putting it to good use for society.

The other piece that I always like to emphasize, which people don't often think about, is this: people often criticize that there can be a lot of bias in technology, and that there can be bias in AI based on the pre-training data it consumes. One of the wonderful things about working with these labs on post-training is that we're actually able to hire an incredibly diverse array of students, with different backgrounds, different genders, different races and ethnicities, different academic backgrounds, to help with the post-training. And so folks are actually changing and evolving the future of what AI looks like, because we have such a wide breadth of talent that we're able to tap into.

[00:50:06] Alex Sarlin: It really is an exciting vision. And it's funny, because it is a tough job market, and there is a lot of confusion, skepticism, disdain even, about AI, from college students in some corners, from professors in some corners, and from employers in some corners. I think people are really trying to wrap their heads around this fast-changing world. But the vision you're putting out, of very sophisticated, diverse domain experts being able to use their domain knowledge for high-paying jobs right out of school, and then both learn AI and continue to dive deep in their domain at the same time, feels like a vision that a college professor, for example, would wish for their students. They'd say, oh, I'd love all my music theory students to be able to do this kind of thing.

I wonder if this is a message you've heard as you talk to your universities in particular. Are they sort of like, huh, I never thought of AI as doing this kind of thing for the workforce? I'm curious how they react to it.

[00:51:00] Christine Cruzvergara: We've gotten a very, very warm reception from our university partners. By and large, everyone is quite interested in partnering.

They're excited for these opportunities for their students. People have not hesitated to promote this to their master's and PhD students, which has been really wonderful. One of the things that, to be honest, actually surprised me: I personally made over 200 phone calls to a bunch of our university partners when we first started, just to get a beta and pilot off the ground.

And what really surprised me was that some of the faculty we talked to were already doing this themselves. They were already familiar with this work; they had found it on their own and were already doing some of it, and they were thrilled that now we were going to be doing a lot of it. Not only could they do it themselves through us, but they could direct their students to do it through Handshake as well. So I was admittedly a little bit surprised that a few of them had already, mm-hmm, participated in this type of post-training activity on their own, but by and large people are very, very excited.

I think for some, it's been a new area to learn. They might not know a lot about reinforcement learning, or they don't know a lot about trajectory generation, or they're not familiar with professional rubrics or things like that. And so we've had to, of course, explain the details around some of those areas, but once they understand it, they've been really excited about the opportunity it presents, as you mentioned, to their students.

[00:52:24] Alex Sarlin: It is a really smart set of moves, I would think, for a company like Handshake. It's already so embedded, already such a big part of a college student's experience, especially as they're making that transition into the workforce. And I feel like this supercharges it and makes it hyper-relevant for this very confusing, but also very exciting, AI era.

So, just last question for you. This is a period of transition, right? I mean, the AI was trained on these huge corpuses of knowledge, including much of the internet, but they didn't have PhD students, you know, PhDs or professors of, say, European history saying, hey, wait, you don't know about this particular thing. But over time it'll know more and more and more.

I'm curious how you see the trajectory of this type of work. Will there continue to be this post-training world, or at some point will the AI grow and grow and grow, and it'll only be the tiniest niche corners that need that type of post-training?

[00:53:17] Christine Cruzvergara: Well, I think at least for the next two to five years or so, there's still a lot of opportunity around some of these expertise areas, and I think that's one of the benefits of working with AI labs who are really interested in deep expertise and in having a pool of candidates like master's and PhD students, because inherently, getting your PhD is really pushing the boundaries of knowledge. That is what PhD scholars do. And so as they continue to push the boundaries of knowledge, they can continue to help LLMs push the boundaries of what they can assist with.

With that being said, I also think that moving forward we're going to see more work in professional domains, professional rubrics, and so I think that's another area that, again, Handshake will be able to really support and excel in, because we have this huge talent ecosystem. We have folks that are already really amazing accountants, or really amazing consultants, or really amazing doctors or lawyers, and those areas are also going to be needed for post-training moving forward. And so we might be able to help some folks supplement their income while they're in between jobs, or as they look to transition from one career path to another.

I think there are lots of opportunities that we can continue to lean into as we think about how we democratize this opportunity for not only our early talent, but hopefully also soon our mid-level and maybe more senior-level talent too.

[00:54:41] Alex Sarlin: Yeah. You know, as I hear you talk, I think about, you mentioned doctors as one of the use cases.

I've always wondered how doctors, with their full caseloads and all the things they have to do on a daily basis, can possibly stay on top of all the medical research that's coming out, all the pharmaceutical research that's coming out.

And you can imagine that sort of flywheel in what you're talking about: if there's a certain set of doctors whose entire job is to translate the new research into language and findings that can help an AI incorporate it, then that AI is available for every other doctor in the world, and they can say, what's the most recent melanoma research? And it says, actually, a paper came out two weeks ago that says this and this and this. You can sort of see that flywheel of humans and AI working together. It's a really exciting vision.

[00:55:23] Christine Cruzvergara: Exactly. Yeah. We can really help advance a lot of things and I think make things much easier for folks as they're working in their day-to-day jobs.

[00:55:31] Alex Sarlin: Yeah, it's really exciting. So, Christine Cruzvergara is the Chief Education Strategy Officer at Handshake. They're obviously doing really interesting work, and she leads partnerships with over 1,500 universities, as well as playing a role between employers and this huge army of students, PhD students, master's students, and undergraduate students all over the country, with the Handshake AI Fellowship.

Really interesting work. Thank you so much for telling us about it on EdTech Insiders. I think people are gonna find this very, very interesting.

[00:56:01] Christine Cruzvergara: Thanks for having me. 

[00:56:02] Alex Sarlin: Thanks for listening to this episode of EdTech Insiders. If you liked the podcast, remember to rate it and share it with others in the edtech community.

For those who want even more EdTech Insiders, subscribe to the free EdTech Insiders newsletter on Substack.
