Edtech Insiders

Week in EdTech 7/16/2025: ChatGPT Agents, AI Companions for Teens, Google’s Gemini Push, Windsurf Talent Wars, Scale AI Layoffs and More! Feat. Writer Matthew Gasda & Marc Graham of Spark Education AI

Alex Sarlin Season 10


Join hosts Alex Sarlin and Claire Zau, Partner and AI Lead at GSV Ventures, as they explore the latest developments in education technology, from AI agents to teacher co-pilots, talent wars, and shifts in global AI strategies.

 ✨ Episode Highlights 

[00:00:00] AI teacher co-pilots evolve into agentic workflows.
[00:02:15] OpenAI launches ChatGPT Agent for autonomous tasks.
[00:04:24] Meta, Google, and OpenAI escalate AI talent wars.
[00:07:38] Privacy guardrails emerge for AI agent actions.
[00:10:20] ChatGPT pilots “Study Together” learning mode.
[00:14:40] Teens use AI as companions, sparking debate.
[00:19:58] AI multiplies both positive and negative behaviors.
[00:29:11] Windsurf acquisition saga shows coding disruption.
[00:37:18] Teacher AI tools gain value through workflow data.
[00:42:48] DeepMind’s rise positions Demis Hassabis as key leader.
[00:45:32] Google offers free Gemini AI plan to Indian students.
[00:49:39] Meta builds massive AI data centers for digital labor. 

Plus, special guests:
[00:52:42] Matthew Gasda, a writer and director, on how educators can rethink writing and grading in the AI era.
[01:13:30] Marc Graham, founder of Spark Education AI, on using AI to personalize reading and engage reluctant readers.

😎 Stay updated with Edtech Insiders! 

🎉 Presenting Sponsor/s:

This season of Edtech Insiders is brought to you by Starbridge. Every year, K-12 districts and higher ed institutions spend over half a trillion dollars—but most sales teams miss the signals. Starbridge tracks early signs like board minutes, budget drafts, and strategic plans, then helps you turn them into personalized outreach—fast. Win the deal before it hits the RFP stage. That’s how top edtech teams stay ahead.

This season of Edtech Insiders is once again brought to you by Tuck Advisors, the M&A firm for EdTech companies. Run by serial entrepreneurs with over 25 years of experience founding, investing in, and selling companies, Tuck believes you deserve M&A advisors who work as hard as you do.

[00:00:00] Claire Zau: We're seeing huge unlocks for all these AI teacher co-pilots. They're still very much handholding the AI. It's like, here, let me use this one tool. Let me give you all the material that you need to succeed and tell you exactly what I'm looking for. We're now moving to a world where instead of you doing the teaching to the system and handholding the system, the system takes your command and is doing the work of interpreting, figuring out what tools it has, and then completing that task.

[00:00:32] Alex Sarlin: Welcome to EdTech Insiders, the top podcast covering the education technology industry, from funding rounds to impact to AI developments across early childhood, K-12, higher ed, and work. You'll find it all here at EdTech Insiders. Remember to subscribe to the pod, check out our newsletter and our event calendar. And to go deeper, check out EdTech Insiders Plus, where you can get premium content, access to our WhatsApp channel, early access to events, and back-channel insights from Alex and Ben.

Hope you enjoy today's pod.

Welcome to the Week in EdTech. We are here with one of our absolute favorite guests, who we hope to see more and more of. It's Claire Zau, Partner and AI Lead at GSV Ventures and a fountain of knowledge about everything AI and education. Claire, welcome back to the podcast.

[00:01:31] Claire Zau: Thanks so much, Alex. I always have so much fun during these conversations and learn so much too.

So always a blast and very excited for our conversation today. 

[00:01:39] Alex Sarlin: Absolutely, same here. You know, we are religious followers of your newsletter, and I think you do an incredible job of bringing together so many different pieces of information and news across the entire globe and everything happening in AI.

I learn a lot every time too. So let's jump in. There is so much happening over the last week or two, not only in the AI world, which we should get to, but in the AI and education world specifically. But let's start with AI in general. You know, you are in the Valley. What are the busiest things happening in AI right now?

We have all these launches, all this talent war, all this crazy stuff happening. What is jumping out to you? 

[00:02:15] Claire Zau: Yeah, there is definitely a lot. I think probably the two that are most top of mind for me, one of them being hot off the press, is just yesterday OpenAI launching ChatGPT Agent, which we can talk about: a fundamental shift into agentic workflows versus more assistant-style workflows.

And then probably the second one that I've been thinking a lot about is just the massive talent wars that we're seeing dominate the headlines, around Meta poaching AI researchers with hundred-million-dollar packages, or the whole Windsurf, Google, OpenAI, Cognition situation. That was kind of one of the craziest 72 hours in Silicon Valley.

I'm sure they'll make an episode or a movie about it, hopefully. But those are probably the big two on my mind right now. But as you mentioned, there are just so many headlines happening right now.

[00:03:02] Alex Sarlin: Yeah. So, well, let's start with the agentic stuff, because that is very interesting. You know, ChatGPT, since it was first to market in the consumer-facing AI space, is still, I believe, the most-used daily AI bot of any kind in the world.

They just keep announcing bigger and bigger numbers, more and more revenue, and they basically just put out this ChatGPT Agent. We've been talking about agents for quite a while on the podcast and in the AI community, but this is, I think, one of the first products that's actually labeled that way.

And it's specifically: use ChatGPT as an agent to connect all these different pieces. It can do PowerPoint, it can do calendaring, it can make appointments, it can work within all your different tools, do Excel work. And people have been buzzing: what does this mean, both for our general use of AI, but also what might it mean for educators or students, especially maybe college students who already use ChatGPT for everything?

Because now it can actually and autonomously do a lot of work for them. What do you make of it? I mean, agentic workflows have been sort of hot and cold over time, but I'm very excited about this, and I think we're just beginning to see this world where people can hand certain amounts of autonomy to agents and let them do research and web browsing and all sorts of pieces, all in one.

What do you think this means for the education community? 

[00:04:24] Claire Zau: Yeah. Well, to take a step back and talk a little bit more about why this is different. I have gotten questions around this, like why is this different from Zapier or n8n, which is another agentic automation tool? The big unlock here is that what OpenAI did was combine two of their most agentic capabilities to date.

One being Operator, which basically allows your computer to click and scroll for you. And the second one being Deep Research, which allows you to do these long-context workflow tasks. Combining both of those, you get Agent. And what's powerful, I think, compared to maybe a Zapier, where you can chain a bunch of different actions, is that with this one you can give it the intent, and you don't have to step by step say, if X, then Y, then Z.

It breaks that down with semantic intent, so it's actually able to interpret the outcome you're looking for. And I think that's actually a massive unlock, even in learning settings. If you look at all the AI assistant tooling right now, we're seeing huge unlocks for all these AI teacher co-pilots,

[00:05:30] Alex Sarlin: Right?

[00:05:30] Claire Zau: They're still very much handholding the AI. It's like, here, let me use this one tool. Let me give you all the material that you need to succeed and tell you exactly what I'm looking for. We're now moving to a world where instead of you doing the teaching to the system and handholding the system, the system takes your command and is doing the work of interpreting, figuring out what tools it has, and then completing that task.

So I think that's actually gonna be huge, if we're actually moving from a world where you are handholding an AI tool, for example if you're a teacher, to now just using natural language to give instructions, and the AI should be doing the actual figuring out for you. That being said, for students it probably means you can give instructions to these AI agents and they will run in the background for multiple hours. You're seeing this in the coding space already, where software engineers give Cursor or Claude Code a bunch of instructions and tasks, they go to bed for eight hours, and the next morning they wake up to a bunch of completed code. Their role is no longer that of the actual builders; they're more like supervisors who are reviewing. So I think that also probably means a bigger shift in how we think about the role of the human in an AI-human workforce.

[00:06:45] Alex Sarlin: Yeah. It also strikes me that there's a training component to this too.

You know, we've seen these polls over the last few weeks showing that educators are using AI, they're pretty excited about using AI, but they feel like they don't have much training. They don't have a lot of support systems around them; they're sort of figuring it out on their own. And as such, they have to figure out the handholding that you mentioned on their own.

Be like, okay, well if I want this to happen, I have to ask it for this, then pass this to that, then ask it to make it look slightly different. There's a lot of guidance needed. And this takes some of that guidance out of it and makes it so that an educator could say, well, I want you to go to the internet, find the resources, and pull them together.

You don't need to tell it step by step. You can, like you said, express semantic intent: what you want as an educator, what kind of resources you want, what kind of task you want it to complete, and it'll actually put the pieces together for you, including doing things like sending emails or doing the calendaring, which obviously really matters for efficiency.

[00:07:38] Claire Zau: Yeah. I also do think one thing worth mentioning is the privacy element, which is probably still somewhat unanswered as we think about the current state of agents. So for anyone experimenting with ChatGPT Agent, I think it's important to recognize that these are still version-one tools, and OpenAI as a firm is exercising more caution over capabilities here.

So I believe right now, if you are doing any sort of financial transaction or any sort of irreversible action, like sending an email, it will ask for explicit user permission. They also have something called watch mode, where they monitor for sketchy or malicious activity.

And so those are the guardrails in place. But I would still recognize that if you're doing anything super personal, I wouldn't necessarily plug these agent systems into my full inbox, because you don't know what they might draft up. There's still a lot that we don't know about how they operate.

So while it's a really cool shift in the right direction, I wouldn't say I would rely on these systems as a full workflow assistant. It still very much requires a human in the loop.
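As a rough sketch of what that guardrail amounts to, assuming a hypothetical action runner rather than OpenAI's actual implementation: irreversible actions are gated behind explicit human confirmation.

```python
# Hypothetical sketch of an irreversible-action gate for an agent.
# Action names and the runner itself are made up for illustration.

from typing import Callable

IRREVERSIBLE = {"send_email", "make_payment", "delete_file"}

def run_action(action: str, payload: str, confirm: Callable[[str], bool]) -> str:
    if action in IRREVERSIBLE:
        # Hand control back to the human in the loop before acting.
        if not confirm(f"Agent wants to {action}: {payload!r}. Allow?"):
            return f"blocked: user declined {action}"
    return f"executed {action}"

def cautious_user(prompt: str) -> bool:
    print(prompt)
    return False  # decline anything irreversible by default

print(run_action("draft_email", "parent update", cautious_user))  # runs freely
print(run_action("send_email", "parent update", cautious_user))   # gated, blocked
```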

[00:08:41] Alex Sarlin: And I think that's the conversation about agentic workflows in general right now, you know, in any context, not just education.

People are trying to figure it out; the latest MIT Technology Review is all about when do we hand AI the keys: how safe is it? Where are the guardrails? How are people thinking about it? That is not education specific, but given how many use cases there are in education, I still think it'll be exciting for educators to at least be able to put the pieces together, even if they're not, as you say, going all the way to, hey, compose a note to all the parents and then send it to every parent in my class with a personalized note about each kid.

We're not there yet, but the direction is clear, and it's at least becoming possible. We'll have to be careful about the guardrails, and I don't wanna overstate what's there, but I'm excited just about the capability to put pieces together in a really powerful way to support all sorts of people.

I think college students are gonna embrace this very, very quickly, because it allows them to do a lot of things they're already piecing together AI to do. One other piece of news out of OpenAI this week that is even more specific to education was the piloting of a sort of Study Together mode, which I thought was really interesting.

And apparently this is, you know, probably some kind of A/B test; it's only open for certain users. But Anthropic, a few months ago, put together a sort of learning project functionality within Claude, where you can say, oh, I'm doing a learning project, so act as my tutor. Tutor me, support me.

Let's work together to learn something. You can sort of trigger that, and it looks like ChatGPT is playing with something similar. It would be too early to say whether this is gonna be incorporated into the suite, but the fact that they're pushing for it actively, I think, was exciting. What did you make of that?

[00:10:20] Claire Zau: Yeah, I definitely felt like it was similar. You're seeing this playbook from both Anthropic and OpenAI: building more pedagogically backed versions of their chatbots. And it makes sense, because even as I think about my own workflows within these applications, I have a productivity mode where I just wanna get stuff done: gimme the answer as quickly as possible.

I am just in pure efficiency mode. But then there are other times where I'm like, I want to actually dive deep and go down all these rabbit holes. And you actually have scaffolded versions of that, like NotebookLM, which is exciting. But I also think there's now this middle ground where you have a learning mode within Claude, within ChatGPT.

And I think that's really exciting, because we've been promised AI tutors, but one of my frustrations with the space has been that in order to really fully develop and deliver on the promise of an AI tutor, you have to have that memory layer. And realistically, think about who has the most access and surface area around the memory of what a student is querying,

what subjects they're dealing with, what touch points they have, and being able to benchmark where they are with regard to those respective subjects. The reality is, I think a ChatGPT or a Claude has the most direct access to that. And so being able to leverage that and tap it into the learning experience is probably really powerful.

And I also think it makes sense even from a business standpoint: they want to sell university and enterprise subscriptions.

[00:11:49] Alex Sarlin: Exactly. 

[00:11:50] Claire Zau: It makes sense to bundle learning mode into the subscriptions that they're selling. And it's twofold. They're probably monetizing from education institutions, but also

they wanna get early penetration into students at the university level, because those students will graduate into the workforce. They'll bring their ChatGPT and that memory layer into whatever job, and, you know, build that usage behavior. So it all makes sense to me, and I'm excited to see what they do product-wise with it.

[00:12:16] Alex Sarlin: Yeah, and I think you're bringing up a really important point about where the tutoring happens within the stack, right? There are a couple of core capabilities you need to do tutoring well. You need a chatbot that acts more like a tutor rather than an answer provider, right?

We talked to Khanmigo years ago when they were starting this up, and they said that they had to wrestle really hard with the early models to not give answers. We've somewhat moved past that, but not all the way past the role playing that these tools do when they're trying to be a teacher versus a productivity tool, just like you mentioned.

And then the second part is this memory layer, this idea that if we're truly trying to personalize learning, or do personalized tutoring that actually takes into account your learning goals, your personality, your prior knowledge, all sorts of things, that can happen at the foundational layer, at the frontier layer,

and they have endless memory for it. Or it can happen within an edtech layer. And I think a lot of the edtech tutoring companies have been wrestling with this themselves; they've been making their own versions of different kinds of memory layers.

But it could be very interesting if it can be pulled in from a deeper infrastructure.

[00:13:23] Claire Zau: Yeah. One of my theses in the space is around the role that this memory layer plays in an edtech ecosystem, because it obviously is a really valuable plugin to be able to understand a user profile.

But if you're an edtech tool, you're definitely not gonna be able to get the full surface area of someone's day and what they're asking. And so is there a world where ChatGPT or Claude then becomes kind of your passport or your plugin around the web? So for example, I have my ChatGPT profile, and it's collected a ton of memory around me.

There's still room for hyper-verticalized experiences like Expedia, right? I don't think an OpenAI is gonna spend time building a one-to-one model of Expedia, in the same way that I don't think they're going to build a one-to-one model of studying tools and LMSs and SISs, or any assessment tools. But is there a world where you have all these applications surrounding this core memory function, and OpenAI and Anthropic own the memory layer and just plug into whatever application?

I don't know if that's the future, but I could see a world where, if you're building an edtech tool, there is some sort of function where you could plug into memory, and then that activates your vertical. Anyways, early thesis; we'll see how it plays out.
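One way to picture that thesis, as a sketch with hypothetical names rather than any real API, is a provider-owned memory store that shares only user-approved, learning-relevant slices with each vertical app:

```python
# Hypothetical sketch of a "memory passport": the user decides which
# slices of a provider-owned memory store a vertical edtech app can read.

from dataclasses import dataclass, field

@dataclass
class MemoryStore:
    memories: dict = field(default_factory=dict)  # topic -> remembered facts

    def grant(self, app: str, topics: list[str]) -> dict:
        # Only explicitly named topics are shared; everything else stays private.
        return {t: self.memories[t] for t in topics if t in self.memories}

store = MemoryStore(memories={
    "math_level": "comfortable with algebra, struggling with proofs",
    "interests": "basketball, animation",
    "journal": "private reflections",   # never granted to apps
})

# A tutoring app activates its vertical with just the learning-relevant slice.
profile = store.grant("tutor_app", ["math_level", "interests"])
print(profile)  # no access to "journal"
```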

[00:14:40] Alex Sarlin: Well, that makes a lot of sense, and I think it actually dovetails really nicely with something else that was being talked about this week. We'll come back to a couple more OpenAI pieces, but two different reports came out this week basically about how frequently young people, teenagers basically between nine and 17, are using AI.

There's a Common Sense Media one and one from the UK's Internet Matters, basically saying that students are increasingly turning to AI for companionship, for entertainment, for advice, for mental health support, beyond academic support. They're basically treating chatbots as friends, and there's a little bit of a moral panic aspect to this.

Some of the headlines are like, kids don't have real friends, and so they're turning to AI. And there's some truth to that; we gotta deal with that. But it also, I think, dovetails really nicely with your point, which is that somebody may be spending a lot of their time talking to one of these frontier models

as a friend. They're confiding in it, they're getting advice from it. It knows how they feel about school and their teachers and their friends. And then you say, okay, now I wanna learn. Well, that's really important. It would be silly for those two conversations to be happening in total isolation.

So the idea of having an underlying layer, where you could say, oh, I'm gonna log into this edtech tool with my ChatGPT memory function, like, hey, I wanna pass certain memories, certain things you know about me, to the edtech tool, makes a lot of sense, just as it would for all these other domain-specific solutions.

So I'm excited about that. But as with almost everything in AI, and I know you face questions about this all the time, 'cause you present on this all the time, everything has this sort of double-edged sword. You can imagine a dystopian version of it, and you can imagine some utopian versions of it. But both of these studies, I think, were framed a little bit as, uh-oh, let's be careful, this is happening faster than we expected.

Kids are seeing AI as companions, they're seeing them as friends, trusting them when maybe they shouldn't. But I see that as actually pretty natural when you're talking about an interface that's designed to act like a human, which is what all of these models are. What did you make of those studies? And do you think that connects to this idea of what the stack looks like, and how do you bridge the gap between AI as a companion and AI as a tutor, or AI as a study partner, all these important things?

[00:16:46] Claire Zau: Yeah, I mean, I think it goes back to the inherent tension for all these applications: you are offering a service, and you have to measure value delivered to your user, but also your own engagement metrics. And I think there was also recently a piece, or a leak, that came out that Meta had intentionally built all these engagement-hacking

features into their AI chatbots, and that's kind of their metric of success, which makes sense. You know, if you're a public company, you're gonna be tracked on how many monthly active users you have and how much time is spent on platform. That's a metric of success. But at the same time, I also think that's a very dangerous metric of success.

I totally hear you; I agree. I generally am an optimist about most technologies, right? I think it provides a lot of powerful tooling. I know people who personally love having it as a sounding board. It's free, or relatively free, and available 24/7. If you're really struggling, is having something better than not having a therapist at all, or not having a companion at all?

So I totally see the value there. But one of the things that probably needs to be solved, from an architectural and structural standpoint, is that right now these systems are still very sycophantic. And so

I fear that a lot of the user behavior is also driven by conversing with a friend who only validates you, right? And they also have so much memory around you, which is actually quite a dangerous combination, because it then really does act like a friend, but a friend that only validates you. That's problematic, because maybe you sometimes do need an intervention, or sometimes you do need someone to push back on you. And so I think that's probably one of the things that society and these platforms really need to think heavily about

for the social-emotional use case: how do you balance the desire to build engagement, but also make sure that you're delivering an experience that understands the user's internal state and doesn't feed into psychosis-like engagement or conspiracy theories? We saw a recent article where a whole set of people

were falling for these conspiracy theories, and it's because AI is not going to necessarily push back on you, and it has this memory. So it's gonna be able to make these conspiracy theories feel even more real, because it's like, oh, I know this about you, and here's how that relates to this idea.

So I think there's still a lot of gray area, and I'm curious to see, at the foundation model layer, seeing that they're the providers of a lot of these companionship experiences, what they do about it. Unfortunately, you also have players like Grok, which has been trending this week for releasing AI companions, and they're very open about it being an AI girlfriend use case.

And so I think there's gonna be variance in how different companies present themselves around AI companionship. I think Meta and Grok probably lean more toward, we're okay with AI friends and companions. Others are like, this is not the use case, but we know that it's happening. So it'll be interesting to see how it all plays out.

[00:19:58] Alex Sarlin: Yeah. You're homing in on something that is such a vital and really complex part of this, which is that AI is one of the first technologies that truly has a personality baked into it. Some of the dynamics you're mentioning, the echo chamber or the validation, we've seen play out in social media; we've seen them play out in YouTube, right?

The idea that if you start watching a certain kind of video, it's gonna start recommending videos that push you towards extreme views. There's proof of that now in YouTube. That's been true in Facebook and Instagram for quite a while. It's been true in Twitter and X and all of the Elon Musk stuff for a while too. And I think, when you mention those

product metrics, it's such an important point, right? If you're making decisions based on trying to keep people on platform, which is, I think, basically what drove social media into this really strange state, instead of saying, hey, we want this to be a positive experience,

they're saying, we just want you here, no matter what you're doing here. If you're here conspiring, if you're here yelling at people, if you're here pulling your hair out, that's all good with us, because you're here. And that just becomes this sort of amoral stance, literally.

I think Grok especially, you know, I have very little faith in that tool or the people behind it. But Meta is, I think, at a crossroads here, right? Because they saw all the negative things that happened with social media firsthand, yet from these early reports they still seem to be heading towards some of those classic product metrics. One of the engagement pieces that came out, like you were mentioning, was that it would proactively ping you and ask you as a user, like, hey, just checking in with you.

How's it going? What's on your mind right now? And things like that. From a product perspective, that makes perfect sense, right? You're trying to build engagement, remind people the product is there; push notifications are in so many different types of products. But in an AI world, that has a different flavor, and I think people really pushed back against it.

This is such a deep topic; I feel like we could talk for four straight hours on this. It's so complex.

[00:21:56] Claire Zau: I mean, you could probably spend hours talking about the future of companionship and human relationships. I've heard a lot of rhetoric around AI leading to even more social isolation, or our increased metacognitive laziness.

And I think that's a super fair point, but one thing to remember is that AI is more a multiplier force. I think AI is a multiplier of everything, and what we're seeing is AI being a multiplier force for social media. So when you're seeing things like loss in attention span, loss in broader IQ scores or cognition, I think that is actually more an effect of social media and our shift to short-form content and the way that information is delivered to you.

Because of AI, that trend gets multiplied, and that's what you're seeing around this loss of metacognition. I think the same thing applies where social media already set the fundamentals of creating these echo chambers and isolating people into their own views. The multiplier is AI, which takes that and brings it in the form of a one-to-one direct experience where you can endlessly go down a rabbit hole of just confirming one specific view.

And so in that sense, I don't necessarily view AI as this blanket enemy that's causing all these things. I see it more as Promethean fire: a multiplier for scientific discovery, for innovation, for saving time. But it can also be a multiplier of very dangerous things like isolation and loss of metacognition and all that.

So that's probably one thing that I think of as you bring up this dynamic between AI and social media.

[00:23:40] Alex Sarlin: Yes, a hundred percent. And in terms of the education use case, it's a multiplier too: if you're already somebody with any type of autodidacticism in you, you wanna teach yourself something, you wanna learn new things, you're curious,

AI just allows you to do absolutely anything with that. You can dive so deep into curiosity, just as the internet did. You can have things tailored to exactly how you need them, when you need them, in any context. But people's fear, and it makes sense, is that over the last few years we've seen what happens if you amplify human behavior and allow people to

double down on whatever they're already doing. And then you talk about teenagers, and people get worried that young people, teenage and college students, might double down on some of the negative aspects. I'm a huge optimist too, so I don't mean to keep bringing this stuff up as a downer rain cloud. But I just think it really behooves us as an edtech community, and this is connected to the Study Together mode and Claude's learning mode and Google's LearnLM, to really build out and prove the value and the engagement of a learning use case.

It can't be, oh, I'm getting on AI and I can either talk to my boring tutor or I can talk to my amazing AI girlfriend. If that's where we go, we are in trouble, right? Because it's really hard to avoid the pull of some of these things. So I think we really need to think about how we find ways to make AI education interesting and exciting and engaging and fun and social, and not isolating, and not something that is so easy to be pulled away and distracted from.

The example I always use when I try to showcase what the future of this might look like: imagine a world where you say, oh, I wanna learn X, Y, Z, and it says, okay, I'm gonna create a Netflix-style series for you about learning that, and for the next week you'll binge on this chemistry-based series that has all your favorite things.

It's in any style you want, and by the way, you're gonna learn exactly what you need to learn. And we can do that. But if we don't do that, the entertainment side's gonna take over. You know,

[00:25:40] Claire Zau: there's a really cool tool that I think is still in Google Labs right now. I just found it last week, but it's called Spark, I think.

It takes exactly that idea, where you give it a prompt and you say, I wanna learn about the history of matcha tea, and it creates a mini-series of five to eight three-minute videos. It's animated; you can pick if you want Claymation or whatever other animation style. It's exactly what you're saying, this idea that there are ways to scaffold a learning experience. But I think people are just judging AI learning based off of the chatbot, which is why I also have a bone to pick

with that recent study where they did brain scans of people using ChatGPT to write an essay, versus using Google, versus not using any tools at all. In that study, what you had was a situation where people were not necessarily given instructions to self-enforce learning, and you're giving people a chatbot.

And of course, when you give someone a chatbot and tell them, finish this essay, they're going to do it in the most efficient way possible. In the same way, going back to the AI multiplier argument, cheating has always existed, ever since education systems have been invented

[00:26:49] Alex Sarlin: Assessment, yeah.

[00:26:50] Claire Zau: Or our ways of grading students. And I think that's always been true; those behaviors have always been true. But what AI does is scale that to a new level, where before you had to find the answers online or pay someone to write your essay,

now you can just have that for free. I think AI in that sense is, again, the accelerant and the multiplier of an existing behavior, and it all comes back down to, okay, what is fundamentally wrong about the original root behavior? AI is maybe problematic in being a multiplier, but if you don't change the root of it, you're not gonna change the overall outcome.

If the root is social media, the root is social isolation, the root is echo chambers, or the root is cheating, then I think we actually have to face those problems first, as opposed to just a blanket ban on AI, because it's a multiplier of the root behavior.

[00:27:44] Alex Sarlin: Yep. Or the root behavior, in some of these cases, is an optimizing behavior.

It's saying, my job as a student is to get the grades. It's not to learn. It's not to explore curiosity. It's just to get through the assignments and get the grades I need to go to the next phase of education. If that's the core belief of a student, then why wouldn't they optimize? Why wouldn't they take the fastest route?

And I think that's been an issue at both the K-12 and higher ed level. More and more, there's this sort of core attitude of optimizing for grades, or just getting through, rather than actually trying to engage in any meaningful way. And those beliefs lead to that type of behavior and that type of accelerant of behavior.

And one of the guests today on this Week in EdTech is a writer who wrote a really interesting piece about exactly this: about how, for educators in the age of AI, it accelerates this sort of race to the bottom. If students believe that their job as a student is just to do whatever silly assignment is thrown their way and get through it,

and they don't care about writing or reading, they don't care about expressing themselves, if they don't see the point of it, why wouldn't they cheat? What do we do in a world where that becomes normalized? It's really interesting. You mentioned the Windsurf situation, and you said that some episode of a future show about Silicon Valley is gonna be all about it. For those who have not followed it,

this is not a purely edtech story by any means, but it's a coding story. Can you just walk through what happened over the last week between OpenAI and Windsurf, and how it fell apart?

[00:29:11] Claire Zau: Yeah, so a couple months ago there was potentially an OpenAI acquisition of Windsurf, and it was about to be one of the biggest AI acquisitions of the year.

I think it was to be about $3 billion for Windsurf. Windsurf, for some background, is basically similar to Cursor: it's a co-pilot for coding, but very powerful. I would say Cursor and Windsurf are probably the two big ones in the space, and they were early to the space. And it made sense, because a lot of these AI labs want to get exposure to different vertical applications.

You have OpenAI launching Codex, so they were probably gonna integrate Windsurf into their coding stack. Claude is well known for its coding capabilities. So they're kind of collecting all these very different vertical applications, and coding is one of the big use cases. So the deal was on the table.

Everybody was excited about it. It was gonna be a great exit. I'm sure the employees were already thinking about what that exit looked like for them, and given their hard work, it was an exciting outcome. However, due to some tensions between Microsoft and OpenAI, and those are continuing to brew, since Microsoft is the largest investor in OpenAI, the exclusivity window finally closed, and Google immediately swooped in.

What Google did, however, was offer a $2.4 billion deal for licensing Windsurf's technology plus hiring the founders. So it wasn't a full acquisition. And the reason for doing it this way was because, as you've seen with, for example, the Adobe-Figma deal or various others, M&A in technology has been heavily scrutinized.

And so we've been seeing a lot of this trend of big AI labs and big AI companies, or even just big tech in general, trying to avoid regulatory scrutiny by doing these acquihires, where they license the tech and the founders go join the big acquirer. You saw that with Microsoft and Inflection AI, where they basically paid to bring over Mustafa Suleyman,

[00:31:16] Alex Sarlin: Yep.

[00:31:17] Claire Zau: and then licensed their technology. Same thing with Google and Character AI, where they brought back the founders while licensing the tech. But this was problematic, and it was a whole big deal, because it was seen as a poaching where no one except the founders and Google benefited from the situation.

The founders got a great exit, obviously. But if you were an employee and had been working at Windsurf for the last year, you were left with nothing, and you were left with a company that really didn't have very much potential, because your greatest talent had all been poached. So it was kind of this zombie company, and people were like, this is operating in bad faith.

People have said that Varun, the founder of Windsurf, had been muzzled by Google. So there were all these dynamics. But last minute, Cognition swooped in. Cognition is another one of these big coding players, and it acquired the company properly. Everybody got accelerated vesting in their shares, and they all got a great exit.

So it ended up being a happy ending, but it was a whole situation that demonstrated, I think, a couple of things. One, the crazy acquihiring that's happening. One of the big things people are hinting at is that at some point the FTC is probably not gonna be okay with these acquisition structures.

This continued pattern is probably not great for VC funds either, because investors are a little bit shaken by it; they wanna mark up their exits, right? And this is not seen as an exit. I'm sure behind the scenes they structured deals in a way that they would get returns.

But it's also problematic, I think, optically. And then for the employees themselves, it's kind of a bad deal, because you take a lot of risk. You usually take a pay cut to work at these early-stage companies, and you wanna be able to reap the benefits of that exit. But in these scenarios, when only your founder is being taken and you're not necessarily benefiting from a true conversion of your equity in the company, it's a pretty bad deal for you.

So it just laid out all these different dynamics about the current state of M&A and the current state of working for a startup. It really demonstrated, I think, that a lot of times in the AI world right now, what really matters is the talent itself. The fact that they are basically paying these billion-dollar, multi-hundred-million-dollar deals

for the founders themselves really speaks to the state of AI talent as well. I'm happy to go down that path, but that was my very quick explainer on the whole situation. Hopefully it made sense.

[00:33:41] Alex Sarlin: Yeah, I mean, when I hear you talk through all the nuances, it's incredible to me how much money there is in this space right now, just flowing from place to place, but also how much it's focused on very small numbers of people.

We also saw Scale AI this week. Meta had announced a similarly complex semi-acquisition of Scale AI. Alexandr Wang, the head of that company, has gotten this incredible payout, but Scale just reduced its workforce by, I think, up to almost 15% this week. And it's exactly the same dynamic you're talking about, where people zero in on individual

AI talent. There are not that many people in the world so far, there will be more and more, who are true AI masterminds, really, really successful in it. And I think the competition is so fierce for them that money is just sort of endless. Noam Shazeer, I think I have that name right, from Character, was one of them. He was at Google for a long time,

left Google to start Character AI, which became a huge success, and then Google acquihired him back for some huge amount of money. And of course, in the news in general, there's just been this incredible talent war for individual people. Meta has poached a lot of people from Apple recently and has poached people from OpenAI.

The whole thing feels like the Wild West, and I mean that literally: it feels like the early days of people trying to grab land and find people to ride alongside them. It's such a new space, and the business potential is so massive, that the amounts of money going in are huge.

I do think this is relevant to edtech in a specific way, and I'm curious if you agree with this, Claire; feel free to push back. The coding space is a very obvious place that is gonna be disrupted like crazy with AI. We've reported on this podcast about how even computer science degrees are already starting to decline, because people are so quickly anticipating AI totally disrupting what coding looks like.

And so places like Cursor and Windsurf are these massive acquisition targets, and you see all the frontier models creating their own coding tools, Copilot and Claude Code and all of those things, and Microsoft has done this for a while too. So coding is, I think, the first example

of a whole industry that is so obviously gonna be disrupted by AI that everybody is flipping tables to figure out who to acquire and who to work with. On a smaller scale, certainly, maybe in a year or two, we might see something similar in education.

We may see some of the more sophisticated AI-native edtech startups start to be acquired, maybe not acquihired quite as much, but acquired by some of the big tech players. Obviously, education is a much smaller field than programming and coding. But at the same time, I've been continually surprised, and I think you have as well, you tell me, I don't wanna put words in your mouth, at how much

these frontier models and these giant tech players seem to care about education as a use case. We just saw Google this week announce all of these preloaded NotebookLM notebooks, with all of Shakespeare's work in them for education purposes, or collaborations with The Economist. People really are doing educational stuff.

They're putting money into it. They really care about it. So I think someday we may see a mini version of some of these crazy whirlwind fights for talent or for the hot startup in the education space. Am I right or am I wrong? Where do you see it going?

[00:37:03] Claire Zau: Yeah, I could see it. I would probably say, at least my prediction is, the more you lean productivity, the more immediate the demand is.

And coding is probably the one that leans most productivity, I think.

[00:37:18] Alex Sarlin: Yeah. 

[00:37:18] Claire Zau: You're also seeing acquisitions like Grammarly acquiring Superhuman, I think.

[00:37:23] Alex Sarlin: Yes.

[00:37:24] Claire Zau: Everyone wants to crack productivity, because that's relevant to teachers, that's relevant to students, that's your knowledge worker. So I would say anything that's more adjacent to productivity is probably within the more immediate acquisition target space.

I could see a world where, for example, a big AI lab wants to acquire a tool that is heavily used by teachers. And the reason for that is because, as I think about the next three to five years and potential M&A and where there are alignments in the ecosystem, I think it comes down to who owns what data.

The reason why an OpenAI might acquire a Windsurf or any other application is because these vertical applications have a lot of vertical surface area on how a coder does their work. So by owning a Cursor or a Windsurf, you have a lot more data on, okay, this is how a software engineer starts their code base,

these are the kinds of bugs they typically face, here's how they work as teams. All of that is very critical data for building agents that eventually will also work in these spaces. So how I see it playing out is, if you think about a MagicSchool or a SchoolAI or a Brisk, what they have that is super valuable is a lot of data on how teachers work.

That is not something that you can collect or find on the internet. None of that is written down anywhere: that teachers like to do this first, and then they switch to this tool, and then they use this presentation. All of that is tacit and sits in the brains of the educator workforce, but it is not translated into a written-down workflow right now.

So I would say these teacher AI platforms have somewhat of an advantage, as do incumbents in the space like an Instructure or a PowerSchool. They just have a lot more insight and literal digital touchpoints into what buttons are clicked first to do what actions, and that is extremely valuable data.

If, for example, an OpenAI or an Anthropic or a Perplexity wants to build a teacher co-pilot, they would need to have access to that data at scale.

[00:39:41] Alex Sarlin: Very, very well put. Yeah, I totally agree.

[00:39:44] Claire Zau: That's my thesis, but we'll see. Maybe not.

[00:39:47] Alex Sarlin: All three of the platforms you just named, along with players like the company formerly known as Quizizz that just rebranded, all have at this point millions of educators using the platform pretty consistently. Maybe single-digit millions, but still millions; that starts to become a pretty big data set.

And it's not just a data set; it's also direct reach into schools.

[00:40:10] Claire Zau: It makes sense why Google would launch Google Classroom tools, because you can see a world where, by owning the actual infrastructure behind it plus the tools, you basically get a lot of insight into how people are navigating your platform.

And there's a world where Google Classroom wins, or wins significant advantage, and their tools deliver that much more value, because they know that, okay, Claire typically clicks this tab first, and then she typically opens a Google Sheet, and then she typically does this. And once you collect enough data around how people use your platform, you can automatically suggest: hey, I've seen you do this action five times in the last month.

Would you like me to create an agentic workflow for that? So you can see that big unlock: if you own both the actual application where people are doing the work, plus the actual infrastructure and platform, that creates a really powerful combination.
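A toy version of that suggestion logic, with event names and thresholds made up for illustration, could mine a platform's click log for action sequences a user repeats and offer to automate them:

```python
# Toy sketch: detect repeated action sequences in a click log and propose
# automating them. Event names and thresholds are invented for illustration.

from collections import Counter

def suggest_workflows(event_log, window=3, threshold=5):
    # Count every consecutive run of `window` actions in the log.
    runs = Counter(
        tuple(event_log[i:i + window])
        for i in range(len(event_log) - window + 1)
    )
    for sequence, count in runs.items():
        if count >= threshold:
            yield f"Seen {' -> '.join(sequence)} {count}x. Create an agentic workflow?"

log = ["open_tab", "open_sheet", "email_parents"] * 5 + ["grade_quiz"]
for suggestion in suggest_workflows(log):
    print(suggestion)
```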

[00:41:02] Alex Sarlin: Yeah, and then there's that open question about whether there's something similar on the B2C side, on the direct-to-student side, because I think, for the most part, individual learners are gonna use the frontier models more frequently than specific learning tools.

But there are certain apps, certain learner-specific or consumer-facing education tools, that are starting to be used at scale, and I wonder if there's something there for the same reason: data on how learners are interacting with AI. I think there's less of a case to be made there, because all these frontier models, and Perplexity for example, get a lot of direct access to students, especially higher education students, and how they interact.

But it's gonna be interesting to see. I could imagine a world in which certain education-specific consumer apps get so popular, and become such a go-to, that they could become a potential acquisition target for some of those same reasons. But I don't think we're there yet. We're low on time,

but there's one thing I really wanted to get your take on. There's been a bunch of news out of Google. We mentioned the notebooks; you mentioned in passing that they're doing all sorts of investments in infrastructure. There was an article this week from Business Insider that I thought was really interesting, specifically about DeepMind CEO Demis Hassabis. The claim of the article is that DeepMind is increasingly becoming the center of Google AI, and de facto somewhat the center of Google, even though it was acquired,

I don't know exactly what year, but a few years ago, and it has become more and more core to their strategy. Interestingly, they even floated the idea that Demis could be the next Google CEO if Google continues to invest in AI the way it has been.

Curious what you make of that, and whether you think that's gonna have implications for any of us in education.

[00:42:48] Claire Zau: Yeah, I think it's exciting. I have seen and heard amazing things about Demis and his work. He's notably a chess prodigy and has just been a pioneer in the space for so long.

I actually think what you're seeing is companies adopting this dual leadership structure. For example, OpenAI recently hired the former Instacart CEO, I'm blanking on her name, but she is basically leading ChatGPT as a business, really operationally leading the actual business layer:

how do you monetize this, the day-to-day operations. And then you have someone like Sam Altman who's way more plugged into the research front, achieving AGI. So I almost feel like you have that structure playing out at OpenAI. You have that playing out maybe with Meta, where my sense is Zuck will probably still stay plugged into the applications and the massive surface area that Meta's products cover: WhatsApp, Instagram, Facebook, whatnot.

[00:43:49] Alex Sarlin: The superintelligence team sort of has a different goal.

[00:43:52] Claire Zau: You have an Alexandr Wang figurehead leading superintelligence at the same time. And my prediction is Google will probably bring in some sort of similar structure, where you have Sundar remaining plugged into the core search and Google suite of products, but maybe you have a Demis leading that AGI vision.

I think for all these companies there is clearly a massive target on AGI, and having someone to lead that and be the face behind it is important. My personal sense is that's the role Demis is gonna fill.

[00:44:23] Alex Sarlin: Yeah, that makes a lot of sense. Super interesting. The last piece of Google news that caught my eye this week, and we didn't get a chance to talk about that huge funding, the OpenAI, Anthropic, and Microsoft teacher training piece;

Ben and I talked about that last week, but I would love to get your take on it as well if we have an extra moment. But the last Google thing that struck me as interesting this week, and you know, GSV has such a heavy presence in India, you have an India conference, you have lots of big Indian portfolio companies:

Google offered their premium Gemini AI plan to Indian students for free. And it was an interesting press release, because I think it's similar to what you mentioned before about this huge land grab, basically, where all the frontier models are trying to say, how do we secure the next generation of users, so that people learn to use AI through Gemini or through Claude or through ChatGPT?

And Google's making some interesting pushes directly to India. Curious what you make of that, especially 'cause I know this is from the Economic Times in India: the offer can be claimed until the fall, and it includes Gemini 2.5, Deep Research, and NotebookLM, which we mentioned earlier is becoming more and more part of the education suite.

What do you make of this?

[00:45:32] Claire Zau: Yeah, it's exciting news. I think more AI usage makes sense, and AI in education is important. It's always exciting to see AI expand its access, especially when you're offering it to students and people who historically might not have had access to AI otherwise.

I think what you're seeing play out is this dynamic where you have US AI and China AI, and there are all these geopolitical dynamics to navigate there. What's interesting is, for example, the Middle East is kind of this middle ground where you have them internally trying to build sovereign AI, but then these massive partnerships with a Microsoft and OpenAI or an Anthropic to bring their frontier models to different regions.

I think you're seeing that play out. What's been interesting, especially since we have so many portfolio companies in India, is that there is actually a lot of demand for, and comfort with, voice AI, because people use voice to communicate; voice messages are so common. So there's actually a lot of demand there.

But as we know, most of the internet's data sets are largely Western and English-facing, so there has historically been not that much data, or good enough voice models, for all the dialects within India. That's something I've generally observed. But what's exciting is, hopefully, as

groups like Google or OpenAI realize the importance of a lot of these regions and the importance of diversifying the types of models they provide, in trying to reach those populations they will put more effort into those developments as well. So yeah, generally exciting. I'm excited to see how it all plays out.

[00:47:11] Alex Sarlin: Yeah, great point. And I love your point about the US and China. I think a lot about the land grab between Gemini and ChatGPT, but you're right: globally, you also have Alibaba, and you have DeepSeek, and you have Manus, and it becomes a geopolitical situation as well.

And you're right that the Middle East and South Asia are like front lines for who's gonna use what AI. China is moving very quickly in AI. And there was a fantastic editorial in the Times this week, a David Brooks article, that compared this moment to the Sputnik moment.

When Sputnik happened, we invested so much extra money into research and into education, and now we're in crazy competition with China over AI, and they're running the table on us in certain ways, while we are divesting, we didn't talk about this today, but divesting in education and research at almost every level: the NSF, and the Education Department is maybe on its last legs at this point.

There is a case to be made that people in India will end up using Chinese AI unless we figure something out. So I'm happy to see Google making this push.

[00:48:21] Claire Zau: I think what's almost interesting, and maybe this is a slightly dystopian point to end on, is that there was that recent announcement that Meta is building a one-gigawatt data center comparable to the size of Manhattan.

And there are two of them: one's called Prometheus, the other is called Hyperion. The idea here is that, in the same way you had a massive build-out of factory floors for human workers in the industrial revolution, now you have these factories for digital labor. And I almost wonder: obviously the current administration has its own agenda, or beliefs, around why it needs to reduce funding for education. But in parallel with this rise of digital labor, I wonder if there's almost a belief that we actually don't need to train any humans, because if we dump enough money into AGI, we'll eventually have self-sustaining AI systems that can do the work of very smart humans, and therefore we don't need to invest.

I'd like to believe otherwise, but as I think about these data centers as factories for digital labor, maybe that's the calculus behind not investing so heavily in human expertise and the build-out of human knowledge. I don't know.

[00:49:39] Alex Sarlin: Oh, that's a great point. It is definitely dystopian, but it is a great point.

And a last thought for me: part of what's so interesting about this moment, and I've mentioned this on the podcast before, is that technology changes quickly. In our lifetime, and you're younger than me, we've seen technological changes that literally change how we go through our day: the mobile phone revolution, the internet revolution, big data everywhere, and cryptocurrencies, you could argue, are heading that direction. So I almost feel like we now overestimate the speed of change because we've seen it happen so quickly. AI has only really hit this kind of scale in the last few years, and we're already picturing an almost post-human, post-education future. I think it might not happen that quickly, but who knows? I mean, gosh, it's a little bit scary.

[00:50:32] Claire Zau: Yes. What always gives me comfort is that we tend to overestimate the short term and underestimate the long term. For what it's worth, for everybody who has a very valid and deep fear of AI displacing all jobs: to your point, it's going to play out on a much longer timeline.

Go check out the Anthropic vending machine study, where they ran an experiment and had Claude run a vending machine, and it did terribly. It did not make any money. So for now, it can't even replace a person who supervises a vending machine. If that gives you some comfort: I don't think we're there yet, where AI is fully displacing entire human roles.

[00:51:16] Alex Sarlin: That's very well put. This is a little bit of a pivot, but I think it's connected. One other piece of news this week is that OpenAI is apparently making a web browser, and we talked last week about how Perplexity just launched a web browser as well. I think that's also going to be an interesting area for all this agentic stuff, and for the speed of change to really become visible.

In these last couple of years we've seen AI as a set of particular applications, but it's possible that quite soon the way we interact with the internet will be directly through AI-enhanced browsers. I think that will be a major accelerant to how quickly AI makes its way into our daily lives, and potentially how quickly it starts changing our productivity and what learning looks like. It's a side thing, but I don't think we should sleep on these new web browsers; it's possible they'll be one of the front lines of where this stuff happens. Obviously, Google Chrome is doing this as well. A slight pivot, but I think it's related.

I dunno, maybe not. Claire, this has been amazing, as always. Please stick around for our two great guests on this really great Week in EdTech. We talked to Matthew Gasda, a writer who wrote about the AI backlash among teachers and how educators can, quote unquote, "defeat AI," but in a really interesting way.

And to Marc Graham, a teacherpreneur out of Scotland who has a company called Spark Education AI, all about making reading more engaging. So stay tuned for that. Claire Zau, thank you so much for being here with us on the Week in EdTech.

[00:52:38] Claire Zau: Thank you so much. I had so much fun. See you later. Bye. 

[00:52:42] Alex Sarlin: For our deep dive on this Week in EdTech, we are talking to Matthew Gasda.

He is a writer and director. His novel, The Sleepers, is in stores now, and his next play collection, Zoomers and Other Plays, will be released in September of this year. His work caught my attention with a really interesting article he wrote for Compact Magazine this summer called "How Educators Can Defeat AI," which goes in depth on how AI and writing are on a collision course.

Matthew Gasda, welcome to EdTech Insiders.

[00:53:12] Matthew Gasda: Happy to be here. Nice to meet you. 

[00:53:14] Alex Sarlin: Yeah. So first off, tell us a little bit about your background. You're a writer and director; writing is core to your professional identity, and you've tutored in it. But how do you see AI changing this world of writing so quickly, and why is it something we should be concerned about?

[00:53:29] Matthew Gasda: Yeah. Just for context, I've had kind of a two-track career in education: one as a creative writer and director, and one as a teacher and then tutor. I taught middle school, I've been an adjunct, I've taught high school. I've been a theater teacher, an English teacher, and a tutor of a lot of different subjects.

[00:53:48] Alex Sarlin: Yeah, 

[00:53:48] Matthew Gasda: I'm 36, and it started when I was 22 in some capacity. I've been doing some kind of writing and reading pedagogy for close to 15 years, so I think it's fair to say I've seen a couple of different time slices of writing culture, reading culture, and academic culture. I've also gone from writing on a typewriter when I moved to New York to having a ChatGPT subscription.

I have a lot of ambivalence about AI, but I also believe in engaging with it at least far enough to understand it. I'm a writer and a teacher, basically, and I take a lot of interest in technology, sometimes skeptically, but I think skepticism implies engagement.

[00:54:28] Alex Sarlin: Yeah.

What jumped out to me in the way you were thinking and writing about this is that you point out some really important pieces, both about the risks of ChatGPT and other AI tools in writing, and about the sort of absurdity of the whole writing endeavor in schools. So let me just read a couple of sentences, because I thought this stood out.

It says: "The ChatGPT essay is the reductio ad absurdum of the cultural axiom that school is about getting grades. School is about getting into college. College is about networking and getting a job." That's one perspective, right? "All of this can be achieved without much effort now. So why make the effort? Stressed, screen-addicted young people see absolutely no answer, and the pedagogical culture around them provides no persuasive alternative. The logic of grade inflation and competitive college placement incentivizes doing what you need to do to survive. Writing essays is like a prehensile tail, an old feature no longer connected to survival."

So that is about the education system itself and how writing fits into it. Tell us more about that. It's a powerful statement, and I think a lot of people recognize it, but it's also scary for educators.

[00:55:38] Matthew Gasda: Yeah. I have this theory that cuts across a couple of different domains. My pet aphorism, or theory, is that it's not the case that AI was just invented and suddenly everything was infected with AI. I think we were moving towards an algorithmic culture before

[00:55:58] Alex Sarlin: AI.

[00:55:59] Matthew Gasda: Yep. I had a meeting with an editor yesterday about maybe writing a piece about theater and prestige television. I've shopped pilots, especially around 2021, 2022, right before ChatGPT was released.

And I had executives tell me all the time: oh, we really like what you're doing, but we have algorithms that tell us where things go, how things work, where we put commercials, down to the color schemes used in shooting the shows. And if you abstract the idea of an algorithm away from just something that runs on a computer or within software, all of a sudden it's not hard to see that there were a lot of mechanized, automatic aspects of our society.

[00:56:45] Alex Sarlin: Yep. 

[00:56:46] Matthew Gasda: And so it's not enough to just criticize AI. If you have an issue with algorithms running things, or some things, then you have to understand that. To my mind, that goes beyond just saying, okay, cancel my subscription, because there might be other automatic functions in your life that are just as dangerous to free thought, or to personal reflection, as an AI.

And again, I say this not because I think LLMs are universally bad; I think they serve a lot of functions. I run a business, a theater company, a tutoring business, and they've made me more productive in a lot of ways. So I just want to caveat that, because there are things that I do want to automate.

Part of the idea in the Compact piece is that AI gives us, or should give us, a chance to rethink what we're choosing to automate and what we actually want to keep wild, or unmediated, or organic, whatever metaphor you apply.

[00:57:50] Alex Sarlin: Yeah, 

[00:57:51] Matthew Gasda: And as a longtime English and writing teacher: there's nothing more algorithmic than the five-paragraph essay, or than the standard college essay. Essays are standardized and basically algorithmic, and there are ways to hack them. There are ways that a tutor like myself can help you do well, even if you don't really understand what you're writing.

[00:58:16] Alex Sarlin: Right. 

[00:58:16] Matthew Gasda: And without writing it for you. A good enough tutor or teacher can kind of trick a student into producing an essay, not unlike how a student can trick an AI into writing an essay.

[00:58:26] Alex Sarlin: Yes. 

[00:58:27] Matthew Gasda: So in other words, I can prompt my student: well, why don't you structure it this way? Why don't you think about it this way? And a student can prompt an AI. But I think all of this is kind of bad, in the sense that, of course, pedagogues play a role. I like teaching, I like tutoring. But I don't like the moments where there's so much pressure on the student to produce something that I've been forced into a position where I'm no longer dialoguing with them about what they're writing. I'm dialoguing with them about the algorithmic, automatic, formal aspects of what they're doing, trying to just get them across the finish line and help them hit a baseline. So to me, AI is proof that, yes, this system can be hacked.

In New York City it's often hacked, so to speak, by highly paid tutors. So now it's just going to be hacked by AI. Well, okay. My takeaway is: let's actually rethink why we're writing. Let's actually rethink the kinds of writing and reading we're asking students to do, towards work focused on real reproducibility, real internalization, real thought.

Basically, I see AI as a chance to reevaluate the last 75 years of American education, at least humanities education. Not the last three years.

[00:59:40] Alex Sarlin: Yes, that's very well put. It strikes me, as I hear you talk about the standardized testing, the optimizing, the algorithmic sort of thinking: the idea of, oh, you're learning to write. Well, how are you learning to write? Are you learning to write to a standard that is exactly how a college essay will be judged, or how an SAT essay will be judged, or how a class essay will be judged? Or are you learning to write to express something very specific, to persuade someone, to get your own thoughts out there, to do something totally creative and interesting that nobody's ever done before?

I think the education system has, over time, hewed more towards what you're calling the algorithmic model. But there's a reason for that. Part of it is that it's easier to grade, it's easier to standardize, and you can compare students' essays to each other. If you're reading 3,000 college essays, you don't want each one to be this absolutely wild, format-less, shape-shifting thing; they're designed to be compared.

I wonder if there's an opportunity here to actually move writing education out of the algorithmic model and towards a more expression-based model: something designed for people to express themselves through writing or speaking in a way that's actually useful to their lives, or useful to their outcomes, where originality might become more interesting. So quickly, I want to quote you back to yourself one more time, because I thought this was interesting too.

[01:01:05] Matthew Gasda: Yeah. 

[01:01:05] Alex Sarlin: Yeah. It says: "Young people use fewer words to do fewer things less often, because there are fewer embedded social rituals in which you need to employ verbal skills. Every year I've been teaching or tutoring, I've observed a generalized, incremental decline in the ability of teenagers to organize, index, and relay information."

I think a lot of educators and edtech folks who listen to this will recognize that feeling: writing is not something people do much in their regular lives anymore, outside of maybe social media and these little chunks. And if that's true, why learn it? How might we as educators start to put the pieces back together so that, instead of writing for algorithms, people write for actually useful outputs in their lives, given that verbal and written skills are something they don't use that often?

[01:02:00] Matthew Gasda: Yeah, that's a great question. I'll start with an anecdote.

When I was between 24 and 26, I was a master's student. My second year, I taught, I won't say where, but I was assigned to teach freshman rhetoric and composition to undergraduates. So 18-year-olds, and I was 25 or 26 at the time. And I remember really hating this kind of generic composition class that every freshman, or at least a lot of freshmen, maybe not anymore, used to be required to take. It's supposed to be English 101, but it's often not a lot of English. And as grad students, we had a seminar on our teaching process, a pedagogy class as part of our teaching, so we had to report every week on the two classes we taught.

And I lied every week. I totally threw out the curriculum the first semester and assigned more books, so a lot more reading. And I told my students on day one: I'm going to lie, and you're going to have to do more reading and more writing, but it's going to be less boring. So if someone comes in to observe, we're all going to pretend we're doing the original curriculum we're supposed to be doing, and I'll give you a heads-up the day before.

[01:03:24] Alex Sarlin: I'm sure they loved that.

[01:03:25] Matthew Gasda: I would imagine. So, yeah, next semester there was a giant rush to take my unlisted class.

[01:03:32] Alex Sarlin: Exactly. 

[01:03:33] Matthew Gasda: It wasn't listed under a professor's name; I was a master's student. And the English department was getting a lot of emails, because students had to take English 101 at some point during their first year, so there was a second wave of freshmen, and that second wave started requesting my class without really knowing how to request it. The English department had to send out a department-wide email, without naming me at all, just saying: please do not accept any requests if you're getting emailed directly about students switching classes. They didn't credit me or anything; they might not even have known why students were trying to get into a particular block of English 101.

And then there's a second anecdote I remember from this class. I think in the second semester, one of the books I assigned was the first volume of Karl Ove Knausgård's My Struggle. Do you know it? The six-volume Norwegian novel?

[01:04:29] Alex Sarlin: Yeah.

[01:04:29] Matthew Gasda: Pretty dark, but very, very accessible, at the time anyway. This was 10 years ago, so it might not be as accessible anymore, but it's very plain English in translation, and they had to read the first volume. And looking back now, the sad thing is that this seems very naive, almost innocent: the idea that 30 kids would at least attempt to read a novel. Anyway, in the first block for the class, I could tell that almost nobody had read it, because I was relying on reading passages out loud to prompt them. I was like, you have no idea what's in this book. None of you do, or not past a certain point.

There's this long passage where he gets caught lying by his father, and we spent the class talking about that passage: him lying, and the shame he feels. At the end of the class I said: hey, I know you're all lying to me, and that's okay, but I want to know why. You don't need to tell me why out loud. If you really read the book, talk to me after class, but I think almost all of you didn't. Next class, just write on a piece of paper, anonymously, why you didn't read it, and put it on my desk. I'll read them before class and then we'll talk about it. And for next week, actually try to read the book. It's a good book, and you obviously enjoyed talking about it in class today.

I really just expected them to come in and say, "It's too long," and that's it. I thought that was the only reason I'd hear for why they didn't read it. And I'd say 80 percent of the responses were: it was too personal, and it made me feel really anxious and vulnerable.

I was totally shocked. Everyone said they'd started to read the book, and only once we'd had the class discussion did they realize that they liked it and wanted to read it. And we actually ended up having a good semester; this was pretty early in the semester. Now, do I think every student read every book I assigned? Absolutely not. I don't think I read every book I was ever assigned in college, and I like to read. But the larger point is that I recognized there was a relationship there. The assignments were still to write essays, and they actually had to do more writing for my class, but there were a lot fewer structural obligations.

I allowed them to write by hand if they wanted to. I allowed them to turn in their essays however they wanted. This was not some triumph; I'm sure I made a ton of mistakes. And it was a lot more grading. I remember that being an issue: okay, they're now turning in these 15-page personal essays. They've gotten really invested, but now I have to read them. That was...

[01:06:54] Alex Sarlin: That's the tradeoff. That's the tradeoff. That's where we're at.

[01:06:58] Matthew Gasda: Yeah. To be honest, I remember that being a little overwhelming on my end. I had 60 pages of my own graduate work to turn in, and it was like, holy shit, I've given myself a huge workload.

But in my experience, and I think this might be even more true today than 10 years ago, there's so much anxiety among younger Gen Z around public speech, unscripted expression, free expression, non-internet-based expression, non-ironic expression, that, interestingly, there is a lot of alpha in the arbitrage between the standard essay and the free essay.

I also want to make a point that I didn't write in the Compact piece, but which has occurred to me in the course of monologuing now. I'm not running any kind of teaching program right now, but if I were in charge of an English department, or in charge of an edtech company, I think there's a lot more to be gained from teaching grammar on a granular level and then releasing students to deploy that grammar in long form, in a much freer and more personal way.

But the way I see it, almost universally, English is taught such that students aren't really taught grammar; instead they're taught these kinds of structural devices. And so you get really poorly written, unclear, but mathematically structured essays. I think that's a mistake that's been baked into the way English is taught, maybe because 75 years ago everyone could write a sentence without having to be taught how to write a sentence.

So to answer your question even more specifically: I would do a semester of just grammar, in a really math-like way, which I think is actually easy to teach. In my experience, young people are often great at learning grammar, if it's taught honestly and not as punishment. And then just have them write letters to each other, have them come in and write in a diary, building up strength: two minutes of diary writing, five minutes of diary writing, ungraded. This becomes even more workable as schools ban phones. I think having AI teach you grammar for half an hour and then writing by hand in a diary is a perfect synthesis of new and old. What is AI going to be extremely good at? Grammar, way better than a person can teach it, I'm sure. What is an AI not going to be good at? Writing in a journal by hand. It can't do that for you.

[01:09:30] Alex Sarlin: Or original, different types of expression that go beyond standardized structures, which is, I think, where ideally we could go.

[01:09:39] Matthew Gasda: Or writing a personal letter to your classmate, right? Et cetera. Why not have AI teach you how to structure your thoughts on a sentence level? That actually might give people a lot of confidence to actually write. And as a corollary, I think there's a lot of cognitive drag and energy lost when you can't construct a sentence. So it's really weird that we're asking people to do more than that.

[01:10:04] Alex Sarlin: AI is changing the game in all these ways, and I think it's actually starting to open up these questions: What are we truly trying to accomplish when we teach writing? What type of writing do we want students to engage with? What's the role of grammar teaching, like you're bringing up there?

One thing I'm particularly intrigued by, and I know we're almost at time here and unfortunately have to wrap up soon, there's a lot more to talk about, and I know this might be even further down the rabbit hole: you mentioned that if you're asking people to write all these really personal essays, it takes a lot more time to read, make sense of, and give feedback on them than if they were five-paragraph essays. So I wonder if, in this sort of AI arms race where students, teachers, professors, and TAs are all getting additional capabilities and productivity gains through AI, maybe there's a world in which students can create much more creative, open-ended, less structured writing, and then AI can support, not replace, but support, some of the feedback and some of the sense-making. Because AI, even though it could be taught to be very structural, could also be supportive in making sense of all sorts of different types of writing. So I just wonder if, on both sides, giving both students and educators AI capabilities might allow us to break out of the structures, you know?

[01:11:26] Matthew Gasda: Yeah. My gut feeling is that AI is more valuable right now for teachers than for students, in certain domains. But maybe... actually, I say that out loud and I'm like, oh, maybe I'm...

[01:11:37] Alex Sarlin: It's hard to know right now. Both sides are using it for all sorts of different things. Like there's an arms race happening.

Yeah. Yeah. 

[01:11:42] Matthew Gasda: Let me rephrase. I imagine it's sort of a front-loaded, back-loaded thing, where at the beginning, let's say the beginning of a year or a semester, I wouldn't want to just put my students on an AI system. Maybe I'm old-fashioned, but I do think there's a social element to learning.

What I would want is for teachers to be able to spend much more time interfacing with students because AI is doing the busywork of teaching in the background. That is huge. And even for me, the ability to organize my thoughts really quickly has been extremely helpful; it helps me tutor better and faster. But that's because I have content knowledge and I'm not an empty vessel.

[01:12:24] Alex Sarlin: This is so interesting. I highly recommend Matthew's piece, "How Educators Can Defeat AI," which was in Compact Magazine in June; we'll put the link in the show notes. We'll also put a couple of additional links he's recommended, including a TED Talk, for anybody trying to get their head around this incredibly complex topic of writing and AI.

I wish we had more time, but thank you so much for being here. This was really interesting, and it was a pleasure. It's the topic of the day: I feel like every week there's yet another discussion about how AI is going to change writing, and I think we all need to be thinking very deeply about it right now. Your piece is excellent, I really appreciate it, and I think our audience will as well. Thanks for being here with us on EdTech Insiders. Thanks so much.

For our deep dive this week on the Week in EdTech, we're here with Marc Graham. He is the founder and CEO of Spark Education AI, an application that uses AI to personalize reading experiences and engage young learners.

Marc Graham is an experienced primary teacher with over 15 years in the classroom, across the public sector in Scotland and in international schools in North America. Marc Graham, welcome to EdTech Insiders. 

[01:13:30] Marc Graham: Thank you very much. Very nice to be here, and thank you for having me. 

[01:13:34] Alex Sarlin: I'm really excited to talk to you.

So first off, tell us about what you're doing with Spark Education AI. You're an edupreneur, as we call it in this era, a teacherpreneur, as we've called it on the podcast. You're bringing your personal experience in the classroom to this really exciting approach to reading and personalized reading with AI.

What does Spark Education AI do?

[01:13:53] Marc Graham: I love that term, edupreneur. I'd yet to hear that one, but I love it. So basically, at its core, Spark Education AI is trying to solve the problem of reluctant readers. It came from my experience as a teacher finding it increasingly difficult to engage my students with reading, whether that was the typical boys who tend not to pick up books or, in some cases, the bookworm girls, if you want to use those stereotypes, who want to excel and continually find new texts. I just found I was spending hours and hours trying to plan ways to engage these students and make them enthusiastic about reading.

So what Spark Education AI does is take the students' own ideas, create either fiction or nonfiction texts based off of those ideas, and pitch each text at a teacher-assigned reading age. So the texts will be a suitably differentiated length, specific to that student, and a vocabulary difficulty specific to that student.

What that does is basically create an endless library of texts, an endless reading scheme, for each and every one of your students. On top of that, it's an answer to the age-old groan of comprehension that you often find in your lessons. I don't think I've ever taught a subject so consistently hated by my students; maybe I was just always doing it wrong, but teaching reading comprehension was something my students really didn't enjoy. With Spark Education AI, the student's comprehension is assessed based off of the stories they're creating. So not only are they engaging with a text they've created, reading at their differentiated and personalized level, but they're actually being assessed on it as well.

The way it generates those comprehension questions is matched up to an assessment known as the NGRT, a globally used summative reading assessment, so it uses the exact same eight benchmarks as the NGRT. That data is kept for the student and the teacher to view, and it really gives us a more rigorous data set to use as a formative way of assessing students day in, day out. And hopefully, through the gamification features, the avatars, the streaks, the leaderboards and so on, it inspires the students to read more and more as well. That's certainly what I've found with my classes and the other schools who've been using it so far. So, in a nutshell, that's Spark Education AI.
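
[Editor's note: Spark's actual implementation isn't public; Marc mentions later that it was built with no-code tools rather than the Python shown here. As a rough, hypothetical sketch of the workflow he describes, a student's narrow inputs plus a teacher-assigned reading age assembled into a constrained generation request, here is a minimal illustration. Every name, field, and line of prompt wording below is an assumption for illustration, not Spark's code.]

# Hypothetical sketch of the workflow Marc describes: student interests plus a
# teacher-assigned reading age become a personalized passage with comprehension
# questions. All names and prompt text are illustrative only.
from dataclasses import dataclass

@dataclass
class StudentRequest:
    uid: str            # pseudonymous ID only; no names, emails, or birthdates
    reading_age: int    # set by the teacher, not the student
    genre: str          # the four fiction inputs Marc lists:
    character: str      # genre, character, setting, and plot
    setting: str
    plot: str

def build_prompt(req: StudentRequest) -> str:
    """Assemble a constrained generation prompt from the narrow student inputs."""
    return (
        f"Write a short {req.genre} story for a reading age of {req.reading_age}.\n"
        f"Main character: {req.character}. Setting: {req.setting}. Plot: {req.plot}.\n"
        "Match sentence length and vocabulary difficulty to the reading age.\n"
        "Then write comprehension questions covering a fixed set of benchmark "
        "skills, with answers for the teacher."
    )

req = StudentRequest(uid="stu-4821", reading_age=9, genre="adventure",
                     character="a robot dog", setting="a space station",
                     plot="finding a lost astronaut")
print(build_prompt(req))  # this prompt would then be sent to an LLM API

[The point of the sketch is the shape of the pipeline: the student supplies interests, the teacher supplies the difficulty level, and the system, not the student, writes the actual prompt.]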

[01:16:41] Alex Sarlin: Oh, that's great. We talk about personalizing education through AI in lots of different ways, and I like the way you're defining it here. You're talking about leveraging AI's capability to bring together different types of inputs and then create, as you mentioned, unlimited libraries of outputs. You're taking interests from the student, you're taking standards from this international standardized exam and its framework, and you're taking inputs from the teacher about what's happening in the classroom and the reading level of the student.

In the pre-AI days, putting together all those different pieces of data and information, what the student cares about, what the standards say should be learned, what the teacher says about the student's level, just wasn't possible. Now, not only is it possible, you can do it over and over again, at any moment, all the time. I'm curious, as a 15-year educator, what was the spark for you when you said: oh, AI has all of these capabilities, and I can put them together to solve this incredibly core problem of disengagement and lack of interest in reading and reading comprehension? What was your AI conversion moment?

[01:17:46] Marc Graham: Yeah, good question. Just like you said, the reality of a teacher is that they're spinning so many different plates. Particularly in the primary or elementary sector, you're expected to put on several different hats every single day, whether that's to be a mathematician, a writer, a sports coach, a therapist. The life of a teacher is never-ending, and the job of a teacher is never-ending.

What happened for me was that I started, probably like most people, using ChatGPT in my work life and my personal life, and finding that it was making my life a lot easier, solving a lot of problems for me. I've always been someone who liked to create my own resources for my classes, so I started to think: surely there's a way I can take this brand-new, super-creative resource that is AI and mold it, in a creative way, into something that makes my life easier as a teacher and life in my classroom more exciting for my students.

And that's where Spark Education AI came from. It came from that ambition, and then from teaching myself the basics: how to use the no-code software, how to build the platform itself on a basic level. Honestly, I just had a lot of fun with it, enjoying learning all these new skills and putting my ideas into practice.

[01:19:17] Alex Sarlin: I love the way you said that: how can you make life more engaging, exciting, and interesting for the students, and more efficient, clear, and fun for educators as well? We talk to so many different people on this podcast about AI and different approaches to it, but I think that core of "this is really fun, this is cool, look what it can do" sometimes gets lost in all the discussions about capabilities and nuances and how it can adjust for this and that. This stuff is amazing, and I think a lot of students inherently realize that as well. It's really fun.

[01:19:52] Marc Graham: That's such a good point, though. I do recognize what you're saying about how we can sometimes get lost in all the other things that come along with AI, and I think they're super important, you know, the ethical concerns, the safety concerns of using AI. But at the foundation of it, it's such a great tool, and whether we like it or not, I think it's here to stay. For sure.

[01:20:15] Alex Sarlin: Yeah, I agree. And that's a great segue to my next question, because we're at this moment where all these polls keep coming out. There was a big Gallup poll just a couple of weeks ago about AI usage in the classroom and what educators are using it for, and I think there's starting to be a little more consensus around exactly what you just said. People are saying: AI seems like it's here to stay. It's not just a fad in education; it's a true change in how technology works, and we should get our heads around it.

Yet there are a lot of downsides, right? There are ethical risks, risks of losing the nature of what teaching looks like, the auto-grading, students cheating. There's a whole lot people are concerned about, and those concerns are all legitimate, I'm sure you'd agree. At the same time, as an education technology community, I feel like we shouldn't lose sight of the fact that this is one of the most amazing, exciting, interesting tools, one that can do things like personalize reading, get students more engaged, and create stories, or nonfiction, about things they care about. How do you think about balancing the legitimacy of the concerns with the excitement, both for educators and for those outside the education world?

[01:21:25] Marc Graham: Well, part of my role on the leadership team at the international school I worked in for the last three years was to look at policy and at global privacy laws, such as GDPR, which is the main one here in Scotland and across Europe, and to look at how we were ensuring we stayed ethical and legal with those policies and laws across the board. Not just with AI, but with how we were handling our data, what websites our students were accessing, what information we were giving up, et cetera.

So when I said it was super exciting creating Spark Education AI, I actually started with all the boring stuff first. I made sure everything was built with these privacy concerns at the center of it. For example, it's a student-facing app, so the students engage with the AI themselves, but on a very narrow level. If they go into the fiction section, there are just four prompts they input: genre, character, setting, and plot. If they're in nonfiction, it's just topic and subtopic. So they're only engaging within a very narrow set of parameters. And no student data is taken into the app: we don't take full names, email addresses, dates of birth, nothing like that. Students are linked up with just a unique ID. Additionally, the teachers are the only ones who actually sign up as adults with any personal information, and it's only a small amount. This ensures that anything put to the AI, even though it doesn't go into training the larger model, just the AI we've added into our application, is never connected to an actual human.

I wanted to make sure I got that spot-on right at the beginning, so that anything the students were exposed to, any way they as real-life human beings were interacting with the AI, was done completely legitimately, safely, and legally. For me, it was about ironing out all of those problems before I started on the fun stuff. Now I feel confident, as a teacher and an educational leader, having conversations with any teacher, school leader, or decision maker, because I know I've spent so much time getting this part right. Like anything within the educational sphere, child safety and child protection are the most important things that exist within a school.
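
[Editor's note: as a hypothetical illustration of the privacy-by-design pattern Marc describes, narrow, validated inputs tied to a pseudonymous ID rather than any real-world identity, here is a minimal sketch. The field names, length cap, and regex guard are illustrative assumptions, not Spark Education AI's actual code.]

# Hypothetical sketch: students may only submit a fixed, narrow set of fields,
# inputs are screened before reaching the model, and the only identifier stored
# is an opaque ID that links back to no real-world identity.
import re
import uuid

ALLOWED_FICTION_FIELDS = {"genre", "character", "setting", "plot"}
ALLOWED_NONFICTION_FIELDS = {"topic", "subtopic"}
MAX_LEN = 80  # keep inputs short parameters, not free-form chat

def sanitize_request(mode: str, fields: dict[str, str]) -> dict[str, str]:
    """Validate a student request: exact field set, short values, no PII-like text."""
    allowed = ALLOWED_FICTION_FIELDS if mode == "fiction" else ALLOWED_NONFICTION_FIELDS
    if set(fields) != allowed:
        raise ValueError(f"{mode} requests must supply exactly: {sorted(allowed)}")
    clean = {}
    for key, value in fields.items():
        value = value.strip()[:MAX_LEN]
        # crude guard against students typing emails or long numbers (e.g. a phone number)
        if re.search(r"\S+@\S+|\d{6,}", value):
            raise ValueError(f"'{key}' looks like personal data; please rephrase")
        clean[key] = value
    clean["uid"] = str(uuid.uuid4())  # pseudonymous ID, never a name or email
    return clean

print(sanitize_request("fiction", {"genre": "mystery", "character": "a fox",
                                   "setting": "a library", "plot": "a missing book"}))

[The design choice being illustrated: because the model only ever sees a handful of short, screened parameters and an opaque ID, nothing sent to it can be traced back to a real student.]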

[01:24:03] Alex Sarlin: I like that approach. You start with foundational safety, privacy, and ethics, and then, once you're confident you have that all in place, you can open it up and think about all the fun capabilities: the gamification layer, the personalization layer, the interest base. I think that's a very clever approach.

I want to ask you about one thing in your personal story that I think is really interesting and relevant to a lot of people who listen to this podcast: you mentioned getting involved with no-code tools and starting to build your own personal, and then professional, skill set with AI.

One of the things that's so exciting about AI is that it basically gives anybody the power to do incredible things and to take their expertise and use it in new ways. For educators, it's giving many the ability to turn what they do in the classroom into applications, into companies, into entrepreneurship, teacherpreneur-type activities where they create their own products, which can then be used in other schools. That's an incredibly exciting moment. It's never really been true in education before that people coming out of the classroom, who may not have a formal coding or product background, can start their own companies and build really exciting products. I'd love to hear you talk about that journey from classroom educator to entrepreneur, what you'd recommend for others anywhere along that path, and what you'd say to established edtech companies seeing all these new startups and ideas coming from educators in the classroom.

[01:25:35] Marc Graham: Okay, great question. What I'd say is that in my entire life, I went from primary school to high school to university to becoming a teacher, so I've known no other work environment apart from schools, and I know the wealth of expertise that exists within them. Teachers are world-class creators. They're literally creating every day, ten times over, with all the different lessons they've got and all the different styles of learners in their class. So teachers being creators, teachers being innovative, is definitely not a new thing; it's always existed. What's happening now is that new technologies are coming into creation that allow everybody to access the ability to create.

I've always been super enthusiastic about technology; it's always been a big part of my life. My dad was a film, TV, and multimedia lecturer, so when I was younger I went to animation classes and things that most kids my age probably didn't get exposed to. So I've always had a real passion for technology within learning. But it wasn't until I met a friend in Mexico, who started putting me onto different tools and different software where I could actually bring my ideas to life, that it clicked. One of those was a piece of no-code software called Bubble, and just through a split screen, Bubble on one side and ChatGPT on the other, and many, many trials and errors, I was able to put together the very basic version of what Spark Education AI is now. Without that very first version, I wouldn't have given myself the push to know this was going to work, because I was able to test it with the students in my school, get their feedback, and see how much they enjoyed it.

So I would say to any teacher who has an idea: I know how busy life is, I know how precious your time is, but sit down, have a conversation with the AI about what you want and what you're trying to achieve, and give it a shot, give it a bash, and see what you come out with, because it could be a complete game changer for you. Spark Education AI has really shifted my journey as a professional. I'm really excited about where it's taken me so far, and this is only after six months, so I'm really looking forward to seeing where it goes in the future.

You mentioned edtech companies as well, and what advice I'd give them, because I think they're the ones in the position I probably want to be in. From an educator's point of view, and as someone who's had the fortune of working with and mentoring many other teachers in technology: try to dumb things down a little bit from time to time. Some of the tools coming out, particularly within the AI sector of the educational sphere, are quite overwhelming with the number of different tools on one piece of software. You can click on the tools section at the top of the toolbar and it'll come up with maybe 40 different tools for all manner of different things. I think that can really put teachers off, because, like I said before, they've got so many plates spinning already, and thinking they've got to spin another 40, rather than just one or two more to learn, can be really off-putting. That's where Spark comes in: it's created with the hope that anybody can pick it up and navigate it from start to finish pretty straightforwardly, without any support.

[01:29:26] Alex Sarlin: That's a fantastic answer and great advice, both for aspiring and early-stage entrepreneurs and for people who are just thinking about getting their ideas out of the classroom and into the world.

And I'm really excited, because I think it's going to create more connections with the established edtech world, which honestly wants more connection to the classroom, wants to understand the needs and aspirations, and also the creativity, as you mentioned, of the world-class creators that teachers are, and to really work with them. Sometimes there's a gap there, so I'm excited to see these two groups come closer together; the more teacher entrepreneurs are out there, the closer those relationships get.

So: Spark Education AI is using artificial intelligence to personalize reading experiences and engage young learners. Marc Graham has been in the classroom over 15 years; he just got back from teaching in Mexico, and now he's back in Scotland doing really, really interesting work. I'm so glad to join you early in your journey and help amplify what you're doing with Spark Education. Thanks so much for being here. I wish we had more time, as always, but it was a blast. So nice to meet you, and hopefully people hearing this will think of all sorts of ways Spark can be used in their classrooms or in their product suites. Thanks for being here with us on EdTech Insiders.

Thank you very much for having me, Alex.

Thanks for listening to this episode of EdTech Insiders. If you like the podcast, remember to rate it and share it with others in the EdTech community. For those who want even more EdTech Insiders, subscribe to the free EdTech Insiders newsletter on Substack.
