
Edtech Insiders
Claude for Education: How Anthropic is Shaping AI’s Role in Learning with Drew Bent
Drew Bent leads Education as part of Anthropic's Beneficial Deployments. He also co-founded the tutoring non-profit Schoolhouse.world with Sal Khan. Prior to that, he wrote code at Khan Academy, taught high school math, and has been tutoring students for over a decade. Drew has degrees in physics & CS from MIT, and an education master's from Stanford.
💡 5 Things You’ll Learn in This Episode:
- Why Anthropic made education one of Claude’s first real-world focus areas
- Insights from Anthropic’s Education Report on how students actually use AI
- How Claude’s “learning mode” fosters Socratic dialogue and reduces “brain rot”
- The tension between LLMs as answer machines vs. pedagogically sound tutors
- What the future of AI tutoring, peer learning, and collaborative tools could look like
✨ Episode Highlights:
[00:01:13] Drew Bent’s journey from teaching and tutoring to leading education at Anthropic
[00:02:50] Why education emerged as a top use case for Claude
[00:04:20] Findings from Anthropic’s Education Report: how students really use AI
[00:06:59] Claude and Bloom’s Taxonomy—strengths, gaps, and risks of cognitive offloading
[00:09:22] Building learning modes: shifting from answer machines to Socratic dialogue
[00:12:53] How Drew’s tutoring background informs AI product design
[00:17:26] The role of AI in fostering—not replacing—human-to-human learning
[00:21:00] Students’ call for “learning mode” to avoid brain rot
[00:26:41] Sneak peek: Anthropic’s upcoming learning mode for Claude Code
😎 Stay updated with Edtech Insiders!
- Follow our podcast.
- Sign up for the Edtech Insiders newsletter.
- Follow Edtech Insiders on LinkedIn!
🎉 Presenting Sponsor/s:
Innovation in preK to gray learning is powered by exceptional people. For over 15 years, EdTech companies of all sizes and stages have trusted HireEducation to find the talent that drives impact. When specific skills and experiences are mission-critical, HireEducation is a partner that delivers. Offering permanent, fractional, and executive recruitment, HireEducation knows the go-to-market talent you need. Learn more at HireEdu.com.
Every year, K-12 districts and higher ed institutions spend over half a trillion dollars—but most sales teams miss the signals. Starbridge tracks early signs like board minutes, budget drafts, and strategic plans, then helps you turn them into personalized outreach—fast. Win the deal before it hits the RFP stage. That’s how top edtech teams stay ahead.
Tuck Advisors is the M&A firm for EdTech companies. Run by serial entrepreneurs with over 25 years of experience founding, investing in, and selling companies, Tuck believes you deserve M&A advisors who work as hard as you do.
[00:00:00] Drew Bent: The AI can analyze how the conversation went, how the tutoring interaction went, and then give personalized feedback to the tutor, often encouraging them to change their talking ratio and spend more time listening and asking questions of their learners. So that's a great example of an AI model encouraging humans to spend more time with each other.
[00:00:33] Alex Sarlin: Welcome to Edtech Insiders, the top podcast covering the education technology industry, from funding rounds to impact to AI developments, across early childhood, K-12, higher ed, and work. You'll find it all here at Edtech Insiders.
[00:00:49] Ben Kornell: Remember to subscribe to the pod. Check out our newsletter, and also our event calendar. And to go deeper, check out EdTech Insiders Plus, where you can get premium content, access to our WhatsApp channel, early access to events, and back-channel insights from Alex and Ben. Hope you enjoy today's pod.
[00:01:13] Alex Sarlin: We have a very special guest today for our interview. We are here with Drew Bent who leads education as part of Anthropic’s Beneficial Deployments team. He also co-founded the tutoring nonprofit Schoolhouse.world with Sal Khan. And prior to that he wrote code at Khan Academy, taught high school math and has been tutoring students for over a decade.
Drew has degrees in physics and computer science from MIT and an education master's from Stanford. Drew, nice to see you. Great to see you again.
[00:01:44] Drew Bent: Good to see you again, Alex!
[00:01:44] Alex Sarlin: And welcome to the podcast.
[00:01:45] Drew Bent: Yeah, I'm very excited to be here.
[00:01:47] Alex Sarlin: I'm really excited to talk to you. So first, let's talk about Claude for Education. Anthropic is obviously a frontier model company that has been doing incredible work. I personally use it every day and talk to it as a partner for everything I do. But Anthropic has put a lot of energy into education. Tell us why Anthropic has chosen to focus on education as one of its first major real-world applications, and what does that mean for your team and what you're building?
Drew Bent: Of course. I'm happy to share why we've started this Claude for Education initiative. I think, taking a step back, it's a fair question why any of the AI labs are focused on education in the first place. I can't speak to other labs, but what I can share from the Anthropic perspective is that it was late last year that we were looking at the data on how Claude was being used overall around the world.
You may not be surprised to learn that coding was the top use case. But two of the top four use cases were education related: right below coding were learning and teaching, and academic research.
[00:02:50] Drew Bent: Yep. And so I think this was really important for our team to see, because, you know, we built this general purpose technology.
But we care not just about building the technology, but about the responsible deployment of it and how it gets used. And here it's getting used all throughout education, by students and by teachers. And there are all sorts of hopes that we've all had in the ed tech field for what AI can do for education.
But we also know there are a lot of concerns. And so this is, for us, primarily an exercise in thinking about the societal impact of our technology and what we can do to be more responsible stewards of it. And I think it's very easy to just put the blame and the burden on, you know, why are students using it to cheat in these cases, or why are teachers automating parts of their jobs in these cases?
Really, we have a responsibility as the AI labs, Anthropic and all the other ones: if we have a technology that's being used by millions of people in this way, we have to first study it and then figure out what we can do to make it better for these educational purposes.
[00:03:51] Alex Sarlin: A hundred percent.
And you know, as part of that work, you've just put out, in April, Anthropic's Education Report, where you looked at how students are actually using it. And you have a really interesting taxonomy for how you put it together: direct work versus collaborative work, problem solving versus asking for output.
Tell us a little bit about the findings from that report. What was surprising or counterintuitive for you about how students are using Claude for education?
[00:04:20] Drew Bent: So this April Education Report that you're talking about was our first attempt to study how, in this case, students are using Claude, and we will have more research on this, so stay tuned.
[00:04:32] Alex Sarlin: Yeah.
[00:04:32] Drew Bent: We thought it was really important to share some empirical data and to, frankly, make this open source and share it with the public. Because there have been a lot of studies, of course, around how students and teachers are using AI: sort of small, you know, maybe RCTs and smaller case studies. But we, of course, have access to a lot of data, and because we have this privacy-preserving tool, we're able to study, you know, 500,000 conversations without looking into any individual conversation. The interesting part of this is that we have Claude itself go analyze all the conversations, summarize them, bring that up, and then humans review at the end. We were able to share this data, and there were lots of great use cases, right, of people using it to build their own projects and artifacts with Claude, practicing for oral exams in school, and all of these things.
But importantly, there were also a lot of these worrisome use cases that, you know, people have been wondering about, but we could see them at scale on our platform. And, you know, we would love all the AI labs to share similar types of data, because I don't think this is specific to us.
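For readers who want to picture the pipeline Drew describes, here is a minimal sketch of that pattern, assuming the public Anthropic Python SDK: the model reduces each conversation to a short, de-identified category, and humans only ever review the aggregate counts. The function names, prompt wording, and model id are illustrative, not Anthropic's internal tooling.

```python
# Minimal sketch of the privacy-preserving pattern described above: the model
# summarizes each conversation into a short, de-identified usage category,
# and humans review only the aggregated counts, never the raw transcripts.
# Function names, prompt, and model id are illustrative assumptions.
from collections import Counter

import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

def summarize_conversation(transcript: str) -> str:
    """Ask the model for a one-line, de-identified usage category."""
    response = client.messages.create(
        model="claude-3-5-sonnet-latest",  # example model id
        max_tokens=50,
        messages=[{
            "role": "user",
            "content": (
                "Summarize this student conversation as a short usage "
                "category (e.g. 'drafting an essay', 'debugging code'). "
                "Do not include any names or identifying details.\n\n"
                + transcript
            ),
        }],
    )
    return response.content[0].text.strip().lower()

def aggregate(transcripts: list[str]) -> Counter:
    """Humans review only these aggregate counts, not individual chats."""
    return Counter(summarize_conversation(t) for t in transcripts)
```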
[00:05:35] Alex Sarlin: No.
[00:05:36] Drew Bent: As you mentioned, we had to figure out a way to categorize this.
It's hard, if you just look at the AI conversation data, to know exactly the context in which a student's using it. Are they asking, you know, for help on a particular question for a homework problem, for an exam they're preparing for, or just some self-learning they're doing? So that's why we built that taxonomy that you mentioned. And actually, I will say you and Ben were very helpful as we reached out initially to you all to get a sense of what existing taxonomies were out there to study what student usage of these AI tools looks like.
And we really couldn't find one that encompassed what we were looking for. So that's why we built this new one in sort of a bottom-up way.
[00:06:15] Alex Sarlin: Yeah.
[00:06:16] Drew Bent: But the part that I wanna pull out is Bloom's Taxonomy, which many of the educators listening to this will have thought about in a classroom context.
And you know, I was a former teacher, a high school math teacher. And I was often thinking about this taxonomy of cognitive skills that I want my students to work their way up: from simply recalling a fact and remembering it, all the way up to being able to analyze and create their own knowledge.
Right? And so what we did is we took all of these conversations and we actually sort of flipped it: can we analyze Claude's behavior in these conversations with students to see how well it's exhibiting different parts of Bloom's taxonomy?
[00:06:59] Alex Sarlin: Hmm.
[00:06:59] Drew Bent: What we found surprised us, which is that it was an inverted pyramid, where Claude was often performing these top cognitive skills, like creating and analyzing, way more than the ones at the bottom of the pyramid.
And so from a Turing test perspective: wow, this technology's incredible. But from a cognitive offloading perspective, it started to raise concerns: if students are able to work with an AI and have the AI exhibit these top cognitive domains, then what does that mean for their own thinking?
The good news is that if a student and Claude are co-creating, they can both be at that top level of creating together, but it doesn't necessarily happen out of the box. And so that's what we have to think about in our own product development: how do we encourage that? And then one last thing I'll say on this: there was one gap in this pyramid, one thing Claude was not doing a lot of, which is evaluating.
So Claude itself was not necessarily evaluating a lot of things. But that is where the humans come in, of course. When we think about AI fluency and what it means to be a good user of AI, or collaborator with AI, in 2025, being able to evaluate the output is so, so important. So when we work with students and teachers, it's often focusing on that competency.
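A sketch of the "flipped" analysis Drew describes, again assuming the public Anthropic Python SDK: classify which Bloom's levels the assistant, not the student, exhibits in a conversation. The labels, prompt, and model id are assumptions; the report's actual rubric is not reproduced here.

```python
# Sketch of the "flipped" Bloom's analysis: classify which cognitive skills
# the ASSISTANT (not the student) exhibits in a conversation. Labels, prompt,
# and model id are illustrative.
import anthropic

BLOOM_LEVELS = ["remember", "understand", "apply", "analyze", "evaluate", "create"]

client = anthropic.Anthropic()

def classify_assistant_behavior(transcript: str) -> list[str]:
    """Return the subset of Bloom's levels the assistant exhibits."""
    response = client.messages.create(
        model="claude-3-5-sonnet-latest",  # example model id
        max_tokens=100,
        messages=[{
            "role": "user",
            "content": (
                "Which of these Bloom's taxonomy skills does the ASSISTANT "
                "(not the student) exhibit in the conversation below? "
                f"Options: {', '.join(BLOOM_LEVELS)}. "
                "Reply with a comma-separated subset only.\n\n" + transcript
            ),
        }],
    )
    labels = [label.strip() for label in response.content[0].text.lower().split(",")]
    return [label for label in labels if label in BLOOM_LEVELS]
```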
[00:08:16] Alex Sarlin: It also makes sense, because the built-in behaviors of the frontier models, including Claude, especially in general purpose mode, are not to evaluate. You know, if a student comes in and says, I'm thinking this and this, I wanna write a paper and here's my thesis, it's not designed to say, I'm not sure that's a great thesis; I'm evaluating it and I think you could do better. So it makes sense that evaluation is the weak link in the taxonomy, just because that's not how it's designed.
[00:08:43] Drew Bent: The key point as well is that LLMs were originally built to be answer machines.
[00:08:48] Alex Sarlin: Exactly.
[00:08:48] Drew Bent: To answer questions. Great tutors and teachers don't just give answers, and neither should LLMs in the educational context. And so that is the key thing we've been working on, and I'm sure we'll talk more about it, in our learning mode that we released last April. And there are a lot of these learning modes that we're starting to see.
All of this is to say: this technology wasn't built for educational purposes. It's being used so much, it has so much potential, but we need to help pull out these strong Socratic pedagogical behaviors so that more students can access that.
[00:09:22] Alex Sarlin: Exactly. So I mean, it must be such an interesting role being inside Anthropic and really focusing on education, because Anthropic is truly cutting-edge, world-class, one of the very few frontier LLM creators that can compete with absolutely anything. It's an incredible tool and set of models and all of these things, but it isn't necessarily custom-built for education. In fact, it's not built for education by nature. How do you balance the general purpose answer machine power of LLMs with safe, useful tools for learning in the classroom? It must be such a tricky push and pull, trying to put the pieces together in a way that isn't an answer machine, that is actually pedagogically sound and can actually help students think and not offload, as they say, or outsource their critical thinking. How do you balance it?
[00:10:12] Drew Bent: It's a great question, and I will say we are custom building it now for education, right? 'Cause we realize that this is an area we have to prioritize. But to your point, we are an AI lab first and foremost, and we have many technologists here. We also realize it's important to have teachers; you know, I mentioned I was in the classroom.
We have a lot of teachers here at Anthropic, particularly working on these initiatives. And so I think you're starting to see this at other AI labs as well, and I think it's great. I think when all the AI labs start to hire more former teachers, educators, people who've worked in ed tech, that's really important.
Of course, there's a lot of stuff happening at the application layer, and we have a lot of amazing ed tech organizations that build on top of our API and go much deeper into particular use cases in the classroom. The MagicSchools of the world use Claude, and a lot of them love it because of the, you know, strong focus on privacy and all of the guardrails that come with the API.
So all of that's really important. But I think it's also important to have some of these educators in the AI labs themselves, because even when you do have these specific tools for education, you will still find that people come onto Claude or these other chatbots, the general purpose tools, and ask education questions of them.
So this isn't a direct answer to your question about how to deal with that tension, other than: we deal with the tension all the time. Some people are trying to use things from a productivity perspective. Even teachers, sometimes they just, you know, want to create a syllabus for their class as a first pass that they can then revise.
And so a Socratic dialogue for a teacher who's trying to create a syllabus may not be the best format. They may just wanna create some output that will be helpful for their class. There's always this question around how do you encourage people to have more of this active learning while also meeting them where they're at.
[00:11:57] Alex Sarlin: It's a fascinating tension. Your point about where the learning layer happens, whether within the core model or within the application layer like a MagicSchool, and what happens at what level, is something continuously being balanced and measured. And EdTech is a huge part of that.
And you know, you've been a teacher, a tutor, a nonprofit founder. You worked at Khan Academy; you've been in the ed tech world. And you mentioned that a lot of educators and EdTech veterans work at Anthropic, which is amazing to hear. We've also talked to Shantanu Sinha, one of the co-founders of Khan Academy, who now is at Google doing AI. Good friends, right? You said you were a tutor for over a decade; I have that experience too, as a teacher. How do your personal experiences influence the way that you look at designing and testing the learning capabilities and features of Claude? What do you bring from those experiences that you feel adds to the conversation?
[00:12:53] Drew Bent: Yeah, I mean, I think you can relate to this through all your tutoring, and my tutoring as well. All of those identities, being a teacher, a tutor, working on an ed tech nonprofit, all of that is key to how I see the world. But the tutoring one in particular. This idealistic one-on-one tutoring interaction, which I love: if you can get the time to just work one-on-one with a student and really support them, it's just so precious.
And so often in the classroom you're trying to approximate some of that as a teacher. But one thing that you realize from tutoring, and I did this a lot in my classroom, where I would ask my middle and high schoolers to peer tutor each other, is that their initial instinct was often to go explain the concept in depth to their fellow student. They would spend 90% of the time talking and ask just some questions, and the person they were tutoring, the tutee, would spend 10% of the time talking. And when you look at the research, and you look at what amazing tutors do, it's exactly flipped, and it's pretty painful and hard to do well.
But if you have someone where the tutor's talking 20% of the time, and the student they're tutoring is talking 80% of the time, then you start to see maybe some of that Socratic dialogue. Really trying to get that ratio right, at least as a proxy, is really important. And so when we were working on Schoolhouse.world, it was a similar type of thing: how do we teach that skill to the tutors?
We have 15,000 high school tutors around the world, so we were trying to teach that. And what's interesting now, in an AI lab, is it's also: how do we get that behavior in the AI as well?
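As a toy illustration of the ratio Drew treats as a proxy for Socratic tutoring, here is a small word-count calculation over a transcript of (speaker, text) turns. The sample session and numbers are invented:

```python
# Toy calculation of the talk ratio used above as a proxy for Socratic
# tutoring: word counts per speaker. A strong session skews toward the
# student, e.g. roughly 20/80. The sample session is invented.
def talk_ratio(turns: list[tuple[str, str]]) -> dict[str, float]:
    words: dict[str, int] = {}
    for speaker, text in turns:
        words[speaker] = words.get(speaker, 0) + len(text.split())
    total = sum(words.values()) or 1  # avoid division by zero on empty input
    return {speaker: count / total for speaker, count in words.items()}

session = [
    ("tutor", "What do you notice about the slope here?"),
    ("student", "It looks steeper than the last graph, so the rate of change "
                "must be bigger. Maybe double, since it rises two units for "
                "every one unit across."),
]
print(talk_ratio(session))  # roughly {'tutor': 0.23, 'student': 0.77}
```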
[00:14:35] Alex Sarlin: Exactly.
[00:14:36] Drew Bent: 'Cause AI also will talk; all the LLMs will talk a lot. But again, with the right prompting and with the right product affordances, if it's not just solely a chatbot, if you have Claude Artifacts and all these different interactive elements where students can be co-creating, we see students even using Claude Code.
And so there it's much more of a conversation back and forth. But I think that's an element from my own one-on-one tutoring experiences that was relevant in teaching, relevant at Schoolhouse.world, and is now relevant at Anthropic.
[00:15:11] Alex Sarlin: Yeah. You make such a good point. And it's funny, I've been playing with the different study modes and learning modes in all of the frontier models, and you're right.
Something that I don't think I would've noticed until you said it, but it's really true: even in the learning modes, they're very garrulous, right? You say one thing and it gives you a whole slew: well, here's a whole bunch of ways to think about that. Which one are you interested in?
And you could do this, or this, or this, or let me explain it. And I know the learning modes are better than the off-the-shelf general purpose mode, but it's still too much. It's so funny, we always get the transcripts of all of these podcasts, and a personal goal for myself is to talk as little as possible in these. I never connected it to tutoring, but I think you're right that the best tutors do a lot of listening and as little talking as possible.
[00:15:55] Drew Bent: I like when you talk; it's a conversation. So don't be too hard on yourself if the ratio in this conversation looks different, because it's a conversation. But I hear your point.
[00:16:07] Alex Sarlin: I appreciate that. True, it is a conversation. But I think job interviews are another example: the more that the interviewer speaks, it's not always good, usually, for the candidate. It's a wacky situation. So, you mentioned Schoolhouse.world. It's built all around peer tutoring; it's built around human connection. I'm curious, as you think about the future of Anthropic and Claude. You just announced a new model this week.
There's also, and I wanna give a little pitch for it, a new Andrew Ng class about Claude Code on DeepLearning.AI, which is, by all accounts, the absolute cutting-edge, best coding tool. I'm not a coder myself, so I can't speak to it, but I've heard that from many, many people. Lots of great stuff happening. And Claude Code makes everyone a coder.
So after this, you and I can hack on something together. Let's do it. But to your point: when you look at the future of AI tutoring or AI conversations, do you think there's going to be an opportunity for AI to support more social, multi-human interactions?
None of us want a future where every student is only interacting directly with a computer. We've seen some EdTech startups that start to bring facilitation or small group support or tutoring for multiple kids at the same time. They've done it at the application layer. I'm curious if it's something you ever think about at the foundational layer.
[00:17:26] Drew Bent: I agree a hundred percent. That's the goal. The goal is to have more human-to-human interaction, and more quality human-to-human interaction. And I can speak to when I co-founded Schoolhouse.world with Sal Khan, Shahir, Mariah: for all of us, our key insight was that human-to-human interaction is at the core of education.
A lot of people were craving it on Khan Academy: like, how do I chat with Sal after he does one of the videos? You know, I wanna have this conversation. And that's hard to scale up. And so with Schoolhouse.world, it was about how we could create this entirely free, peer-to-peer volunteer tutoring system where we could connect many thousands of students to have these human-to-human interactions.
And then of course AI came out and we started to use AI, but it was often behind the scenes. For example, and we still do this now at Schoolhouse: we record the tutoring conversations, save them for 30 days, and then the AI can analyze how the conversation went, how the tutoring interaction went, and give personalized feedback to the tutor,
often encouraging them to change their talking ratio and spend more time listening and asking questions of their learners. So that's a great example of an AI model encouraging humans to spend more time with each other. And now it's the same thing at Anthropic. This was a big part of my reason for coming to Anthropic: a lot of alignment with the values here, one of them being, it's in our name, Anthropic.
The human element matters a lot here. And so when we see, for example, teachers using Claude, some of the top use cases are helping them do administrative tasks, things that would've taken them hours before, so that they can then spend more time, like in office hours, with students. You know?
Same with the students. Like we see them learning about a concept on Claude. Of course, you are interacting with an AI there, but the ideal thing is then you can come into the classroom and spend more time on hands-on projects working with other students. So this is easier said than done. I'm not trying to trivialize this because if you do build a technology and it is very powerful, people will use it a lot, but the goal is not to have everyone spend all their time in the AI.
It's really to open up and free their time to have more of these human to human interactions.
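The Schoolhouse workflow Drew describes, record a session, let the model review it, return coaching to the tutor, can be sketched in a few lines. Everything here, the prompt, the model id, the function name, is an illustrative assumption built on the public Anthropic Python SDK, not Schoolhouse.world's actual pipeline:

```python
# Illustrative sketch of the Schoolhouse-style feedback loop: a saved tutoring
# transcript goes to the model, which returns coaching feedback for the tutor.
# Prompt text and model id are assumptions, not Schoolhouse.world's pipeline.
import anthropic

client = anthropic.Anthropic()

def tutor_feedback(transcript: str) -> str:
    """Generate coaching feedback for a peer tutor from a session transcript."""
    response = client.messages.create(
        model="claude-3-5-sonnet-latest",  # example model id
        max_tokens=400,
        messages=[{
            "role": "user",
            "content": (
                "You are coaching a volunteer peer tutor. Review this "
                "session transcript and give two or three pieces of "
                "feedback, focusing on talk ratio and whether the tutor "
                "asked questions instead of lecturing.\n\n" + transcript
            ),
        }],
    )
    return response.content[0].text
```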
[00:19:51] Alex Sarlin: A hundred percent. I mean, I was brainstorming with Claude about this for a while at one point, and we were thinking about different ways that an AI could support facilitation. You know, something as simple as: when you have a small group breakout, AI could be the one that does all the icebreakers, keeps people on task, and makes sure that everybody is aware of the instructions and what their roles are, keeping things moving.
There's a lot of really interesting opportunities for AI to serve as a facilitator for multiple people and make peer learning into something that's really rich and exciting. It can almost be like a dungeon master in a learning experience, where it's guiding everybody, but they're still working together, conversing, talking.
It's a really exciting model. So, I wanna ask more about Claude's learning mode. It's been really interesting, and it was inspired by real student behavior. Obviously, as you said, you did aggregated data analysis to see how students were using Claude. Can you walk us through the process of taking all of that data you were finding, about how students were using Claude for direct output and for collaboration, and baking it into this product feature, these learning projects and this learning mode?
[00:21:00] Drew Bent: So, yeah, we released our learning mode this past spring, after we saw a lot of the data from our research at the quantitative level, studying these half a million conversations on Claude. And we saw the need to build for Claude to not just answer questions, but also to ask you questions, to have Socratic dialogue, and all the best practices that teachers and tutors know.
But to bring that into Claude, the thing is, we also did a lot of focus groups. People would often think that it was the teachers who were telling us, oh, you need the learning mode. And we did, of course, hear this from teachers; you know, they want what's best for their students' long-term learning.
And so they were worried about cheating, and learning mode was helpful there. But there was a specific moment. I can remember the conversation we had with a focus group of college students. They were the ones who, I think, first maybe even coined the term learning mode. And they were worried, and they are worried,
about brain rot; that's the way they frame it. And of course it makes sense. I think students know what's best for themselves, but there's always this tension between what's best for their long-term learning and the short term, when, you know, it's late at night and they need to finish some assignment.
And if the tools make it so easy for them to just steer in the direction of using the AI as a crutch, then we will all do that, not just students, adults as well. And so it was very promising that students also want these product affordances and product features that encourage more learning.
And so that's what we've been working on. And I'll just say, I've been very delighted to see all of the AI labs, and we weren't the first, and we are not the last either; there have been a lot over the last few months working on different types of learning modes, OpenAI and Gemini included.
For us, we call that a race to the top. We think that, especially in an area like education or beneficial deployments, having competitive pressure for all of the AI labs to be doing more in education, more research around it, more product features to support learning, and really working directly with students and teachers to build these features,
like, that's really good.
[00:23:10] Alex Sarlin: Yeah,
[00:23:10] Drew Bent: So it's maybe very different from other areas of industry. But this is a general principle at Anthropic: a race to the top, not a race to the bottom.
[00:23:19] Alex Sarlin: I love it. Yeah. And it's also something that comes out of the company's whole philosophy, but it's true. I have felt that as well from my perch,
you know, looking at these tools from the outside. It feels like a competition over who can have the best learning mode, the best AI tutor, the best conversational, constructive experience, you know, reducing brain rot and incorporating learning science. What an amazing competition to have. This is a great universe that we're in right now. It's a good timeline, where all of these amazing big tech companies, who frankly could focus on anything, are focusing on education. Let's talk quickly about the research and the learning science. I know I only have you for a few more minutes, but this is something I'm so curious about.
You know, we saw, in Google's case, they did this very specific thing where they made LearnLM, this model for education, and then folded it back into core Gemini, and then it became the orientation that fuels their learning products. And I would imagine that Anthropic may do something similar, you know, without giving any industry secrets away; I'm not asking for them.
But I'm curious how you think about getting the research in there. You've mentioned Socratic dialogue and Bloom's taxonomy, some of the things that we know about learning science from the research and that tutors know. How are you thinking about getting that into the mind frame of Anthropic, into Claude's personality, when it comes to learning mode?
[00:24:38] Drew Bent: I think the answer's all of the above. There's some stuff at the model level: making sure that these models have been trained on not just, you know, question-answer productivity use cases, but also on things that make sense for the education context. Then there's stuff at the prompting level, which you see a lot of; all of these learning modes that you've mentioned, I think, have some pretty key part at the prompting level. Again, we found that, to just throw out a number there,
maybe 5% or so of our students were using Claude in these very sophisticated ways and turning it into Socratic tutor types of interactions. But not everyone is taking the time to write a long, lengthy system prompt. That's where I think the AI labs can help a lot. And then there's stuff at the product level as well, like how to surface that.
So it's all of the above. And I think, you know, Google with LearnLM, they've been doing this for a while, is a great example for the broader field.
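A minimal sketch of that "prompting level," assuming the public Anthropic Python SDK: a Socratic system prompt wrapped around an ordinary API call. The prompt text is illustrative, not the actual wording of Claude's learning mode.

```python
# Hedged sketch of a "learning mode" at the prompting level: a Socratic
# system prompt around a standard Messages API call. The prompt is an
# illustrative assumption, not Claude's actual learning mode prompt.
import anthropic

LEARNING_MODE_PROMPT = (
    "You are a patient tutor. Never give the final answer outright. "
    "Ask one guiding question at a time, keep responses short, and have "
    "the student do most of the talking and the thinking."
)

client = anthropic.Anthropic()

def tutor_reply(student_message: str) -> str:
    """Return a Socratic reply instead of a direct answer."""
    response = client.messages.create(
        model="claude-3-5-sonnet-latest",  # example model id
        max_tokens=300,
        system=LEARNING_MODE_PROMPT,
        messages=[{"role": "user", "content": student_message}],
    )
    return response.content[0].text

print(tutor_reply("Can you solve 3x + 5 = 20 for me?"))
```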
[00:25:37] Alex Sarlin: Yeah, that's a really great distinction. It happens within multiple aspects of the experience: with the prompting, with the product, with the model. You can put all the pieces together, and I'm just excited to see it continue to evolve.
It's really exciting. Learning mode in Claude is really great, and it's so exciting that, you know, in the last few weeks we've seen several of the major model providers and labs present their version of learning mode, and now it's a race to the top.
[00:26:03] Drew Bent: I'll preview something that's actually coming shortly.
I don't know if it'll come before or after this gets released, but, you know, a lot of people, as you mentioned, use Claude Code and Claude for coding. And so we hear this a lot on the programming side, from CS students and really just everyone, junior programmers, people in the industry, saying: I don't want my programming skills to atrophy.
I don't wanna lose that. Or, if I'm learning for the first time, how do I learn when a coding agent can do 20 minutes of a task? So the little teaser here is that we will shortly be launching a learning mode for Claude Code as well.
[00:26:41] Alex Sarlin: Very exciting!
[00:26:41] Drew Bent: And so I think this is, like all these other learning modes, just a start. We need to iterate on this, and we've been getting a lot of feedback.
We need more feedback. I think this is gonna be really, really important when everyone is asking, well, what about someone who's learning to program? Should they learn to program? How can they learn to program? And so we're thinking about this at the level of all of our different product surfaces.
[00:27:02] Alex Sarlin: That's incredible.
And that's exactly how it should be. You can jump right into creating your own applications without having to know how to code yet, but then you should be learning along the way, because if you're just letting the tools do it for you, you're not learning how to do it yourself. So that's very, very exciting news. We'll see if the dates work out, but hopefully that will be out by the time this podcast is,
and we will link to it in the show notes. If not, it's something to look forward to very soon. That's really exciting. Drew Bent, this has been a pleasure. You're welcome back anytime. I'm literally a daily user of Claude; I use it for almost everything. Drew Bent leads education as part of Anthropic's Beneficial Deployments team.
He also co-founded the tutoring nonprofit Schoolhouse.world with Sal Khan. Really exciting work, and thank you so much for being here with us on EdTech Insiders.
[00:27:47] Drew Bent: Thanks so much, Alex. Big fan of the podcast. It's an honor to be here.
[00:27:57] Alex Sarlin: Thanks for listening to this episode of EdTech Insiders. If you liked the podcast, remember to rate it and share it with others in the EdTech community. For those who want even more EdTech Insiders, subscribe to the free EdTech Insiders newsletter on Substack.