
Edtech Insiders
Why AI Education Should Start at Age Six and How to Do It Right with Sam Whitaker of Studyfetch
Sam Whitaker is the Senior Director of Social Impact and Institutional Development at StudyFetch, an all-encompassing AI learning platform for students. His primary focuses are bringing advanced AI education to underserved communities around the world and developing safe and responsible K-12 AI solutions.
💡 5 Things You’ll Learn in This Episode:
- How StudyFetch is redefining AI as a learning tool, not a shortcut.
- The story behind Sparky, the AI tutor designed for real engagement.
- Why AI literacy is becoming essential for K-12 students.
- What a million student interactions reveal about how AI is used for learning.
- How StudyFetch is expanding access to AI education from North Philadelphia to Uganda.
✨ Episode Highlights:
[00:03:08] What makes StudyFetch different from generic chatbots
[00:06:47] Sam on why AI literacy should be taught like reading and math
[00:09:09] Why AI must be implemented despite hesitations: “If we wait, we lose.”
[00:12:32] Explaining the difference between using AI to learn and learning AI itself
[00:15:29] A third-grade use case for prompting skills with Sparky the tutor
[00:23:46] The tension between growth and staying true to learning principles
[00:32:55] The big reveal: findings from over a million StudyFetch conversations
[00:38:52] Why students prefer step-by-step help in math and STEM subjects
[00:42:34] On making AI a true equalizer in education, not a divider
😎 Stay updated with Edtech Insiders!
- Follow our Podcast on:
- Sign up for the Edtech Insiders newsletter.
- Follow Edtech Insiders on LinkedIn!
🎉 Presenting Sponsor/s:
This season of Edtech Insiders is brought to you by Starbridge. Every year, K-12 districts and higher ed institutions spend over half a trillion dollars—but most sales teams miss the signals. Starbridge tracks early signs like board minutes, budget drafts, and strategic plans, then helps you turn them into personalized outreach—fast. Win the deal before it hits the RFP stage. That’s how top edtech teams stay ahead.
This season of Edtech Insiders is once again brought to you by Tuck Advisors, the M&A firm for EdTech companies. Run by serial entrepreneurs with over 25 years of experience founding, investing in, and selling companies, Tuck believes you deserve M&A advisors who work as hard as you do.
[00:00:00] Sam Whitaker: I really hate the term move fast and break stuff. I think it's overused and I don't really like it. However, in this case, we kind of have to. We have to do the best we can and get the best product out with the most guardrails and with the best protections we can put in, and then figure it out along the way, see how it works, but be nimble enough to make changes and updates as we go, because again, if we wait, we lose.
[00:00:28] Alex Sarlin: Welcome to EdTech Insiders, the top podcast covering the education technology industry from funding rounds to impact to AI developments across early childhood K 12 higher ed and work. You'll find it all here at
[00:00:42] Ben Kornell: EdTech Insiders. Remember to subscribe to the pod, check out our newsletter, and also our event calendar.
And to go deeper, check out EdTech Insiders Plus where you can get premium content access to our WhatsApp channel, early access to events and back channel insights from Alex and Ben. Hope you enjoyed today's pod.
[00:01:08] Alex Sarlin: We are here with Sam Whitaker. He's the Senior Director of Social Impact and Institutional Development at StudyFetch, an all-encompassing AI learning platform for students. Sam's primary focuses are bringing advanced AI education to underserved communities around the world, as well as developing safe,
responsible K-12 AI solutions. Welcome to the podcast, Sam Whitaker.
[00:01:32] Sam Whitaker: Thank you so much for having me. That's a mouthful of a job title and it's really hard to fit on a business card, but it kind of encompasses everything I do.
[00:01:40] Alex Sarlin: Yeah, I have had some job titles with like 10 words in them as well. It's a thing. I'm really excited to talk to you today.
We are recording this at the beginning of May, and you just came out today with a major usage study covering over a million conversations of how people are using StudyFetch, and there's all sorts of interesting results there. But before we get into that, I really do want to first, for anybody who is listening who has not yet heard of StudyFetch, who doesn't use it or hasn't encountered it, it's tech for students.
Tell us a little bit about what it is, how it's grown, and what it offers for students in terms of learning opportunities.
[00:02:17] Sam Whitaker: Sure. Thank you. And thank you for having me, and thank everybody who's listening. And also thank you for the heads-up on the study. In the business, that's what we call a teaser, so make sure you stick around through the whole podcast so that you can hear all the good details.
So Study Fetch, as you said, is an all-encompassing AI learning platform built for students. So we have a full suite of teacher tools as well, but we're built for students. We believe wholeheartedly that AI is transformational, can be transformational for learning all around the world. We've built our system to cater to students.
We build our system to get the best of AI from students and be a true learning platform, not a shortcut platform, not a cheating platform. Maintaining academic integrity, maintaining critical thinking skills. We build our system that way, and if you stick around to the end of the podcast, you'll find the stats to back it up.
[00:03:08] Alex Sarlin: When you talk about all encompassing some of the things that Study Fetch offers, it offers AI tutoring. It offers the ability to create podcasts out of your study materials, to create visuals outta your study materials, to get personalized feedback on your essays, to create quizzes, to create flashcards, to do explainer videos, which is a feature that I know people have sent to me and said, this is really cool.
These explainer videos. It can create a study calendar. It can turn your materials into all these different formats. You can even have a call with your AI tutor, which is pretty wild. You can record live lectures. So you have a lot of different types of ways to use AI for learning.
[00:03:48] Sam Whitaker: We do, and that was intentional. And thank you for the rundown. I mean, I feel like I could probably take the rest of the half hour or whatever off. You could just run with that. You're doing a great job. It was intentional to have a lot of different options because every student learns differently. Every student consumes knowledge differently.
Every student retains knowledge differently. We wanna offer as many options as we possibly can with AI.
[00:04:11] Alex Sarlin: Yeah. When I look at this StudyFetch tool and website and sort of your approach, it feels like you are embracing some of the capabilities of AI in a way that is really exciting. I mean, you're already in many, many different languages.
You're embracing the omni-modal nature of AI and the ability to make sense of large amounts of complex information, like notes, and turn it into any format that a student might want, as well as the personality-driven side. You have this concept of Mr. Sparky, this sort of dog tutor. I'd love to hear more about that.
He's cited as Mr. Sparky as a co-writer of your research, but he's called Sparky. I feel like StudyFetch is an almost paradigmatic example of, if you said, what could AI do for education? It's all of these things. Everything we just mentioned are some of the things that AI can do, and you're sort of grabbing all of them and giving them all in a really nice UI to students,
[00:05:02] Sam Whitaker: and that's the idea.
That was the intent from day one. There are so many ways that AI is going to change our lives and so many ways it's going to impact the workforce in all these different fields. I believe healthcare and education are the two areas where AI can make the most impact. So we chose education, the idea being that you can take the best parts of AI and package them together in a learning platform and get the best results. Whereas a generic chatbot, the ChatGPTs of the world, they're built for pure productivity.
They're built for give me the answer, tell me how to do it.
Our AI tutor Sparky, he'll thank you very much for being called Mr. Sparky, by the way. He's hoping to get to Dr. Sparky one day, but he's not quite there. But yeah, the idea is that Sparky's there as a learning companion, as a tutor, as a way to get students to learn and encourage students to learn as opposed to giving them the answer.
And that's the silver bullet: being able to replicate it, being able to produce it at scale, and making it available so that everyone can have the best learning experience available.
[00:06:12] Alex Sarlin: Yeah. So let's talk about the everyone piece of this. You've said that AI education is becoming as essential as some of the core school subjects like literacy and numeracy. I agree with that. I think it is changing our world so quickly, and we've seen other countries, notably China, mandate early AI education for learners as young as age six. What do you see as the future of AI literacy for K-12 students? And when you say it feels like something everybody should and must know, what does that look like in your mind? Where do you think this should go in terms of universal AI education?
[00:06:47] Sam Whitaker: Well, I'll start with where we are, and the problem there is so many students have zero AI literacy training. It just doesn't exist, and I get it. There's a lot of fear and a lot of misconceptions about AI in schools, especially when you're in the K through 12 space. But the problem is it's not optional.
AI is not optional. It's here. We can't put the toothpaste back in the tube.
I was talking to someone at an engineering school, and he was saying that he was talking with the admissions people, and one of the things they're looking for is engineering students who can appropriately use AI in their engineering studies to generate better results, faster results, all of those things. And yet a lot of schools, especially on the K through 12 side, kind of ban AI.
That's kind of where we are. Students are getting to college and they're getting into the workforce, and they're just expected to know how to use AI, but we're not teaching them how to do it, and it's a problem.
[00:08:01] Alex Sarlin: Yeah. I am personally pretty bullish on AI education. I tend to look back at the coding movement, the computer science movement in schools, which started in the mid-eighties.
It's still going in 2024, 40 years later; we're still sort of trying to get it into schools. There have been some big victories: colleges cannot hire enough computer science professors, and it continues to be the number one major, or up there. But at the same time, I think the biggest learning we've all taken from the computer science revolution is that the K-12 system has still barely moved towards it over all these years. I think people are realizing we can't let that happen again, and we are starting to see, I mean, just this week we saw executive orders about AI education being mandated. But even outside of that, I think even at the ground level, I've been encouraged at how much people seem to realize that turning their back on AI, which was a lot of people's immediate instinct when it came out,
is just not the right move. I feel like very few educators and superintendents still are taking that sort of AI-is-not-for-us stance. Do you experience that as well, or do you still feel like there's a lot of pushback?
[00:09:09] Sam Whitaker: I think so. I think everybody kind of realizes that it's not optional. However, there is a lot of reticence on the student side, so how do we introduce it to students?
And I get that there have to be guardrails, it has to be done responsibly. But when we talk about how we're going to implement ai, I do a lot of conferences and I talk to a lot of students. I go to schools and a lot of times students will ask me about, you know, entrepreneurship and getting in and what should they do.
I say the same thing every time: show up. Just show up. Go to conferences. If you like a company, just find somebody and send them an email and ask for an internship. You'd be amazed what happens if you just show up and start, like, put yourself out there. And we kind of need to do the same thing on a macro level with AI in schools.
We need to show up. So the executive order is out there, but it's kind of on us. We have to implement it. Industry has to implement it, schools have to implement it. We have to find ways to work together. I was reading an article the other day, it was actually on the China implementation, and one of the lines from it, I don't remember the source, so I'm sorry, I won't be able to quote it, but it was: if we wait, we lose.
And it's just very to the point, and we need to rethink so many things about the way we operate, because AI is moving so fast. I was talking to a professor about a study on AI implementation in the schools. It's a one-year study spread out over three phases, and we're doing a small pilot in the first four months. I'm trying to find a tactful way to tell him that in one year, when you have results, they will be null and void, because the technology will have changed so much. I really hate the term move fast and break stuff. I think it's overused and I don't really like it. However, in this case, we kind of have to. We have to do the best we can and get the best product out with the most guardrails and with the best protections we can put in,
figure it out along the way, see how it works, but be nimble enough to make changes and updates as we go, because again, if we wait, we lose.
[00:11:12] Alex Sarlin: Yeah. I think that makes a lot of sense. And let's talk a little bit about, you focus on underserved communities, institutional partnerships, which I think is really amazing.
And sort of expanding that concept of AI access to many students who might not have a parent at home who is using AI; they might not be reading the New York Times and getting ideas about what ChatGPT is. There are really two elements of AI education that I think we're talking about, both at the same time, but I'd love to make sure we're separating them out.
There's AI literacy, meaning understanding how AI works, that it's out there, what to do with it, what it can and can't do. To your point about, you know, college access, how might you use AI to enhance your applications to things, to enhance your skills and abilities so that you can go further in your career?
There's sort of AI as a concept, this learning how to use AI, and then there's using AI to become a more effective learner, like many of the tools that StudyFetch offers: being able to take notes from a class or a YouTube video and turn them into anything you'd like, a whole study guide with practice questions, with flashcards, with explainers, with all these things.
I know these are overlapping, but I feel like we often, when we say AI and education, we mean both. Can you talk about both of these things a little bit? Especially because in your role at Study Fetch, you think about both of them. People learning what AI is and how to use it, but also using AI to learn.
[00:12:32] Sam Whitaker: So going back to another analogy, because that's how I communicate. It's analogies or movie quotes; that's all I got. So I'm trying to stick to the analogies to be a little bit professional here. I like to use the analogy of driving. When teenagers start driving, they get a learner's permit. They go to classes, they get their permit, they have to drive with their parents for a while. Then, depending on the state, they can't drive at night for a while.
And then they're free to drive, because a car is a big and dangerous and powerful thing. AI is a big and dangerous and powerful thing, but unfortunately we're not going through the learner's permit phase in a lot of cases, especially when you're just giving kids unrestrained access to a ChatGPT or a Gemini or something like that.
So the way we've built StudyFetch is to be kind of that learner's permit, pun intended, where students have an opportunity to learn in a safe environment, where it's based on your uploaded course materials and everything's encased. The screenshots I always show are ChatGPT next to StudyFetch with the same prompt:
Write me an essay on viruses. That sounds like I'm in eighth grade. Sure, I'll write it for you. Here's your 500-word essay. Whereas StudyFetch will say, I can't write it for you, but I can give you some ideas and some help on how to do it, and here's how we'll write an outline, and here's how we'll do our opening paragraph and our closing and our middle, whatever it might be.
That's the difference, and it's because we're purpose-built, and the ChatGPTs of the world are meant to be broad. They're meant to cater to everyone, but we need solutions that are focused specifically on learning. When you talk about where that starts, that's the hardest thing for us. We can do multivariable calculus for post-grad math students so much more easily than we can figure out how to work with a second grader or a third grader.
So one of the examples that we're building out right now: imagine a third grader taking a geometry class, and they just learned what an octagon is. The teacher will be able to tell StudyFetch, tell Sparky, to forget what an octagon is, and then have the student teach Sparky how to draw an octagon. So think about the things that they're learning there.
Number one, they're demonstrating their knowledge of an octagon. Number two, it's a little bit of teach-the-teacher, which is a very effective teaching method. And number three, without even knowing it, they're learning those prompting skills that are key to AI, those skills that are gonna allow them to tell the AI to do what they need it to do.
And when they're doing that in third grade, by the time they get to ninth grade, it's second nature. I personally, I'm as in the AI world as you can possibly be, and I still catch myself sometimes when I'm writing, trying to come up with a word or trying to come up with a phrase. I'm like, oh my God, why don't I just ask AI and try to get some inspiration?
But for the next generation, it's not gonna be that. If we do it correctly, it's gonna be integrated with everything they do, and it's gonna be that learning companion, and eventually that companion for productivity, that they need.
[00:15:29] Alex Sarlin: I love that learner's permit analogy and pun. That would be a great name for a company, Learner's Permit.
You're right. And I think a lot of the tension around, especially at the very beginning when ChatGPT and Gemini and Claude and all of them sort of suddenly appeared in our lives, everybody sort of realized, oh wait, this thing can answer anything, and it's sycophantic. Back then, right, especially sycophantic.
So it'll do whatever you ask it to do. It'll do it as well as you want it to. It's not always right, it sometimes hallucinates, sometimes it replicates biases, all of these things. But even outside of that, it'll just do whatever you want. It's very powerful, to your car analogy. And it does feel like, as an education community, we are, over the last couple of years, trying to get our heads around:
okay, if you have this suddenly incredibly powerful technology, what does it actually mean to provide it to students in a way that is ethical, is safe, is smart, and actually provides learning opportunities? That sort of teachable agent model you just mentioned is a really interesting approach. Could the student become the teacher and teach the bot?
And it's an incredibly exciting problem space to be in, I think. But it also raises a lot of hackles, and we've seen a lot of phone bans recently. I think we were just talking about how even this week there have been some major articles about how college students are using AI to cheat their way through college, or how OpenAI has these new models but they still hallucinate in these strange ways. And it still feels like,
even now, as we're trying to build that sort of learner's permit layer that you're mentioning, there is still this sort of deep fear that this thing could just take over our lives, and somebody could get all the way through high school and college putting every assignment into GPT and never learning anything.
And you mentioned, you know, we talked to Kristin from Khan Academy early on when they were creating Khanmigo, and they said the number one thing that was hardest was getting it to not give answers. But you mentioned that you don't give answers, that StudyFetch, set up against a standard general LLM, does not give answers.
It sort of helps people work through the learning experience. First, how did you do that? Because it's not the most obvious thing to know how to do. But secondly, how do you think we can get over this moment in time where people just say, oh, this thing is so powerful, it's just gonna outsource thinking?
They don't understand that learner's permit analogy that you mentioned. How can we get past that?
[00:17:45] Sam Whitaker: Well, I mean, we have to prove it. That's the bottom line. People don't believe platitudes, and they shouldn't believe what a company tells them. So that's why we're so excited about the statistics.
This is teaser number two, because we're still not gonna go into it just yet, for those of you who maybe forgot about it or weren't thinking about sticking around. It's coming. So we have the statistics now, and we're, to my knowledge, the first to have this kind of data.
For now, we have the statistics that prove that StudyFetch is a learning platform. And it's also about more than that. We focus on the worst-case scenario, and I think a lot of that has to do with society; there's never been a movie, to my knowledge, where the AI ended up being the good guy. It usually goes bad. But when we're talking about kids, especially in our education system, we try to fit every kid into this little box that's in the middle. Every kid's on what we call the neurodivergency spectrum, or whatever it is; every kid's on that spectrum somewhere, and the ones that don't fit into this box right in the middle, they feel like they're stupid.
They feel like they just get broken down, and some of them, by the time they're in third or fourth grade, have just given up. They think they're dumb, but they're not. Every student has something to offer. It's just that teachers don't necessarily have the time. In almost all cases, teachers don't have the time to give that individual attention to every student, to find that one thing that can cause them to go in this direction as opposed to that direction.
So we're using our hyper-personalization to identify key future skills that children have, what they're good at, what they enjoy, and highlight that throughout their learning journey, make it a part of their curriculum, and just keep that sense of wonder and that curiosity, and build and maintain confidence throughout their learning journey.
That's the silver bullet again. Education is the silver bullet for so many of society's ills, and there are so many wonderful, impactful things that we can do.
With this technology that's evolving so quickly, we have such a unique opportunity to harness it and to use it for the best it possibly can be. And we have such a recent example of it going badly, i.e. social media, and it went very badly. As a species, I think we've been pretty bad at inception points throughout history, like the Industrial Revolution.
I don't wanna go too far. I was gonna say something along the lines of, we ruined our children's brains with social media, but maybe something a little short of that. We have that recent example of social media, and we can't get it wrong with AI. We just can't. It's too big, it's too powerful. If we wanna go back to the car analogy, right now we're behind the wheel and we have the control to get the best out of it, but we have to work together and we have to be thoughtful and we have to be responsible, and we have to be intentional with everything that we do.
And those are the things that we talk about at StudyFetch every single day. We could build the best cheating tool in the world and make a ton of money tomorrow. We absolutely could. There's a reason we've been so successful: we're consistently coming out with the latest features and the latest innovation.
So if we wanted to build the best cheating tool in the world, we could. We're building a learning tool because it's about more than just the company. This is about society, and I'm not overstating it when I say that.
[00:21:19] Alex Sarlin: Let's dig into that tension. 'cause I have felt that a lot in my journeys through AI and education.
This feeling that some of the fastest-growing, especially student-facing, applications in AI are, you can call them homework helpers, you could call them whatever you'd like, consider it a euphemism in certain places. But basically they allow students to literally take a picture of their homework and get the answer,
in any way they want: get full essays, get complex proofs, get math answers. There are some very popular ones, and, to tie the social media piece together, one is actually created by ByteDance in China. It has been consistently in sort of the top five homework helpers. So you have this natural instinct, I think, for a lot of
Commercial companies, especially those that are marketing directly to learners and students to say, what do students want? They want answers. What do students want? They want the degree, they want the diploma. They want the A. Right? That's what they actually want. They see the education as a means to an end, or at least that's how the companies project the needs of the students.
So they say, well, why would we stand in their way? Why tell them, I'm not gonna give you an answer, I want you to learn this? And you are obviously taking a very different tack, but it's not actually super apparent from the outside. I mean, when you look at all these homework help apps, it's not that StudyFetch stands out and looks completely, utterly different than everything else.
It is different, and I know you're working really hard to make it different, but it's not obvious. I mean, Khanmigo, because they're a nonprofit, they've sort of branded themselves from day one as, oh, we're not gonna give you the answer, we're the opposite of the cheating. But you're sort of in this funny liminal space where you're a new company.
Many of the features you have, as you mentioned, are the same features that exist in other places. They can turn notes into flashcards; they can do all these different things for you. But you're going out of your way, clearly very far out of your way, to take a hard-line pedagogical stance: hey, we do not want to be a cheating tool.
We are turning down certain levels of growth to not be a cheating tool. We wanna be the learning layer, we wanna be the learning tool. That is a complicated space to be in, and I'm sure you have spent a lot of time explaining this to various stakeholders, whether that's your product managers within the company or students who are requesting a photo feature.
Tell me about how you're positioning here, because it's a really weird moment in time for this, where you have this incredibly powerful tech that can do anything. How are you as a company maintaining focus on this learning paradigm?
[00:23:46] Sam Whitaker: Okay, so first of all, I wanna coin a new term, because it just popped into my head when you were talking:
GPTs get degrees. So there, that's copyrighted right there. So honestly, it's tough, and it's not tough from the eschewing-profit side, because we made that decision long ago and nobody ever looked back. It just doesn't come up. But it's hard from a technical perspective, because AI is so powerful.
And so we take from multiple models; we're not dependent on any of them. We use OpenAI, we use Claude, we use Gemini. We use everybody. We're gonna use whoever's best at any given time, whoever gives us the best results and, you know, the most efficient results. But we essentially have to create a filter level in between, on top of the results that come back from them, to
create something that's learning-focused, and it's difficult. It makes it a lot harder. It takes a whole lot more time. But again, as a company, we're focused on learning.
And we've built a really good product that a lot of people really like. We have over 4 million users now, but the impact is too important to let go, and it's too important not to have somebody holding that standard and kind of charging forward, making sure there's an ideal that everyone's gonna have to follow.
Because, to go back to kind of what you were talking about initially, education's gonna have to evolve, especially classroom education and assignments, things like that. I mean, we're gonna have to evolve to a level of testing understanding as opposed to testing facts, and how that's gonna happen, I'm not really sure. But at that point, students are gonna have to have a real
understanding in order to get their grade, which, isn't that the goal of everything? The goal isn't to know when Napoleon died when you get out of college; the goal is to understand that, you know, this is what happened at this point in history, or this is why this theorem is applicable in whatever you're doing.
And AI can make that happen. If AI can help with the learning side,
studying and learning in classrooms can actually be about retention and understanding. That's what we're building for. And the thing is, those cheating tools, those homework-helper tools, they're not gonna last, because the education system is slow to evolve and has been for a hundred years.
However, it's not gonna have a choice but to pivot pretty quickly here because of AI. So yeah, those homework-helper tools will make some money in the short term.
[00:26:32] Alex Sarlin: Yeah, I love it. I hope I wasn't painting too harsh a straw man there with this idea of these cheating homework-help tools. It's a funny moment. It's unusual, I think, for technology to be so flexible, let's put it that way. And that's what's so interesting about the LLMs. They're so flexible.
They'll do whatever you tell them to do, with some internal constraints, but not as many as you might think. And, you know, people like StudyFetch have to say, wait a second. Learning is a struggle. Learning needs time. Learning needs attention. It can be a fun struggle, it could be a wonderful struggle, but it shouldn't feel like, oh, my teacher gave me an assignment,
I typed it into ChatGPT, it gave me an answer, and I gave that back to the teacher. Oh, I learned something. Because that's not what learning is, and we all know that. So at StudyFetch you have this Sparky, a sort of dog character that is your AI tutor, you could call it. I'd love to understand what that means, but that's really interesting.
What do you see as the role of a sort of personality? You know, you mentioned earlier that there's this beautiful ideal: younger kids like school, they like learning, and we know from the statistics that people get sort of more and more disillusioned, more and more anxiety, more and more stress as they get older; they feel school is less and less relevant and they lose their curiosity, the word you mentioned earlier.
And if we truly want to create a situation where school does not diminish in engagement and curiosity, which is a really beautiful vision, what would that look like? And do you think there's a role for sort of AI personalities and companions like Sparky?
[00:28:05] Sam Whitaker: I do, though I want to stop short of where some people take it. Speaking of being topical:
Mark Zuckerberg did an interview yesterday where he was talking about how, in the future, most of your friends are gonna be AI. No. However, as a companion, as an editor for writing, yes, and I do think there's a personality side to it. So when you're talking about Sparky: Sparky is now, well, as live as AI can be, a real-time AI tutor. Students can upload their materials, our AI tutor will create a lesson plan and literally talk to them in real time. Wow. The student can interrupt, stop them, say slow down, speed up, ask me a quiz question about this, can you explain that one a little bit more?
All happening in real time, which is pretty incredible.
But when you're talking about the personality side, it's interesting, because we talk about that a lot as well. So I'm based in Philly, World Champions, and there's a charity that I've worked with for a while up here called Mission Kids. I won't dive too deeply into it, because it's very sad, but their primary role is as an intake facility for child victims of abuse.
They also do a lot of education, education for talking to younger children about what's appropriate, what's inappropriate, when you should report something, who you should report it to, things like that. And pretty much every AI solution out there right now will shut the conversation down if a child starts to talk about something like that with the AI. And I personally believe children will start to talk to AI like they do an imaginary friend or a stuffed animal.
And we don't want our AI to say, I'm sorry, that's inappropriate, I shouldn't be talking about it. If a child's asking for help, we wanna use the best resources and the best information from the subject matter experts to make sure that the AI helps the child get to the right person to get that help.
We're working with former law enforcement officers who have a company that focuses on active shooter prevention.
[00:30:05] Alex Sarlin: Wow.
[00:30:05] Sam Whitaker: So we're getting their best practices and saying, how can we incorporate that into our AI? So if a student is talking to the AI and saying, hey, I saw this, is this weird or something, then the AI can respond. Again, it keeps coming around to the same idea.
It's guardrails, but it's also off-ramps. How does the AI direct the student appropriately to where they need to go in real life, the administrator they need to talk to, or the therapist, or whatever it might be? Say, hey, this is probably the person you should talk to about some pretty awful things. I completely forgot what your question was, but I hope I answered it a little bit.
Yeah, I think it's just a matter of the personality thing. So AI lives as a supplement to our lives, not a replacement for living in the real world.
[00:30:57] Alex Sarlin: The idea of just the quote, you know, most of our friends will be AI, is a very dystopian way to look at it. I mean, you could imagine that 25 years ago you could say, in 25 years, most of your work will be done in front of a computer, in email, and you'd be like, oh God.
But that's exactly what it is, or in Slack or things like that. I mean, some of these things evolve to happen; the frog gets boiled bit by bit. But I think what you're saying is really, really important, which is that AI can do amazing things, and it can simulate personalities and simulate humans.
That's one of its core capabilities, but that doesn't mean we should purposely create a world in which our social lives are parasocial, right, to use the psychological term, where you're interacting with non-player characters, non-human actors, more of the time than anything else. I think we could be a lot more creative in thinking about what are the things you want to work on with humans, because humans are human, and there's a lot of good about humans, and what might be the things you wanna work on with an AI, because it's endlessly patient, it's always available. There are obviously benefits to AI companions, and I think this idea of real-time tutoring from your AI tutors touches on exactly some of those benefits of AI.
It's always available. It will be as patient as you want. It'll explain things as many times as you want. It'll give you a thousand practice questions and it's hard to get a teacher to do that for you. It doesn't mean that you wanna spend all your time with an AI and never talk to a peer or a teacher.
So, I mean, we're all wrestling with this. It's a really interesting set of questions. I think this is a great moment, while we have a few minutes left: you just put out, today, insights from over a million conversations. You mentioned having 4 million users, and you have mentioned, you know, Sam, that you're really dedicated to not giving the answers, that you built the technology so it's not about answers, it's really about learning. Now, tell us about some of the findings from this paper.
[00:32:55] Sam Whitaker: I absolutely will. And thank you, everyone who stuck around, all four of you. Hi, Mom. So I did wanna say one thing on the last point, when you were talking about avatars and who you're talking to.
Yeah, it brings up a great point about tutors, and one of the common misconceptions about AI, which I just wanna make sure I touch on. People are worried AI will replace teachers. I say this in social media posts, I say it every chance I get: AI will never replace a teacher. A teacher's compassion and experience and insight, there's just no replacement for that. AI will never be a replacement, but it can be a help to overworked and burnt-out teachers. So I just wanted to slide that in. So yes, we studied 1 million conversations, student interactions with our AI, over the course of a couple months, and we stripped everything out. We obviously completely anonymized everything, stripped out all the content.
We narrowed it down to what the actual user intent was. Now, this is really meant as a comparison study to kind of generic chatbots, you know, the ChatGPTs of the world, where, I mean, studies have shown that 50%, maybe even more, of student interactions are: what's the answer to this question? Can you write this essay for me?
Cheating. Just cheating. It's a total lack of academic integrity, because it's there. So we decided to figure out what students were actually using StudyFetch for, and the results were actually better than we had hoped for. Of the 1 million conversations, 40% were concept explanations.
Things like, I don't understand this, can you explain it to me? 22% were content summary: okay, here's this big document, can you summarize it for me, and, you know, interact with it from there. 10% were step-by-step instructions, and a lot of that was math and computer science students saying, hey, I need to get from this point to this point.
Can you explain to me how to do that? And only 2.6% were students saying, do this essay for me. And not only that: the 2.6%, the students who did try to use StudyFetch that way, realized they couldn't, and that behavior decreased by 80% over time. So students realized that they couldn't do it, so they started asking, and then eventually they started learning, and it was really a validation for us.
That has been our hypothesis from day one: offer students another option. We believe in equality of opportunity, equality of opportunity around the world, and there are so many areas where there is not equality of opportunity. Parents can't afford a tutor, parents can't afford, you know, extra credit or AP review or SAT prep, things like that.
AI can bring equality of opportunity to everyone, the best possible learning experience for everyone who wants it. Students still gotta work, you gotta put the work in. The ChatGPTs and the, what are we calling them, homework helper tools of the world will always be there, so students will have an easy way out.
They'll have a shortcut. But the students who wanna learn, the students who want to learn but don't have the resources, those are the ones I'm targeting. Whether it's in North Philadelphia, where we have a pilot program going, or in Uganda, where we have a program going at a girls' school.
We're 90% of the way there: as long as I can figure out my transport, I'm gonna be there next month to introduce StudyFetch to this girls' school, which is eight hours away from the airport in Uganda. I'm not looking forward to that drive, but it's gonna be totally worth it. Everyone in the world now has access to the best learning experience in the world.
There are no more boundaries, no more borders for learning. There don't have to be, and, I've said it however many times today, it's the silver bullet. This is where we can make so much of a difference right now, and we'll see the exponential difference in generations to come as we build and build on it.
And I couldn't be more excited or more passionate about it. And yeah, so that was it. But visit the site and read the full report.
Right, and especially with it being used by students, we wanna partner with researchers, make sure that we're getting the best comparisons, we're getting peer reviews, we're making sure that we're doing it right. We don't want to pretend at any point.
[00:37:27] Alex Sarlin: Lemme ask a couple of super quick follow-up questions about this report, 'cause it is really interesting, and then I wanna ask one final bit about Uganda and this equalizing, you know, opportunity of access to AI. One of the things that's really interesting about this report, and there's a lot of interesting things in here, is that, as you said, the step-by-step guidance, the idea of, hey, help me through this problem, is very high in mathematics, because you have it broken down by discipline.
It's high in mathematics, in chemistry, in engineering, exactly the kind of, you know, hardcore science and STEM disciplines. You also see it a little bit in business and computer science, of course. I just think that's really interesting: in these scientific fields, where problem sets really do encompass, you know, multi-step solutions, you have students walking through that step-by-step guidance, asking for it bit by bit instead of jumping right to the answer. That makes a lot of sense, and I guess in retrospect it doesn't seem surprising, but it is surprising to me how different that is from other disciplines. You know, even in something like biology, step-by-step is only 4%, compared to 30% in chemistry.
So that's really interesting. What did you make of that step-by-step piece? 'Cause that feels like a really good halfway point, a liminal point: you're still getting really direct help on a problem you're grappling with, but you're not getting right to the answer.
You're getting walked through it, actually getting sort of tutoring on how it works. What did you make of that?
[00:38:52] Sam Whitaker: So there are several points there. Number one, teachers, professors, whomever, don't have time to go to every student and give them a step-by-step explanation. It's: here's the book, I'm gonna do my lecture, I'll answer some questions.
There just isn't enough time. And as I said before, every student learns a little bit differently, every student learns at a different pace. So this makes it possible for the students who maybe aren't getting it right away, or who just need that little extra piece. For so many students, it's that one thing, that aha moment, that light-bulb moment that says, oh my gosh, now I get it.
And making that available to students, I mean, that's the golden ticket. That's fantastic. We did another study, and one of the students said: with StudyFetch, I can ask a stupid question without feeling stupid.
[00:39:43] Alex Sarlin: Right?
[00:39:44] Sam Whitaker: And how many students out there just don't raise their hand, or don't even go to the professor, because they're afraid, like, this sounds dumb? And they never get past that one little thing, that one spot that would've gotten them to understanding, and, you know, they just would've taken off from there.
But they don't do it. Nobody's afraid to ask Sparky. He's the nicest little guy in the world, infinitely patient, and he's, you know, always there.
[00:40:11] Alex Sarlin: That idea of being able to be vulnerable, be in a safe space, and get support is one of the most important pieces of AI. I love your stat about the shortcut behaviors.
You have this concept in the study of learning intent and shortcut behaviors, shortcut behaviors being without learning intent, so you're just trying to get the answer without trying to actually learn the material. And even the students who do start by doing that, because they may be used to homework help tools or, you know, other types of tools: when StudyFetch comes back and says, well, that's not what we're here for, I'm not gonna give you the answer because then you wouldn't learn anything, whatever version of that, you have people then get with the program and say, okay. I'm not gonna just go download a different tool and ask it for the answer.
I'm going to actually still engage with this, and I'm gonna engage with it in a much more pedagogically sound way: stop asking for the answer and instead ask for support on understanding concepts or getting step-by-step walkthroughs. That felt like a huge finding.
[00:41:03] Sam Whitaker: A huge finding. Again, it's proving our hypothesis, and it's proving, you know, why students are seeing such better results with StudyFetch. In one study, 80% of students saw their grades rise.
They saw a 30% reduction in study time, even though their grades were going up. And these are all wonderful things, but now we're seeing the proof in the pudding. We're seeing that students realize, I'm not just gonna get the answer. Additionally, if you read the full report, again at Study Research, you'll see that students were learning how to learn with AI; that's the best way I can put it. And that's just an invaluable tool, an absolutely invaluable tool. They're learning how to prompt, they're learning how to ask questions appropriately to get to the answer that they want, get to the answer quicker, and have it enhance their learning experience.
[00:41:51] Alex Sarlin: There you go.
Exactly. Final question, I know we're coming up right on time here, but you are the Director of Social Impact, and one of the things that people worry about in the AI future is that it will exacerbate existing inequalities in society: the kid whose parent works at Google is gonna be using AI from when they're five years old, and the kid in Uganda, whose parent has never used a computer, is just gonna fall even further behind, because it's such a powerful tool that they won't have access to.
Tell us what you're doing to equalize access, and where you see the whole edtech ecosystem: how can we make sure that we are not building tools that make inequality even worse by sort of supercharging the most motivated, most culturally literate learners?
[00:42:34] Sam Whitaker: We have to be cognizant of it from day one, and we have to think about it and talk about it from day one and be responsible and thoughtful about how we implement our system, because at the end of the day, we set the parameters for AI at least so far until Skynet takes over.
I don't think that's gonna happen, by the way, but yeah, we set the parameters. So it's making sure that we're cognizant of it, and making sure that we look at those biases and those kinds of inherent things that are built into not just AI, but already built into the internet, built into scholarly articles, right?
And it's also about building out our models. So I've been working on this program in Uganda; it's been in the works for six months, and it finally is happening, and I could not be more excited. And I'm working with the teachers, and we have a WhatsApp group, and they keep sending me all the StudyFetch tools that they create, and everybody's excited, and it's fantastic.
But I want to go there because I wanna look all of them in the eye. It's a girls' school, and it's actually a two-generation school, where the parents, most of whom never went to school, go alongside their daughters. I wanna look all of them in the eye and tell them: there is no limit to what you can learn.
There's no limit to what you can do. The barriers are gone because of technology, because of AI. There's no limit. There will never be a replacement for a teacher; however, in areas where there isn't a teacher, you have an AI tutor, you have access to all of this.
It's taking what the internet did, or in many cases didn't do, for equality of opportunity, and just supercharging it. Again, if we do it right, if we make sure that it's responsible and that it's purpose-built and that we're focusing on learning, there's no limit to what we can do. There's no limit on one of those girls in Uganda going off to be the next lead engineer at Nvidia, or whatever it might be.
It comes down to true equality of opportunity, and true reward for hard work, reward for just putting the time in. It's an incredibly bright future. So many people are so scared of AI, and I get that, I have fears too, but that's so outweighed by the amazing future that we can create and the amazing ways that AI can impact and influence and improve our lives, starting at six years old, starting at five years old.
[00:44:48] Alex Sarlin: It just reminds me of that old TED Talk about the Hole in the Wall experiment, where they literally put a computer in a wall, I think in rural India, and students would just go and teach themselves how to use the computer just by having access to it. Just having access made this huge difference.
Imagine that with AI: the idea of being able to give students all around the world access to an intelligent chatbot that says, I can do literally anything you can imagine asking me to do. We can make movies together, we can write books together, I can help you study anything you can imagine, I can help you understand anything you can imagine.
I agree with you. It is a really bright future. We are out of time, unfortunately, but this has been so interesting. I really appreciate your perspective on everything you're doing, and honestly, you know, this was not an aspect of StudyFetch that I really knew was a big part of your story. So I think this has been really educational for me, and hopefully for our listeners as well.
Sam Whitaker is the Senior Director of Social Impact and Institutional Development at StudyFetch, an all-encompassing AI learning platform for students: available in many languages, omni-modal, with real-time AI tutoring. Check it out, and check out their brand-new usage report about how students are actually using it. And don't sleep on that offer he made to researchers to come use that dataset. That is an amazing dataset. Really exciting. Thanks so much, Sam, for being here with us on Edtech Insiders.
[00:46:04] Sam Whitaker: Thank you, Alex. It's been a pleasure.
[00:46:05] Alex Sarlin: Thanks for listening to this episode of EdTech Insiders. If you like the podcast, remember to rate it and share it with others in the EdTech community.
For those who want even more Edtech Insiders, subscribe to the free Edtech Insiders newsletter on Substack.