
Edtech Insiders
AI That Learns From You: The Future of Teachable Agents in Education with Adam Franklin
In this special episode, Alex sits down with Adam Franklin, our very first guest on EdTech Insiders, who returns three years later with an exciting new venture.
Adam is a former high school history teacher turned edtech entrepreneur. He previously taught at YES Prep Brays Oaks HS in Houston, TX, and coached the varsity soccer team. Adam earned a master's degree in Learning, Design, and Technology from the Stanford GSE in 2016, where he also served as a researcher and writer for the Stanford History Education Group. In 2017, he joined Nearpod, then a Series A startup, to help lead the content arm of the business, where he stayed through their $650M acquisition by Renaissance in 2021. In early 2024, Adam left Nearpod to join the Teaching Lab Studio as a fellow to build out innovative products at the intersection of AI and Education, with StudyBuds, a teachable agent practice platform, being his primary focus.
5 Things You'll Learn in This Episode:
- How AI-powered teachable agents help students learn by teaching.
- Why AI can boost engagement and deeper learning in classrooms.
- The limitations of multiple-choice assessments and AI's role in fixing them.
- How AI can co-create with students instead of just giving answers.
- Why venture studios like Teaching Lab Studio are investing in AI-powered learning.
⨠Episode Highlights:
[00:01:18] Adam Franklinâs journey from teacher to edtech founder.
[00:06:46] How teachable agents create interactive student practice.
[00:12:39] Training AI to "not know" and learn from students.
[00:16:56] The Pokémon effect: why students love training AI characters.
[00:22:57] Why students outsource thinking and how AI can change that.
[00:31:23] StudyBuds aligns with curricula to support teachers.
[00:37:20] Inside Teaching Lab Studioâs unique edtech venture model.
[00:44:48] The future of AI in education and multimodal learning.
Stay updated with Edtech Insiders!
- Follow our Podcast on:
- Sign up for the Edtech Insiders newsletter.
- Follow Edtech Insiders on LinkedIn!
Presenting Sponsor:
This season of Edtech Insiders is once again brought to you by Tuck Advisors, the M&A firm for EdTech companies. Run by serial entrepreneurs with over 25 years of experience founding, investing in, and selling companies, Tuck believes you deserve M&A advisors who work as hard as you do.
[00:00:00] Adam Franklin: What are the interactions with LLMs that do spark thought? And I think for anybody out there that has tried to refine a prompt to get GPT to make an image they want, to be more accurate, or to remove a typo from it, there is absolutely a fun, productive struggle (fun for some people) that takes place when you're like, no, I've got to figure out what I did wrong to get the output to be what I want to see.
I think co-creation with AI is going to absolutely be a future-ready skill, right?
[00:00:38] Alex Sarlin: Welcome to EdTech Insiders, the top podcast covering the education technology industry. From funding rounds, to impact, to AI developments across early childhood, K 12, higher ed, and work, you'll find
[00:00:51] Ben Kornell: it all here at EdTech Insiders. Remember to subscribe to the pod, check out our newsletter, and also our event calendar.
And to go deeper, check out EdTech Insiders Plus, where you can get premium content, access to our WhatsApp channel, early access to events, and back-channel insights from Alex and Ben. Hope you enjoy today's pod.
[00:01:18] Alex Sarlin: Today, we have the pleasure of speaking to Adam Franklin, who has the unique honor of having been the very first guest on EdTech Insiders three years ago. He's an amazing guy. Adam Franklin is a former high school history teacher turned edtech entrepreneur. He taught at YES Prep Brays Oaks High School in Houston, Texas, and coached the varsity soccer team.
He then went on to earn a master's degree in Learning, Design, and Technology from the Stanford GSE, where he served as a researcher and writer for the Stanford History Education Group. In 2017, he joined Nearpod, which was then a Series A startup, to help lead the content arm of the business, and he stayed through the $650 million acquisition by Renaissance Learning in 2021.
In early 2024, Adam left Nearpod to join the Teaching Lab Studio, a really interesting place which we'll talk about in this interview, as a fellow to build out innovative products at the intersection of AI and education. He's building StudyBuds, a teachable agent practice platform with AI. Really interesting stuff.
Here's our conversation with Adam Franklin. Adam Franklin, welcome back to EdTech Insiders.
[00:02:28] Adam Franklin: Thank you so much. It's great to be here.
[00:02:30] Alex Sarlin: Yeah, I'm so happy to talk to you again. For listeners who haven't been with us for the entire stretch, Adam was literally the very first guest we ever hosted on EdTech Insiders.
He took the leap. We were in our On Deck education technology community together, and he was the first one to raise his hand and say, I'd love to talk about edtech, I'd love to talk about what I'm doing. So I appreciate you being just one of the absolute very first champions of this idea. And at the time you were at Nearpod, and you had been at Nearpod for a number of years, and you've recently made an incredibly interesting move, starting your own venture called StudyBuds in collaboration with the Teaching Lab Studio.
So let me pass it to you. Tell us the story of the last year and what you've been doing with StudyBuds.
[00:03:16] Adam Franklin: Yeah, thank you so much, Alex. I really appreciate it. Like you mentioned, I worked at Nearpod for the past seven and a half years, which was an incredible run. And before that, I was a high school history teacher, and a through line through all of those experiences is that I love building tools that spark thinking, that invigorate classrooms, and not just in a surface-level way: tools that get kids actually reasoning and allow teachers to see what's going on. And that's something that informs the work I'm doing today. But I was at the end of my run at Nearpod, excited to get back to early-stage building, and frankly, having just had a kid, not necessarily willing to take on the risk of bootstrapping something from the start and not having health insurance.
That's terrifying. But I was lucky enough to come across an institution called Teaching Lab Studio, which is a venture studio that was putting out a call for folks with ideas at the intersection of AI and education. And I had been thinking about this idea for a long time. I had studied a topic called teachable agents back in grad school, and I'd love to talk more about the history of that, but this idea, I felt at the time, was really hampered by the technological constraints of the internet and the tools that were available to build with. With the advancements of GPT, I saw an opportunity to maybe revisit this idea and see if we could create a more engaging and accessible experience whereby kids are practicing by teaching another character.
And so that's what I essentially applied to Teaching Lab with. I was at Nearpod, God bless them, and I took five months off for paternity leave and had a lot of time to think about my future. And that's when I had that Jimmy Neutron-style brain blast, that moment of clarity, and was like, wow, there is something here. I started socializing it with some friends and was pointed to the Teaching Lab Studio.
And I ended up applying and was accepted as a fellow. There are five or six other fellows all working on different ideas in parallel, all at the intersection of AI and education. But my idea really is just: can we operationalize teachable agents as a means for student practice? Because I think there's huge potential to both engage students and push them to think deeply in a way that alternative forms of gamified practice in the classroom just don't do, or sometimes incentivize kids not to do. And what I've been doing for the past six months (I joined in April) has really just been building and testing.
I'm so grateful for Teaching Lab's model, which very much prioritizes efficacy and experimenting to demonstrate learning impact. They're not focused at such an early stage on my ability to generate revenue; that will come later, for sure. But we want to build on a foundation we feel confident about, and that type of patient capital, I don't know that it exists in other places.
And with K-12 being in the state that it's in, as far as funding is concerned, I'm grateful to get to apply my skills and make something and put it in front of teachers and get their feedback and iterate on it. So StudyBuds is the platform that I've been building. I'm working with teachers right now in schools in Houston, Texas, where I'm based, as well as some other folks across the country, to understand that very question I put to you before: can this be an impactful form of practice toward engagement and rigorous, deep thinking?
[00:06:43] Alex Sarlin: Yeah.
[00:06:44] Adam Franklin: That's where we're at today. It's exciting.
[00:06:46] Alex Sarlin: It really is exciting. And I think I read some of the teachable agents literature when I was in my grad program at Teachers College, and I had a similar feeling.
It was a really cool idea, but the technology and the interfaces felt a little bit Stone Age at the time. Can you tell our listeners what a teachable agent really is and why it is such a promising and exciting way to teach?
[00:07:11] Adam Franklin: Of course. So I think before teachable agents, this premise existed, right? That a really high level of mastery is your ability to teach someone else.
It dates back to Greek and Roman philosophers: can you teach somebody else as a means to check your own understanding? When you do this in any environment, you are confronted with your own knowledge gaps. But in the late 90s and early 2000s, there was a lot of research being done; one study that gets cited a lot is called Betty's Brain. And I was lucky enough at the Stanford GSE to take a class called Core Mechanics for Learning, which was taught by Dan Schwartz, Kristen Blair, and Jessica Tsang, folks that all contributed to that study.
This study was asking: can we program an artificial character so that, instead of you needing to teach another person (another student, or a confederate, somebody playing that role), you can teach a character that's been programmed to be taught? And it was very narrow in terms of how that programming took place. There's a visual interface whereby essentially you are making a causal chain that is a representation of Betty's brain.
If you imagine photosynthesis as a concept, you're saying on the causal chain, sunlight causes X. And Betty, this character, has to literally operate on exactly the bounds of the information you've put onto this causal chain. And it's really cool if you extrapolate from there, thinking about students needing to get at the core root of their knowledge and getting to experience that character applying their knowledge.
In the first iteration it was just this activity, but as they built on it, they added elements like watching Betty, your character, compete in a game show based on how well you taught it, and they uncovered all sorts of benefits to this interaction. There is the reciprocal, or recursive, feedback: this idea that you're confronted immediately by things you've left off that causal chain. There's also something called the protégé effect: you're willing to invest more into learning about something in the service of teaching it. If it's not just for yourself, if you have something that's going to represent you externally, you give more to it.
Beyond that, there's also the rigorous nature of conceptual representation of knowledge, the abstraction moving you up Bloom's taxonomy as far as how you're practicing with a topic. Where you might otherwise just be regurgitating a definition, this is truly a deep-level application of photosynthesis. And like you had mentioned, they very much looked like the nineties and early two thousands as far as what they were; that's a reflection of the time. But I think some of that cumbersome nature of the experience detracted from more students being able to engage in something like this.
[00:10:01] Alex Sarlin: Yeah, it was not productizable at the time, for two reasons.
I mean, the interface, the graphics, were all made by university faculty. Jessica Tsang, I worked with her at Chan Zuckerberg; she's absolutely amazing. Daniel Schwartz is now the dean of the Stanford Graduate School of Education. I mean, these are incredible people, but they're not product people. They're not designers.
So it was a little bit clunky in design. But the other big constraint, which I think you're really focusing on now as well, is that, you mentioned photosynthesis, you know, they would basically build these whole scenarios about one tiny, specific piece of the curriculum. You'd have to do all of this work just to get Betty to understand photosynthesis, exactly that one thing.
And it was so much work up front, and it was only covering a tiny sliver of what people were trying to actually learn. That is not true at all in the AI world. So tell us about that part.
[00:10:49] Adam Franklin: Exactly. Essentially, you'd be building an entire model just for that topic. As far as what Betty is operating off of, GPT and other LLMs are much more flexible in terms of your ability to implement their API and, in the moment, feed it information about the topic it's meant to be taught about, which is super exciting, because that obviously massively reduces the engineering cost of building an experience like this.
I think that, in combination with the fact that LLMs are fantastic conversational partners, right? If you think about AI in general, we are leveraging it in education for things that were not its original core purpose. In some capacity, to be a tutor is not what an LLM was designed to do. And so it is going to try its best and emulate a lot of those behaviors, but it is not necessarily a perfectly suited tool to do that job. And so I'm like, I don't want to use it toward that end. It is a fantastic conversational partner; that is really one of its core charges, to keep the conversation going, to continue coaxing a response, a fruitful response, out of you.
And that in the context of a teachable agent environment is really helpful as opposed to counterproductive.
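To make that concrete, here is a minimal, hypothetical sketch (an editor's illustration, not StudyBuds' actual code) of the pattern Adam describes: the topic material and a deliberate misconception are injected into a general-purpose LLM's prompt at request time, rather than engineering a topic-specific model the way Betty's Brain required. It assumes the OpenAI Python SDK; the model name, prompt wording, and function names are placeholders.

```python
# Illustrative sketch only (not StudyBuds' code): a teachable-agent persona built by
# injecting topic material and a target misconception into the prompt at request time,
# instead of engineering a topic-specific model the way Betty's Brain required.
from openai import OpenAI  # assumes the OpenAI Python SDK and an OPENAI_API_KEY env var

client = OpenAI()

TOPIC_MATERIAL = (
    "Genetics and mutations: mutations are changes to DNA. "
    "Some are harmful, some are neutral, and some are beneficial."
)

SYSTEM_PROMPT = f"""You are a curious student, not a tutor.
You currently hold the misconception that ALL mutations are harmful.
Do not give hints or lecture; ask questions and admit confusion when unsure.
Only update your beliefs when the human teaches you with convincing evidence.
Topic material you may be taught about:
{TOPIC_MATERIAL}"""

def agent_reply(history: list[dict]) -> str:
    """Return the teachable agent's next turn given the chat history so far."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[{"role": "system", "content": SYSTEM_PROMPT}] + history,
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(agent_reply([{"role": "user", "content": "Are all mutations bad?"}]))
```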
[00:12:03] Alex Sarlin: I can imagine. It's incredibly exciting, and also a tricky and interesting use case, to tell an LLM to not know something, right? I mean, LLMs are trained on these massive data sets.
They try to be know-it-alls. I always use the metaphor that they act like C-3PO, right? The know-it-all that tries to serve you in any way it can and bring whatever it can, but they're kind of jittery, whatever; they have their own personality. But to train them to not know things and to learn from you is not 100 percent their natural behavior. How are you adjusting? How are you training and working with these LLMs to get them to act like learners?
[00:12:39] Adam Franklin: It's a great question. And it's something that we're iterating on over time based on the feedback we get from the field and watching kids interact with this product that we're building.
As you mentioned, GPT is very eager to help in some ways. And so you will notice, if you just prompt GPT today to play the role of a teachable agent, that as you struggle in teaching, or maybe even give up, GPT is going to give you some inductive hints, because that's its nature, right? It wants to further and advance the conversation, and we incorporate this version of GPT in our product.
So in the current iteration of StudyBuds we sometimes suffer from the agents being a little too helpful, and we're actively problem-solving about this; I'm going to tell you a little bit about a solution we're excited about, but it is an issue. And so our first solution to this problem was to more narrowly define the practice module so that there are specific milestones that we can essentially inform the agent about.
And what I mean by this is, instead of saying you're a science teachable agent, say you're a teachable agent about the topic of genetics and mutations. And we are going to set up an environment where students will have three milestones, essentially, to complete. They need to diagnose your misconception.
So instead of the agent just being purely teachable, because that's hard, right? You've got to draw a line somewhere. It can't just be tabula rasa; you can't have to teach it English and teach it the definition of the sun and all this sort of stuff. You want to keep it within the zone of proximal development, just below the topic you want to teach about.
And so we're saying: we want you, the agent, to manifest a misconception about mutations and inheritance. Maybe you think all mutations are bad. And so in the context of this practice environment, the agent presents a paragraph and says, my teacher told me something was wrong with it; can you help me figure out what that is?
It's meant to lower the barrier to entry. "Just teach what you know," just starting from a blank canvas, is intimidating and is going to alienate some folks, and our goal is enveloping more students into this type of practice. So we ask students instead to diagnose something that's wrong.
I think that is a behavior that adults are willing to do, right? We like finding something that's wrong, correcting typos on social media. It is something we're willing to do that's rigorous but isn't necessarily a reward to us. And so we ask students to diagnose the misconception and then supply evidence from resources we curate that are intentionally targeted toward that misconception.
So examples of positive and negative mutations, articles, YouTube videos, are right there to point toward, because the agent will ask for them. When you identify their misconception, the next milestone is to provide a piece of evidence as to why that's a misconception. And so you can select and annotate evidence and send it to the agent.
It will ask you to justify your reasoning. And so what we realized is we're kind of reverse-engineering the claim-evidence-reasoning question you might get in science, or a DBQ for my social studies teachers, right there. But it is way more rigorous than the alternative as far as how you're demonstrating what you know. You are truly forced to take ownership of an argument, of a piece of evidence, and of an agent's knowledge base, because they will be able to update what they know based on how convincingly you're teaching them with the evidence we've given you to do so.
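For readers who want to see the shape of this, here is a small, hypothetical sketch of how the three milestones Adam lists (diagnose the misconception, supply evidence, justify the reasoning) might be encoded so the agent only "updates its beliefs" once all three are met. The class and field names are illustrative, not the actual StudyBuds schema.

```python
# Hypothetical sketch of a StudyBuds-style practice module with the three milestones
# described above: diagnose the misconception, supply evidence, justify the reasoning.
# Names and structure are illustrative, not the actual StudyBuds schema.
from dataclasses import dataclass, field

@dataclass
class Milestone:
    key: str
    description: str
    completed: bool = False

@dataclass
class PracticeModule:
    topic: str
    agent_misconception: str
    evidence_bank: list[str]
    milestones: list[Milestone] = field(default_factory=lambda: [
        Milestone("diagnose", "Student names the agent's misconception"),
        Milestone("evidence", "Student selects and annotates evidence that challenges it"),
        Milestone("reasoning", "Student explains why the evidence refutes the misconception"),
    ])

    def complete(self, key: str) -> None:
        for milestone in self.milestones:
            if milestone.key == key:
                milestone.completed = True

    def agent_is_taught(self) -> bool:
        # The agent only "updates its beliefs" once every milestone has been met.
        return all(m.completed for m in self.milestones)

module = PracticeModule(
    topic="Genetics: mutations and inheritance",
    agent_misconception="All mutations are harmful.",
    evidence_bank=[
        "Article: beneficial mutations in bacteria",
        "Video: lactase persistence in humans",
    ],
)
module.complete("diagnose")
print(module.agent_is_taught())  # False until evidence and reasoning are also complete
```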
[00:16:01] Alex Sarlin: I love it. It just gets all my instructional neurons firing. It's such a cool idea. And it does dovetail so well with so many different types of evidence-based teaching. You mentioned the word ownership there, and when I heard you talk about the protégé effect, with all these pop culture references today, my immediate thought was: it reminds me a little of the Pokémon effect, right?
I mean, the story of Pokémon is: you're a character and you have all of these little monsters that you are training. That's what it's about. You're a Pokémon trainer; they're the Pokémon, you're the trainer. And there's something so appealing about not being the one who's struggling, not being the one who needs to be taught and needs to be comforted and needs to be persuaded, but being put in the role of the teacher, getting ownership, having autonomy and agency.
So tell us about how you think StudyBuds might create a sense of autonomy, agency, and ownership for the students who use it.
[00:16:56] Adam Franklin: Absolutely. So like you'd mentioned, to teach something is ultimately one of the highest forms of care, or adoption, right? And my mind went to the exact same place yours did when this idea came into my head.
Like Pokémon, Tamagotchi, Club Penguin, all these things: there is absolutely a track record out there of a willingness and an eagerness to adopt a virtual character. You see it today with LLMs too, like Character AI; your mileage may vary on your thoughts about the utility of that platform, but there is an appetite for this type of interaction.
And I think if we can situate that in an academic environment, we can tap into more intrinsic motivation, which I think is truly lacking when it comes to why students are willing to participate in deep thinking. And this gets at a broader societal problem that I think we're struggling with. There's a study that gets cited all the time.
I saw Kristen DiCerbo cite it in her last Khan Academy blog post: the unpleasantness of thinking. You don't even have to read the paper; you can know what it's about just from the title, but it's absolutely true. I think there are individual humans that are exceptions, but for society at large, thinking is aversive.
It just is. My friends, when they come home from work, are not looking to be challenged mentally. My wife, God bless her, loves the Real Housewives programming on Bravo. And this is not an indictment of her TV taste; it is a reflection of the fact that we outsource thinking when we can, when we have tools to do so.
And this has been true in education. And I think part of the folly is expecting kids to be any different. Giving them tools that require proactive use of asynchronous platforms is what leads to the 5 percent problem that Laurence Holt describes. Of course you're going to appeal to the kids that are already motivated to do so; our charge is to motivate more kids to do that thinking.
And I see teachable agents as an entry point to doing that thinking, because frankly, it's too easy to outsource. Before GPT, there were Chegg and PhotoMath. Before that, there was SparkNotes. Kids just take advantage of tools that are there for them. I don't think we should blame them. The goal of education is to inspire as opposed to coerce.
And that's something I'm very much taking up the charge on, on the side of inspiration.
[00:19:21] Alex Sarlin: I really love that. I think there was a Richard Feynman concept at one point: if you really want to learn something, teach it to a child, or something like that. And I think there's something so exciting about this flipped role, this idea of teachable agents. And frankly, the fact that AI gets brought into the equation creates a whole slew of really exciting extensions of this idea, which I know you're thinking about. I mean, you mentioned Betty's Brain had the, oh, then you put Betty on a game show. Well, game shows may be a little bit eighties and nineties, but the idea of putting your character that you've trained into an environment where they have to interview, or they have to compete for something, or they have to do something in the world with the knowledge they have, and you get to watch them and root for them and then correct them and support them if they struggle. I mean, it's such a different way to look at learning than being that person in the ring, getting beat up by the test questions and getting beat up by having to admit you don't know things.
Exactly. It's wildly different. So you mentioned this idea of outsourcing thinking. It is a very natural behavior. We've been doing this analysis of AI tools, and some of the most popular AI tools right now are what they call homework helpers, things where you can literally photograph your homework and it will answer the questions for you.
And that's a perfect example of outsourcing critical thinking. How do you think that we as educators and educational technologists can continue to inspire children in this world where the tools at their fingertips to outsource are just going to get exponentially more powerful?
[00:20:53] Adam Franklin: Yeah, first of all, I think we need to come to terms with the fact that there is no Dune-style Butlerian Jihad that's going to wipe all the thinking machines off the planet.
They're going to get more and more powerful over time and better at enveloping critical thinking tasks, right? And that's like OpenAI's charter, right? They want to automate the majority of intellectual labor, which is great for them, but I think in some ways not necessarily great for society. And so, what are the interactions with LLMs that do spark thought?
And I think for anybody out there that has tried to refine a prompt to get GPT to make an image they want, to be more accurate, or to remove a typo from it, there is absolutely a fun, productive struggle (fun for some people) that takes place when you're like, no, I've got to figure out what I did wrong to get the output to be what I want to see. I think co-creation with AI is going to absolutely be a future-ready skill, right?
Sorry for the cliché, but that's something we talked a lot about at Nearpod, too: what are the things you've got to teach kids to be able to do in the future? Co-creating with AI is absolutely one of them, and I think instead of banishing this type of creation, embracing it and rethinking what products of knowledge can look like in that realm could be really powerful.
And I think for folks that have played with Canvas or Claude, where you're building stuff together, it feels like magic to watch GPT give you feedback, or to give GPT feedback and watch it iterate on that feedback. That is absolutely a type of currency that we should be leveraging more. Why not ask kids to co-create an advertisement or a lab report, something that perhaps is more demanding given some of the creative aspects we can take off their plate, so they can be more of the producer and editor of those things, things that absolutely require a high level of understanding to do effectively?
And they can feel more of a sense of pride in producing something legitimately cool that can be shared in the context of a classroom or outside of a classroom.
[00:22:57] Alex Sarlin: Yeah, I think that "legitimately cool" is a really important part of it, actually. I mean, you can call it cool, or you can just think about it as the umbrella of human taste, right?
AI can do all sorts of things, but it is a person who comes in and says, that's not good, or that's not cool, or I don't get that, and I don't think my friends would get that either. Or, in critical thinking, there's a taste piece, but also a "yes, technically you've made a good argument here, but it's not the argument I want to make. I don't believe that. So let's do something different." Or, "I think here's a different example that I'd love to put in here." There is a negotiation, as you say, an iteration. I've been playing a lot with these tools, of course, and I'm starting to really feel exactly that experience you're mentioning there, of going back and forth and being like, we're going to get to the answer here.
And I know that you have weaknesses; you don't really know certain ways of looking at things. I certainly don't want to make a hundred versions of something unless I have to, so we can go back and forth and really get to something exciting. And I think a teachable agent is a really interesting way to get there, because it frames the AI not as a blank box, right?
Not as a tabula rasa blank slate, but as a co-conspirator, as somebody working together, and actually somebody a little bit subjugated to you, right? You're teaching it, but you're working together, and it puts the student in the driver's seat to actually figure out, as you said, what they're doing wrong that's keeping their teachable agent from succeeding. It's a really interesting dynamic that I think is very promising.
[00:24:25] Adam Franklin: Yeah, there are arguments about like iron sharpens iron, right? Or the more you can wrestle with an idea, the better that articulation of the idea will be. And at the same time, in the pedagogical world, the zone of proximal development, when you have two parties close enough in understanding, there is magic that can happen in terms of transfer of knowledge from one party to the other.
And I think we can inform the agent in that way, and this is what we're working on right now, something that gets at a solution to the problem we were talking about before, AI being overly helpful. We're right now building an agentic framework, which is a really fancy way of saying that under the hood of the teachable agent are multiple agents that we've distributed parts of the task to.
Some are charged with the pedagogical nature of the interaction: understanding what the student knows and calibrating the response so that it's aligned in some way, close enough, a bit lower than the student, as far as giving a juicy entry point, essentially, to have the student correct or teach something. Other parts are responsible for the more conversational side, or the memory of the relationship, and maintaining a sense of continuity there.
Other parts are responsible for understanding the evidence or resources that are available, so that the agent can be prepared to interact with them. Something I'm really excited about in the future is computer control, this idea of watching an LLM participate. Because, for example, one of the resources we provide students with in the mutations practice example is a simulation.
A PhET simulation. I love that because it helps you organically come to conclusions about the scientific world. There's the bunnies PhET simulation that is meant to show how mutations can form over time and help the population grow: if they're mutated in a way where their teeth are longer, they can eat more of the hard vegetables coming out of the ground. I guess I need to brush up on that simulation, but what if the agent can participate in that simulation too and change some of the levers? You get some of that magic of, I love it even in Figma, or when my peers are in Google Docs; that's still magic to me, watching peers make changes to something.
And if it's a character you're in control of that you're watching, I think this is untapped magic that I want to start tapping.
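As a rough illustration of the agentic framework Adam describes (specialist agents for pedagogy, conversational memory, and evidence, assembled into one student-facing teachable agent), here is a hypothetical sketch. In a real system each specialist would likely be its own LLM call with its own prompt; they are stubbed with canned strings here purely to show the division of labor, and none of the names reflect StudyBuds internals.

```python
# Rough, hypothetical sketch of the multi-agent idea: one student-facing teachable agent
# whose reply is assembled from specialist agents (pedagogy, memory/continuity, evidence).
# In practice each specialist would likely be its own LLM call; stubs are used here.
from dataclasses import dataclass, field

@dataclass
class Turn:
    student: str
    agent: str

@dataclass
class TeachableAgent:
    history: list[Turn] = field(default_factory=list)

    def pedagogy_agent(self, student_msg: str) -> str:
        # Calibrate the response just below the student's level: restate a partial
        # understanding so there is a juicy entry point to correct or teach.
        return (f"So you're saying {student_msg.lower()!r}... "
                "but wouldn't that mean every mutation hurts the rabbit?")

    def memory_agent(self) -> str:
        # Keep continuity with earlier turns in the relationship.
        return "Last time you showed me the bunny simulation." if self.history else ""

    def evidence_agent(self) -> str:
        # Track which curated resources are in play so the agent can ask for them.
        return "Can you point me to one of the articles that shows a helpful mutation?"

    def respond(self, student_msg: str) -> str:
        parts = [self.memory_agent(), self.pedagogy_agent(student_msg), self.evidence_agent()]
        reply = " ".join(p for p in parts if p)
        self.history.append(Turn(student=student_msg, agent=reply))
        return reply

agent = TeachableAgent()
print(agent.respond("Not all mutations are bad"))
```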
[00:26:40] Alex Sarlin: You're in control of it, but also in conversation with it. It's such an interesting dynamic. I've been watching a lot of Sesame Street recently with my two-year-old.
And they have this moment in the Elmo's World segment where Elmo does a simulation. He always gets things wrong, by design; it's always the same number of things. But every time he gets things wrong, you hear these kids in chorus saying, no, it's over there, or, that's not right, look for the yellow one.
I feel like there's something so exciting about that idea of working alongside a trusted, safe character. Of course, Elmo never gets, you know, angry at them or anything. Elmo goes, oh, great, thank you, you know, right, it's here. And I think that kind of dynamic isn't limited to young kids. I think it's especially exciting for younger kids, 'cause they have very little that they feel a lot of autonomy over. But I mean, fourth graders, sixth graders, ninth graders very rarely get the chance to be in control, to correct, to collaborate in this way. One other thing that I think is really interesting about what you're doing with StudyBuds is the practical nature of it.
You know, we've talked a lot recently about grounded LLMs, the idea that instead of working with an open, general-purpose LLM like GPT or Claude or Gemini, which sort of has the whole internet at its fingertips, you're doing an activity that is bounded by the curriculum. It's bounded by particular resources.
It's bounded in that teachers can draw borders around it. And what you're doing with these teachable agents has that built in: the idea of doing a genetics exercise where the AI is bounded in its behavior, and maybe has all these different agents working within it, and then you have a set of documents or evidence that you're working with that fits that bill directly.
So I'd love to hear you talk about how you got to that model and how you think, I know you're not being pressured to productize, but how you think that might smooth entry into the market, because you're not using anything that's open and unconstrained, which might be considered potentially inappropriate or unsafe.
[00:28:32] Adam Franklin: Yeah, exactly. So one of the hallmarks, or banners, of Teaching Lab Studio is a premise called coherence: that you're not adding tools to the equation for teachers just for the sake of it, or adding one more tool that becomes a victim of the 5 percent problem. You are genuinely meshing into the educational experience for teachers.
And that, to me, dovetails with my experience at Nearpod as well. Teachers are not going to pull content off the shelf unless it matches their standards or their students' interests. And so you need to solve both of those through, I think, aligning with or adapting materials they were already going to use.
Now, what I love about what we're building, like you had mentioned, is that you need a set of stimuli or resources to do teaching with. And that's what curriculum is, for the most part, especially in science and social studies, subjects I'm really familiar with. We're working with teachers who are OpenSciEd teachers.
Part of their instructional model is so cool: kids, very frequently throughout a unit, will create a visual model of their understanding. One unit that we're working on right now is forces and collisions, and they have a model about what happens when two things collide. And there are resources, like YouTube videos or experiment data, that are meant to inspire them to change their model.
All we've done is package this in a way where it's the agent that has the model, and you are charged with helping that agent interpret the data so that it can revise its model. And that, I think, scales much more effectively to robust teacher usage, because if you expect teachers to use anything, you've got to lower the cost of adaptation, or increase the seamlessness with which it can be input into their instructional sequence.
And that goes for social studies, ELA, and math teachers as well. We're working with some Success Academy teachers, middle school, and they're given a compendium of primary sources that can be intimidating: I don't know what to do with this. I've put these in front of students before, and you kind of get that blank stare of, yeah, it's a passage of John Winthrop talking about the colonies.
But when you can situate that in a way where, no, this is your key to unlock your interaction with a teachable agent, it becomes a bit more of, the wheels are greased as far as your willingness to interact with something that can be academic.
[00:30:49] Alex Sarlin: And potentially even embedded in something scenario-based.
I mean, that idea of just building on this concept: right, you have these primary sources of different people talking about Pennsylvania, you know, Penn and Winthrop and people talking about all the early colonies. Well, in a total vacuum, those feel like old documents to students. In the context of teaching somebody about history, that definitely gets more interesting. And in the context of trying to, you know, convince a colonist, or convince somebody in England to come across the sea and join your colony in Pennsylvania?
Well, that starts to get really intriguing.
[00:31:23] Adam Franklin: Yeah, those things were controversial at the time, right? Like, people loved it. It was like, we've got to juice some of that back into the experience of interacting with them.
[00:31:31] Alex Sarlin: A hundred percent. So I want to ask you just a little bit of a meta question. I mean, you've had a really interesting journey.
You were, as you mentioned, a history teacher. You obviously jumped into edtech many years ago with a really exciting company in Nearpod, which was then acquired by Renaissance Learning. And now you're becoming a founder, and a founder in this really interesting model, through a venture studio.
That's a really exciting journey, and I think it's one that gives you a lot of perspective about different spots in the edtech ecosystem. Tell us a little bit about this path for you, and what you might recommend for others who are at any point on that path, whether they're at a company and thinking about doing their own thing, or whether they're in the classroom and maybe thinking about doing their own thing or joining an edtech company.
What drove your decisions, that Jimmy Neutron moment you mentioned, and what would you share with others who are at different points on that journey?
[00:32:21] Adam Franklin: Yeah, absolutely. And I talk with a lot of folks all the time, and I will say my narrative sounds very logical, but I think that thread is much more visible in hindsight than it was at the time.
I wasn't thinking ten years ahead, this is where I want to be. It was more that I'm lucky to have gotten to take advantage of some of the situations I've been in. But either way, like I mentioned before, when I was a high school social studies teacher, I loved tools that spark thinking. I loved making those tools for other teachers. I loved using Reading Like a Historian; that curriculum was like a godsend to me. Sam Wineburg was the reason why I applied to the Stanford GSE. I was like, I want to go work for the Stanford History Education Group. And so I applied to the GSE because of that. Now, I did not end up working there.
It's hard to work at a nonprofit and stay in the Bay Area, for all sorts of reasons. But I was lucky enough during my master's program to also intern with Nearpod, and I found my way there. I say this to everybody looking to break into edtech from teaching experience: find a mentor or a company you believe in.
Do not box yourself into a role that early. It's not worth it. Bet on yourself to get your foot in the door and demonstrate your confidence, and you can move laterally across companies in ways that meet your long-term ambitions. Sarah Romero Keeps, who was my boss at Nearpod, was a guest speaker at a grad school class I had. I flagged her down and begged her for an internship, and I just kind of rode her coattails throughout Nearpod. We were building the content team together, and Nearpod was a platform to build experiences with, and our thesis was: could we build a library that would engage more teachers to be willing to try the platform for the first time or to become habitual users?
That was a success, and so my growth at Nearpod really was more of a reflection of the growth of the content business at Nearpod, and I kind of just got to keep growing as that got more robust. We started building products that we sold directly to schools and districts, so I got to have a lot of experience going out on the road, going to conferences, and working with the product team as we built content products that necessitated product development. And that's where my interest got piqued in the product strategy side of the business. Very volatile, though. It seems rosy in hindsight, but Blake Harrison, who is a co-founder of Flocabulary, used to always say: if you got into edtech thinking you bought a plane ticket, the turbulence is going to be scary.
But if you go in knowing you bought a ticket to a roller coaster, the ups and downs are a lot more fun. That is a piece of advice I hold really dearly, and it was absolutely true, right? We had the recession in 2018, we had the CEO replaced by the board of investors at Nearpod at one point, then you had the pandemic, which, morbidly, was pretty good for usage at Nearpod. And then you had the return to schools, then the acquisition by Renaissance, and post-acquisition life. I feel like I'd seen the gamut of experiences in that run. But also, just naturally, the nature of ambition at a more mature startup compared to the early stages is different.
And I found, upon reflecting, especially during paternity leave, that I was much more fulfilled in an early-stage environment. And so I started looking for opportunities where I could potentially build this idea that I had. That wasn't the first solution, though; I was looking for a long time at other seed or Series A edtech companies that I believed in, to try to join up with them.
But this was a point in time, frankly, when edtech funding, it's still alive, depending on your point of view, but K-12 classroom-facing tools are at a nadir right now, for sure. And so I had to come to terms with that and be willing, ultimately, to bet on myself as far as joining the Teaching Lab Studio, which was a unique proposition.
And I want to talk more about their model; I think it's really cool. But I kind of just left in April and haven't looked back since. It's been, luckily, a great decision, but it wasn't necessarily guaranteed to turn out that way.
[00:36:24] Alex Sarlin: Fantastic advice throughout there. And I think sharing your own experience and some of the ups and downs, some of the wise words that you've heard along the journey, is helpful to other people as well.
I think a lot of people's careers make sense in hindsight and feel a lot rockier, you know, and more confusing and more volatile during all the twists and turns and changes. I certainly feel that. I think when you're on the other side of it, when you're looking for a big change, I think sometimes people assume that the change is going to be smooth, that you're like, I have moved into this category.
I'll become a salesperson, or I'll go into customer success, or I'll become a product person, and then I'll just be in product, and then I'll do product with this. In edtech, it's really hard to plan very far out. I've certainly found that.
[00:37:05] Adam Franklin: Yeah, absolutely.
[00:37:06] Alex Sarlin: So let me give you the chance. You just said, let's talk about Teaching Lab Studio's model.
It really is distinctive. This is a relatively new world. Teaching Lab has this venture studio, but I think, are you in the first or second cohort of it? It's pretty early.
[00:37:20] Adam Franklin: I am in the first; there were six of us in this first cohort. And yeah, we're the experimental fellows, but it's going well so far.
And like I'd mentioned, just because K-12 isn't minting unicorns like workforce development or healthcare or other edtech-adjacent areas doesn't mean that we can stop innovating there. It's more important for the sake of students, for society's sake, and Teaching Lab Studio has taken up that charge as a source of that patient capital I was talking about.
So Teaching Lab itself is a nonprofit organization that specializes in curriculum-based professional learning, very successful in its own right. Teaching Lab Studio was spun out of that to apply gen AI to developing classroom tools that improve teaching and learning outcomes. And they believe, like I do, that in order to fulfill AI's promise in education, we need to fund early-stage entrepreneurs to embrace R&D work, and that the capital necessary to do so needs to be patient, in that it doesn't demand immediate return on investment from a commercial perspective. There needs to be the time to incubate and demonstrate efficacy so that we can build on something that will actually have a positive impact, in tandem with potentially generating revenue.
And the studio engages with funders, some of whom are impact-driven; others are return-seeking. There's a space for both as far as who's funding our work. They draw on philanthropy, like grants or program-related investments, to subsidize some of those R&D costs, and then later invite those return-seeking investors to support something that's been de-risked to a certain extent.
And what they offer to us, the fellows, is 12 months of runway, essentially, to build, test, and prepare to spin out. And then they'll set us up with VC or other funding to go from there, if and when the time comes that I feel we merit something like that. And as far as how it influences my work, I'm spoiled in the sense that I get to focus on building for impact at this early stage.
I am expected to reflect in my OKRs experiments that are going to prove something that will drive the vision of the product, and I get funding to do that, whether that's for engineering and UX, or for user testing, or for going out into the field and working with teachers and compensating them for their time, because they deserve to be.
So if you're out there, by the way, quick plug: if you're a teacher listening to this, or an administrator, and you're interested in testing, I am very available. Please reach out to me; let's talk about incorporating StudyBuds. But yeah, I'm working toward demonstrating impact and collaborating with some of the other fellows on side projects, and just feeling very grateful to be able to do this work in the environment we're working in.
[00:40:13] Alex Sarlin: So, one other aspect of your work that I find incredibly interesting is that you've taken an idea plucked literally directly out of the education research and said, we know this works. We know it works at least in theory, or at least in the experiments that have been done on it in the past, and maybe it's time. Combining the research-driven concept of teachable agents with the exciting new technological capabilities of gen AI feels like this really brilliant juxtaposition, and I'm curious if you think this is going to be a little bit of a trend.
Is this something that we should be doing as a field: looking back at some of the things that we know work, but that we have never been able to really capitalize on because they're so difficult to do? It cost so much to make multimedia experiences; it cost so much to make interactives.
Some of the things that used to be off limits are suddenly really not off limits. Does some of that research become more exciting again? And if so, you know, how would you recommend people start to think about that?
[00:41:17] Adam Franklin: My answer is like, why not? Why wouldn't you try? LLMs have so dramatically reduced the cost of experimenting with these things that it's wrong not to try.
If you think there's something that could work, it is so easy to try to spin up a hacked version of it. And I'm happy to talk it through with anybody who wants to operationalize an idea they have. We should be testing these things, because the opportunity cost is too great, right? Think of some of the other projects that fellows are working on. One of my peers, Gautam Thapar, he's the man, he started EnlightenAI. He actually just had a baby two days ago, so if you're out there listening, Gautam, congrats, man. We love you. But the value proposition of EnlightenAI is basically: can we take rubrics you have created at the district or school level and train an AI to grade based off of that?
And I'm someone who did grading for the AP test and graded common assessments for our school district. Humans are so variable. When it comes to two humans grading the same thing, there could be a three-point difference on a seven-point scale between our two grades. LLMs are genuinely better at that, to some degree.
And as long as you have a human in the loop verifying, so it's not just a kid getting railroaded by a misinterpretation of something, that is something LLMs, I think, can absolutely take on for educators. And there's got to be more out there than what we're working on. It's exciting in that sense, as far as what's possible.
[00:42:42] Alex Sarlin: That's a great example. I love EnlightenAI. I've worked with Gautam, and rubric calibration has been such a plague on the entire education space for so long, for exactly the reasons you're saying: inter-rater reliability. Oh, I hate even talking about it, but you're right. This is a problem that has been entrenched for so long, and suddenly we may have a way out, or at least a way forward, to make sense of this and be able to do much more sophisticated rubric-based performance test grading. It's really a fantastic example. I recommend that all the educators, anybody who has been reading education research in any capacity over the last ten years, think about, you know, is there a thing where you were like, that is so cool?
I just wish you could actually use it. I wish you could actually do it. I mean, for me, I did my graduate school work on gamification. You mentioned Club Penguin earlier; my master's thesis was about World of Warcraft, Club Penguin, and Second Life at the time, all these avatar-based, quote-unquote metaverse solutions and how avatars embodied learning in this really interesting way.
And I wrote about this at the time, you know: all three of those things are commercial products. None of those were educational products. None of those were developed in a university, because anything like that developed in a university was abysmal. They were proud of it, no offense to them; they worked really hard to do it. But man, compared to anything commercial off the shelf, it was just ridiculous.
So there was just this huge chasm, and I'm so excited that we may be coming back around. There's a company called Stemuli; there are a couple of interesting companies out there, like Immerse.
[00:44:13] Adam Franklin: I love Immerse. I talked to them a long time ago.
[00:44:16] Alex Sarlin: Yeah, they're great. It suddenly is actually possible to make embodied, immersive, avatar-based experiences that are education-focused.
That was my dream when I was in grad school, and suddenly it's possible again. So I hope that everybody shares that really exciting sense of possibility that you're pursuing with StudyBuds. So let me close us out with a general question: what's the most exciting trend you see in the edtech landscape right now that you feel like our listeners should keep an eye on?
Something that's coming that they should really be taking note of.
[00:44:48] Adam Franklin: It's hard to pick one. I have a list of 20 of these. I think for me, as far as things I haven't mentioned yet, I love the work that Snorkl and Edie are doing as far as taking inputs from students that are open-ended or creative expressions and uncovering what's beneath them, as far as what students understand. Because this idea, StudyBuds, is born out of a larger philosophy I have: the issue with the hegemony of the multiple-choice question. I think the standardized-test, military-industrial system of the schools in the United States, it all comes down to tests and data. It can make data really effective, but the actual mechanic is so life-killing, soul-crushing to do that it's invalid in a certain sense.
And so if we can parse student input that isn't circling one of four choices, I think we will have better outcomes, both in terms of the student experience and also genuinely understanding what students do and don't know.
[00:45:45] Alex Sarlin: I couldn't agree more. In our little market map we've been doing, we're calling that feedback on multimodal inputs.
Like, you can talk, you can do a video, you can do an image, a diagram, all sorts of things that are not at all like a multiple choice, and LLMs are sophisticated enough to actually make sense of them, give you meaningful feedback, and give you next steps, as well as, in Snorkl's case, aggregate everybody's videos, put them together, and give the teacher common misconceptions.
It's really cool stuff. I love that. It is incredibly exciting. We should have a beer and talk about
[00:46:16] Adam Franklin: it.
[00:46:19] Alex Sarlin: Adam Franklin was the very, very first guest on EdTech Insiders three years ago. And now he is the CEO and founder of StudyBuds, which can be found at studybuds.org. And he's working with the Teaching Lab Studio alongside a whole series of other really amazing edtech entrepreneurs.
If you want to work with him, if you're an educator, or if you want to find out more about his journey as an edtech-er, reach out. Thank you so much for being here with us, Adam Franklin, on EdTech Insiders. Thank you so much. See you next time. Thanks for listening to this episode of EdTech Insiders. If you like the podcast, remember to rate it and share it with others in the EdTech community.
For those who want even more EdTech Insiders, subscribe to the free EdTech Insiders newsletter on Substack.