Edtech Insiders
Teaching the AI Generation with Mireia Torello of Aikreate
Mireia Torello is the CEO and co-founder of Aikreate, an EdTech company revolutionizing AI Literacy for students and educators worldwide. With a PhD in Earth Sciences and an Executive MBA from IESE Business School, Mireia brings a unique mix of scientific and entrepreneurial experience. She has led Aikreate through partnerships with schools, universities, and governments and has been recognized at top innovation forums like SXSW EDU and ASU+GSV’s Women in AI. Her mission: make AI accessible, ethical, and empowering for the next generation.
💡 5 Things You’ll Learn in This Episode:
- Why AI literacy is a mindset, not just a skill
- How educators can teach ethics, data, and creativity with AI
- The importance of human skills in an AI-driven world
- Ways to train teachers and students to learn AI together
- How Aikreate is expanding global access to AI education
✨ Episode Highlights:
[00:02:32] Teaching AI literacy as a mindset, not just a toolset.
[00:06:34] How Aikreate scaled from a local school to a global platform.
[00:08:37] Training students to solve problems in collaboration with AI.
[00:11:51] Why AI belongs across subjects—not in a separate class.
[00:13:06] Supporting teachers with approachable, synchronized training.
[00:23:57] Balancing safety, ethics, and student empowerment in AI use.
[00:42:01] Helping students become AI pilots, not passive passengers.
😎 Stay updated with Edtech Insiders!
Follow us on our podcast, newsletter & LinkedIn here.
🎉 Presenting Sponsor/s:
Every year, K-12 districts and higher ed institutions spend over half a trillion dollars—but most sales teams miss the signals. Starbridge tracks early signs like board minutes, budget drafts, and strategic plans, then helps you turn them into personalized outreach—fast. Win the deal before it hits the RFP stage. That’s how top edtech teams stay ahead.
This season of Edtech Insiders is once again brought to you by Tuck Advisors, the M&A firm for EdTech companies. Run by serial entrepreneurs with over 30 years of experience founding, investing in, and selling companies, Tuck believes you deserve M&A advisors who work as hard as you do.
Ednition helps EdTech companies reclaim rostering and own their integrations. Through its flagship platform, RosterStream, Ednition replaces costly data providers and complex internal infrastructure with direct, secure connections to any SIS or data source. The result: scalable integrations, lower costs, and seamless experiences for schools and districts of every size. With Ednition: Reclaim rostering. Own your integrations. Cut the cost.
Innovation in preK to gray learning is powered by exceptional people. For over 15 years, EdTech companies of all sizes and stages have trusted HireEducation to find the talent that drives impact. When specific skills and experiences are mission-critical, HireEducation is a partner that delivers. Offering permanent, fractional, and executive recruitment, HireEducation knows the go-to-market talent you need. Learn more at HireEdu.com.
[00:00:00] Mireia Torello: We believe that nobody's gonna pay you for work that AI is already doing. We believe that we have to teach the real value that only humans can bring, like critical thinking, imagination, creativity, and innovation.
And during this scaling moment, we were also part of accelerator programs like MindCAT, which is an edtech program that also helped a lot to design everything and to grow our company.
[00:00:28] Alex Sarlin: Welcome to EdTech Insiders, the top podcast covering the education technology industry, from funding rounds to impact to AI developments across early childhood, K-12, higher ed, and work. You'll find it all here at EdTech Insiders.
[00:00:47] Ben Kornell: Remember to subscribe to the pod, check out our newsletter, and also our event calendar.
And to go deeper, check out EdTech Insiders Plus, where you can get premium content, access to our WhatsApp channel, early access to events, and back-channel insights from Alex and Ben. Hope you enjoy today's pod.
[00:01:12] Alex Sarlin: Welcome to EdTech Insiders. We have an amazing guest for our interview today.
We're talking to Mireia Torello. She's the CEO and co-founder of Aikreate, an EdTech company that's revolutionizing AI literacy for students and educators worldwide. She has a PhD in earth sciences and an executive MBA from IESE Business School, and Mireia brings a unique mix of scientific and entrepreneurial experience.
She has led Aikreate through partnerships with schools, universities, and governments, and has been recognized at top innovation forums like SXSW EDU and ASU+GSV's Women in AI. That's where we first met, at ASU+GSV. Her mission is to make AI accessible, ethical, and empowering for the next generation.
Mireia Torello, welcome to EdTech Insiders.
[00:02:02] Mireia Torello: Thank you, Alex. I really appreciate having this opportunity to be together with you, so I would like to thank you for that. I'm really excited.
[00:02:11] Alex Sarlin: Yeah. No, it's really my pleasure. You're doing something really interesting, and you've been doing it for quite a while, actually. You've been thinking about AI literacy for a few years. Tell us about how you think about this concept of AI literacy. I know you've mentioned that you want it to be a mindset, not just a skillset, not just a toolset, but a mindset. What does that mean to you?
[00:02:32] Mireia Torello: Yeah. So our journey: at Aikreate, the founders, we started teaching AI literacy 10 years ago. We own a family-founded school in Barcelona, and 10 years ago we asked ourselves what skills our kids would need for the future. And then we decided to integrate AI literacy into the curriculum.
We started from a humanistic point of view, and we adapted the whole curriculum ourselves, because at that time there was nothing available. And we believe that teaching AI literacy as a mindset means that we go beyond just learning AI tools or coding. We focus on how to think critically, ask the right questions, and make intentional decisions when working with AI.
And we believe that it's about helping students understand not just what AI does, but how it shapes their lives, their decisions, and even their identities.
[00:03:36] Alex Sarlin: Yeah, their decisions and their identities. You mentioned 10 years ago at the beginning of that answer. I think people listening may say, wait a second, you were doing AI literacy 10 years ago? It feels like so much has changed in that time, and you've been at the forefront of it.
You've been doing this and thinking about it clearly, in a very deep way, for many years. So tell us about the moment when ChatGPT came out and became the fastest-growing technology tool in history, and suddenly AI went from something where, I imagine five or six years ago, you probably had to explain what AI was, what AI literacy even was, to being something absolutely everybody is thinking about.
And, you know, suddenly you were right in the middle of the zeitgeist.
[00:04:20] Mireia Torello: Yeah. So when we started, we had to explain to the parents of the school why we were teaching AI literacy, and the parents didn't understand why we thought it was important. But then, when the boom of generative AI came, parents started understanding why teaching AI literacy was so important.
We also explained to our parents that we were worried about the interaction of our kids with social media platforms. Kids don't understand how algorithms work, how social media works. So we thought, if we explain to our kids how AI is built behind the scenes, then they have the freedom and the choice to use it or not. Because when you explain to a kid why most apps are free, they don't even realize that they are the data, right? So it's good that they are aware, that they understand how AI works and how it affects their daily life, for work, for school, and for personal use.
[00:05:42] Alex Sarlin: Yeah, it makes a lot of sense. And I think, as we've started to enter this AI era in earnest, the idea that AI is going to shape their lives and identities, the way social media shaped their lives and identities, for better or worse, is becoming much more commonsensical. People are like, yeah, we see that coming. It's true, AI really is going to shape students' identities and students' lives, and understanding when to use it, how to use it, what to be skeptical of, what to be excited about, and how it works behind the scenes is incredibly powerful.
So you started within this school, and now you are working with governments, you're working with educational institutions, you're working with businesses. How did you make the transition from a local curriculum to an AI literacy platform that you actually share globally and have created a company around?
[00:06:34] Mireia Torello: Yeah. So we started three years ago, when we set up the company here in the US, and then we decided to work with educators and educational institutions around the world to make a scalable and affordable curriculum for educational institutions. We did that because we thought that if we had strong partners, we could do pilots, and they could test our product and give us feedback, and then we could have a synchronized EdTech platform where users could feel comfortable and could learn.
And also, to mention our mission and vision: it was not only to explain to kids how to use AI, but also to teach them how to collaborate with AI, because we are not preparing them just to use tools. We're preparing them to solve problems and create value in ways that machines can't, because we believe that nobody's gonna pay you for work that AI is already doing. We believe that we have to teach everybody the real value that only humans can bring, like critical thinking, imagination, creativity, and innovation.
And during this scaling moment, we were also part of accelerator programs like MindCAT, which is an edtech program that also helped a lot to design everything and to grow our company.
[00:08:37] Alex Sarlin: So let's talk about that idea, that collaborative idea, because it feels incredibly powerful and also really core to what you're getting at about students and their future. AI is going to be a major part of their lives and identities. We are all coming to the understanding that some parts of every job are going to become automated.
It's starting to feel like that's just inevitable: some parts of jobs will be AI-created or AI-automated. But that collaborative nature of solving problems, and I love the way you said that, solving problems in collaboration with AI, really feels like the skill of the future. Writ large, you can obviously apply that to many industries, to many different types of problems, many different types of work. But the idea of working with AI as what people call a copilot or an assistant just feels absolutely core. I'm curious how you address that in your curriculum.
How do you help students start to understand what the world will look like when they have this incredibly powerful assistant with complementary skills? The human brings the imagination, the creativity, the critical thinking; the AI can do the deep research, it can create content very quickly, it can write enormous amounts. How do you think about that in the curricular environment?
[00:09:56] Mireia Torello: Yeah. We believe that educators are more important than ever right now. We believe that AI is really good at delivering information, but to create knowledge you need the educators, right? So they are really important.
And then, also, AI literacy is not a separate subject. When we work with schools, we treat AI as interdisciplinary. So in language arts, we might analyze how chatbots influence discourse; in history, students could explore how algorithms shape narratives and bias. Bias is a really good topic to introduce in AI. And in math, we explore, for instance, predictive systems and probabilities through machine learning, which is also very important.
And I believe that understanding AI as a multidisciplinary subject helps you understand how everything is connected. It affects, as you said, all the economies and sectors and every field of science. So I think that, because of that, AI pushes us to be more creative than ever.
[00:11:31] Alex Sarlin: Oh, keep going, please.
[00:11:32] Mireia Torello: Yeah, yeah. So AI shouldn't be in a silo. Schools shouldn't say, okay, let's do AI literacy in a silo. We believe that when it's across the curriculum, it empowers kids to see the world, and their role in it, in a more connected, innovative, and responsible way.
[00:11:51] Alex Sarlin: That's a really beautiful vision. I really love that. And I want to talk about how the rubber meets the road when you're proposing something like that, because I love the idea of AI being interdisciplinary, being able to use it within math, within science, within English language arts.
But for a lot of schools and a lot of educators, that's not yet how they see it, right? Especially the concept of AI literacy. You mentioned that AI literacy shouldn't be a subject, but in many cases that is how it's being looked at. There's this concept of, oh, let's give our students an AI literacy course, an AI literacy module, an AI literacy training, and they don't necessarily see it as, no, let's see how they use AI in the context of all of their existing subjects. How do you think that we might, as an edtech ecosystem, help not only push AI education and AI literacy into schools, but really help people get that powerful notion that AI is interdisciplinary, that it's part of any field and should actually be incorporated into the core curriculum?
That's a big ask, especially for American schools, but I'm curious how you propose that and how it works with Aikreate worldwide.
[00:13:06] Mireia Torello: Yeah. So, what we encourage: we believe that educators are the key piece in making this transition. Educators need to be trained, and the key is making AI approachable. Many teachers fear AI because they believe they need to be technical experts, but they don't.
What they need, we believe, is the right support and tools and a safe space to learn alongside the students. So at Aikreate we offer synchronous and asynchronous professional development, so educators can build confidence at their own pace. And we focus on ethical conversations and on practical classroom integrations, not just theory, so teachers feel empowered and not intimidated. As I said before, because AI is transforming everything, educators are more important than ever, and we believe that teachers are the ones who can guide young people in developing the skills they truly need, especially critical thinking, collaboration, empathy, and adaptability.
We do that through our platform. We offer, as I said, this multidisciplinary curriculum covering five areas. We teach ethics, helping students think critically about responsibility, inclusion, and impact. We teach data science, so we explore data: what is data, what is good data, what is bad data, what is bias, and also evidence-based decision making. We teach AI engineering, understanding how models are built and how they work. We teach math, building the foundations for algorithms and problem solving. And we teach coding, but starting from a very friendly block-programming approach: we teach Python, but we start with blocks, which is very friendly, and then the more advanced kids can just code in Python.
So we have created the tools as well. We have the curriculum and the tools to facilitate that in the classrooms.
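To make the blocks-to-Python step concrete, here is a small, hypothetical sketch (not Aikreate's actual material) of the kind of first text-based exercise a student might write after block programming: training a tiny classifier on a toy dataset with scikit-learn.

```python
# Hypothetical first exercise after block programming: the same "train a model"
# blocks a student dragged before, written as a few lines of Python.
# Guess whether a fruit is an apple or an orange from its weight (grams)
# and texture (0 = smooth, 1 = bumpy).
from sklearn.tree import DecisionTreeClassifier

features = [[140, 0], [130, 0], [150, 1], [170, 1]]   # training examples
labels = ["apple", "apple", "orange", "orange"]        # one label per example

model = DecisionTreeClassifier()
model.fit(features, labels)                # learn a rule from the examples

print(model.predict([[160, 1]]))           # ask about an unseen fruit -> ['orange']
```

Running it requires only Python and scikit-learn; the point is that the same drag-and-drop logic now reads as a few lines of code.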
[00:15:56] Alex Sarlin: So I wanted to double-click on what you're saying about the educators learning alongside the students, because I think we're in this really unusual moment. You mentioned that educators may be intimidated by AI because they think it's very technical, and it doesn't have to be. But at the same time, it's pretty clear that most educators are at the same level as their students, or maybe one or two steps ahead.
I'm curious about that. You're talking about synchronized professional development, and I really like that phrase. Tell us about what this looks like, because I feel like that's part of what makes this such a confusing moment for educators: they don't know whether they're being expected, or asked, to jump into this subject and learn a huge amount about it so that they can be ahead of their kids and create new projects and new curricula that incorporate AI, or whether they're being expected to almost protect their classroom against AI, because the kids are using it all the time and they're not sure how to incorporate it. It's sort of an odd moment for educators, I think, in relationship to AI. And your vision of synchronized professional development, where everybody's learning together in the types of subjects you just mentioned, AI engineering and coding and math and data science, is an exciting vision.
So how would you help a school build this synchronized type of learning, where the educators and the students are both learning this really new material?
[00:17:33] Mireia Torello: Yeah, so we have a platform where we offer synchronized training for teachers. And then the teachers can decide to deliver the lessons themselves, which is what we encourage: we want to train the educators so that they feel confident, they have the training, and they can deliver the lessons themselves. But we can also deliver the lessons; we offer both.
And, as I said, you don't need a strong technical background. What you need to understand is how generative AI works, how it's built, how to write a good prompt, what prompt engineering is. You also need to understand data-driven decision making, which means that, at the end, AI is built with data, and how algorithms work. That's something anyone can understand. We are teaching kids with no coding background how to create their own AI models, so we can do the same with educators: we can explain the basics, understanding what AI is, so they can bring AI into the classrooms.
And also, this is a personal opinion, but I believe that we cannot hide a new technology from kids. We have to explain to kids what AI is, because from there kids can use it in their favor. As I said before, critical thinking, innovation, and imagination are more important than ever, and if you understand how AI works, you can use it in your favor, right? You can use your critical thinking and problem-solving skills and turn it into a tool to help you. That is what I believe: at the end, AI is a tool to help you in your everyday professional or personal life, or in your school day life.
I heard from students that they use AI to take the notes from their classes and convert those notes into podcasts, so they are able to better understand the concepts from those classes. If these students understand how AI models are built, they can get more out of those tools and work with them more efficiently and be more productive, and not be afraid of AI. We don't have to be afraid of generative AI; we have to understand how it works. And in the same way, we have to understand what bias is, right?
Because all the large language models that we are using have bias. So we need to explain to the kids the benefits of AI, but also make them aware of what inconveniences it can bring. So I believe that it's more important than ever to train educators and middle and high school students to use AI in their favor.
[00:21:38] Alex Sarlin: It makes sense. When you mention that you've been able to train high school students who are non-technical to train their own AI models, that's a powerful claim, and that's a powerful thing to be able to let people do.
One thing that I noticed as a through line of a lot of the things you're saying is that it's really about empowering both educators and especially young people and students to really embrace these tools, embrace this AI mindset, and, like you said, not be afraid of it, leap into it, and bring your full imagination and creativity to it.
I think that's an exciting vision for how this would work. It's one that I think we're all grappling with, and I tend to agree with you. I think this is unbelievable. The idea that these tools can make almost anything you can conceive much more possible, if not even easy, is just an absolutely crazy thing that we've never had before with technology. I think you could claim that being able to make your own website had some element of that, or making your own mobile app, but this you can do in plain English, without any coding skills or with minimal coding skills. It's really quite incredible.
I'm curious how you see the role of Aikreate in pushing this philosophy of empowering students to embrace this AI mindset and create, to really race into the future. Because there have been polls coming out where parents are increasingly concerned about AI, teachers are increasingly concerned about AI. There's been so much negative press; people worry about the effects of AI. You mentioned hallucinations and bias and ethical issues, and I think those are taking the headlines, while, in my experience, some of the excitement and the power, the empowerment narrative, is getting subsumed.
So I'd love to hear you talk about how you make sure that all the schools and businesses and educational institutions you work with keep that really optimistic future focus. Like, look at the amazing things that we can do now, and that you can do even as a 14-year-old. Tell us about how you keep the optimism and that sense of empowerment.
[00:23:57] Mireia Torello: Yeah, of course. So, when schools integrate AI literacy into their curriculum, we encourage the school to talk to parents and explain that the future workforce is changing, and everybody will need AI literacy, or at least to work with AI tools. If kids understand how AI is built, that means they can work in the future with any AI tool, because they understand the fundamentals. That means that when they go to a job interview, they have more chances to get the job, because AI is replacing those first jobs that students used to be able to get.
Also, in our curriculum, we have designed specific kid-friendly and educator tools to jump from one step to another safely, so they don't start using generative AI right away. There is an introduction course where they can understand how AI is built. We go very deep into ethics so the kids understand the good and the bad of AI. As I said before, we started our project from a humanistic point of view, so we follow the same philosophy. And then, as I said, we integrate all the subjects when it's time for them to be integrated. We are very careful not to push kids to use tools that they are not ready for. We are not going to encourage them to work with AI tools until they really understand what the benefits and what the risks are.
So that's something that we do. And then, as I said, we encourage critical thinking, problem solving, and imagination, because it's something that sooner or later they will face. I think a lot of kids are using AI in their personal life even if they are not using it at school, because maybe it's not allowed there. But then we are in the same situation, right? I think it's even more dangerous, because these kids are using it without even understanding how it works. And we always ask our kids: do you say good morning to your microwave? No, you don't. So why do you say good morning to ChatGPT, or why do you say please?
Kids need to understand that it's a computer program, right? Because generative AI feels like you are talking to another human, and maybe you have an app that behaves as if it is your friend; you think you are talking to a friend when you are not, you are talking to a computer program.
So we believe that, even if school is not teaching that part, because it's not possible at that moment, kids need to understand what AI is in order to be safe, and also so they don't use it for bad purposes. With generative AI it's so easy to change pictures, to fake things, that it's good for them to understand the consequences, and also what is right and what is wrong. That's something that we are teaching. And I believe that educators don't need to be afraid, because at the end, once they have the training, they are in control.
For instance, with our tools, we have our fine-tuned LLM with our whole curriculum inside. So it's a frame, right? Kids cannot talk about anything outside our AI literacy curriculum. So we have the tools for the educators, and we also have our app. It's a self-learning app, but it holds the whole curriculum that we talk about in AI literacy. And as I said, we empower educators to feel safe and in control when they are delivering the lessons.
And I would say that even if kids are not getting this training from schools or after-school programs, they need to have it, because we cannot hide AI anymore. All kids are being exposed to it, for instance through recommendations: when they are watching Netflix, why is Netflix showing you recommendations? Kids should understand why. Or on YouTube, or TikTok, or whatever. They need to understand how these algorithms work in order to be free. Because if Netflix is giving you a recommendation, then you are missing thousands of other choices that you could have. And you need to be free, to understand why this is happening. So I believe that training in AI literacy makes kids more free.
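Aikreate's fine-tuned LLM isn't described in detail in this conversation, so purely as an illustration of the "framing" idea Mireia mentions, keeping a classroom chatbot inside the AI literacy curriculum, here is a minimal sketch. The topic list, the system prompt, and the ask_model placeholder are all assumptions made for the example, not the real implementation.

```python
# A minimal, hypothetical sketch of "framing" a classroom chatbot so it only
# answers questions inside an AI literacy curriculum. This is an illustration,
# not Aikreate's implementation; ask_model() is a placeholder for whatever
# fine-tuned model or API the platform actually uses.

CURRICULUM_TOPICS = {
    "ethics", "bias", "data", "dataset", "privacy", "algorithm",
    "model", "machine learning", "prompt", "training", "python",
}

SYSTEM_PROMPT = (
    "You are a classroom tutor. Only discuss the AI literacy curriculum: "
    "ethics, data, bias, how models are trained, and responsible use. "
    "If asked about anything else, politely redirect the student."
)

def in_scope(question: str) -> bool:
    """Very rough gate: does the question mention a curriculum topic?"""
    q = question.lower()
    return any(topic in q for topic in CURRICULUM_TOPICS)

def ask_model(system_prompt: str, question: str) -> str:
    # Placeholder standing in for a real model call.
    return f"[model answer to: {question!r}]"

def classroom_tutor(question: str) -> str:
    if not in_scope(question):
        return "Let's stay on our AI literacy topics. Can you connect your question to them?"
    return ask_model(SYSTEM_PROMPT, question)

print(classroom_tutor("What is bias in a dataset?"))
print(classroom_tutor("Who will win the football game tonight?"))
```

A production system would combine this kind of scoping with the fine-tuned model itself and with proper content filters; the sketch only shows where the "frame" sits relative to the student's question.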
[00:30:15] Alex Sarlin: Yeah, that's a powerful perspective. One thing that you've brought up a couple of times, and it always sticks with me, is that in the history of technology we are now coming to the end of an era which was all about social media.
It was the main technology for teenagers. It was the way young people were exposed to each other, to a huge amount of content, to different types of lifestyles and mindsets. And I think there's starting to be somewhat of a consensus that social media has been maybe a net negative for young people: lots of anxiety and depression, and it has caused lots of problems. Something we mention on the podcast a lot, and certainly a belief of mine, is that that framework, of "oh, social media came along, we thought it was harmless or positive, and then it turned out to be something we really didn't want for our young people," is really having a lot of effect on how people think about AI.
I feel like there's a lot of concern that this is a new type of technology, it's ubiquitous, kids are using it, as you said, all the time. They're using it for social purposes, as you mentioned, for AI friends and companions. And I think it's triggering a lot of things in parents and educators, saying: we don't want to get fooled again. We don't want to embrace this technology thinking it's going to be positive and then, in 10 years, realize it caused a huge wave of depression or suicides or bullying, some of the negative things we've seen from social media. I'm curious how you see that.
Do you see that in your current customers or in their conversations? How do you see the parallel between social media, which I feel like is the tech era that's now starting to end, and this AI era, which is starting to begin? How should we think about them next to each other in terms of their effects on children and young people?
[00:32:12] Mireia Torello: Yeah. So when we started our project in our school 10 years ago, one of the main reasons we started was that we saw the problem of social media dependency among our students, especially for girls. And we thought: what if we explain how these platforms work? At that time it was Instagram, right? So we explained how Instagram works.
We also combined this with Socratic dialogues, so they could express how they were feeling about their interaction with social media. And when we explained to them that an algorithm was positioning the posts and how this was affecting their feelings, these kids changed the prism through which they were seeing social media platforms, and some kids even decided to quit those platforms. This is how we started; we were very worried about our kids, and this was one of the main reasons.
Now, when we are teaching, we explain how an AI model works. We explain to the kids the AI pipeline, so they can build their own AI models and their own AI projects and understand how algorithms work. For instance, we had students who were building projects based on personal interests, and they were figuring out where to collect the data, what data privacy is; they were playing with datasets: is this a good dataset, is this a bad dataset, does this dataset have bias? And at the end they were also deciding what kind of model and what kind of algorithms to use. So when they can put the AI pipeline to work in their favor, they can integrate all of it. For instance, we had one kid who created a solution to support learners with dyslexia.
[00:34:59] Student 1: Together, we're working on Value, your AI-powered HR assistant. But before we start, I want you guys to imagine something. Imagine you're a small business owner. What does every small business or successful business need? Talent? No, the main thing is workers. But hiring nowadays is really flawed, because for small HR teams, manual resume screening requires time and resources, and it may miss some top candidates. And HR teams are unable to track productivity due to insufficient data. Some of the solutions to this problem that Ave brings: we will shortlist the top candidates...
[00:35:45] Speaker 4: ...and provide a list for the business owners to look at and see who's going to be their future employees. Real-time productivity tracking using KPIs of the employees, so they won't burn out, or so you can see if somebody's working too hard or too little. Benchmarking data giving the reasons for the shortlisted top candidates, and AI recommendations on who's the best and who's the worst.
[00:36:12] Student 1: According to the SBA, there are 33.3 million small businesses in the USA in 2024. They are going to require our services because it will save time and resources. It will also track the performance of every worker in the company and identify top talent for companies to hire.
Our AI pipeline consists of five steps. First is collecting data. As we launch, we're going to be collecting data from public sources such as GitHub, Kaggle, and other services. And as we expand, we're going to be contracting partnerships with different companies for them to provide us their resumes and databases.
After that, we'll be using LLMs to sort out any sensitive data. And we'll be training supervised models, regression and classification, to rank candidates from best to last, and also labeling them as fit for the job or not fit. We'll be testing the model, checking for bias, and then deploying. So from an ethical standpoint, our company is going to be clean; it's going to be transparent with the employee and the employer.
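The supervised ranking step the student describes, labeling candidates as fit or not fit and ranking them, maps onto a standard classification workflow. Purely as an illustration (this is not the students' code, and the features and labels below are synthetic), a sketch with scikit-learn might look like this:

```python
# Illustrative sketch of the supervised ranking step: label candidates as
# fit / not fit, train a classifier, and rank by predicted probability.
# The features and labels are synthetic; real resume screening raises exactly
# the bias and privacy questions discussed in this episode.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Made-up features: [years_experience, skills_matched, relevant_projects]
X = rng.integers(0, 10, size=(200, 3)).astype(float)
# Toy labeling rule: "fit" when experience plus matched skills pass a threshold.
y = ((X[:, 0] + X[:, 1]) > 9).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clf = LogisticRegression().fit(X_train, y_train)
print("held-out accuracy:", clf.score(X_test, y_test))

# Rank the held-out candidates by predicted probability of being a fit.
scores = clf.predict_proba(X_test)[:, 1]
ranking = np.argsort(scores)[::-1]
print("top 5 candidate rows:", ranking[:5])
```

Ranking by predicted probability is also where the bias checks the student mentions would plug in, for example by comparing scores across groups of candidates before deployment.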
[00:37:26] Mireia Torello: And he was able to create the whole pipeline: where am I going to collect the data, how am I going to work with this data, how am I going to train the model, and what algorithm am I going to use.
And he combined two things. He understood the AI pipeline, but at the same time, what stood out was his empathy and desire to improve the learning experience for other kids like him, because as a kid he had a problem with dyslexia.
So he thought: how can I help future learners? We believe that if we can explain what's behind the scenes, how AI works, then kids stop thinking that AI is magical. I think that's the problem with kids: they think that AI is magic, and it's not. At the end, it's data plus algorithms plus probability. When you are able to demystify this idea of AI being magic and explain each concept, and when kids are able to integrate each of these concepts into their projects and then present those projects, or even create a prototype, that's something very powerful. And the feedback we had from our students is that, when they started our training or our courses, they thought AI was magical, and by the end of the program they were really proud of what they had built.
We also explain to them that when they build their models, they have to think, for instance, about where they are going to get the data. That's where you start thinking about data privacy, who your users are, and how you are going to get the data, and they can also connect that to the apps they are using. So it's a full exercise: once you do it and you learn it, you can apply it in every area of your life where you are using AI.
[00:40:09] Alex Sarlin: It's such an interesting perspective, because I feel like what I'm hearing you say, in lots of contexts, is that understanding how the systems work from the inside really sets kids free, as you say, and it empowers them. It gives them the opportunity to use these systems in their favor.
So if they learn that the Netflix recommender is doing a certain thing, they can decide how to react to that, or whether to agree with it or ignore it. If they understand that Instagram or YouTube are doing certain things to push them in certain directions because of the algorithms underneath them, they can decide how to react to that, or how to game the algorithms, or how to get on top of it.
And then you come to this modern era with AI. If you understand these systems, you understand the data, you understand algorithmic thinking and how all the pieces fit together, then not only can you not be fooled by it and not think it's magic, but you can actually use it. You get this enormous robot power suit, that's how I picture it. You can do all of these things: create videos here, create apps there, solve problems like dyslexia, maybe address climate change issues, some of the things that seemed really intractable. Suddenly you have this incredible set of powers to do that.
And you have the power to be free of some of the commercial applications that may be trying to convince you that it's magic, or convince you that your AI girlfriend is real. Those are important things to avoid in this modern age.
That's a really powerful technique. And I've heard you say that you feel students should really be the pilots of their experience, not the passengers. I'd love to hear you expound on that, because it feels like this is all part of your overall philosophy, and it's a really important one right now.
[00:42:01] Mireia Torello: Yeah, yeah. So we believe that if we empower students and teach them to be AI literate, they can be pilots in this new AI era instead of passive passengers just using AI tools that someone else built. And on top of that, ethically and responsibly: we want to create ethical leaders, right? That's something we are also very concerned about.
[00:42:39] Student 2: Every year, 1.3 billion tons of food is dumped in landfills; that's around $1 trillion annually. I'm the founder of Smart Bytes, a solution to this growing problem. When food gets dumped in landfills, it creates a harmful gas called methane. Methane traps heat in the planet and increases the effects of global warming.
Our app uses AI: it takes user-inputted ingredients and generates recipes based on those ingredients. This reduces food waste, because instead of throwing out the excess ingredients, you can incorporate them into new recipes. Our target age group is college students who just moved out of their houses and are in desperate need of both nutritious and easy-to-make recipes.
The first step of our AI integration is data collection. Our data collection will occur in the form of recipes. We'll be taking recipes from publicly available datasets online, such as food.com. Next, we have data processing. During this stage, one crucial bias that we avoid is selection bias. Selection bias occurs when the data being inputted into the AI model is unrepresentative of the world population, causing skewed and biased results. In order to combat this bias, we'll be incorporating recipes from a variety of cultures, each proportionate to the population. Next is model selection and training. We'll be using clustering and generative AI as our machine learning models. We'll be using unsupervised training: we'll cluster together recipes that have similar ingredients and use those clusters to generate recipes using generative AI.
Finally, for our model deployment, we'll be deploying the product in the form of an app that users can download on their phones, and it will be using data-driven decision making through prescriptive analytics, in which it takes in data, which are the ingredients, and generates recipes, which is the action.
Here is an example of someone using our product. As you can see, you can easily add in the ingredients, and you can even add an expiration date for each one. Once you add it, at the top it'll say how many are about to expire and approximately how much money you save. When you scroll down, you can choose a certain culture and generate the recipe; at the bottom, it'll show your environmental impact.
From our competition analysis chart, we can see that Smart Bytes is the only app out of all of our direct competitors that has all of these incredibly useful features. Our indirect competitors include e-commerce apps such as HelloFresh and meal-planning apps such as Milan. But even then, we hold a sustainable advantage because we're the only ones with all of these features.
Next is our business model. Our business model is business-to-business-to-consumer, because we want to partner with a larger food company such as Whole Foods. Our distribution strategy is installing advertisements in shopping carts, which will lead to our product; we want to do this because we want to increase demand for our product. For pricing, our product will be available to download and use for free, but there's a premium version which you can access for $1.99 a month, and it'll include exclusive features such as collaborative grocery tracking lists, and you can also get bonus content. Our team members consist of a CEO, CTO, business strategist, marketing lead, and UX designer.
All of these roles are incredibly vital to the success of our company. Overall, Smart Bytes makes it easier for households to plan their meals, and it's also very good for the planet because it's sustainable. Our unique features give us an advantage over all of our competitors. And our ask from our investors is connections, because we want to network and grow the demand for our product before we go to market.
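The clustering step in the Smart Bytes pipeline, grouping recipes with similar ingredients before generating new ones, can also be sketched in a few lines. This is only an illustration of that idea, using made-up recipes and a TF-IDF plus k-means approach; it is not the students' implementation.

```python
# Illustrative sketch of clustering recipes by ingredient similarity, the
# unsupervised step in the pipeline above. The recipes are made up; a real
# system would pull them from public datasets such as food.com.
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer

recipes = {
    "veggie stir fry": "broccoli carrot soy sauce garlic rice",
    "fried rice": "rice egg soy sauce carrot peas",
    "tomato pasta": "pasta tomato garlic basil olive oil",
    "margherita pizza": "flour tomato basil mozzarella olive oil",
    "banana smoothie": "banana milk yogurt honey",
}

# Turn each ingredient list into a TF-IDF vector, then cluster with k-means.
vectorizer = TfidfVectorizer()
X = vectorizer.fit_transform(recipes.values())
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)

for name, cluster in zip(recipes, labels):
    print(f"cluster {cluster}: {name}")
```

Recipes that share ingredients tend to land in the same cluster, which is the structure a generative model could then draw on when proposing a new recipe from whatever the user has on hand.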
[00:46:22] Alex Sarlin: So let's talk about the ethics for a second, because this is something that I wrestle with a little bit. I promised the last question was about social media, but I do find the parallel interesting. If you're a 16-year-old who gets really powerful lessons about how Instagram's algorithms work or how YouTube's algorithms work, there are a couple of ways you can approach that information, right?
One is: hey, they're trying to feed me extremist information, or they're trying to keep me in a filter bubble, and I'm going to avoid it. I'm going to start three different accounts that are different, or I'm going to turn the whole thing off altogether, because it's trying to manipulate me. Or you could say: hey, you know what? I'm going to become an Instagram influencer. I've figured out how the algorithms work; I'm going to get a million followers by doing all the things that get me noticed on Instagram. And I feel like the same logic is potentially true with AI, right? Knowing how to use the tools, knowing how to scrape data, knowing how to build your own models, knowing how to build your own applications: it's powerful, and that power can be used in ethical or unethical ways. I know ethics is absolutely core; it's the first thing you named as the core of what Aikreate does. But I'm curious how you're approaching this tricky question of giving students these incredible powers and, at the same time, making sure they don't go, oh, wait a second, I can do this thing, maybe I'll do it for nefarious purposes, because it really is enormously possible.
[00:47:56] Mireia Torello: Yeah, that's something that, well, we make sure we explain to kids what data privacy is and how you can be affected if you use AI negatively. It runs through our whole curriculum, because, as I said, we don't want bad uses of AI, and as you said, it can be very easy once you know how to use it. But I believe the consequences are much stronger if you don't understand AI than if you do. And if you do understand it and you use it in a wrong way, sooner or later you are going to face the consequences.
So we always encourage kids to use AI ethically, and we always ask them: how would you feel if someone were using AI in an unethical way? We cannot control how students use AI outside our classrooms; that's something I don't have control over. But I believe that if you have the knowledge and you have a vision, something that is related to your heart, you can easily achieve it today with AI, and that's what we want to empower for every kid.
Then we teach ethics, and of course, at the end, it's every student's choice how to use AI. Even without the training, you can learn how to use AI by yourself and then use it wrongly. What I'm trying to say is that kids, one way or another, can get the information and can use AI in the wrong way. But if you have AI literacy and you understand the consequences of not using AI correctly, then this becomes a moral question that every kid should think about. Our mission and vision is to make sure that kids understand the right and the wrong uses of AI.
[00:50:57] Alex Sarlin: Yeah. It makes me think of something: a couple of months ago I spoke to a consultant who focuses on AI deepfakes and on helping schools figure out how to respond to them. And it stuck with me, because it's just such an intense, scary section of the AI world. But what I'm hearing you say is this.
You mentioned earlier that kids can learn about AI in school, or they can learn about it from commercial apps, right? Because it's true, they're already using it in all these different ways. And if you go to the app store right now and search for AI image creation, you're going to find a huge variety of different things, some of which are literally designed for some pretty strange purposes, and they're commercial and they look legitimate. They're made by real companies; they have logos. So what I'm hearing you say is that if you are in school and you're learning AI literacy, and you're learning what image generation or manipulation looks like in AI, how it works, the diffusion models, all of that, and at the same time you're learning that there are use cases that are so out of bounds that they can literally have major legal consequences, then learning both at the same time feels very powerful. You're learning what's possible while simultaneously learning what's both morally and legally problematic, and that adds so much context to what a student is actually doing, rather than students stumbling on this stuff on their own and having no context for what it's doing, why it's doing it, how it's doing it, and whether to do it or not.
Putting all the pieces together and understanding how these systems work and when and how to use them ethically and safely: I know I'm putting words in your mouth a little bit, but is that sort of how you see it? That simultaneously they're learning the systems and the ethics?
[00:52:50] Mireia Torello: Yes. And on top of that, I would say that if a kid comes across one of these tools and can go to their educator and say, listen, I found this tool, but I don't feel confident about it, and they have the freedom and they have someone they can go and talk to about it, I think it's much safer than just avoiding AI literacy.
[00:53:16] Alex Sarlin: Right, right. Safer than a school saying anything that happens outside the walls is not our business, this stuff is not in our purview, we teach English and math, we don't think about this kind of thing. That feels like an avoidance of responsibility for these incredibly powerful tools that kids now have access to all the time.
[00:53:38] Mireia Torello: Yeah. And also, the school has an AI policy, so if a kid gets involved with, I don't know, a fake image, if a kid becomes a victim, they can get support from the educator and from the school as well.
And I think that AI is evolving super fast, and unfortunately educators haven't had time to follow the path, because it has been like an explosion, right? I think now is the right time for educators and for schools to start learning how to implement this, how to incorporate AI literacy into the school, into the classrooms.
So that in the future, when kids come across new AI tools, they have someone they can go and talk to about how they feel, and about safety and privacy concerns. I don't think they have that figure today, but I would love for all kids to have that figure, someone they can go and ask when they have a doubt.
[00:54:59] Alex Sarlin: Yeah. So one thing I want to ask you about: you're based in Boston, is that right? Aikreate, yeah. You've been doing this for many years; you've been doing AI literacy since before most people knew what AI literacy was. And as you've now expanded, in different parts of the US, internationally, and across different types of organizations,
I'm curious if there's a profile of a type of organization, or a type of person within an organization, that is really the advocate for AI literacy. Are private schools moving faster or slower than public schools? Are countries in Europe or countries in South America moving faster or slower than the US?
I'm curious how you see the landscape for AI literacy and who's on the forefront of it.
[00:55:46] Mireia Torello: Yeah, we are seeing private schools moving faster than the public system, but that's something I believe is going to change. I believe there is a lot of hunger in the public system to integrate AI literacy. And in Latin America we are growing fast as well.
I believe it's mostly private schools for the moment. They are very concerned about how to integrate privacy and safety frameworks around AI for their students. We have also been working with the government in Panama, and we are expanding into Asia as well; in China, AI literacy is now mandatory.
[00:56:53] Alex Sarlin: Right.
[00:56:53] Mireia Torello: Yeah.
And in Europe, yes, in Europe we are working in Spain, because our platform is available in Spanish and in English. And we are starting in Israel as well.
[00:57:09] Alex Sarlin: In what language did you say? In Hindi?
[00:57:12] Mireia Torello: Israel.
[00:57:13] Alex Sarlin: In...
[00:57:14] Mireia Torello: Israel, yeah.
[00:57:15] Alex Sarlin: Oh, fantastic, yeah.
So for Israel.
[00:57:18] Mireia Torello: Yes.
[00:57:19] Alex Sarlin: Yes.
[00:57:20] Mireia Torello: Yeah. So I believe the landscape is changing, and as for the public system, I'm sure they're going to integrate it quite soon.
[00:57:43] Alex Sarlin: Yeah. You know, when I hear you say that people are very concerned about the privacy and safety piece, it makes me realize there's an interesting move here, and I'd love to hear you react to this. It feels like schools could come to AI literacy as a rearguard action, right, for the safety.
They say: we need our students to be using this stuff ethically, to not be doing deepfakes, to not be scamming people, to not be falling for scams, to not be cheating on tests. They're worried about all of these different AI use cases. Maybe AI literacy comes into schools as a safety precaution, but then, especially with a platform like Aikreate, you're all about empowerment: systemic thinking, critical thinking, imagination, creativity, problem solving, collaboration. I wonder if that's a pattern you think is going to happen, where, especially for public schools, but really for any kind of school, AI literacy is brought in quickly as a sort of defense mechanism against AI.
But then, once it's there, it actually is empowering and creates all these amazing things: students turning their notes into podcasts, training their own models, doing these amazing projects. Is that a pattern you have seen personally, or that you expect to see?
[00:58:57] Mireia Torello: I have heard that in some private schools, the parents are very concerned about privacy and about their kids misusing AI. So one of the reasons they brought in AI literacy was to protect kids, right? To put the kids in a context where they are safe.
But at the same time, we are also training these kids to use AI in their favor, precisely because we have this framework. So schools can feel safer. As I said before, AI is growing super fast.
With that, some principals and teachers are feeling that their students are not safe anymore, and that's one of the reasons they brought us into the school. But we don't stop there; we are also teaching kids to use AI in their favor. So we are doing both.
[01:00:10] Alex Sarlin: Right. My co-host Ben always says abstinence education never works. It's not about stopping kids from using AI; it's about giving them the toolset and the AI mindset to actually understand AI, use it in their favor, not fall for anything, and not use it in unethical ways.
To actually know how it works and be able to do amazing things with it. That's a pretty powerful vision. And it feels like a little bit of jujitsu, right? A school or a state or a country says, oh, we're worried about AI, and you say, well, we're going to come in and get everybody to love AI and use it in all these amazing ways.
[01:00:51] Mireia Torello: Yeah, and it's working. Schools are happy, kids are learning, and if you ask a kid, do you want to learn what AI is and how to build with AI, they say yes.
[01:01:06] Alex Sarlin: Yeah, I bet they do.
[01:01:07] Mireia Torello: Yeah. They're ready, they are fully involved in learning, and they go fast. I think it's amazing.
And as I said before, when they show up to a job interview or to college applications, some students ask us for recommendation letters or certificates. When they can demonstrate that they are able to use AI in their favor, ethically, of course, and as a responsible leader, then they have a better opportunity to get into college, to get their first jobs, and of course to be protected from other apps or tools they come across in their personal life, because they understand how these AI tools work.
[01:02:14] Alex Sarlin: It's a really exciting vision. So, very last question. In the last two years, I think we've seen an evolution of the concept of AI literacy, and a lot of different people have put together different frameworks, including ones coming from a number of different nonprofits. I'm curious how you see that landscape evolving, especially as you have different models.
You have your five classes with ethics, AI engineering, and coding; others have different structures. How do you see the AI literacy space evolving over the next year? Do you think people are going to combine, or will one paradigm or framework become the dominant one in a particular region or country? How do you think this is all going to work together?
Right now it seems like a bit of a Wild West.
[01:03:01] Mireia Torello: Yeah, I agree with you. You see a lot of research, but I think most of it, at least what I have come across, is not very educator friendly, right? Educators need to invest a lot of time to understand what a lesson is or what they have to deliver.
So I hope that in the future educators can come across tools that help them. In our case, we combine the training with our tools, so it's easier for educators to have everything in one spot.
I believe, and hope, that in the future educators will have more resources that are friendlier for them to integrate into the classroom, and that more and more resources are going to be available. Of course, I will always encourage people to use resources that are teacher friendly. So if you want to develop something, talk to teachers, make sure teachers are comfortable and can adopt it easily, because otherwise it's very hard for them to invest the time and resources to understand one tool and then switch to another. Today you come across different tools for different purposes, and I hope that in the future we can find all the tools that help educators deliver the lesson in one place.
[01:04:55] Alex Sarlin: Yeah, educator first, right? Think about the delivery and the educator first when you're thinking about a framework or any kind of AI literacy project. You don't want it to come completely top down, or from a research organization, without a way to be incorporated into the classroom.
I think that's a very solid principle to follow. Mireia Torello is the CEO and co-founder of Aikreate. That's Aikreate with a K, so AI, then K-R-E-A-T-E, an EdTech company revolutionizing AI literacy for students and educators worldwide. Thank you so much for being here with us on EdTech Insiders.
[01:05:35] Mireia Torello: Thank you, Alex. It has been a pleasure, and I really thank you so much for this opportunity. Thank you so much.
[01:05:42] Alex Sarlin: Thanks for listening to this episode of EdTech Insiders. If you like the podcast, remember to rate it and share it with others in the EdTech community. For those who want even more EdTech Insiders, subscribe to the free EdTech Insiders newsletter on Substack.