Edtech Insiders

Special Episode: Generative AI in Education, Learn from the Pioneers with Ben Kornell of Edtech Insiders, Amanda Bickerstaff of AI for Education and Charles Foster & Steve Shapiro of Finetune at SXSW EDU, March 2024

Alex Sarlin and Ben Kornell

Missed the live panel at SXSW EDU? Tune in to our special episode where Ben Kornell of Edtech Insiders hosts a compelling discussion with AI thought leaders Amanda Bickerstaff of AI for Education, Charles Foster, and Steve Shapiro from Finetune. Since ChatGPT's launch in November 2022, these trailblazers have been integrating generative AI into educational contexts, pioneering new approaches and solutions.

Revisit their insightful conversation on the innovative applications they're spearheading, and the exciting opportunities and significant challenges ahead for educational technology. This recorded session offers valuable perspectives for educators, tech enthusiasts, and anyone interested in the future intersection of AI and education. Don't miss out on these expert insights—perfect for those keen on understanding where education technology is headed next!

Alexander Sarlin:

Welcome to Season Eight of Edtech Insiders, where we speak to educators, founders, investors, thought leaders, and the industry experts who are shaping the global education technology industry. Every week we bring you The Week in Edtech: important updates from the edtech field, including news about core technologies and issues we know will influence the sector, like artificial intelligence, extended reality, education politics, and more. We also conduct in-depth interviews with a wide variety of edtech thought leaders and bring you insights and conversations from edtech conferences all around the world. Remember to subscribe, follow, and tell your edtech friends about the podcast, and to check out the Edtech Insiders Substack newsletter. Thanks for being part of the Edtech Insiders community. Enjoy the show. In this episode of Edtech Insiders, we feature a panel with AI leaders that Ben Kornell led at SXSW EDU in March: Generative AI in Education, Learn from the Pioneers. Generative AI was unleashed on the world in November 2022 with the release of ChatGPT, and our panel is comprised of some of the early pioneers working with this technology in the education space. This conversation features Amanda Bickerstaff from AI for Education, Steve Shapiro from Finetune, and Charles Foster, AI lead at Finetune.

Ben Kornell:

Ben Kornell. I'm co-founder of Edtech Insiders. I'm also managing partner of Common Sense Growth Fund. All of us up here have been following generative AI, but some of us up here have actually been working on generative AI and the predecessors of generative AI for years. So we're going to do a little past, a little present, a little future of what folks are working on, what they're excited about, what they've learned. And then around the half-hour mark, we're going to flip it, and you're going to be able to ask questions. Now the rules are: if you're sitting in the section from the microphone up front, you can heckle us. If you're back behind the mic, you cannot heckle us, so feel free to move around. And questions can come from anywhere. All right, so to get started, we're going to do a quick intro of the panelists, we'll get a sense of who's in the room, and then we'll dive in. So I'm gonna pass it over to Charles. Please introduce yourself.

Charles Foster:

Hello, can you hear me? Great. Everyone, Charles Foster, I am the lead AI scientist at Finetune, which is a Prometric company. So I get to wear the AI scientist hat on the panel, which is a role that I enjoy taking. And yeah, I am out in the Bay Area, and I've been in the AI space since before we were even calling this generative AI. So it's exciting to be able to share some of my experiences, and also learn from you all who are out in the field about what you're seeing in AI in education.

Amanda Bickerstaff:

Hi, I'm Amanda. I'm the CEO and co-founder of AI for Education. I'm in New York City, but right now it feels like I live on an airplane. We are focused fully on the responsible adoption of AI in schools, specifically generative AI. We're the newbie of the group; we did not exist before ChatGPT existed. We've now trained about 60,000 educators and leaders since last August on generative AI, and we work with schools, districts, and leadership associations, not just in the US but around the world. So happy to be here.

Steve Shapiro:

Hey, everybody, this is Steve Shapiro. I'm the founder and CEO of Finetune. I've been an entrepreneur in the edtech space for most of my adult life; this is actually the third company that I founded, and this one's the most interesting. Just so you know, we had been in the assessment business. We're known for building an assessment platform for the College Board's Advanced Placement division, and I'm very proud to say that the platform we built for them, known as AP Classroom, was one of the first platforms to really look at free-response questions and the idea of what is inherently subjective, and how you can turn what is inherently subjective into something that's more actionable data. Around 2018, very early, we stumbled upon this company called OpenAI that was just getting started, and we started to do some research and experiments with them. We spent a couple of years doing research, and then began to invent two products using large language models that we took to market, filed for patents on, and believe to be kind of leading in the space in a few different use cases that we'll talk about later. So really excited to be here. Ben mentioned that this is happy hour, but more importantly, we're also hosting a happy hour after this, and we welcome all of you. I thought I'd be popular.

Ben Kornell:

So before we get out to the crowd: I'm Ben Kornell, like I said before, and Edtech Insiders is a newsletter, podcast, and community event platform for edtech innovators. We've been focusing on AI, because that's been the wave that we've seen, and we just interviewed Sam Altman last week to talk about education and AI, so please check that out. You can check out our Substack at Edtech Insiders. Before that, I was a middle school teacher and also an elected school board member. So I wear a number of hats, from being super excited about AI to being desperately afraid, and they're all inside my head. So with that, we're going to survey the crowd. Raise your hand if you are an educator. All right. How many of you have used ChatGPT before? All right, we've got a pretty informed group. How many of you have used Google Gemini before? All right, pretty impressive. Let's go with higher ed: any higher education folks here? All right, showing up. K-12 folks? All right. Any bucket that I did not include, which could be early childhood or workforce? Anybody in early childhood or workforce? Nice. All right. How many of you are working for for-profit companies? How many of you are working for governmental organizations like schools or states? All right, pretty good. And how many of you are working for nonprofits? Wow, that's about a third, a third, a third between governance models. Very, very interesting group. All right, so we're going to jump in here, and be ready with your questions, because we're going to make this super interactive. So the first question is just for our panelists: tell us a little bit about your journey, specifically with AI. Charles, we're going to start with you.

Charles Foster:

So I think my background is gonna be a little bit atypical. I've always known that I wanted to do something like AI. I think it was when I was maybe in high school or middle school, I listened to a Radiolab podcast where a professor, David Cope, one of the early pioneers in AI for music generation, had built this program that could take in pieces by Bach and, like, create new versions of them. And as soon as I heard that that was even a thing that was remotely possible, I was like, I'm hooked, I want to do that, I want to be part of that, because I've been a musician for sort of all my life. And so I found myself in the space of AI, wanting to learn about neural networks, wanting to learn about these machine learning techniques, which were at the heart of being able to do creative things with computers. And starting around 2018, 2019 is when we really started to see, within the field of neural networks and the field of artificial intelligence, the first of what I consider glimmers of "this could be a creative aid; this is useful enough that real artists are finding use in these tools." And so that's how I started getting into the same world that the OpenAIs and the GPTs came out of; it's like a rich history of neural networks. So I was involved in that on the research side. And finally, around 2021, I want to say, is when I found Finetune, which was starting to apply these techniques, these large language models born out of that deep, rich tradition of neural networks, in a domain that actually made sense. I think if you look at AI in education, or really anywhere, you're gonna see 99 places where AI just doesn't make sense to be used; it's just a gimmick. But then you find the one case where it actually is going to help to automate or augment some process, and you're like, yes, that makes sense, this is a real problem. And that's what my experience was. It was a marriage of: this is a real problem in an industry that I care about, and it's also a use of technology that I have deep expertise in. Because, you know, I've seen it grow from the early days, when it was not working, to now, when it's really working beyond anyone's wildest dreams.

Ben Kornell:

And before we go too far, just from a translation standpoint, can you tell us what a neural network is? And then also, is the AI that we're experiencing in generative AI a parallel to how the brain works, or is it actually some other process?

Charles Foster:

So that's a tricky question. When I say neural network, all that I really mean is that there are techniques that were originally inspired very loosely by the principles of the brain. If you think about the brain at a high level of abstraction, you have a large number of neurons, or simple processing units, that, connected together, can represent information, can store your memories, can let you do things and interact with the world, and make up a lot of what makes your mind what it is. And neural networks were a particular type of machine learning technique that uses these connected, very large data structures in order to represent information and process it. So, you know, the whole history of the neural network tradition within AI is about how you build automated ways of training these systems and training the interconnections, so that they represent the information that you want them to represent. There's been a consistent tradition from like 1947 or so all the way up until now of the neural network tradition within AI. You'll have seen other waves, other kinds of AI that were popular, but the current wave has those particular origins of being patterned loosely, loosely, loosely on the brain. In many ways it's very different, but it inherits some of those principles.
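As a rough illustration of what Charles is describing, here is a minimal toy sketch of a neural network: simple processing units connected by trainable weights that get nudged until the network represents the mapping you want. The sizes, learning rate, and target below are illustrative assumptions; this bears no resemblance to the scale of a modern large language model.

```python
# Toy neural network: simple units connected by trainable weights.
# Illustrative only -- nothing like the scale of a modern LLM.
import numpy as np

rng = np.random.default_rng(0)
W1 = rng.normal(size=(2, 8))  # connections: 2 inputs -> 8 hidden units
W2 = rng.normal(size=(8, 1))  # connections: 8 hidden units -> 1 output

def forward(x: np.ndarray) -> np.ndarray:
    hidden = np.tanh(x @ W1)  # each unit combines its inputs and "fires"
    return hidden @ W2

# "Training" adjusts the interconnections so the network represents
# what we want -- here, nudging the output toward the target y = 1.0.
x, y = np.array([[0.5, -0.2]]), 1.0
for _ in range(200):
    error = y - forward(x)                   # how far off are we?
    W2 += 0.05 * np.tanh(x @ W1).T * error   # strengthen helpful connections

print(forward(x).item())  # approaches 1.0 as the weights adapt
```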

Ben Kornell:

Super. Amanda, tell us a little bit about your journey and the connection to AI.

Amanda Bickerstaff:

Well, I come from a very different background, which is why this panel, I think, is so interesting. I'm a former teacher; I taught high school biology at 22 in the Bronx. It was the hardest job I ever had until I was the CEO of an edtech in Australia, a place I had never been before, right before COVID, in the most locked-down place in the world, Melbourne, Australia. So that tells you a lot about my career. I'm a former, I guess current, researcher as well, specifically around STEM student well-being and the impact of COVID on teaching and learning. When ChatGPT was released, I was in Japan, eating ramen and not thinking about generative AI. So I come to this from a very unique position. I left Australia in a state of burnout, was traveling, and when I got back to the US, I was thinking about what I wanted to do next. And you know, you go through the process; everyone here has potentially had a career change. So I was thinking, okay, I think I want to start something, I don't know what. I started partnering with a technologist on building a generative AI tool, and realized pretty quickly that this technology is very new and not really fit for purpose yet for a student use case on something as tricky as well-being. But then I used ChatGPT for the first time. If you've heard me speak before, and some of you have, you know I hate rubrics. Who in here likes writing a rubric? We have two? Oh man, I always love this, because it's like a Rorschach test. Who likes rubrics? You are special human beings. I hate writing rubrics. I know how important they are to education, because they allow students to actually know how they'll be assessed, but they're incredibly tricky to write. And I had a job where I had to write them: I was at Advancement Courses right out of grad school, and we had 200 courses that we wrote in 20 months. Each of them had three rubrics. I did 600 rubrics in 20 months, and I'm still a broken human being. This is real. So the first time I ever used ChatGPT, I didn't ask it for a recipe or to plan a trip or something; I asked it to build me a rubric. I had not written a rubric in 10 years, everybody, and I was still thinking about it. The way that we ask ChatGPT to do something is through prompting, and it wasn't a good prompt. But when it came back to me with a perfect rubric, formatted in a chart, I started AI for Education. This is a true story. I am extremely nerdy, and so it's the nerdiest origin story of a company, I think, ever. But what I realized immediately is two things. One is that this is the technology transformation we've been talking about for a really long time. I taught 20 years ago, and we've been talking about personalization at scale, all these promises of education technology, and, having been stuck in a 5k radius in Australia during COVID, what I realized is that the technology we'd been talking about just wasn't there; we really hadn't transformed education at all. This technology has a real opportunity to start personalizing learning at scale. It has the opportunity to create space for teachers to get time back to focus on what matters most, which is the connection to their students.

It has so many opportunities. But then I also realized: the most technically advanced thing we do every day as a society is search Google, and generative AI is not a better Google. If you use ChatGPT, the free version, it's not even connected to the internet, y'all. And if you're using GPT-4, it's connected to the internet in no normal search way. So I realized very quickly that the adoption curve, specifically in education, was going to be incredibly difficult. We, as an infrastructure, are slow to change and risk averse. We have competing priorities, and we have technically under-skilled staff. And so the idea of how we create a space in which schools and the education ecosystem are not left behind by this moment in time is why I started the organization. So if you're wondering, do you have to be a deep technologist to do this work? You do not. I'm a perfect example of that. I'm someone that cares deeply about equity, someone that cares deeply about this work. I'm very wonky, I love research. But really what we've done is we've taken a practitioner focus. We work with teachers and leaders every single day, some of you are in this room, and we do the work. And then we take that work and we try to abstract it to a place in which you can create frameworks and opportunities for educators to be able to use these tools effectively and responsibly.
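For anyone who wants to try Amanda's rubric experiment themselves, here is an illustrative prompt along the lines she describes; the subject, criteria, and wording are assumptions, not her original prompt.

```
Create a rubric for a 9th-grade biology lab report on enzyme activity.
Use four performance levels (Exemplary, Proficient, Developing, Beginning)
and criteria for hypothesis, experimental design, data analysis, and
conclusions. Format the rubric as a table.
```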

Ben Kornell:

Just a quick follow-up on that, Amanda. You mentioned educators using the tools, and we're going to come back to the current state of AI. But in terms of the student use case: is your focus primarily working with educators, or students, or, you know, the whole user diaspora?

Amanda Bickerstaff:

Yeah. So we believe very strongly that these tools need to be responsibly made before they can be put in front of students in meaningful ways. We have tools that have significant bias. If you saw the Gemini kerfuffle... oh man, if you work for Google and you're in here, I am so sorry. If you don't know, there were some huge issues around trying to take the bias out of text-to-image generators, and all it did was make them more biased. Welcome to gen AI, everybody. So we believe very strongly that AI literacy is incredibly important, and not just for educators. But what I will say is, I did something super cool on Thursday. I was in Seattle, and students and teachers worked together: they had GPT-4 access for the day, they'd had AI literacy training, and then they solved problems in their community together with generative AI, around affordable housing, building a gym; they built a whole capital campaign, including their gala plan, with generative AI. So the idea that there are opportunities to do this work in meaningful ways that bring teachers and students together, in ethical, responsible ways, and get away from that rhetoric of "AI is only for cheating," is possible, but only with a foundation of AI literacy and a really strong understanding of how these tools can be used ethically.

Ben Kornell:

Awesome. Steve?

Steve Shapiro:

Amanda, you set me up really well, I appreciate it. So I'm also coming at this from the perspective of an entrepreneur who happens to have been in the world of education and workforce training for most of my adult life. And like any good entrepreneur, you always want to hyper-focus on the problem to solve and really become intimate with the problem. Too often, I'm advising early-stage companies that saw a solution that looked cool and shiny (as a matter of fact, in the last 18 months, I've seen a lot of that) but didn't really live and understand the problem acutely. And so our company, being in the assessment space and working directly with the big dogs of assessment like ETS, College Board, ACT, the large publishers, so on and so forth, and a ton of school districts, we came to the realization that there were certain problems that we could solve back in the day, let's say eight or ten years ago. But it became increasingly clear to us that one of the problems to solve was that the world was changing so quickly, and how does educational content keep up with those changes? Because creating educational content is actually a very laborious and cumbersome process. And also, because we are in the assessment world, how is assessment keeping up with those changes? So when we literally stumbled upon the beginnings of gen AI in 2018, we said, this might be part of the solution. And as we began to do research in the early days, as Charles could tell you... I mean, you've heard about the term hallucination, and you've heard about biases, and in the early days, with large language models like GPT-2 and its predecessors, we were kind of doing experiments saying, hey, this has promise, but it's not there yet; I wouldn't want to put this in front of a customer, right? And then it kept getting better, and we also learned how to work with it. And so part of our journey has been, as that underlying technology improved, the question to our own team, and of course to our user base, was: how can we make this better for you? And we've gotten to a place now where it is really good, and it's really customized for each of the different partners we're working with. At the same time, the underlying technology is moving so quickly that we have kind of the burden of understanding how we go from GPT-3 to GPT-4 to GPT-5. And as a business, quite honestly, people say, what keeps you up at night? I say: if GPT-5 makes everything my team just did over the last 18 months obsolete. So you have to be pretty smart as you think about that. But I think our North Star has always been how we work with the users. Our company was acquired about 18 months ago by a strategic, Prometric, who's kind of one of the big players in the assessment space, and right now we're helping a lot of content creators that are in the certification and licensure space: people that are doing certification for nursing, or for doctors, or for finance, CFA, CPA; there are literally like 500 different certifications. For them, we're solving a really big problem, because for most of them, their content is changing so quickly that it's mind-blowing, and you have to create new content and new assessments. And when you're in the certification world, as much as I hate to use this term, the assessments are very high-stakes: a person taking that assessment's career development depends on their ability to pass it.

And you want it to be a valid assessment, right? So how can you do that when things are changing so quickly? That's kind of been our North Star. And along with what Amanda said about being super careful about entering the K-12 world with this type of technology: we got requests many, many times, two, three years ago, to point the product in that direction, and I continually waved it off. I said, I'm sorry, there are too many unintended consequences of pointing it there; I'd like to learn more about it before we do that. And now we're at a place where we have learned more, and we'll begin to do it, but we'll do it really carefully. So that's our story.

Ben Kornell:

Awesome. So, how we got here. You know, at Altitude Learning, I was the CEO from 2017 to 2021, and we had a team of five data scientists who were working on personalization. We had a curriculum team of almost 30 people building, for every child, for every standard, three to seven different options that children could choose. And the idea was, you know, if I love to learn math through baseball and batting averages, and somebody else learns through pizza, each child could have it customized, not only for their level but also for their interest area. And I will say, we hit our head against technical issue after technical issue. We were operating in the old world of AI, which was around machine learning and supervised learning, which means that every time we had a wrong answer from the AI, or the machine, we had to tell it "wrong answer," and every time we had a right answer, we had to tell it "right answer." And when you think about the number of data points you need to make insights that even approach the insight of what a teacher could do, it was a non-starter. And so our work really became more of a recommendation engine. The best AI at that time was actually in marketing automation. They've been at it in marketing for a long, long time; they get about 40% of your targeted ads correct. That's a great rate for them. If it was learning activities and it was a 40% hit rate, that would be awful for education. (By the way, be prepared for hyper-targeting in your marketing from now on; all of your marketing is going to know more about you than you know about yourself.) But really, the kind of work we were doing was not possible. And literally, I was sitting at the California School Boards Association meeting, and a friend had sent me an early version of ChatGPT. I'm next to the school board president, and I'm playing around with it, and I'm like, this is what we were waiting for. And then I turned to the school board president and I'm like, here, have it write your speech for the school board meeting. And she did, and it was a great speech. So, you know, I think one thing for the group to understand is that the thinking about teaching and learning, about how we would want to use AI, has been done for 20 years. But it wasn't possible until this technology shift. So let's talk about where we are today. And I think it's easy to be optimistic about the future, but let's be real. I'm gonna go to Amanda first: you're in schools. Where are we today?

Amanda Bickerstaff:

Okay, everyone, who in here thinks you're an early adopter with AI? Okay, so a decent amount. Well, there's been research that shows that, like, two and a half percent are considered early adopters. And what I'm seeing consistently is that when we have schools and districts and associations, or even companies, that are willing to do the work of AI literacy and guidance, it's because they have enough early adopters that are loud enough, and have enough credibility and enough power, to shift that dial. And so there's been a really fascinating moment where very few schools and districts have policies in place of any type, including just an addendum to their existing guidance about acceptable use. We have very few organizations that are taking the time to build literacy. And so it's been really, really fascinating, because the early adopters are the ones pushing (and if you're not an early adopter yet, come on, hang out, we'll do it together). But it is really interesting, because even though ChatGPT is the fastest-adopted consumer-facing technology ever (it took 10 years for Facebook to hit 100 million users, nine months for TikTok, and five weeks for ChatGPT, five weeks), and we do have a lot of consumer use, we still see that schools are moving extremely slowly. So I think that's the first thing. There is also a conflation of generative AI and AI. Artificial intelligence has been around, like, the term was coined in the mid-50s. The first chatbot was ELIZA; she was in the 60s; she was a mental health chatbot. You'd say, "I had a bad day," and you know what she'd do? She'd say, "Oh, you had a bad day." And people actually liked it, and we find that there actually is a lot of trust. But, you know, I think everyone in here has had a bad chatbot experience in the last five years, where you just want to scream "representative" into the void. I've done that recently. But what we've seen is that this has been moving for a really long time. What Ben was talking about is that we were using models that took a lot of data, they did one thing pretty well sometimes, and not everyone could build them, because they were really expensive. Now we're in this world of generative AI, with the opportunity to build things that have never been possible before. And I think what's really interesting, though, is that education is so at risk from a calculator for the humanities, which is kind of what ChatGPT is. It can do your essays, your college application essays, reference letters, pretty much everything that we look at in terms of assessment. So there is this really fascinating time where two things are happening: a conflation that AI is all bad, that we have to stop the use of AI, and also this conflation that AI is only for cheating. And so what's happening is you have these really terrible kinds of conflicts playing out in classrooms. I've been in PDs where I've been working with superintendents, and I've asked if any of their students have been accused of cheating. And I've had people whose kids have been accused of cheating with AI; they're superintendents, and their child has been accused.

And so this idea right now, that a brand-new technology that can do things that have never been possible before, that took us all by surprise, is having such an enormous impact on the day-to-day lives of teachers and students, is something that we cannot ignore. But it is playing out in this really strange way, where if you don't have the early adopters pushing, then it's just a whole bunch of fear and uncertainty. You have some people going at hyperspeed, and you have some people that are like, "from my cold dead hands, I am not changing; I'm gonna close my classroom door and this is gonna go away," like they hope the science of reading is gonna go away, and differentiation, and personalization, all those things that we keep hearing about off and on, and who really, truly believe that AI is not going to make an impact, without realizing that school has already changed. We just can't recognize it yet, and we definitely do not know what it's going to be in five years.

Ben Kornell:

Steve, tell us a little bit about the current state of AI, especially as it relates to B2B and these large curriculum companies, etc.

Steve Shapiro:

Yeah, so to a couple of the things that Amanda said: I think we're on the cusp of a few things when we think about how advanced the technology is now, when we think about learning and instruction. I mean, one is that a lot of us in the edtech world have been around the idea of adaptive, personalized learning for a long time, and we would be the first to look in the mirror and say it's never quite had the impact that we thought it was going to have back in, like, 2008 or so. But we're on the cusp of having the underlying technology that's going to make that a lot more feasible. And the other part is the idea of automated scoring and feedback. That's on the cusp of being really good. It's been around; ETS invented it like 18 years ago. In certain use cases, it's okay at scoring essays and giving students immediate feedback, but it's on the cusp of being like 10 to 20x better soon. And this is all to say, this is a way of taking away some of the drudgery of the role of the instructor, to free them up to do things that use their superpowers more: to be facilitators, to foster collaboration in the classroom, and so on and so forth. I think the key, though, and this was the key we found on our journey over the last five years with content providers, is you start as a human hybrid tool (if you go to our website, it still says human hybrid tool) and get humans used to using it, but have them know that they're still the authority: they have editorial control, they have agency over it. And over time, they'll come to see the power of the technology, and they'll begin to trust it better. And their input is also data that helps it get better. So, keeping that in mind, we're on the cusp of being able to start that journey. I feel like we're at time zero right now in the K-12 world, but it's about to happen, and that's really exciting.

Ben Kornell:

Charles, can you tell us where we are in the state of AI technology?

Charles Foster:

Yeah. So one way I like to think about AI technology, and AI as a field, is that AI is a field whose sole job is to make itself obsolete. And I think that we're actually sort of in the process of that. I'm very glad to be here speaking as an AI scientist, but I can honestly say to you today that some of the technology that is out there is making it possible for people to build sophisticated human-and-machine hybrid systems without necessarily needing to have an advanced degree and lots of experience in AI. In the work that we do day to day, a lot of what we do is not telling a machine specifically what to do in computer code. It's talking with other people; it's clarifying and negotiating requirements and needs, and doing that in language, doing that in body language; figuring out what is important and what is not important. And we're starting to have technology that can, at least in part, process that for us. So instead of having to write computer code, or ask someone else to write computer code, to do something, we can program systems in natural language: we can say what the thought process is that should go behind something, and have the computer execute that thought process. As a concrete example, when I'm building an AI model to help a customer, say, write exam questions for auditing, I don't necessarily need today to write everything by hand, like all the steps of logic. What I do is I take: what are the guidelines, and what is the thought process, of someone who is writing an auditing question? What would they be thinking about? What would the mental steps they think through be? What resources would they look at as they're developing those items? What would they not be looking at? What checks and balances would they be doing on their own work? And I can write those things in natural language, and then really let the computer system deal with the rest. I think that's something you're going to see a lot more of nowadays, because the technology is finally ready to just accept instruction in the natural modality that teachers and students are used to using with other humans. So that's where the technology is right now, and that's one of the things I'm excited about seeing more of.
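To make Charles's example concrete, here is a minimal sketch of "programming in natural language": encoding an item writer's guidelines and thought process as a prompt to a large language model. The guideline text, model name, and helper function are illustrative assumptions, not Finetune's actual system.

```python
# Minimal sketch: encoding an item writer's thought process as natural-language
# instructions. Guidelines, model choice, and helper are illustrative only.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

ITEM_WRITER_GUIDELINES = """You are an experienced auditing exam-item writer.
Work through these steps:
1. Pick one learning objective from the provided blueprint excerpt.
2. Draft a realistic scenario an auditor would actually encounter.
3. Write one best answer and three plausible distractors.
4. Check your own work: no wording clues, no trick questions,
   and all options grammatically parallel.
Return the question, the options, the key, and a rationale."""

def draft_item(blueprint_excerpt: str) -> str:
    """Have the model walk through the item writer's thought process."""
    response = client.chat.completions.create(
        model="gpt-4o",  # illustrative model choice
        messages=[
            {"role": "system", "content": ITEM_WRITER_GUIDELINES},
            {"role": "user", "content": f"Blueprint excerpt:\n{blueprint_excerpt}"},
        ],
    )
    return response.choices[0].message.content

print(draft_item("Objective 3.2: Evaluate internal controls over cash receipts."))
```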

Ben Kornell:

Yeah, from my perspective, there are five areas where the tech is ready right now. One: assessment. If you think about education, 5% of our data is structured data and 95% is unstructured. It could be that project on the wall, it could be a hands-on activity, it could be someone speaking out loud. Now AI can take that unstructured data and convert it to structured data for assessment, and you can have real-time formative where you never need summative. The second one is student practice. I'm pretty skeptical about AI teaching new concepts, but we're seeing math practice and reading practice really taking off. Think of that more like going to the gym and doing your reps: you're more motivated, and you're better, if you've got a personal trainer helping you figure it out, but then you've got to go do the work. And AI is really, really good at encouraging, engaging, and customizing, personalizing that. The third one is educator efficiency: just taking the rote tasks off the plate of the educator. Fourth is data systems and insights. Right now all of our data is super siloed into like 15 to 150 different systems, and AI can actually be the connective tissue that puts it together to create insights. And the last is school-home connection. I'm actually the most disappointed in that area. The fact that my school doesn't have a chatbot where I can ask, when is parent-teacher night? How do I register my child? What is the lunch schedule? These are basic, fundamental things that chatbots have been able to do for a couple of years now, and we could do it for almost no cost for every school. The other thing I would say about the current state is that omnimodal is coming. Chat was really the year-zero interface for ChatGPT, and it's going really quickly to speech, and I think that is really good, especially for learners who are struggling with reading, or early learners, or people who have different modalities of interacting and communicating. And omnimodal also means that it's going to be more complex: instead of text-to-speech or text-to-video, you will now have video-to-video or video-to-speech or video-to-text. Once you start drawing all those arrows, it's really hard to build guardrails around what the AI does. And the third state of affairs is: our kids are basically prepped to do laps around us. And this is where I think, if we let the learners lead, it can be really powerful and exciting. If we stay in a compliance-based system, where the learner's incentive is to get the task done, and the teacher owns the learning and is trying to get you to learn over time, then kids will use AI to get the task done. But if the learner owns the learning journey and is empowered by AI to follow that journey, I think it could be really incredible. So we're gonna do one rapid fire on that note: what's giving you optimism, and what's giving you concern as you look forward? And then we'll go to questions. So, rapid fire. Charles, why don't you start: optimism and concern.

Charles Foster:

For optimism: I'm really optimistic about the degree to which governments, everyday people, and civil society are engaging with the technology and asking the critical questions, asking the hard questions that you should be asking of the technologists and startups that are making this stuff. I think that kind of engagement is going to be what shapes the future of the technology. And then you said concern: I'm concerned about trust. I'm concerned about how we're going to navigate trustworthiness in a world where everything could be generated by AI, and there are no technical mitigations for that right now. Despite what people will tell you, there are no reliable AI text detectors, or, like, image detectors. That tech does not exist. And so we're gonna have to figure out how to navigate a world where trust needs to be established by other means. So, yeah.

Amanda Bickerstaff:

I'm optimistic seeing things like how special educators and students with IEPs and 504 plans are really taking off with this technology. The fact that you can talk to ChatGPT, and now it can talk back to you (that's being rolled out now), is really amazing for students or people with dyslexia or aphasia, or who have issues with low vision. We see such amazing things. I'll just share a really quick anecdote. We did a webinar series with the Educating All Learners Alliance, and we did one last week on supporting students with special needs and disabilities. A teacher worked with a nonverbal student, and they used Canva Magic, and the student loves tigers. So what he did is he asked it to build him a tiger, and the tiger came up, and it was just the face of a tiger. In an instant, that student made a connection for the first time between his communication device and the fact that it was a technology that could create and build and do things that were beyond his conception. But then he said, okay, now I want it to move, I want it to have arms and legs, I want it to be in a jungle. For this student to be able to continuously interact with this technology, showing resilience and his thought process, building those reps and those skills, was such an amazing opportunity, and it happened in five minutes, with a technology that's freely available. So that's where I get really excited: when we see opportunities to move into places in which the impossible is now possible, specifically for students. I think the concern is the fact that these technologies are owned by enormous companies, and cost $100 million to train in the case of GPT-4. So does anybody have $100 million that you can throw up here, and we can build our own LLM for students? Maybe Ben; maybe I can ask Ben later. But these tools are being controlled by organizations, huge corporates, that have historically not cared that much about education, and educators, and students, let's just be realistic here. And it doesn't look like there's going to be an opportunity anytime soon to get these tools out of those kinds of hands. We talked about trust, but there's also this idea of who has the power. And to that next point: OpenAI fired their CEO and rehired him in two days. These are organizations run by humans that are flawed, but they're having outsized effects on our future.

Steve Shapiro:

Yeah, so I'm optimistic because I've been through a number of hype cycles in the world of education and edtech, and finally the underlying technology, as I said before, I think is at a place where, if we do things the right way and get the right people involved, like everybody in this room, it can finally come true to its promise of better personalized learning, and some relief for instructors as well, and great insights. Amanda, you stole my pessimism part. But I'm definitely concerned about the concentration of power around the companies that have created large language models, and only one company in the world really controlling the chips that power a lot of them. I think there are all sorts of political dynamics that I'm concerned about, and we as citizens should voice our opinion about that to our politicians and to everyone else that we can, because that's probably the biggest threat right now to society, honestly.

Ben Kornell:

For me, on the optimism side, it's that everyone can build. And, you know, we didn't talk about open LLMs, but I think there is an alternative future where, for these large companies, the value of the large language model actually goes to zero, because the open models are so good. And, you know, when you talk about GPT-5, 6, 7, 8, I feel like we're hitting some sort of asymptotic level where the language model getting bigger and bigger isn't going to be the thing that's the unlock; it's actually the use cases. And then also the compute, because, you know, if you're looking at video-to-video, you need massive amounts of compute. My hope would be that we donate 1% of all compute to social impact purposes, because the cost of compute will be the gating factor. What's giving me the biggest concern is the adoption curve. In education, I'm convinced that, on the Rogers adoption curve (there are innovators, early adopters, early majority, late majority, and laggards), we do not have the early majority; they've gone, they've left. There's a cycle of who gets selected, over and over, almost Darwinian: those who are super innovative, the leadership says, you do you, just go for it; and those who are late majority and laggards, it's like, well, this is a safe job, I know what I need to do, I'm going to do it. So I don't think our systems are really ready for the rate of change that we've had this last year, and that's coming. And part of what makes us vulnerable in those systems is that, one, we don't know what's behind the curtain, so that's scary; but two, our systems don't have the time to do the adaptive change to pair with the technical change. If you look at healthcare, whenever they roll out some new innovation... let's just say, you know, hand washing has saved more lives than any other innovation in healthcare. When you walk into a clinic, you walk by a sink every four or five steps. That's because they understand the adaptive change, which is the nudge of the nurse saying, hey, you need to wash your hands, and they put it in building codes. The speed at which our institutions need to support adaptive change, which is why I'm so glad you're here, Amanda, is at a totally different level. So that gives me both optimism and concern. All right, we're ready for our questions. And just, what is a good question, and what is not a great question? I had teachers who always said, oh, every question is a good question. So not true. It might be a good question for you, but we have a whole room here. So please try to think of a question that could be good for the group. And if you're the type of person who always asks questions, maybe this would be one to step back and say, I want to hear what other people ask. And if you're the one who's like, I have a great question, but I just don't know if I can ask it: this is your moment. So, microphone over here. We also all have teacher voices in here, so if you want to stand up, the only thing we ask is that you say your name, where you're from, and then your question. Go ahead, sir.

AJ:

My name is AJ, from Houston. A question for Ben, and then a question for the panel. For the panel: what is a way of using this technology that you're not seeing yet, but that you believe would make a radical difference in the quality of instruction that teachers are able to deliver? So that's the question for the panel: something you're not seeing yet, but that would make a radical difference in the quality of instruction teachers can deliver. And the question for you, Ben: I thought I heard you say a minute ago that we could get formative so good that we wouldn't need summative. Did I hear that correctly? Could you help me understand how that's not a horrible plan?

Ben Kornell:

Yeah. Great. That's an easy one; we should have gotten rid of summative long ago. I will just say, the idea that you would take a test in May and get results in October is so insane, and our education system does that. What do they call it in healthcare? They call it diagnostics, and you get your labs in 24 hours. So, another example from healthcare: it was like witchcraft in the early 1800s, and then they figured out how to do diagnostics, and all of a sudden, the only summative you have in healthcare is death. Everything else is diagnostic. Right. All right, so team, where is a hidden use case in teaching and learning that you'd really love to see, but you're not seeing quite yet?

Amanda Bickerstaff:

Let's not talk about more death. But what I would say is, I think what we're doing right now is a lot of point solutions, and no one's really figured out what generative AI can mean for pedagogy. What I mean by that is that we have some tools that are good for planning, and some tools that are reinforcing existing ways to plan, reinforcing existing models of education and classrooms; and these chatbots that are focused on, like, the way you'd have a consultative conversation; and then we see things around assessment. But no one's really looked at a comprehensive approach to what pedagogy becomes with generative AI. What do the before, the during, and the after mean when you can actually do on-demand customization, create responses on the fly, and start to have real differentiation? So what I would love to see, if anybody's a builder in here, is: how can you tease apart these point solutions into actually thinking through an augmented practice, an augmented tool for educators, that makes them able to be a superhero in the classroom, by having access to technology that can do amazing things, and not only really amazing things, but on the fly and on demand?

Charles Foster:

I don't have much to add here. One of the reasons why I'm at this conference is that I want to hear from instructors and learners and others who are deeply engaged at that level: where are you using AI, or where are you seeing AI used, and in what ways? So I'm really more here to learn about that than to offer comments on it.

Steve Shapiro:

I think, given that we just saw the launch of the Apple Vision Pro, there are some really interesting use cases with AR and VR. And just like Ben is asking these giant corporations to donate 1% of compute, I'd like to ask Apple to donate one Vision Pro to every student in the United States. They could afford it; it's not a joke. And Ben and I are very aligned about that: you guys are making money hand over fist; if you want to make the world a better place, that would be a really good use case. I demoed the Vision Pro a couple of weeks ago, and it did blow me away. Again, we're at time zero with that, but we've worked with a lot of AR/VR companies, and I think we're on the cusp of that soon.

Ben Kornell:

So let's go to the microphone, and then I'll go over here, and then the question over there.

Diane:

Hi, thank you. I came in a few minutes late, so if you covered this, it's totally fine to tell me.

Ben Kornell:

Can you just say your name, where you're from, and where you're at?

Diane:

Love it. I live in California, but I work for Cornell University, for the tech campus in New York. Yeah, Go Big Red. So, my question: I drive some K-12 research in computer science education, and we're thinking a lot about privacy for students and teachers, and where the safeties are, because they're not there yet. And before this, we were really, obviously, worried about that equity issue. You know, the same week that private schools in New York started to offer high school courses on how to prompt an LLM, New York City Public Schools shut down ChatGPT completely. So that kind of gives you a sense of where we are. But what do we do as educators and researchers around this issue of privacy, and what are your hopes and your concerns?

Amanda Bickerstaff:

First of all, ChatGPT in New York City is a fascinating piece, and equity is really important. ChatGPT was unbanned in May, but I can tell you, as someone that works in New York City schools, that it's still banned; it's shadow banned. You literally have to... like, the principal has to ask to get it unbanned. So I would say more than 95% of schools in New York City have no access to ChatGPT, and not just for students, but also for teachers. So I think that's just something to understand. On equity, we talk a lot about the fact that if we do not start thinking about literacy and adoption, it is an equity divide. I was in a private school two weeks ago, and they are about to throw a lot of money at three generative AI tools, because there's an arms race of how much generative AI can be implemented for students, not for teachers. And the amazing part is, the tech director doesn't even like generative AI; he's just doing it because other schools are doing it. So, to the question, which is really interesting: we feel very strongly at AI for Education about strong guidance on appropriate use, and about the ability to ask questions of technologists about privacy and model transparency. Some of these tools are using seven or eight different models: they're using open models like Llama, they're using closed models; your student data can be going to seven different places. There are leaky data systems. There are lots of questions about the way data is used to train systems that you may not even know about; in fact, if you had a system that already used AI, you probably didn't even know to ask whether your student data was training AI models before generative AI came on the scene. So we believe very strongly it's about building literacy. It's about creating spaces in which technologists are asked the right questions, and are put in a position where they need to be transparent about what they do and how they use student data. I would say right now there are lots of tools on the market that are not FERPA and COPPA compliant, and if they say they are, I don't think that they always are. And I think this is really important, because the risk here is not necessarily to the company (these companies are so new, and many of them might not exist anymore after a year) but to the students themselves.

Ben Kornell:

Do the two of you want to add anything? Great, let's go over here. Dave from São Paulo, obrigado.

Dave:

Amanda, you said something earlier that I chuckled at, when you said ChatGPT is not a better Google, or something like that. I have been guilty, I'll admit; I think maybe I would say it's like the new Google. But I respect what you said, and I want to understand a little bit more. How do you see that as different, for a normal person, potentially?

Ben Kornell:

So the question is basically the difference between search and using advanced or pro GPT versions, or AI versions.

Amanda Bickerstaff:

Guys, I feel like I'm talking too much, but... So, there's generative search. Has anybody used Perplexity? Yep. Okay, if you've not used Perplexity, I promise you'll like it better after this. Perplexity is essentially taking Google search and then adding a generative function, so it allows you to connect the best parts of both, right? It hallucinates much less, but it still makes things up. But GPT-4, the way that it works, is almost like we layer in actions. People don't realize that these models have knowledge cutoffs. If you use the free version, the knowledge cutoff is January 2022, and, I've told this joke in many places, it does not know that Travis Kelce and Taylor Swift are dating. I'm going to continue to tell that joke until they break up, and then I'll probably say that they broke up instead. And even with the update now, with GPT-4, it's to December 2023. The way it works is, if you say "do your research," it will connect to Bing, but it's not connecting to it as a search result the same way that you would, unless you actively ask it to search the web or do research. It's all just working through its training data set and the way you interact with it. And so when you ask it for a date, or a figure, or even a URL, it could very likely be made up. And that's why it's not a better Google. And I will say, when you dig into most people's misconceptions, it's so fascinating: "It remembers me." "It has a training data set; it remembers everything." "It's an encyclopedia." "It's connected to the internet." "It's thinking." Those are all things I hear every day. And because of the way the system prompt works, in natural language, it says: don't say "I don't know," keep answering questions, respond even with URLs, even if you don't have access. So I think that's where the difference is: people go to Google, and they might know they have to be careful about which source they use, but they still believe that it's a real website, when ChatGPT could have made that website up.

Charles Foster:

To add on to this: I think it really underscores how much literacy about AI is going to become an important thing. And also, learning about how AI systems work and don't work, and what you need to do to prepare them and set them up well so that they work for a use case, is going to teach us something about our own thinking, too. There's a lot about pedagogy and education that is applicable to AI, and we're going to have a lot of teaching and learning opportunities as a function of that.

Ben Kornell:

Yeah, I would just also say: I don't know if you have a friend who says ridiculous things, but with such certainty that you're like, is that true? Is it really real? There's a tone that humans pick up from the AI, that this is the answer, when it's actually a probabilistic, you know, next-word guesser. It's more like autocorrect than it really is, you know, an encyclopedia. I'd also say, for all of the challenges with search, at least it gave you a set of links that you would explore yourself. In this case, whoever is doing the structured prompting after you prompt it, they're already gleaning information for you; they're already kind of feeding you into a loop of data that you may or may not want to be involved in, and may or may not know about. Okay, we do have time for only one more question. It was over here; sorry about that. But we will be at the happy hour at 5:30 at Seven Grand, and this kind of stuff is always better over a beer. So go ahead, sir, stand up. Just tell us your name.

Rawlings:

Rawlings, right, from Broadband Communities. We heard in two earlier sessions about what you might call quite elaborate efforts around AI as a front end. So, sort of a single question: how close are we, you know, to being able to use it privately? I just wrote a book; I want to feed it a chapter.

Steve Shapiro:

That's possible today. Yeah, we're there. One of the great things about this wave of technology is that, because we've learned a scalable recipe for taking a very, very wide body of background knowledge and common sense and distilling it into a much smaller model, we already have models today that, if you wanted to, if you have a GPU, you can run on your own computer, and that have a lot of background knowledge. And then you can additionally add on top a book of reference materials that either the model can look at and sort of refer to, or that you can train it to respond to correctly. So that already exists. Every day I'm building out models that incorporate those sorts of reference materials. What you said, like a micro data set: that's the current paradigm; we just call it prompting instead of fine-tuning. We are in a world where you can now train models, or at least configure their behavior, using a few input/output examples, as opposed to needing a giant data set.
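Here is a minimal sketch of what Steve describes: steering a model with a short reference excerpt plus a few input/output examples in the prompt, rather than fine-tuning on a giant data set. The excerpt, examples, and model name are illustrative assumptions.

```python
# Minimal sketch: configuring model behavior with a reference excerpt and a
# few input/output examples ("few-shot" prompting) instead of fine-tuning.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

REFERENCE_EXCERPT = (
    "Chapter 3: Photosynthesis converts light energy into chemical energy, "
    "producing glucose and oxygen from carbon dioxide and water."
)

# A handful of examples demonstrating the exact output format we want.
FEW_SHOT_EXAMPLES = [
    {"role": "user", "content": "Term: mitosis"},
    {"role": "assistant", "content": "Definition: cell division that produces two identical daughter cells."},
    {"role": "user", "content": "Term: osmosis"},
    {"role": "assistant", "content": "Definition: movement of water across a membrane toward higher solute concentration."},
]

messages = (
    [{"role": "system",
      "content": f"Answer only from this reference material:\n{REFERENCE_EXCERPT}"}]
    + FEW_SHOT_EXAMPLES
    + [{"role": "user", "content": "Term: photosynthesis"}]
)

response = client.chat.completions.create(model="gpt-4o", messages=messages)
print(response.choices[0].message.content)
```

For the fully private use the questioner asks about, the same prompting pattern works with locally run open models; only the client setup changes.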

Charles Foster:

We did not plant that gentleman in the audience, but that's something that our company has been doing groundbreaking work on for a while. Thank you for the question.

Ben Kornell:

And in summary, you know, I think one misconception about AI is that the bigger the model, the better the output. Actually, the better the training data, often, the better the output. Bloomberg, for example, is doing amazing things with their proprietary financial data. So one question you might ask yourself is: does my educational institution have really valuable data that we might want to use to train an LLM? And if you need to call somebody, I think you could call the folks up here on the stage. So thank you all so much for coming. We really appreciate it. Have a great day.

Alexander Sarlin:

Thanks for listening to this episode of Edtech Insiders. If you liked the podcast, remember to rate it and share it with others in the edtech community. For those who want even more Edtech Insiders, subscribe to the free Edtech Insiders newsletter on Substack.
