Edtech Insiders

Week in Edtech 5/21/23: Guests Hussa Blake of the McGovern Foundation and Brisk Teaching AI's founding team

May 24, 2023 Alex Sarlin and Ben Kornell Season 6 Episode 4

In this Week in Edtech, Ben and Alex talk about the rise of open-source LLMs and what it means for education, Sam Altman's Congressional testimony and coming AI regulations, and systemic changes affecting the entire Edtech venture landscape.

We also speak to Hussainatu (Hussa) Blake, who leads Edtech and Workforce for the Patrick J. McGovern Foundation, about global AI and Edtech.

We check in with Arman Jaffer, Tom Whitnah, and Corey Crouch, the founding team of the powerful AI teaching tool Brisk Teaching.

This episode of Edtech Insiders is sponsored by Magic EdTech. Magic EdTech has helped the world's top educational publishers and edtech companies build learning products and platforms that millions of learners and teachers use every day. Chances are that you're probably using a learning product that they've helped design or build. Companies like Pearson, McGraw Hill, Imagine Learning, and the American Museum of Natural History have used their help to design or build some of their learning products. Now Magic wants to bring its pedagogical and engineering expertise to make your key learning products accessible, sticky, and well adopted. Check them out at magicedtech.com, which is M-A-G-I-C-E-D-T-E-C-H dot com, and when you get in touch, tell them Edtech Insiders sent you.

Welcome to Edtech Insiders, where we speak with founders, operators, investors, and thought leaders in the education technology industry and report on cutting-edge news in this fast-evolving field from around the globe. From AI to XR to K-12 to L&D, you'll find everything you need here on Edtech Insiders. And if you like the podcast, please give us a rating and a review so others can find it more easily.

Hello, Edtech Insiders listeners. It is Ben and Alex, week of May 18. My gosh, so much going on in edtech. I say it every week, and yet it continues to astonish me. Today we have a triple episode, and an incredible doubleheader of guests. We have Hussa Blake from the Patrick J. McGovern Foundation; she leads AI and education globally. And we have the Brisk Teaching team. They just launched their brand new product. It is a teacher copilot product that literally changes the game for educators across the country. Excited to dive in and excited to have you on board. So really excited for these incredible guests. We're basically at the intersection of what's happening in edtech and AI. Before we go too much further though, Alex, what's on the pod this week?

There's always so much on the pod. Every week, we have so many episodes coming out. What just came out is a really cool episode with Audrey Wisch, who is the CEO of Curious Cardinals, which is an amazing mentoring platform. And next week, we have Katie Kirsch, who is the CEO of 20, which is an incredible work-life mentoring platform. So we're sort of in mentoring mode as we get to college graduation, and people are thinking about how to succeed in their high school, college, and life beyond.

And no time like the end of the year to raise a glass. We have events coming up on May 18. We've got a happy hour in the Bay Area in Redwood City. We're also headed to London for our first international event. That will be on June 20. More details to come. But we're doing a great event with Cooley, and we'll have a happy hour, we'll have dinner, but we'll also have some great conversation and some panels in London. And then on June 26, we will be at ISTE and hosting a really awesome happy hour there. So please mark your calendars, more details to come. All right. So headline time, we've got three big headlines for you all today.
The first one we hit on in the newsletter, which is really the rise of open LLMs and specialized LLMs. The TLDR on this one is that the days of the duopoly of Google and OpenAI dominating AI through their large language models may be waning.

Right, it's been a short-lived domination cycle, six months, let's be fair, and, you know, if you extend it back, maybe a couple of years. But what we're seeing is that new developers are taking models from LaMDA, which was the model developed at... is it LaMDA? No, it's LLaMA. Yeah. So what we're seeing is new developers are taking the LLaMA LLM, which was leaked from Facebook, and then building on top of it, and it is a classic open-source case where developers around the world, iterating on this really powerful, very flexible, open-ended model, are achieving spectacular results, either for generalist AI queries or specialized AI queries. I think what's also interesting is the debate around: is this like Linux versus Windows? Or is this actually one of those open-source elements that go to the core of the stack, where actually the switching costs and the benefits of an open-source version are fully replaceable, are interchangeable, with the kind of branded ones? I think this really begs the question of what our LLM future will be. And then the edtech spin on this is it opens pretty incredible possibilities for specialized edtech tools. It could be an edtech LLM that puts student safety as a top priority. It could be an edtech LLM that is really specialized around inquiry, rather than giving an answer, and always asks a question back and helps you with learning, so like a Socratic LLM. Or third, it could be, you know, an LLM specifically tasked with things for educators, administrators, university deans, lifelong learners. I mean, the possibilities are endless. And it just seems like this is opening a new dimension. You know, Alex, you're super close with open-source movements; you've seen the rise and fall of a number of different platforms and strategies, both closed and open. What's your take on how this is going to play out? But also, what does it mean specifically for edtech?

Yeah, it's a great question. So, you know, there have been a lot of different moments in tech where the sort of crowd comes up with an open-source, you know, non-proprietary model, and it accelerates development in all sorts of ways. In some cases, that open development becomes a significant competitor, like Linux, as you're mentioning. In most cases, though, in education at least, the open elements, like the OER movement (we have OpenStax textbooks and what used to be called, you know, Flat World, if you go way back), in many cases in education the open stuff doesn't quite transform as much as you'd think. It is transforming education in some ways, but it's taken a long time, and it's not fully, you know, disrupted the textbook industry, let's just put it that way. So my guess about where this is going to go is: yes, the fact that LLaMA is open source through some combination of leaking and strategy from Meta, and the fact that people can now build on it, is going to mean two things. It's going to mean that there will be price competition, which was not yet true with OpenAI and Google. OpenAI's APIs are relatively expensive to call, at least right now; they will go down.
But that is a limiting factor. It means that if you're building businesses on top of them, you can't just, you know, open it up and say free trial for a month, do as many as you want, because it will cost the company a lot, just as it's costing OpenAI a lot as they've opened things up. So what I think it'll mean is that there'll be more rapid price competition, more people will be doing LLM companies, more people will be developing their own LLMs, and I really do think there will be an edtech LLM. What I guess will happen is that rather than being, you know, an open education LLM that's like some giant evolving Creative Commons project, I think it will be proprietary. I think somebody will basically say, okay, I'm going to train an education LLM on the right kind of data with the right kind of philosophy, you know, not giving the answers, with pedagogy baked in, and I'll put lesson plans into its brain. And then I think they're going to start charging for it. And ideally, they will charge what the market can bear, not what OpenAI charges, for some of these edtech tools. And that will be an accelerant. So, long story short, I think the open movement will create private companies that allow edtech to accelerate in a safe way. I might be wrong, of course, but I don't think it's going to be that, like, you know, every college chooses whatever LLM they want. I think there are going to be dominant players who make the education version of the LLM. And I think the race, if it's not on yet, should be on any minute now, given that these technologies are out there. And you know, LLaMA was trained on 50 billion, I think, you know, tokens or parameters. And you know, these start out strong, and as you mentioned in your article, they can be tuned to get stronger and stronger really quickly. And people are already pursuing models that are as good as ChatGPT-4 in a generalist way. And it's only the beginning for the specialized use cases.

Yeah, you bring so many important pieces up there. First, on the business model: this is at the infrastructure level, so there still is the front end that you have to build on top of any of these. So I agree with you that, you know, whoever builds the right LLM and tunes it, trains it, for either a specialized or generalist purpose will have an advantage. The second thing that's really interesting, as I dug deep into this, is that entrepreneurs are mixing LLMs. So what this probably also means is that you might take a dab of the edtech LLM and, you know, a shake of the OpenAI one. And you also have shared some about, you know, Auto-GPT, this idea that you almost have, like, AI that then is instructing other AI agents to do things. So you can imagine the different AI agents are actually powered by different LLMs that are best for their specialty. So for an edtech LLM, if you create, like, real customer safety, not only is your market edtech, your market could also be general tech, whenever there's anything that interacts with a minor. I mean, it's like a potentially game-changing business or marketing opportunity.
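A minimal sketch of the two ideas floated here, a "Socratic" edtech LLM and agents routed to differently specialized models. It assumes an open, Llama-family model served behind an OpenAI-compatible chat endpoint, which is a common but not universal setup; the URL, model names, and prompts below are illustrative placeholders, not any vendor's actual API.

# Minimal sketch: route each request to the model best suited to it.
# Student-facing work goes to a hypothetical safety/Socratic-tuned education
# model; back-office drafting goes to a hypothetical generalist model.
# The endpoint and model names are placeholders.
import requests

LLM_URL = "http://localhost:8000/v1/chat/completions"  # placeholder OpenAI-compatible server

ROUTES = {
    "student": {
        "model": "edu-llm-socratic",  # hypothetical education-tuned open model
        "system": (
            "You are a tutor for K-12 students. Never give the final answer. "
            "Respond with one age-appropriate guiding question that helps the "
            "student take the next step."
        ),
    },
    "teacher": {
        "model": "general-llm",       # hypothetical generalist model
        "system": "Draft concise, professional text for educators.",
    },
}

def ask(audience: str, user_text: str) -> str:
    """Send user_text to the model chosen for this audience and return the reply."""
    route = ROUTES.get(audience, ROUTES["teacher"])
    payload = {
        "model": route["model"],
        "messages": [
            {"role": "system", "content": route["system"]},
            {"role": "user", "content": user_text},
        ],
        "temperature": 0.3,  # keep replies focused rather than creative
    }
    resp = requests.post(LLM_URL, json=payload, timeout=30)
    resp.raise_for_status()
    return resp.json()["choices"][0]["message"]["content"]

if __name__ == "__main__":
    # A student asking for an answer gets a guiding question back instead.
    print(ask("student", "What is 3/4 divided by 1/2? Just tell me the answer."))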
What do you think OpenAI and Google are going to do to respond to this rise in open LLMs? Do they compete on quality? Do they do partnerships to kind of make it easier for somebody to create their specialized LLM? And when do they open up? How do you think they respond?

Yeah, I mean, the cynical side of me, and maybe this is not the right side, but the cynical side of me says, you know, Microsoft and Google are in an arms race for AI dominance. OpenAI, in many ways, has sort of sided with, or is fueling, Microsoft; it's fueling Bing and other pieces of their platform. I think that that will continue. I don't think that Microsoft and Google are yet going to look at these, you know, thousand flowers blooming, people creating their own LLMs for insurance purposes or for, you know, all sorts of things. I don't think they're yet going to say, okay, game over. I think they're gonna say, we have incredible tech teams. I mean, Google, you know, Google basically created the transformer technology, which underlies everything in this whole LLM, transformer-model, machine-learning world. So, like, they can continue to have advantages over open LLMs for a while, I think. And I know it doesn't seem like it right now, because people have used this springboard of LLaMA to get pretty close, to, you know, be catching up. But it's not just about the number of parameters and tokens, it's also about the structure. I mean, if anybody hasn't Googled, you know, what OpenAI is looking to do with their plugins: OpenAI's whole thing is that they trained it on the internet up till September, I believe, 2021, which means it doesn't know anything that happened since September 2021. But they're already working on these incredible plugins for things like Expedia, or, I mean, all sorts of modern platforms, Yelp, things like that, where they have data that is up to date, and it's changing, and it's reliable. And you can ask an OpenAI bot, you know, I want a restaurant near me, and make a reservation for it right now, and things like that. And it's like, yeah, I can tell you all the restaurants, I can tell you all the flights. I mean, they have advantages infrastructurally and in terms of their people that I don't think open source will catch up on in a generalist way, not yet. But I do think the specialized LLMs will continue to grow. And you know, if I'm OpenAI, I'm gonna buy some of those, right? If one takes off in the medical community, or if one takes off in construction management, whatever it is, they're an easy acquisition target. So I don't know, I think it's going to be a sort of classic: the big dogs versus the many flowers. And if the last 50 years is any guide, the big dogs usually win when there are a lot of structural advantages for the mega companies. But I don't know, that's my cynical side.

Yeah. I mean, I have no doubt that the big dogs are going to win. I think that that is fait accompli. But I think there's a way in which the big dogs win and there's a way in which it, like, creates opportunity in the edtech space. My view is that OpenAI is staring down $500 billion needed to invest to create the next biggest LLM. And they've got to create these ecosystem partnerships so that there's a marketplace dynamic where they're getting advantages of scale. It actually behooves them to open up their model more and get that to be the kind of standard framework, like, he who controls the coding language and structure and the developer infrastructure controls all. So I actually think, in a way, they could actually reduce the capital intensity of developing the model further and further while establishing themselves as the long-term market leader by opening up partial or full elements of the OpenAI platform. Ironic that it's called OpenAI and it's closed.
And even if there wasn't a nefarious intent from Facebook, you know, this is widely speculated, and no one can confirm or deny, but getting LLaMA out was actually Facebook's only shot at catching up to OpenAI or to Google, which was basically, like: get the whole developer universe to just attach onto LLaMA as the standard LLM and framework, and then catch up to or exceed what OpenAI and Google are doing. And Facebook has a marketplace already that they can layer on top of that, you know, with Instagram, Facebook, and their other properties. So it is going to be interesting to see. One of the levers that they will use, though, is policy. Tell us a little bit about Mr. ChatGPT Goes to Washington, you know, a classic movie. What are we seeing on the policy side around AI? And how is that affecting the way edtech is playing out, as well as AI generally?

Yeah. A few weeks ago, the White House, I think specifically Kamala Harris, invited, you know, had a summit with a bunch of these sort of heads of AI companies to try to figure out what this might look like, what regulation might look like. And now it's gone a little further: Sam Altman, you know, testified in Congress just last week, I think, about, you know, what's going on. And he basically asked Congress for regulation. I think, you know, it's an interesting moment. There are different metaphors you could use here, right? I mean, the people who are afraid of AI, and there are plenty of them, would see him as, like, the Oppenheimer. It's like, you let the cat out of the bag, the tech is now out in the world, and nobody knows what's going to happen with it, so you better, you know, get some government on your side to protect you and everyone else. And I think there's some element of that here. There's also sort of the commercial side of it, which is, like you mentioned, maybe Facebook and Meta AI were being strategic. And you know, if OpenAI had the recipe for Coca-Cola, Meta put out, you know, a recipe for a pretty darn good cola, and everyone else can just make it better, and suddenly they're being taken down. So I think they're also trying to, you know, in a systematic way, sort of structure the development of AI so that it isn't just completely market driven. And it's hard to know. There are these sort of moral elements, and these, you know, humanity, societal elements, and then these business elements. It's really hard to get into people's heads and understand what exactly they're looking at here. But I don't think it escaped Sam Altman that, like, you know, hundreds of the most knowledgeable AI people in the world all said, hey, we should probably pause on this, which obviously is not going to happen. And that, you know, Geoffrey Hinton left Google saying, this is not going to end well. He's the guy who basically invented neural networks. Like, I think he sees this train is speeding up and needs somebody to brake it. And what's interesting is that, from my perspective and from everything I've read, it's very unlikely that the US is going to be out front on this. Our Congress knows very little about tech. We are not especially good at enforcing really good regulations. We basically punted on social media entirely for 20 years. You know, Europe is trying to write the AI playbook right now. They are putting out, you know, policies and ideas and regulations and really trying to be like, you know, hey, we've been around a long time, we know that we've got to, you know, get ahead of this stuff.
And, you know, I think they see themselves as, they're the GDPR folks. I think they think that, rightfully or wrongly, they have a little more sophistication in sort of balancing market and government than the US does, which is almost all market. And then of course you have China, which is almost all government, which is doing enormous regulation, tons of it, for all sorts of reasons. So regulation is coming, there's no question regulation is coming. I really don't think the US is going to be the gold standard bearer for AI regulation. I think they are going to go in circles and get confused. Representative Ted Lieu put out a possible regulation, and what I've seen about it is, like, it was sort of partially written by AI, and it's just a mess in the US. And I think Sam Altman knows this, and I think he's trying to wake Congress up to the need for this. But I think what's gonna end up happening is that the US will end up adopting, in part, Europe's standards. That's my feeling. I don't know, what do you think about all this? Sam Altman, regulations?

In most things, including edtech, you know, California and Europe kind of feel like the bridge, and then the US, you know, reluctantly follows along or doesn't. And so there's a way in which, like, state-level players could actually be defining, you know, AI policy, because if somebody sets a standard that's, like, reasonable and meaningful, you know, it's probably easier for the AI companies to comply with that standard and say, look, we're already complying with some sort of standard. So I would look at state legislatures as potential mechanisms for this. You also see, like, Montana banning TikTok as a backdoor way of, you know, going through the courts to figure out, can we actually ban this? You know, big time. You know, my view of the Altman testimony is, like, first, I am probably naive, but I do genuinely believe he thinks that AI should be regulated. And part of that has to do with the fact that, you know, he's always been underselling what AI can do, and he's worried about the hype cycle getting too far out front, and it has. The second, I think, is from a strategic standpoint, it serves his best interests. He's in the lead, and having more regulation tends to help the leader. And then third, on a cynical view, in the PR, like, world of public opinion, this is totally the playbook Mark Zuckerberg should have followed, which should have been like, I want you to regulate me, with the full knowledge that there's no way that the US government is going to get their act together to do it. And so then whose fault is it that social media is messing with kids? The US government's fault; I asked for regulation, I'm trying my best. Basically, Altman kind of flipped the Zuck playbook the total opposite way, and it's really working for him. And the kind of glowing reviews that he's getting from, like, right and left press is mind-boggling compared to how Mark Zuckerberg was painted: painted as a child, painted as out of touch. Like, all the negative, I mean, it was personal attacks on him, mainly because he was resistant to regulation. And Altman, leaning in, it almost has no downside. And so I just think he and his PR team, or his political advisor team, they are writing a new playbook for Silicon Valley. I would not be surprised if you start having a parade of Silicon Valley execs saying, regulate us here, regulate us there, regulate us everywhere, and at worst, they're improving their leadership position in those spaces.
And at best, you know, from their business standpoint, the US government's not going to get their act together; they're going to be regulating stuff that happened, like, 10 years ago. So I do think the edtech story here, though, is that there's probably not going to be a privacy or kids-safety regulation leading the way on what we can and can't do in edtech. And so what we should really be looking at is: are there states, or is GDPR going to take a lead on that, or are there companies that are going to kind of set the bar? Because one of the limiting factors in our space, whether it's, you know, workforce, higher ed, or K-12, is, if the buyers demand a certain standard, that standard will be met, but they have to know what the standard is. And so some player, industry, state, federal, international, needs to set that standard. And then we can all, you know, kind of jump in with those rules. Until then, it's going to be a free-for-all.

And one of the things I loved about your recent article was that realization that, you know, you can use different LLMs for different purposes. So if there is a moral panic, or an actual panic, an actual real panic, about, you know, teenagers or young kids being exposed to AI that's, like, destroying their mental health the same way, you know, people have seen in social media, or young kids, you know, just not knowing how to use it, there's a really interesting market for, you know, having a walled-garden AI, so to speak, that is safe and private and, you know, trained on great data, trained on reliable, accurate, factual, non-biased data. But at the same time, kids are still going to get out of that walled garden the same way they get out of almost all walled gardens in the world. But just having it there gives everybody plausible deniability. And it does help. I mean, it does really help. So I think it's gonna happen. And I think it's a really important factor that even if the regulation flops, which I think we both agree is likely, I still think there will be people who try to solve it through, like you say, companies that try to solve it through actual tech solutions. And if they can convince school districts and universities that they are providing something that's safe for them, where they're not going to be held liable for things that happen, then there's a lot to win there.

Well, I mean, I just think it's also important for listeners to know, like, what does the open LLM thing combined with the policy piece mean? There are new legal questions that have never been answered before. And there's some dark shit happening on the web when you can have your own LLM for 500 bucks and a laptop. So, you know, I was talking to somebody in the UK, and they were saying, you know, EU cybersecurity people are seeing a huge wave of child pornography generated by AI. If AI generates it, is it illegal? If it's not really a real child, is that illegal? Is it not? Is it art? Who do you blame? Is it the AI company that made it? Is it the LLM? Is it the person who put in the query? How do you enforce that? How do you find it? How do you prosecute it? I mean, when we're talking about, like, policy and regulation, this is not some sort of simple act that they, you know, put on the President's desk, that gets signed, and there's the rules of the road. This is changing every element of government, every element. And so, you know, I think the EU is kind of the first wave dealing with all this stuff.
And hopefully we can pick up some elements. And by the way, we should look to China and just say, what are the elements of their policy that don't totally destroy personal freedom, right, but that are actually things that we could set as a standard, knowing full well that people will deviate from that standard, but at least creating some sort of, like, norm or set of rules?

All right, last, we're going to the third topic. The Wall Street Journal just released a report, and it is a bombshell. It's basically that venture capital firms, on average, over the last, you know, two, three years are reporting 7% negative returns. Tiger Global, for example, just marked down their portfolio by 20-plus billion dollars, that's with a B. And the data shows that a couple of interesting factors happened. Okay, so money was free in, like, 2019, 2020, 2021, 2022. So what did people do? They went from small funds to mid-size funds to big funds, because they just kept raising more and more and more. And because the market was so high, their portfolios would get marked up, marked up, marked up. So you have companies that have not gone public, but based on their funding rounds have 300x'd, 400x'd in terms of value, so we were seeing record on-paper returns. But now, as the cost of capital has really tightened, as interest rates have gone up, the market has taken a hit, which means your multiples are harder. Basically, the two sides are: your weighted cost of capital, so how much a dollar today is worth versus a dollar in the future, has changed, and the, like, exit value has gone down, which is basically obliterating the portfolios of most of these organizations. And because their funds have gotten so large, it's really hard to turn around, because they're so deep with so many dollars that to turn around a, like, $10 billion fund, you need 30 billion in exits. And so what we basically have, then, is financial investors, which represent pension funds and institutions and all that stuff, they are moving all their money away from VC. And I've basically been talking to a bunch of VCs, edtech and not, and, you know, if you aren't on, like, fund six or fund seven and don't have a good track record, good luck raising any money. So what does this mean for our entrepreneurial community? When you go talk to that VC, in years before where you would have a competitive bidding process, or in years before where somebody would kind of go with your idea and help you grow it, you now have a raft of VCs that are either, one, so big that whatever you're doing is just too small to move the needle for them, or two, so small or early that they're worried about raising their next fund, and they're trying to keep as much capital in house and in their portfolio as possible. You know, they're worried about their runway, but they're also worried about the investments they've already made, and trying to make those successful. So we're just seeing the edtech winter has now become a VC winter, which is having a compounding effect. And then, you know, Alex, you had shared that Reach Capital AMA deck with me. It's also showing that, in particular, edtech is underperforming; it's like 4 to 5% worse than all the other markets, partly because the public companies in edtech are faring disproportionately poorly. So top of the pod and everything we're talking about is, like, AI is changing the game, there's so much opportunity.
What we're seeing on the impact and VC side is a total pullback. The VC model is in jeopardy for the edtech space. And while I think that established edtech VCs who've been doing this for a long time are in a strong place to continue moving forward, kind of anybody else who was, like, in the ecosystem, they are pulling way, way back from edtech. So it is tough skiing out there, entrepreneurs, and we, like, feel for you. But I think what this really opened my mind to was, it's not just a hard time for sales in K-12, or sales in higher ed or workforce. It's, like, the actual capital that should fuel the winning business models, and there are winning business models out there that are ready to be fueled, that capital is not there. And so how long will this last? What are the implications? Is this, like, a bigger correction away from venture as, you know, an allocation class in, you know, an investor's portfolio? These are systemic, structural changes that are going to fundamentally impact our space. And, by the way, I also think whoever can figure out new models of financing, to get in here and take some of these businesses that can go from 10 million to 100 million, which may not move the needle for an uber-large VC but could be a really great return, there's some opportunity for some innovation in financial terms, too. As you hear that, how do you connect the dots with what we're seeing on the entrepreneurial side and the AI opportunity that we keep talking about?

It's a great question. I mean, you are much better versed in the VC world than I am, but a few things that I've seen that I think are related to this are, you know, there are a few different types of investment that are not the scale and size of VCs. You know, foundations can do some really interesting work here. Angel syndicates can do some interesting work here. Individuals, you know, high-net-worth individuals who have a mission and really, really care about, you know, a particular aspect of education, maybe instead of putting their money into an edtech VC, or into a VC at all, they might be, you know, taking pitches themselves, or finding a team. I guess what I'd say is I would encourage entrepreneurs who have been following the sort of Y Combinator playbook for the last 10 years to maybe look a little bit broader, or maybe even a lot broader, at where money might come from. Because I think you're right, but I think there are definitely good ideas, and especially when it comes to AI, there will be some incredibly game-changing ideas. And it's possible that they won't be able to catch the attention or get the funding that they certainly would have a couple of years ago. But if smart investors of any kind, people who do have the money, are keeping their eyes out for it, it could be both really a good return, but also a really amazing opportunity to sort of, like, bend the space to your vision a little more, because there's just going to be fewer companies in it coming from this generation. Those are my best thoughts, and I don't know, I'm a little out of my wheelhouse here. But I'm really curious how this changes the space. And it's amazing to even think that this VC model that has been, you know, dominant might be starting to, you know, look a little different.

Well, profitability means controlling your destiny.
And I think that's where, whatever you're creating, I think we're already seeing creativity in how people are funding and financing, whether it's non-dilutive capital or strategic M&A. But I think this idea that everyone's going to go through seed, A, B, C, D, on to an IPO, I think that that path is becoming harder and narrower. And the idea that you might get a seed and an A and then be profitable for the foreseeable future, that's a much more likely scenario now. And with AI, you know, hopefully you could bring your costs down such that profitability is far more in reach at a lower revenue scale; at, like, 5 million ARR you could be profitable, whereas in years past it would be 10 or 15. Well, with that, Alex, we're going to go ahead and transition to our interviews. For our listeners, please download the newsletter. Please reach out with your feedback. Also, check out our new website at edtechinsiders.org. We'd love to hear from you and appreciate your listening. Off to the interviews.

All right, listeners, we have a special guest today. I'm so excited to introduce Hussa Blake. She is Education and Workforce Strategist at the Patrick J. McGovern Foundation. Thank you so much for joining us today. We're so excited to have you on Edtech Insiders.

Thanks so much for having me. I'm so happy to be here.

Before we dive in too far, can you just tell us a little bit about the work of the McGovern Foundation and, specifically, your focus on AI, learners, and the future?

We are the Patrick J. McGovern Foundation. We are really focused on bridging the frontiers of AI, data science, and social impact, so we can create a thriving, more equitable, and sustainable future for everyone. And so we are a philanthropic organization, a foundation, that really partners with nonprofits and social enterprises that are doing the work, right. So we will partner with academia, practitioners, civil society, to pursue what AI and data science are doing in addressing some of the world's most urgent challenges.

Just to add on to that, you have some focus areas, and one specifically is around AI and, you know, kids kind of learning the fundamentals of AI, both as part of being a human being in that AI future, but also because of workforce. Can you just talk a little bit about the work that you're doing there, and how that work is playing out in the space, given how much we've seen this explosion of AI?

Yes. I do want to say that the McGovern Foundation has different areas that we focus on; I personally manage the education and workforce portfolio, where I really work on strategy and partner relationships. And in our education and workforce portfolio, we really are aspiring to focus on two things. One is early exposure to AI education, data science education, machine learning education. And the second one is focusing on developing AI and data science skills so that students, young people, are ready for the future of work. And the ways in which we do that is, one, by looking at who we are educating. We want to make sure we're educating the whole ecosystem, not just the students, but the teachers, the administrators, and families, right. You can't really have a digitally literate society that is able to function in an AI and data science world if all aspects of the ecosystem aren't there and they don't understand it. We also address responsible use of AI and data science. We understand it's not enough to get the basic foundation; we have to be able to know how to use it for good.
And the only way you can do that is understanding the threats, the bias, and some of the gaps that we're seeing now, as we're seeing the explosion of AI and the use of AI in many different sectors. And then the second part of that is looking at the workforce. How are we collaborating with the private sector? How are we giving people access to diverse professional tech networks that are primarily exclusive, or historically have been exclusive, to certain groups of people? And how are we mitigating, how are we working with our partners to mitigate, some of these existing challenges that we see in a very fragmented tech workforce pathway?

So, you know, it's so great to hear that the McGovern Foundation is thinking about equity. It's something we grapple with when we hear about, you know, AI spreading on this show. The McGovern Foundation put out an op-ed recently, and there's a line I wanted to read from it, because I think it mirrors exactly what you're saying. It says, you know, educational institutions need to grapple with AI's effects on learning and evaluation, and design new curricula to prepare students for the AI future. And I think that second part is so important, and something I think people aren't even thinking about as much as you might expect. What I'd love to ask you is, you know, when the internet first came out, there was this digital divide for a decade or more. And even now, access to coding education is extremely, you know, is not particularly available for communities of color specifically, but all sorts of people. How can we take this new technology, AI, knowing that it's going to be potentially as transformative as the internet, and make sure we don't fall into the same traps?

Right. That's a great question. And we at the Patrick J. McGovern Foundation, we look at AI as being the new digital divide, right? It's expanding, it's growing so rapidly. The World Economic Forum has reported almost 3 billion people across the globe still remain offline. So if we're still struggling to get people online, a basic online connection, and there's this explosion in AI, and those same 3 billion people are missing out on that opportunity, we will only see a widening gap if we don't try to address some of the challenges that are speaking to these inequities. And one of the ways that we are looking at addressing those challenges is really the incorporation and collaboration of Global North and Global South partners and leaders in the private sector and in the public sector, as well as in the philanthropic sector, to really address some of the limitations that restrict successful adoption and education of AI. We understand in certain locations there is no basic infrastructure, right. But just because there is a lack of infrastructure, there is not necessarily a lack of understanding. So how do we reach those people and ensure that they do not get left behind? Those are some of the issues that we like to address at the foundation. And really, it goes back to also looking at the responsible deployment of some of these resources, coupled with, like we always say, promoting AI foundational skills, as well as making sure that people are using those skills for good.

You know, you talked in our prep for the pod a little bit about this intentionally global focus.
And, you know, as you talk about the 3 billion folks in the world that don't even have access to the internet, the AI digital divide being a huge threat to economic opportunity for all, how do you think about that operationally, as a foundation? How do you distribute? How do you create infrastructure that supports both folks in developed countries, where obviously this is a very hot topic and has equity issues, but also the global audience in developing countries, where just even reaching learners or reaching young people can be really challenging? What's the operational approach?

I think the operational approach that we have always taken has really been focusing on working with partners that are overlooked, often not thought about. So really targeting communities that have been traditionally marginalized and not listened to, that has been the key to a lot of our grantmaking, especially in the education and workforce development portfolio. We have worked with organizations that focus not only on providing education, but also on spearheading movements, reaching people in remote areas, people in rural areas, and people in communities that are underserved. So partnering with organizations like aiEDU, that not only were at the forefront of AI education awareness, and the tools and the resources, but also really spearheaded a movement around being responsible in how we distribute AI education to everyone, not just the students. Working with organizations that really are focused on systems-level change, right. And these are organizations that either are, you know, small nonprofits or teams, and they need funding that will help them scale their very meaningful projects and missions; or it's also working with older, well-established organizations that are looking to take what they have already been doing to the next level, and really pivot some of the work that they have been working on.

The global focus is incredibly interesting. And you know, when you think about a problem at the scale that you're thinking about it at the McGovern Foundation, of how do you get even the people around the world who don't yet have internet access to get AI educated, it strikes me... you know, I look back and think about, you know, some countries did a little bit of sort of leapfrogging and grew really quickly. They went a long time being completely out of the internet world and very quickly ramped up. And I'm looking at statistics like, you know, the year that Facebook was launched, in 2005, less than 10% of the Chinese population had internet access, or, you know, just in Africa, we're now at about 43% penetration, but that's grown incredibly quickly over the last few years. And I wonder, if McGovern looks across at all the different possible places to, you know, help this mission come to fruition, are there places where there's opportunity for sort of rapid, at least, catching up? Even if people can't be ahead of the curve if they don't have access yet, how can it sort of spread? It makes me think of, you know, YouTube and social media, but I don't know. I'm curious how you think about that kind of spread of this kind of technology.

That's a great question.
And I'm glad that you did note the rapid reach that technology, and AI and emerging technology in general, is having on the African continent. I think some of the rapid adoption that we are seeing in places like the African continent, like LatAm, is because, one, they're starting from scratch. They know what they don't have, right, and so they know what they need to work to get. But also, they're working on consolidating sectors, consolidating solutions that can speak to so many different sectors. So, for example, instead of having a tool that is strictly applicable to education, you're seeing in places like LatAm, whether it be Colombia or Chile, or places like Nigeria or Kenya, that they're combining edtech solutions with healthcare solutions, you're seeing them combine edtech solutions with business solutions. And that is really aiding them in the rapid adoption of some of these emerging technologies, right? It's killing two birds with one stone, knowing that, hey, this is where we are right now, but we have the capability of doing so much. So making sure that they come up with solutions that aren't just targeting one issue, but target multiple issues.

I love that combination of edtech and health solutions. That's such an interesting evolution. And here in the US, I think, you know, we separate those things. Many of the things we talk about on the pod are how convergent things are now; like, schools are a point of care for healthcare, for food, for social services, as well as education. Last question before we let you go. I also just think that your career path is incredibly fascinating. Many people imagine a career in philanthropy as, you know, starting at one foundation and moving to another, but you've had an incredible run, with both industry experience and working with global organizations. Can you just tell us a little bit about your journey to McGovern, and then also how that's prepared you for this role, kind of leading the education and workforce vertical?

You know, when I describe my background, I always like to start with the why, right? Because it will explain my career trajectory. My why is always being a connector, and being able to help people gain access to resources to change their lives and to enhance their communities. So my background academically, by training, is law and international policy, right. I have a law degree, and I have a master's degree in international policy. My foray into edtech actually came about back in 2007, 2008, after working abroad for an international organization on migration issues. And I came back to the States, and I was speaking to my twin sister, and we were comparing our stories, because she was also abroad, working on global health issues. And we realized the inequities that we were seeing, whether it be internationally or even domestically, really centered around the lack of access to educational resources and options. And so we thought, you know, we know that technology is doing great things, right? We see that it's connecting people from around the world. How can we incorporate tech into connecting people from underserved communities to resources that they need? And so that kind of sparked an initial project that was focused on connecting US students to students in several African countries that were similarly situated; they were really in a position where they were facing similar social issues, academic issues, etc. And we wanted to use technology to connect them.
So they could have a cross-cultural experience, be aware of different resources, have access to mentors, and most importantly, create solutions to some of the issues in their community. And so that one project started in Baltimore, Maryland, and we connected Baltimore students to students in Swakopmund, Namibia, to discuss HIV/AIDS, because HIV/AIDS was a big issue at the time, and the rates in Baltimore at the time were comparable to rates in Swakopmund, Namibia, and how that was, you know, impacting their access to not only health resources, but educational opportunities in the cities that they were living in. And so that small project turned into an edtech nonprofit, where we built on that project. We were able to grow the team from, like, two people to a staff of about seven; we had a nonprofit volunteer base of about 100 people worldwide. And we were able to develop projects that focused not only on education and health, but also on workforce development. How do we use tech to, one, connect students to mentors, connect them to jobs, and train them to have the necessary skills for the available jobs? And so those were the projects that we worked on. I ran that nonprofit for 10 years. And after 10 years, I decided that I had a one-sided view of education technology, right? There had to be other aspects to it. And I knew there were other aspects to it, because I had to work on funding, I was working with partners, and I wanted to see the perspective that they had when they were talking to me. So that, you know, led me to go into the private sector, and I worked for a big education management company, as well as a startup that focused on high-dosage tutoring. And both of those experiences taught me that it's not enough to help people, right? Because oftentimes, in the nonprofit world, you're very mission driven, and you want to help people, but you also have to look at the business end of things. Does it not only speak to the bottom line, but how can you scale your efforts in a way that is not only impactful, but long lasting and sustainable? And I think those experiences led me to now be in the position that I'm in at the Patrick J. McGovern Foundation, because I was able to think not only mission driven, but also strategically, right, and see all aspects of the sector for what they were, and how best they can collaborate to really move the needle forward. And so really being able to partner with different aspects of the sector, not just the nonprofit field, not just the corporate sector, but also community leaders and policy leaders in this space. And I think that my role at the Patrick J. McGovern Foundation has also, you know, helped me become a better professor, because I'm an adjunct professor at two colleges, to really see how I can better teach my students to look at education, and tech in education, in a very holistic way, so that they can be ready for the future of work and for some of the positions that they're going to be looking at.

Yeah, it's super interesting to hear your journey. And you know what, there's something about the road less traveled that leads to a far more interesting destination, in a weird way. You know, Alex and I talk to a bunch of changemakers in education, and almost everyone has these, like, unique roadmaps that they follow that have brought them to really foundational insights that they wouldn't have had had they not had that pathway. Well, Hussa, it's been so great to have you on the show. For folks who also want to follow the Patrick J.
McGovern Foundation, which is, correct me if I'm wrong, mcgovern.org, is that right? mcgovern.org. We are also on LinkedIn as well as Twitter, so you can find us there.

Well, we're so excited to have you here. We're going to be following all of your reports and your investments in the space. And thank you so much for joining Edtech Insiders. We'll be excited to cover you in the months and years to come.

Thank you so much for having me, Alex and Ben.

All right, everyone, we have a special guest today. It's our first time having three guests on the pod, and it's the founding team of Brisk. We just saw LinkedIn blow up in the last 24 hours with the combined impact of the Brisk announcement. Y'all have so many friends in lots of different places; I was just getting it from all corners of the internet. But your video also, I think, really spoke to the potential for AI to impact educators. So, Brisk team, welcome to the show.

Thanks for having us.

Before we dive in, I think it would just be helpful for everyone to hear a little bit of the journey. Like, where did this idea come from? What's the origin story? And then we can dive in a little bit more around Brisk and where it's headed.

Corey, do you want to do the origin story of how we met each other and kind of how we work together? And then maybe I can take the second part. Does that sound good to you?

Sure. Well, I am a career educator. I taught, led a school myself, and have constantly been thinking about how do we continue to make education more sustainable, really, as of late, for teachers and for school leaders, as well as making sure we're servicing students, the end being that we are doing right by students. And I had the great privilege on that journey of, you know, meeting Tom and Arman when we were all working together on a previous product. I was working closely with a school district that was one of our customers for the former product that really had a pain point in getting the resources they needed to their teachers before school started. And I had actually never met Arman before; we were remote. I was, like, visiting the client, and the client was like, we really need some guidance on what to do about getting access to this thing, like, right now. So I reached out to our team; I was like, all right, who's available to answer some of these more techie, if you will, questions than me? Well, Arman was yanked by one of our customer support people into a conference room, sat down, and supported me in supporting the customer's needs, and really unblocking them so they could be ready for school. And at that moment, I was like, all right, Arman, I like this guy, I think we'll all work well together, you know. And then fast forward, we all had the opportunity to work on a specific part of that product called Notebooks, which didn't have a name when we started, and Arman was the PM. And I had to kind of, at the time, force him into giving it a name so that we could really manage the change and be excited about that launch with our customers and users at the time.

So yeah, it's such a common startup story, forged in the fire of customer wrath, customer desperation; you find out what people are made of. And, you know, I also often find, like, product teams and customer success teams, when things are going well, they tend to be at odds. The question is, like, when, you know, shit hits the fan, are they, like, joining together and running to the fire?
And it sounds like that's how you all met. Tom, how did you get involved in the equation?

I had worked with Arman and Corey previously at that same edtech company, and I had worked with Arman particularly on a product called Notebooks, essentially rebuilding Google Docs for education. And that was definitely an experience that was full of fire and firefighting. So I'd been working at that company, and I was really excited about generative AI. And then when Arman came to me and showed me what he was building as a company, I was really excited about it. I was still very passionate about education, but also had been really interested in AI, so it felt like a really good fit. And so I started working with him and became the founding engineer.

Great transition. So tell us a little bit about what Brisk is, what it does, the problems it solves.

I can share a little bit about it, and then maybe Tom can go into a bit more depth about what we're building right now. So, you know, we were all working at the Chan Zuckerberg Initiative during the pandemic, and we saw how much time teachers were spending across all these different workflows, whether it be lesson planning or grading. And so our question was, like, how can we use generative AI to decrease the amount of time that teachers are spending on things that are taking them away from their learners? And so that's our sort of key North Star. And so we've built a couple of features, which, Tom, maybe you can share right now, that we think really have decreased, that already are decreasing, the amount of time that teachers are spending on tasks that they don't need to be spending time on.

Yeah, so the four main features that Brisk offers are all ways of kind of connecting the power of generative AI to what teachers actually need to do in the classroom, and are already doing right now, in the context of Google Docs or the content that they're reading on the web. And so the first thing that we built was the ability to detect AI plagiarism. It's definitely something that's really top of mind for a lot of teachers, but they don't always know where to start, and even when they do discover that there are services out there that do this, they find them kind of cumbersome to integrate well with the actual flow of assessing their students' content. And then some of the next features we built were the ability to create curriculum, whether it's worksheets or lesson plans or quizzes, and to be able to do that in the context of Google Docs. We also are adding the ability to provide feedback on student content, and are soon to launch the ability to take content, particularly news articles, and convert it to any Lexile level, which is basically the student's reading level, so that a teacher can share a news article that might be written for an 11th grader but have it converted to something that a fifth grader would be able to read.
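As a rough sketch of the reading-level idea described here: converting an article to a target grade level is, at its simplest, a prompting task over a generative model. The endpoint, model name, and prompt below are illustrative placeholders and not Brisk's actual implementation; it assumes an OpenAI-compatible chat endpoint.

# Minimal sketch: rewrite an article for a target reading level with a
# generative model. Endpoint and model name are placeholders.
import requests

LLM_URL = "http://localhost:8000/v1/chat/completions"  # placeholder OpenAI-compatible server
MODEL = "general-llm"                                    # placeholder model name

def rewrite_for_grade(article_text: str, grade: int) -> str:
    """Return article_text rewritten so a student at `grade` level can read it."""
    prompt = (
        f"Rewrite the following article for a grade-{grade} reading level. "
        "Keep every fact, shorten the sentences, and briefly define hard words.\n\n"
        f"{article_text}"
    )
    payload = {
        "model": MODEL,
        "messages": [{"role": "user", "content": prompt}],
        "temperature": 0.2,  # favor faithful rewriting over creativity
    }
    resp = requests.post(LLM_URL, json=payload, timeout=60)
    resp.raise_for_status()
    return resp.json()["choices"][0]["message"]["content"]

if __name__ == "__main__":
    print(rewrite_for_grade("Inflation cooled in April as energy prices fell.", 5))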
That's awesome. And we've seen so many products that have gotten launched that have been attacking the student layer, you know, interactivity with the student. Why did you decide to focus on the educator layer?

I guess I can take that. I think we think that educators are kind of the key linchpin, right, to a quality education in the classroom. And if we can support teachers, if we can support them in many different ways, if we can pay them enough, and if we can give them the right tools, they can create the best learning experiences for their learners. So that's kind of why we started focusing on the teacher. I think, eventually, we'd love to branch into student learning and trying to facilitate that more easily, but we want to start with the teacher and focus in on that right now.

Yeah, you know, as a middle school educator myself, I wish these tools had existed in my eighth grade classroom. I remember doing a reading assessment, and I had third grade readers all the way to 10th grade readers all in one class. And, like, how do you differentiate? How do you create dynamic lesson plans? And now it feels like the future is here, like we could actually do this. You know, a lot of folks, before we go to the future of AI and edtech, a lot of folks are trying to figure out, okay, we've taken AI and we've brought it to a product; how do we bring it to market in a way that you can actually create a scalable impact? How are you thinking about your go-to-market motion, both in terms of, like, product adoption as well as, like, a business model?

I can take that one as well. That's a really good question. And, you know, we've been looking at all the case studies, right? So we've been looking at the B2B motion, we're looking at the product-led growth motion. We realized that this is a new era, right? Post-pandemic, schools are actually thinking a little bit more, after having procured all these different tools, about what the core tools are that they need. How do we make sure that all our classrooms are using the same type of tool rather than having this arbitrary combination of products? And so as we started to explore this, the product-led growth motion, we've actually found instructional coaches to be one of these key levers. Because, one, they are very exploratory in terms of trying new technologies, including artificial intelligence, because they're early adopters; they chose to be edtech coordinators. And then we're having them, like, try it out and actually see the benefit of artificial intelligence. Because I think one of the things we've realized is, you know, if you say you're an AI tool to a teacher, they're like, okay, we've heard that; you tried to put my education on the blockchain last year, we're not as interested in that. But if you can actually start to show a key person at a school, they can really transform the environment in that school, so they can forward the tool to other teachers, and then also become a super advocate at the school level to be able to say, like, hey, teachers need this, this will definitely help them. And so that's kind of how we're approaching it. And then we plan on basically making inroads into the school and then eventually selling a premium version to schools.

And if I had tried to launch this four or five years ago, I also would be feeling, on the inside, like death by 1,000 APIs. Tom, like, this, you know, clearly you all have experience in trying to disrupt Google Docs with Notebooks, so this isn't your first rodeo. But is it easier now with AI to integrate, you know, teacher-facing AI products with the other tools and platforms that they're already using? Or will we ultimately have to adopt brand new sets of tools?

Yeah, I mean, I feel like as complex as AI itself is, the APIs that they expose to make tools like ours are actually still pretty simple.
All right. Well, our listeners are very curious. We've been in this crazy accelerated mode of product releases and product development; the AI revolution is here. Where do you think it's heading in the next four or five years? Cory, what's your read on where the market is going? I'm opting to take the optimistic view on this. I think it's going to revolutionize the teacher role and allow it to become more sustainable and more joyful, for teachers to be more whole as humans themselves and to have their well-being taken care of in a way that isn't always happening right now. And in conjunction with that, they're going to be serving individual students' learning needs and social-emotional needs far more effectively than they ever have before, and AI technology is going to allow them to do that. I actually believe it's going to deepen relationships and improve human connection rather than disrupt it. I love that take. AI, rather than being a job killer, is a job saver, because we have teachers leaving the profession because it's just not a fulfilling job. As the job is constructed today, you're doing fifty percent administrative work; if AI can take that off your plate, what a joyful job being an educator could be, and what better results you could get for kids. So just to wrap, where can people find out more about Brisk? What's the best way for educators or other edtech entrepreneurs to get involved? Sure, so our website is briskteaching.com, that's B-R-I-S-K teaching. We also have an email, hello@briskteaching.com, if you have any questions or want to partner in some way. We're talking to so many really interesting people. When you have your own education startup, a lot of people come out of the woodwork to say, hey, I'm part of this nonprofit and we want to help scale Brisk as part of it, so it's been really interesting to have a lot of cool conversations. We're really looking to talk to anyone who is excited about Brisk, and we're also hiring. So if you're an engineer listening to this, please email jobs@briskteaching.com and tell us what excites you about Brisk. Amazing. Well, thank you so much to the Brisk founding team: Cory, Tom, Arman. So great to have you on today. Good luck, and thank you so much for really diving in to change the future for educators, for learners, and for edtech. Thanks so much. Thanks, Ben. Thanks, Alex. Thanks. Thanks for listening to this episode of Edtech Insiders. If you liked the podcast, remember to rate it and share it with others in the edtech community. For those who want even more Edtech Insiders, subscribe to the free Edtech Insiders newsletter on Substack.