Edtech Insiders

Week in Edtech 8/7/2024: ChatGPT 5 on the Horizon, High-Level OpenAI Departures, Meta's LLAMA 3, Google's AI Moves, Instructure's Canvas AI Integration, AI-Powered Tutors, Custom Chatbots, and More! With Special Guest Host, Matthew Tower

Alex Sarlin and Matt Tower

Send us a text

Join Alex Sarlin and guest host, Matt Tower, as they explore the most critical developments in the world of education technology this week:

🧠 OpenAI's ChatGPT 5 is Coming
🚪 High-Level Departures at OpenAI
🚫 AI Detection Technology Still Unreleased
🦙 Meta Launches LLAMA 3 and AI Studio
🤝 Google Hires Character.AI Co-founders
🎓 Community Colleges Band Together on AI Education
🔗 Instructure Canvas Integrates AI Across Platform
👩‍🏫 Heeyo's AI Chatbot Aims to Be Kids' Interactive Tutor
🍐 Pear Deck’s New Instant AI-Generated Lessons
🛠️ Pluralsight’s AI Assistant Iris
🌍 Neuberger Berman Nears $15B Deal for Nord Anglia Education
🚀 Uplimit Announces $11M Series A Led by Salesforce Ventures
🗣️ Fluently’s AI-Powered English Coach Secures $2M Seed Round

Upcoming Event:

📚 Book Club with FOHE: Discussion on Sal Khan’s new book Brave New Words: How AI Will Revolutionize Education (and Why That’s a Good Thing)

Stay updated with the latest edtech news and innovations. Subscribe to the Edtech Insiders podcast and newsletter, and follow us on LinkedIn!

This season of Edtech Insiders is once again brought to you by Tuck Advisors, the M&A firm for EdTech companies. Run by serial entrepreneurs with over 25 years of experience founding, investing in, and selling companies, Tuck believes you deserve M&A advisors who work as hard as you do.

Alexander Sarlin:

Welcome to Edtech Insiders, the top podcast covering the education technology industry. From funding rounds to impact to AI development across early childhood, K-12, higher ed, and work, you'll find it all here at Edtech Insiders.

Ben Kornell:

Remember to subscribe to the pod, check out our newsletter, and also our event calendar. And to go deeper, check out Edtech Insiders+, where you can get premium content, access to our WhatsApp channel, early access to events, and backchannel insights from Alex and Ben. Hope you enjoy today's pod.

Alexander Sarlin:

Welcome to This Week in Edtech, the week of August 7. We are really excited to have our special guest host, formerly of ETCH and now of EdSheet, Matthew Tower. Matt Tower, who is now at Whiteboard Advisors, is doing amazing work. Hey, Matt, great to have you here.

Matthew Tower:

Hey, Alex, thanks for having me. Yeah, it's been an exciting summer: joining the Whiteboard team, moving to San Francisco, and lots and lots of positive change.

Alexander Sarlin:

And I think only one has come out so far, right? One EdSheet?

Matthew Tower:

The next one actually will come out tomorrow morning.

Alexander Sarlin:

Fantastic. So he's putting the final touches on it; it'll probably have been out a few days by the time people hear this. So go check out the brand new EdSheet from Whiteboard Advisors and Matt Tower. A couple quick announcements, and we'll keep this really quick. On the pod in the next couple weeks: we just published a really cool interview with Kevin Leem, who's the CFO of Mathpresso, a South Korean company that has millions of users of its AI education app and is about to launch in the US. We talked to Paul LeBlanc from SNHU, an absolute legend. We have Andrew Goldman from HMH and Writable, and Mark Miller from Good Harbor Partners, rounding out our August schedule. I also want to remind people that we are doing an Edtech and AI Book Club, along with our friends at the Future of Higher Ed group, on the Khan Academy book Brave New Words. That is going to be August 22, and we've gotten a lot of awesome interest there, and there will be guests from the Khanmigo team as well. So that should be a blast. With that, why don't we kick off This Week in Edtech? So Matt, you know, some weeks we start with AI and some weeks we don't, but it feels like there was a lot of AI news this week from some of the big companies. Kind of a lot, a lot. So why don't we start with some of the news out of OpenAI, and we'll do a little bit of around-the-world with some of the big tech and big AI companies, and try to connect it to what it might mean for education. So let's talk OpenAI. What jumped out to you this week? There were a few different announcements from OpenAI.

Matthew Tower:

So, to give just a quick overview of the announcements, in no specific order, although on some level they're probably all related: the first is continued rumors of GPT-5, which is super exciting. You know, I think everybody sort of expects a step-change improvement in the quality of the model with a full number-grade upgrade, not just, like, adding letters onto the end. The second is they had a number of senior leader departures that I think you can choose to read into or not, and that might be a fun thing for us to talk through. And then the third is there is a rumor that they've had a cheating detection tool sort of sitting on the shelf for at least a year. So yeah, I mean, I think GPT-5 is super exciting, because you have no idea what it could be, right? You know, I think folks have been super impressed by the other full-number upgrades, two to three, three to four. So that one's probably the most fun to speculate on, like, will we finally get flying cars? But it's hard to sort of know. The senior leader departures are harder for me to get a grip on. You know, I think everybody wants to relate it to the events of last fall and sort of the fallout of the board strife there. I'm open to that argument, if you want to make it. And then, to me, the actually interesting one is the cheating detection tool. And I'll just give a perspective, and then maybe we can go back and forth on it. The easy take is to say, oh, they're not releasing it because they don't want users to fear using the tool, right? Because if you have a bunch of users that are using it to write essays, then they will stop writing their essays via ChatGPT if there's a, you know, fear of detection. To me, that feels a little simplistic. It's easy to write, you know, a 250-word essay on that.
But I think the more nuanced thing is: what is the sort of knock-on effect of having both an open-ended tool and a detection tool coming from the exact same company? And what does that do to your models? There's such a domino effect there that is obviously more complex and harder to capture in a short news article. But that, I think, is the actual reason. I don't think it has anything to do with fear of people stopping using the tool.

Alexander Sarlin:

That's an interesting story, because I think it's a really clear overlap between sort of big tech and edtech. The thing that struck me most about the cheating tool story was that what I had been hearing in the AI communities over the last six months to a year is that it's not necessarily even possible to build something with that high a level of accuracy. The rumor is that it has, like, a 99.9% accuracy rate at telling whether text is written by artificial intelligence, and that is much higher than we've heard from other products, and much higher than I think a lot of people thought was even possible. So the fact that that's been there, sort of secretly inside OpenAI, for the last year, or even two years, while they've been debating it, adds an interesting, you know, Jenga piece to the entire integrity discussion. Because my sort of take over the last few months has been that the integrity discussion is a little bit moot, because there's no great way to actually detect, other than having people inside your system, which is like what Turnitin is doing. You know, if you're literally inside the system, they can do various things to try to check your integrity, but it's been really hard for people to do. So that stood out to me.

Matthew Tower:

And that's what OpenAI's tool, at least as it was related to the Wall Street Journal, does. To get a little bit overly technical: when the model is choosing the next token, it can subtly substitute a watermarked token, so that when it later reads output it itself created, it can say, oh yeah, this follows the pattern that we ourselves set into it. And I think that's problematic for two reasons. One, it fundamentally warps the model, right? We're already playing with fire with these probabilistic models. We mostly know what they do, but the more funkiness you put into the probabilities, the crazier the outputs, on some fundamental level. And two, the fear, which is realistic, is that you can take the ChatGPT output, put it into somebody else's model, and transform it back. The example the Wall Street Journal uses is: if you take the ChatGPT output, put it into Google Translate into another language, and then put it back into English, the model would no longer be able to catch it. So yes, if you put your raw output back into ChatGPT's homebrew detection model, it can catch it with 99.9% accuracy, because they have full control over all of the variables. But as soon as you leave their ecosystem, that detection probability goes, you know, towards zero. So it's just problematic, right? I know everybody wants cheating detection, but there are so many unintended consequences you'd have to control for to do it well. I just feel like it's sort of a hopeless cause.
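[Editor's note: to make the watermarking idea Matt describes concrete, here is a toy sketch in Python. This is not OpenAI's actual, unpublished method; it follows the "green list" scheme from public watermarking research: a pseudorandom subset of the vocabulary, seeded by the previous token, is favored during generation, and a detector simply counts how often each token falls in its predecessor's green set. All names (`green_list`, `green_fraction`) and the toy vocabulary are illustrative assumptions.]

```python
import hashlib

def green_list(prev_token: str, vocab: list[str], fraction: float = 0.5) -> set[str]:
    """Pseudorandomly pick a 'green' subset of the vocabulary, seeded by the
    previous token. A watermarking generator prefers tokens from this set."""
    ranked = sorted(vocab, key=lambda t: hashlib.sha256(f"{prev_token}|{t}".encode()).hexdigest())
    return set(ranked[: int(len(ranked) * fraction)])

def green_fraction(tokens: list[str], vocab: list[str]) -> float:
    """Detector: what fraction of tokens came from their predecessor's green list?
    Unwatermarked text hovers near `fraction`; watermarked text scores much higher."""
    pairs = list(zip(tokens, tokens[1:]))
    hits = sum(1 for prev, tok in pairs if tok in green_list(prev, vocab))
    return hits / max(1, len(pairs))

# A toy "watermarked generator": always emit some token from the green list.
vocab = [f"w{i}" for i in range(50)]
tokens = ["w0"]
for _ in range(30):
    tokens.append(min(green_list(tokens[-1], vocab)))
print(green_fraction(tokens, vocab))  # 1.0: every transition is green
```

In this toy setup the watermarked sequence scores 1.0, while ordinary text would hover near the green fraction (0.5 here). Rewording or round-trip translation scrambles the token sequence and pushes the score back toward chance, which is exactly the weakness discussed above.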

Alexander Sarlin:

Yeah, I mean, there's this sort of laundering that you can expect from students. You know, highly motivated students, students highly motivated to, quote unquote, cheat with some of these tools, have all sorts of recourse. There are actually tools specifically built to humanize outputs and add misspellings and things to make it feel more real. There's, like you said, the sort of Google Translate back-and-forth. There's a whole suite of different options for that. It's funny, because I feel like there are these two simultaneous arms races happening in AI right now. One is between the companies, right? It's ChatGPT 5; we saw Meta release Llama 3 this week; we saw Google release a really buzzed-about model, Gemini 1.5 Pro. All the big tech companies are trying to best each other in terms of having the best model. That's an arms race. But you also see this arms race in the school environment, where, as these tools get better and better, it becomes perhaps more and more tempting, and honestly, I think, smart for students to use them as part of their educational suite of tools. But then you need the arms race on the other side, which is the integrity side. You know, how might these integrity tools possibly keep up with all the changing models and the changing underlying structure? Like that sort of drop-a-watermark method: if students realize that that's how it's going to work, they do have recourse. It's fun, in a way, to watch this. It's hard to know exactly where it's gonna land, but I think it's really interesting to watch these sort of game-theory dynamics play out among the different players in the AI and education system.

Matthew Tower:

Totally. And, you know, I think it will continue for as long as I can see. There's no end in sight.

Alexander Sarlin:

There's no end in sight. So, you know, a couple more notable headlines. We talked about GPT-5, and there are some really interesting new models coming out from other folks. Google has officially sort of hired the co-founders of Character.AI, one of whom, I believe, was one of the original founders of OpenAI who had left to do Character.AI, and it is basically taking all of the Character.AI models and pulling them into its own Google suite. That's an interesting story to me, because it's basically an acquisition. Character.AI has very sophisticated AI, but it's sophisticated AI specifically in the realm of imitation of particular people. That's exactly what they've been trained on, other things too, but that's really the core output. So it perhaps implies that Google is going to think about how its own AI models can be even better at imitating individuals, whether they're historical individuals or living individuals, or imitating yourself: you create your own version of yourself, which you can imagine answering Gmail and doing various things like that. And then we saw Meta do something similar. They announced a tool called AI Studio, which allows you to create your own AI chatbot that basically acts like you. And they're sort of framing that as something for creators. So if you're somebody who creates on Instagram, you can then create an AI version of yourself that will continue to create on Instagram, or respond to people, or do the exact kind of thing that you might do. These are interesting. I'm curious what you think about this sort of direction for AI, this imitation-game kind of thing.

Matthew Tower:

Yeah, I think there are two sort of threads to pull on here. The first, which is the sort of nerdy financial one, is that the strategy for acquisition has transitioned from buying a company to just buying people, to be a little bit flip about it. We saw this with Inflection earlier this year (feels like eons ago), where the big tech companies, who are feeling the pressure from antitrust regulators in the US and Europe, are not buying companies the same way they might have in the past, and instead they're just picking off the best talent. And I, intellectually, would love to see what the pay packages look like and how investors are dealing with this. I don't have a great sense right now; that's something I'd love to dive into. Both recruiting and retention of AI talent feel as complicated as we've ever seen for a specific skill set. And then there's the more holistic, product, end-user part of this, which is creating these synthetic characters, sometimes based on real people, like with what Meta is trying to do, and often based on, you know, totally fictional folks. There's a huge business around AI girlfriends, which sounds kind of morbid, but actually I'm really in on, you know, having synthetic personalities around you to help sort of drive your life forward. I think they can be really powerful. Even something as simple as, you know, I got an Otter.ai meeting recap email the other day, and it wasn't even mine; somebody else's Otter.ai sent it to me. And I was like, wow, this is incredible. It gave me all the to-dos that I had committed to in the meeting so I didn't forget, and, you know, reminded me of the things that were said. And next time I meet with that person, I'll just quickly search that email and be like, oh yeah, this was a great conversation.
Like, to me, that's 100% positive, but I also understand some of the concerns folks have.

Alexander Sarlin:

I'm glad you're bringing it up in that way. I 100% agree about the sort of financial wheeling and dealing. I mean, one of the people who left OpenAI left for Anthropic, OpenAI's direct competitor, and it was one of the original founders. There's a lot of poaching and back-and-forth happening in this space right now. I'm sure the pay is obscene, so I definitely agree with you there. It's interesting to watch. And I remember asking Andrew Ng a long time ago, when he had gone to Baidu and was coming back to Coursera to visit, I was like, what's going on in the AI world? He said, it's a talent war. That's what it is. It's a pure talent war. There are not that many people who know this stuff really well, and everybody wants them. That was years ago, and I think it still is. So that's really interesting. One thing I think is an interesting extension of what you're saying about simulating people, about having people around you: I've been thinking a little bit about what the real difference is between a really successful school and a failing school, or, let's get even more specific, between a really successful student and a failing student. Often it's people, right? It's having an invested parent. It's having a tutor. It's having a really effective teacher. It's having a special ed teacher, or a para, or a speech-language pathologist. People really often make the difference between successful and unsuccessful education. So the idea that AI can sort of be people, that it can be trained to have the skill set of a speech pathologist or a tutor, for sure, or even potentially a sort of supportive friend or parent, creates a really interesting, slightly sci-fi, slightly creepy, but I think mostly positive idea: that a student in the future might have a team around them, the same way a student at a very elite private school now has a team around them.
They have a guidance counselor, right? So tell me how you react to that

Matthew Tower:

Well, you know, I know we're gonna do funding and M&A later in this, but I was writing about Julia Stiglitz's Uplimit, which just raised $11 million. One of the things that struck me about Uplimit is the fact that it's led by Julia specifically, who's been around the edtech ecosystem, first with Coursera (you guys overlapped there, right?) and then at GSV. She saw all of these, I'll call them evolutions, in the edtech ecosystem, where we learned that, yeah, having an advisor, even a non-academic advisor, makes a meaningful difference in the impact, or in the outcomes, of a student, right? And being quizzed on a more regular basis impacts completion rates, right? Testing what you saw, and having it be more interactive rather than video-based. Julia saw this, and I'm sort of building a narrative for her on this, but hopefully she would frame it like this too. What I think is compelling about Uplimit specifically is they're taking those learnings and saying: historically, we've had to do this with humans, and most of the universities that deliver great outcomes for students, online and in person, do this with people. What if we could build AI agents that are at least reasonably good at this? They might not be as good as humans, but can we bridge some of the gap? You know, online completion rates are like 3%, and in person is closer to 70-plus percent. Closing some of that gap can make a really meaningful difference in student outcomes, and it's probably a pretty good financial bet, right? It dramatically lowers the cost of serving that student.

Alexander Sarlin:

Yeah. And there's this classic idea in higher ed called Baumol's cost disease. The claim there, from an economist, Baumol, I think, is basically that the reason higher ed can never go down in cost is because you're really paying for access to individuals. It's a service industry in a lot of ways, and it's really hard to decrease the cost there. And that may be less and less true in this world. And especially, you know, the difference between completion rates at an elite, selective college and a community college is vast, and in many ways it's also because of people. It's not only the professors and instructors, but all of the RAs and the campus support and the career advising and the student success teams. I think we take for granted how much the difference in outcomes, in so many different fields, is based on having more or better people, or both, around you to help.

Matthew Tower:

What's interesting with Uplimit is they are starting with the assumption that we should build these with generative products, right? So it's not trying to mimic humans with, you know, a more deterministic software development method, which is, I think, how the past 10 years have gone. And that's why, to me, it feels like a fundamentally different bet versus the crop of cohort-based course companies that emerged around 2019 to 2021, who, I think, also have an interesting place in the space. But because their assumption is using generative tools and building generative features, that, to me, puts them in a slightly different category than that earlier crop.

Alexander Sarlin:

They were also always planning to do B2B. That's one of the big Uplimit differences compared to the Mavens and Discos, that crop of cohort-based course companies. But you're right, I think planning for that generative future is looking around the corner. And one story that caught my eye this week, and I think it's related to this (I'm curious how you think about it): there's basically a set of community colleges, big ones, Houston, Miami, and Maricopa, which is Phoenix, that are launching something called the National Applied Artificial Intelligence Consortium with a big NSF grant. And what they're trying to do is say, okay, community colleges have a lot of these really pressing problems with not having enough instructors, not having enough resources, not being able to keep students on track. And they've gotten support from all sorts of industry partners, including NVIDIA, which is also maybe in monopoly trouble right now, IBM, Microsoft, you know, Intel, Amazon. Really interesting. I mean, I think community colleges are one of the areas that have so much to gain from the kind of thinking that you just outlined about building generative AI into the systems and the infrastructure and into the curriculum. I'm excited to see where it goes. What did you make of that story?

Matthew Tower:

I think community colleges aren't getting enough credit for being the segment of the higher ed market that is changing the most, and that seems (I'm sure it's much harder on the ground than I even understand) most willing to change, right? So we know that the biggest driver of enrollment at community colleges right now, and the only reason their enrollment is going up while the rest of the sector's enrollments are going down, is that they are accepting more high school students. Accepting is the wrong word, because they're open access, but they're recruiting more high school students into college-level courses. And so they, as institutions, are morphing to serve 16-to-40-plus-year-olds, which is novel, right? That's not historically what any type of higher ed institution has done. Even the big onlines mostly serve working adults; they don't serve traditional-age students, let alone high school students. So I hope that narrative starts carrying: that the most innovative people in the system, the most innovative schools, the schools that are changing the most and really serve the most diverse populations, are community colleges, not, you know, the fancy elites, who get enough attention. (Great point.) So I hope that narrative starts to carry more. And, you know, I think more sets of people are recognizing that AI is sort of a data aggregation game, right? You guys have interviewed Paul recently; he's doing his own thing, trying to build a consortium of schools that are investing in a central data infrastructure. So I think all this needs to happen. I don't know; standards bodies are really complicated, and when they work, it's awesome. We have a couple of established standards bodies in the education world that are great, but there's not many, because most of them die. It's sad, but true.
And, you know, I don't know who should be the driver of it. Should it be somebody like Paul? Should it be the community colleges? Should it be some fancy person at an elite school? I don't know. There's room for many, and from any group, I expect to see a bunch more of these.

Alexander Sarlin:

Exactly. There's so much space to move in the higher education system, and there are so many players, that I think it may not have to be one player. And one thing that's interesting: you know, Paul LeBlanc at SNHU, like you said, their biggest innovation was realizing that their core student is an adult learner in their 30s. He talks about this in the interview that will be coming out: realizing what their students are, you know, hiring education to do is totally different from the sort of, quote, traditional-age 18-year-olds. Going down in age is a really unique strategy. And not only that: we're talking about Houston, Miami, and Phoenix, some of the most diverse populations, which is another thing community colleges are very much on the front lines of. You see the numbers of underrepresented students sort of ticking up in the selective schools, but you see them exploding like crazy in the community colleges. And then to bring that down to high school is really interesting, because it can give students a leg up: dual enrollment, college credits before they graduate high school. That is a huge benefit to them for their educational future, no matter what they do. So it's really interesting to watch. You know, Matt, I love your business take. I feel like you have such good insight into the strategic business moves behind different aspects of the edtech ecosystem. So I want to ask you about the KKR-Instructure acquisition. Ben and I talked about it a little bit last week, and it's definitely interesting. There are sort of two parallel stories with Instructure that have stood out to me. One is they bought a whole bunch of companies, including big ones like Parchment. They are obviously building a sort of aggregate consortium of different affordances in the platform.
They are really trying to be a leader in AI, and I think they don't entirely know how, but they're really trying to be in front of it. They put out this emerging AI marketplace. And at the same time, they were just acquired by a big private equity firm, just like PowerSchool was. I'd love to hear the Matt Tower take on that whole aspect of the LMS world right now. What do you make of this?

Matthew Tower:

Yeah, there's, like, a 30,000-foot, a 20,000-foot, a 10,000-foot, and a five-foot take. Unfortunately, this will probably come out a little bit jumbled across those altitudes. And frankly, this is something that I'm writing about literally right now, so you'll see me workshopping concepts that hopefully end up in the next EdSheet. So at the highest level, I think the market is trying to figure out how to value edtech companies, and we see this from 2U going bankrupt to Instructure selling for $5 billion, you know, more than 2x what they were bought for three or four years ago. And I think it is fair to say the public market sort of doesn't know how to value edtech. They've got a pretty solid grip on how to value traditional education companies. So you've got, like, Grand Canyon, Strategic Education, and a couple of other publicly traded for-profit education companies where the market has decided, these are the multiples those companies get. I don't think they really know how to value edtech. They know that contracts are sticky, but growth is, you know, hit or miss. And is Duolingo the same as Coursera? Are they both the same as, you know, companies like Guild that hopefully eventually go public? I don't think we really have a good grip on that. And I think the reason you see a lot of the private equity activity happening this year, and I do think that is a theme of the year, is that private equity folks are much more comfortable with the intrinsic value of these companies. And they're like: Instructure's got a bunch of deals with some really, really stalwart institutions that are going to be around for 100 years. It's almost impossible to switch LMS providers; it takes upwards of five years for a big institution to switch LMSs. Sounds like a great business, we're in. So I think that's sort of what you see at the market level.
The public markets got super excited about education in the 2020-2021 period, and then got super down on it when interest rates went up. And, you know, that really hurt companies like 2U, and Instructure to a lesser degree. And the PE guys are like: these businesses are awesome. We're happy to have them, especially at the price we can get them right now. We'll figure it out later, and we're happy to, you know, harvest profits. So I think that's the market-level view. In the LMS world, this very same thing is happening. I don't think there's a public LMS anymore, or there won't be once both the PowerSchool deal and this deal close. There are a lot of AI announcements flying; I think basically all of them have announced something to do with AI. What's more interesting to me is, if you look at Anthology, Ellucian, and now Instructure, the strategy is consolidation and offering a product suite rather than just a product. All of them, well, Instructure was public, but they were 86% owned by Thoma Bravo, so they were effectively private equity owned, and all of them are buying adjacent products. So that seems to be where the competition is: what product portfolio do large institutions need to adopt? And then how do we make sure those adoptions are as sticky as humanly possible?

Alexander Sarlin:

And you can imagine the concept of an emerging AI marketplace being something of a tryout for companies that integrate with Canvas, at least in some ways, and Instructure gets all the data about usage and retention and all these things. So you can imagine that might be an interesting way to scope out potential future acquisitions from the AI space directly. Very few of the acquisitions so far have been AI-centered; they're really about, as you say, adjacency, expansion into project-based learning, or into certifications, or various things that enhance their offerings. That's a very good take. I feel edified by hearing it, and I think it's true. One thing that jumps out to me: yeah, there's no public LMS. And really, you know, we talk about this all the time, the big three LMSs are PowerSchool, Instructure, and Google. Google, obviously, as Ben always says, is the biggest company in the world, but their whole education business is a rounding error. And then the other two are going to be run by private equity. I guess my wondering is: that seems like a little bit of a new era, given that LMSs are sort of the central infrastructure of so much edtech, and the entry point for so many students and teachers and admins and tools. It's interesting for that whole layer to be owned by, let's say, strategic players, let's call it that way, rather than people who are just looking out for their own particular angle on edtech. It's interesting. And it could be good. It could be good.

Matthew Tower:

You know, Phil Hill tweeted that the saddest part of all this is we won't have nearly as much information on any of these companies. Not to wax too much on this, but I think it's really important to have companies in the public markets, talking about how the market is working and providing data. And, you know, I think there's a really important place in the world for private equity. Blackboard, especially, was really struggling; you could see their market share just crashing, and their private equity ownership has really turned that around. But once you get to the point where there are zero... there is technically still a public LMS, but they participate almost entirely in the corporate market, so it's apples and oranges. We just won't have that much information on what's going on, which, to me, is sad.

Alexander Sarlin:

And meanwhile, they'll have all the information about students and grade books and class rosters and all of these important things. So yeah, it's an interesting moment. One of the AI companies that caught my eye this week is this company Heeyo, which, you know, got money through the OpenAI Startup Fund, and it's basically about creating AI chatbots that act like a miniature tutor or friend for young kids. The bot can look like a panda or a little mermaid, all those kinds of things. And I think this is going to be a space that a number of people get into. What did you think of the Heeyo announcement?

Matthew Tower:

Yeah, I mean, similar to Character.AI, I really like the premise. I think we'll be able to design these types of tools to be really impactful for people, and, you know, personalized, to use that favorite buzzword. It sort of frustrates me that the easy article to write is often, oh, but what about safety? I get it; that is a legitimate concern. But, you know, the status quo is a finite amount of, quote unquote, safe children's content, and then parents go to YouTube, which, as hard as the people at YouTube are trying, and I don't want to just slander them, is open-ended. So, you know, I think I would rather have my kids using an app whose sole purpose in life is to deliver safe content, with the understanding that it will still screw up. I don't think it'll be infallible, but at least it'll be focused on the problem, rather than trying to serve everyone the way that YouTube does today. So I'm bullish. Obviously there's a lot to figure out. Are they the one children's app to rule them all? I don't know. Are they particularly efficacious? Again, I don't know. But on some level, just providing reasonably good content, even if it's not perfect for learning, is a win in a space that's really complex for parents to deal with.

Alexander Sarlin:

One of the things that I find interesting about this kind of play: one aspect of AI I love pointing out is that the first interface we've all experienced with it, the early version, is just text-to-text. It prompts you in something that looks like a search bar, you ask a question, and it texts you back, and it feels very texty. There's no rule that that is how interfaces should look. I love that voice is taking off, and I love that this kind of model is taking off. They work with kids three to eleven, and on the lower side of three, these are not kids who are going to type, or even necessarily ask a question clearly enough for voice. So the idea of creating these super cute avatars and these touch-screen images and games is totally natural. I don't even want to use the word gamification, but I like seeing the childness of the interface merge with the AI underneath it. I think that's really something we're going to see a lot of. We've seen some other things launched this week that are AI features that are very natural extensions of existing products. We saw Pluralsight launch an AI assistant, Iris. Pluralsight is all about upskilling, so it's about helping you figure out what you need to upskill and supporting you in that journey. We saw Pear Deck create an AI feature to accelerate teacher efficiency when they're creating lessons. That kind of AI feature makes total sense, but I'm really excited about these AI interfaces that expand our vision of what AI can look like. It really does not have to look like a text box.

Matthew Tower:

Where I think startups will find some interesting ways to break through against incumbents is that they can use novel product development approaches. This is something I think Uplimit and Heeyo and others can apply that is, frankly, really hard for a Pluralsight or an Instructure, or what have you, to apply. They can just say, yeah, actually, the entire assumption of our company is that we will figure out this problem; you know, we'll be pretty good on the hallucination front. Instructure doesn't really have that luxury, right? Because they're so big, when they screw up, everybody's ready to jump on them. So I think that's one of the more interesting things to watch: these smaller companies can use fundamentally different approaches to building a product that the big companies just can't, and they can take a level of risk, which, you know, is sort of a trope, but I actually think is meaningful, to really leverage these tools effectively in the space.

Alexander Sarlin:

You know, to your point about hallucination and about safety, there's going to be this sort of back and forth. It is a very easy article to write. It's a very easy, hand-wavy complaint to make when a tool says it's going to be personalized. I'm over the term personalized, by the way. I'm not using it anymore personally; I've replaced it, in my mind, with precise. I'm like, precise learning. You can take that or leave it, but the word just makes me queasy at this point. But when you talk about delivering experiences that are specific to an individual, that are individuated, that are precise, the first thing people think is, okay, so you're gathering data. You're gathering data from a child. Now you're really close to a third rail. But if you actually look at an app like this, the data they're gathering is: they ask the child what they like, they get their name and refer to them by name, and they keep the history of the choices the child makes in games and stories. That is still precise learning, that is still individualized, but it is so far from the kind of dystopian data-grabbing scenario that people like to vaguely point their finger at. I feel like we really have to differentiate that. There's a danger of throwing out the baby with the bathwater: if we think that any, quote, student data, meaning anything a student does, has to be off limits, it just seems silly to me. Yeah.

Matthew Tower:

Look, I'm sensitive to the concept, but I want the conversation to get way more specific, right? And I actually give Congress credit for building a reasonably bipartisan approach to, you know, the next level of COPPA. I think it's called KOSA, K-O-S-A. That's great. I'm not saying I agree with every single point, or that I disagree with any specific point, but I think it's trying to get beyond the, oh my gosh, you're gathering data. It's like, yeah, we're gathering data on kids. That's okay, as long as we are thoughtful about how it's used and whatnot. So yes.

Alexander Sarlin:

And where it lives, and what kind of data, and how it's deployed. Yeah, exactly. I mean, there are so many questions there, right?

Matthew Tower:

So I don't know, I trust that we'll figure it out. And I'd rather move on to either a specific discussion of how you can gather data, like with actual recommendations and not just hand waves, or talking about, what are the benefits and what are the kind of novel approaches we can use if we have that data as an assumption rather than a hypothetical?

Alexander Sarlin:

Exactly. I mean, whole business models are based on the fact that you can gather student data, and just that term sounds creepy, and it really shouldn't. It's the same way that every financial product is based on the fact that you're gathering data from the people using the software, that they're putting in numbers and their information. Or every email provider. I mean, there's something just so, I don't know, moral-panicky to me about the idea of just, like...

Matthew Tower:

And the flip side of this is, you can find a knock on Heeyo, and it's their business model, which is token-based, with the microtransactions of the games that do well in the App Store. And it's like, yeah, that's where you should push: hey, that's pretty close to casino gambling. If you interrupt a kid's story that they're super invested in and say, hey, it's gonna cost you five more bucks, I really don't like that. I think that's not a great business model for this type of application. I'm all on board for saying, hey, I think you need to try something else. But what the TechCrunch reporter did is basically try to break the app. It's like, fine, somebody should be doing that. Or we should have a score sheet that says, did your app break, that every company has to run through. But it's just not very interesting. Versus: hey, your business model is a little like gambling, maybe we should rethink that. That's more interesting to me.

Alexander Sarlin:

Yeah, it's a great point. I think what those two things have in common, the data-gathering issue and then the business model, the freemium or microtransactions model that games use, is that these are innovations, quote unquote, that have come purely from the tech world, and when you apply them back into the education world, there are all these additional things you have to think about. And yes, they're very important. Nobody's advocating for just sharing or selling student data; there are all sorts of things that can go wrong there, don't get me wrong. But focusing only on where the features and the techniques don't map sort of ignores literally everything else. And if you're not actually, like you're saying, getting specific about where they don't map, then you're ignoring everything, because you're just saying: student data, can't do it. I mean, there are literally districts now where teachers apply for a tool, and if it gathers student data, they just say no to it automatically. There was a story last week about the freemium model, and you're like, oh man, that is so silly. Put up a wall, and if it gathers student data of any kind, we can't use it? That pulls the whole education system out of the modern world, which is not what anybody wants. But, you know, Matt, your expertise... I mean, you know lots of stuff, but your M&A and funding-round prowess, collecting them and understanding them and analyzing them, has been so useful for the entire edtech industry. Would you be willing to jump into a couple of the acquisitions and funding rounds for the week?

Matthew Tower:

Yeah, thank you, Alex. I want to start with a deal that is actually not yet a deal, but is big enough that usually there's at least some degree of substantiation when it makes it into the press. And that is: EQT, the Swedish private equity firm, is considering selling Nord Anglia Education to another PE firm, Neuberger Berman. The reason this deal matters is it's really big, a $15 billion valuation. The valuation will probably morph and shift as the deal continues to evolve; these things tend to take time, and usually when they get leaked to the press, it's a negotiating tactic, not because anything is actually set in stone. But it's really big, right? That would be as big a deal as I can remember in the space. And I think it calls back to what I was saying earlier about valuations. It's for a traditional school asset: they run 80 international schools in 30 countries with thousands of students. What allows this deal to grow so big is that the public markets really know how to value it, right? So the private equity firms can say with some degree of confidence that if they pull the right levers, which can be both financial and operational (the article does not make clear whether this is more of a financial deal or an operational one), they can get to some valuation higher than what they bought it for. We talked about Instructure and some of the edtech deals that are in the lower billions; those are interesting because the public markets don't really know how to value them, and PE firms are betting they can improve the valuations over time and pull out some profits. With school deals like this one, it's more of a: we pretty much know what the playbook is here, it's based on growth and profitability, and, for a number of reasons, we believe we can get there. That's why we should do this deal.
So it's big, and therefore I think it's important. We can also talk about some of the smaller venture deals, to give people a flavor of both. We've got Fluently, which is an AI-powered English coach that was backed by Y Combinator. You know, I think English language learning is, as Luis von Ahn of Duolingo likes to say, just about as big a market as it gets; he puts the TAM at, I think, two or three billion people worldwide. What I think is interesting about Fluently is they are betting they can cover the final 10% that most language-learning tools struggle with. Most tools focus on going from zero to intermediate language learning; Fluently wants to cover, call it, 80% fluent to 100% fluent in business English, which is, frankly, hard, and, you know, would be impactful if they're able to pull it off. They're obviously super small, so it's hard to say how far along they are today. But specifically flipping the script, away from zero-to-intermediate and instead focusing on that last 10 to 20%, is interesting and different in the market.

Alexander Sarlin:

One thing that stood out to me about this deal is that, in doing that last 10%, one of the features they talk about is real-time call analysis. So if you already have a job and you're practicing your English and you're on a call with a client or a colleague, it can actually give you feedback on the call, in real time, about your English. When you play that out, it is both really exciting and also a little bit frightening, perhaps, for either a person or the company they work for, to have a third party recording your business calls. So they put in a whole privacy layer, and they have all of these features around encryption and anonymization and not passing audio data from your practice sessions or from your calls. And, you know, we've just had this whole conversation about data and privacy, and we've seen in other cases, especially with the big foundational models, that some of the things they do to protect corporations, to protect their B2B and enterprise models, actually have a really positive downstream effect on education use cases, because businesses are very nervous about their data flowing outside of the business context, and so are schools. I know I just made this whole speech about how we shouldn't be that afraid of student data, but of course there are real concerns, and I think it's interesting seeing startups that are touting privacy as a key feature. If we see more of that, we should look for the education crossover. And, you know, we talked a little bit about Uplimit, but I'd still love to hear you talk a little bit more about what this round actually looked like.

Matthew Tower:

I talked mostly about the product side before. You know, they raised $11 million from basically a who's who of enterprise and education investors. It was led by Salesforce Ventures, and they had GSV, Greylock, Cowboy Ventures, Workday, and a few others all participate in the round. Generally, with rounds like that, the company is not sharing specific revenue targets or profitability or customer counts when those types of investors get involved, and given that I think the company's only, like, two years old, that tends to be a pretty positive indicator. So it seems like a very positive sign that they raised that amount of funding from that cohort of investors.

Alexander Sarlin:

100%. So I have a little bit of inside baseball on this one, because all three of the founders of Uplimit are ex-Coursera employees. I worked with all of them, and I remember talking to one of them, the product person, before he started Uplimit. We were chatting, and he was like, I think I'm gonna go back into edtech and do this startup. He wasn't even in edtech at that point; he had left edtech. And he's like, it's really scary. You know, he has kids. He's like, I don't know, this is a scary thing. But, he's like, it's with Julia Stiglitz, and she's amazing and unbelievably well connected in the investment field. And I remember being like, look, Julia is a boss, and so well connected, and these are very talented people. This is nothing against the product; the product is awesome, and what they've been doing is awesome. But, you know, it's sort of starting on second base when you found a company and one of the founders is a very seasoned veteran and investor. So I'm not at all surprised to see them get this who's who of investors, but I'm very happy for them, because they've worked super hard and they've created something really exciting. So good for them. I'm a little biased there, but good for them.

Matthew Tower:

Yeah, maybe one last deal to round us out here. There's a company called Skillfully, an earlier-stage company. They raised $2.5 million, which, you know, I could use $2.5 million, so it's not chump change. They build corporate simulations for hiring. They go and take a specific job and say, this is the type of work that a person in this role does, and build a simulation around it. And their sort of de novo approach is that they're using generative tools to do so. This concept has been around; I was doing some research before we hopped on, and I found there's actually been close to $80 million of venture funding that's gone into similar-ish concepts. Companies like pymetrics and Imbellus and Interviewed all raised money over the past 10 years and all sold to pretty fancy tech names. So there's something there; there's a there there. But, you know, I think neither of us is applying for a whole lot of jobs right now, and I think it's fair to say the resume is definitely still king in that world. So it feels like Skillfully has some room to run and grow their business.

Alexander Sarlin:

And what's interesting to see with them as well is that some of the investors here, Better Ventures, which is a B Corp, Strada, which is well known for sort of alternative credentialing, and American Student Assistance, are people who are very deeply invested, at a sort of philosophical, ideological level, in trying to move away from the resume still being king. We talked to Rebecca Kantar, who was at Imbellus and is now at Roblox, and it's an idea that keeps coming around. But so far nobody's quite cracked it, I think; nobody's made such a big wave in the hiring industry that they've really started to shake things toward skills-based hiring, which is what the dream is here. But I'm glad they're still trying, because it is an incredibly important thing to do. And I love seeing, you know, Strada involved in this kind of thing, because I think they publish really amazing work about exactly this issue: that if you can prove you really know how to do a job, it shouldn't matter what college you went to, it shouldn't matter what previous experience you had. You can do the job, we just proved it, and that really changes the whole game.

Matthew Tower:

Totally. I couldn't agree more. And, you know, I think it's a space that I keep my eye on, and a lot of investors keep their eye on, because we all feel like it should work at some point, right? We're just not sure when. And so, you know, if you can find somebody who's got a compelling founding story and a reasonably good product, I totally get why it continues to receive venture funding.

Alexander Sarlin:

Yeah, and the AI... you know, you didn't say this, but I think it goes without saying: AI-powered simulations. Or maybe you did say it.

Matthew Tower:

I did say it. Now, the same way that you don't like saying the word personalized, my challenge to all of the press-release folks out there is: can you say what you're going to say without the word AI? And I extend that challenge to myself.

Alexander Sarlin:

But in this use case, it actually might matter, right? Because trying to do simulations of all these different job roles without some kind of brilliant brain behind you is extremely time-consuming and difficult. Having a brilliant brain behind you that can actually respond in real time to what you're doing in a simulation... you never know. Maybe this is the breakthrough we've all needed.

Matthew Tower:

Yeah, I definitely think the ability to quickly generate things that are almost as good as a human matters. Again, that "almost" can be a make-or-break point, but if it's just good enough, it can make a really big difference, and it allows a different type of company, right? So, with Rebecca, a lot of Imbellus's funding went towards research and, you know, humans building out the assessments that underlay the game. Skillfully will probably still need to do a lot of research, but there might be some cost efficiency with the kind of generative tools that are available today versus what Imbellus had in the early-to-mid 2010s.

Alexander Sarlin:

You know what I'm envisioning, just putting together two of the subjects we talked about today: what if these AI-powered simulations basically made AI versions of the manager that's about to hire you, and learned their style and how they respond to things and what they like and what they want, and then you basically do a little simulated version of working with exactly the person you're going to be working with, or even the whole team? That is pretty cool.

Matthew Tower:

And that's the way a lot of hiring ends up happening, right? I've done a bunch of case studies with hiring managers, and that's how I've gotten past jobs: you have a time-bound case, you have a limited set of information, and you have to do your best. It's a lot of work on both sides to make happen, and things that can take some of that time out and make it more efficient are, I think, worth quite a bit of money to corporations.

Alexander Sarlin:

It's true, yeah, really good point. In this world, it still might take a lot of time and effort for the applicant, for the candidate, but it takes the time and work off the hiring manager, who has to, you know, grade, quote unquote, all these case studies, or have all the interviews and the conversations and ask those famous questions, those McKinsey questions we all know about. Anyway, this has been a fantastic episode. I really think we got deep into a lot of different topics. So I want to thank you, Matt, for being here with us. You're always welcome back. I feel like you have a truly 10,000-foot, 5,000-foot, 1,000-foot, and 100-foot view of the edtech landscape, and you can talk at any of those altitudes. It's really fun to talk to you.

Matthew Tower:

I enjoy coming here as well. Y'all always have fun topics to riff on, and it's really fun having people to nerd out with. So I appreciate you for that.

Alexander Sarlin:

Thanks so much for being here. Thanks, everybody, for being here. Remember to sign up for the EdTech and AI book club on Sal Khan's new book. We have a really good group; it's actually over 100 people in this particular book club, so we're gonna do it in chunks, in breakouts. We'd love you to be there. It's a very interesting book about a very important topic right now, from basically one of its main proponents. It's on the 22nd; go sign up now. You can find the information in the newsletter. Thank you so much, Matt. Thanks, all of you, for being here with us on Week in Edtech from Edtech Insiders. Thanks for listening to this episode of Edtech Insiders. If you liked the podcast, remember to rate it and share it with others in the edtech community. For those who want even more Edtech Insiders, subscribe to the free Edtech Insiders newsletter on Substack.

People on this episode