The View from Davos with Meta’s Yann LeCun – The Future of AI is Open and Human-Level Intelligent

By Patrick Moorhead - January 24, 2025

How important is open source to the future of AI, and are we at human-level intelligence yet?

Patrick Moorhead and Daniel Newman are joined by Meta’s Yann LeCun, VP & Chief AI Scientist, for a conversation on the latest AI developments and insights from WEF25 in this segment of The View From Davos.

Get their take on:

  • The importance of open source for accelerating AI development
  • Going beyond LLMs: LeCun imagines future AI systems will understand the physical world, reason, plan, and have persistent memory
  • The role of AI in addressing global challenges
  • Insights into future AI projects at Meta
  • Yann LeCun’s perspective on ethical AI and its governance

Learn more at Meta.

Watch the video below, and be sure to subscribe to our YouTube channel, so you never miss an episode.

Or listen to the audio here:

Disclaimer: The View from Davos is for information and entertainment purposes only. Over the course of this webcast, we may talk about companies that are publicly traded and we may even reference that fact and their equity share price, but please do not take anything that we say as a recommendation about what you should do with your investment dollars. We are not investment advisors, and we ask that you do not treat us as such.

Transcript:

Patrick Moorhead: The Six Five is On the Road with a View from Davos. It’s been a great week so far. And the World Economic Forum is this very unique combination, this melding of technology, regulation, talking about governments. And of course there’s a lot of discussion around AI.

Daniel Newman: Yeah, it’s been a really good week so far, Pat. And the opportunity to speak to many of the world’s leaders, both in enterprise and government, provides the chance for us to really share with the audience everything that’s going on in the market. We’re in this really interesting inflection point. And we’re seeing AI accelerate this year. And Pat, the people we have on the show just bring so many new insights. So, hopefully everybody out there is spending some time with us here in Davos.

Patrick Moorhead: And I want to introduce a guest who probably doesn’t even need an introduction: really a champion of open source AI, a real mover and shaker out there, and not afraid to have conversations out on social media. Yann, welcome to the show.

Yann LeCun: A pleasure.

Patrick Moorhead: Yeah. I guess first and foremost, what are you trying to achieve at the show? There’s so many changes that have happened in the last year. What do you want to achieve here?

Yann LeCun: Okay. Despite what my external activities might mislead people into thinking, what I spend most of my time on is really fundamental research to get to the next step in AI, because current technology is very limited. Everybody’s excited about LLMs, and we should push them as far as we can, and they’re super useful, but they’re not a path towards human-level intelligence. So, I’m really working on how we can fix that.

Daniel Newman: Well, Yann, you and your team are doing some really incredible work, especially around open. We hear open AI, and of course a lot of people will also argue that’s not actually open.

Yann LeCun: Not at all.

Daniel Newman: Meta has been really focused on bringing open source to the market and enabling so many people to use what you’ve built with Llama, expand upon it. Can you just talk a little bit about the thinking behind that? Because historically, Meta has not necessarily been all about open. But in AI, it seems like that is the strategy. And it’s really working well.

Yann LeCun: No. Actually, the whole openness story is really in the DNA of the company. When I joined Meta in late 2013 and was talking with Mark Zuckerberg and Mike Schroepfer, who was the CTO at the time, I said, “For me to join Facebook to create a research lab, I have three conditions. The first one is I don’t move from New York. The second is I don’t quit my job at NYU, so I’ll be part-time. And the third one is we need to do open research, publish everything we do, and open source our code.” And the answer from both of them was, “Oh, you don’t have to worry about this. It’s in the DNA of the company.” And, I quote, “We already open source all of our infrastructure software.” So, I found that very interesting and reassuring, a message I think I wouldn’t have gotten from any other player at the time. As a consequence, we created the lab and famously announced that we were going to do open research. And as a consequence, other labs actually became more open, like Google. They’ve kind of rescinded this a little bit now, but… And then, OpenAI was created a couple of years later. They were supposed to be open, but have since clammed up completely. Same with Anthropic.

So, we’re the only major player really to play an important role in open source, together with a few Chinese players who are really good. So, the advantage of this, I mean the reason why we’ve seen such big progress in AI over the last decade or so, is because of the openness. It’s because information circulates quickly and freely, and that’s what pulls everybody. If we start clamming up, progress is going to slow down inevitably. So, that’s one reason. The second reason is, if you want to attract the best scientists and researchers, and you tell them, “You can’t talk about what you’re doing,” you’re not getting the best people. Third, we get a lot of really interesting advances from the open source world: contributions, ideas, like how to accelerate inference with Llama and things like that. There’s a lot of really interesting work coming from academia, from startups, from independent researchers. A lot of applications are enabled by AI. I mean, basically Llama is the substrate on which the entire AI industry now is being built. Most startups use Llama. And a lot of large companies are now migrating from proprietary systems to Llama.

Patrick Moorhead: Yeah, it’s been a really impressive run. But Llama’s openness didn’t surprise me, because if I look at the Open Compute Project that you did, and PyTorch, you do have a history of enabling a lot of developers to make things happen. I want to ask you about the future. I know research should be measured in terms of years, but I’d like to ask what we should expect over the next two years. I know everybody’s got a different definition of AGI. What should we expect? And I know there’s no black and white answer here, but what are your thoughts about the future?

Yann LeCun: Okay. So, famously, I don’t like the phrase AGI, because human intelligence is very specialized in the first place, right? We know this because we have a lot of computer systems that can do much better than humans in narrow areas. That means we’re not so good at everything. So, at Meta we use the phrase AMI, Advanced Machine Intelligence.

Patrick Moorhead: Okay.

Yann LeCun: We pronounce it Ami because that means friend in French. And that’s the main mission of FAIR. So, FAIR is the Fundamental AI Research Lab. The F used to stand for Facebook, but now it’s Fundamental. And the main mission is really to figure out the next generation of AI systems that are capable of doing things current systems can’t do: understanding the physical world, having persistent memory, and being able to reason and plan. Okay, those are the four things that LLMs really can’t do without added ingredients. So, what’s going to happen over the next two years is that there’s going to be progress using the current paradigm: LLMs trained on words, with things bolted onto them.

So, they can do a little bit of reasoning, they can understand images, and various things like this. But it is going to be a huge hack. And there are diminishing returns in how much better they get with more data. We’re running out of data, so it’s saturating. So, we need this new paradigm for the next… After that, I expect to see some early progress on this sort of new paradigm within three to five years. And perhaps in five years we’ll know if we’re on the right path towards something like human-level intelligence. The idea behind this, the reason we’re working on this, is because we see a future where everyone will wear one of those smart glasses, and we’ll interact with them through voice, or through bracelets with EMG, and various other interfaces. And we need these systems to have human-level intelligence if you want them to basically act like human staff or assistants, right?

Daniel Newman: Yeah.

Yann LeCun: So, all of us would be a boss of a staff of virtual smart people.

Daniel Newman: Well, it’s a really exciting future, Yann. I want to thank you so much. By the way, the glasses look great. They’ve come a long way.

Yann LeCun: Yes, yes.

Daniel Newman: Very stylish. That’s been sort of my inflection point: when they became stylish enough that I could actually pull them off and wear them.

Yann LeCun: Right.

Daniel Newman: And you’re wearing them very well. But thanks so much for opening up to us. This is definitely one of those conversations, Pat, that I would have liked to spend maybe another 20 or 30 minutes on. But here in Davos, spending 20 or 30 minutes is like eight meetings, right?

Yann LeCun: Basically.

Daniel Newman: Speed dating. But Yann, thanks for joining The Six Five. Let’s have you back again sometime soon.

Yann LeCun: Thanks for having me on.

Patrick Moorhead: Thanks, Yann.

Daniel Newman: And thank you, everybody, for tuning in. What a fascinating conversation. We appreciate you joining the Six Five On the Road. It’s a View from Davos. Hit subscribe and join us for all the great conversations here on the magic mountain. For this episode, it’s time to say goodbye. We’ll see you all later.

Patrick Moorhead

Patrick founded the firm based on his real-world technology experiences and an understanding of what he wasn’t getting from analysts and consultants. Ten years later, Patrick is ranked #1 among technology industry analysts in terms of “power” (ARInsights) and “press citations” (Apollo Research). Moorhead is a contributor at Forbes and frequently appears on CNBC. He is a broad-based analyst covering a wide variety of topics including the cloud, enterprise SaaS, collaboration, client computing, and semiconductors. He has 30 years of experience, including 15 years of executive experience at high-tech companies (NCR, AT&T, Compaq (now HP), and AMD) leading strategy, product management, product marketing, and corporate marketing, including three industry board appointments.