Yesterday, Mark Zuckerberg called me to talk about what he predicts will be one of the biggest moments in the AI race: the release of Meta’s Llama 3 models and widespread availability of the company’s ChatGPT competitor, Meta AI.
Q&A: Mark Zuckerberg on winning the AI race
Meta’s CEO is pushing for the company’s assistant to be the most used chatbot in the world. He opens up about competing with other AI companies, model training, and why he bought all those GPUs when he did.


We last spoke in January, when Zuckerberg announced that Meta wants to build artificial general intelligence using the massive stockpile of Nvidia GPUs he secured. In the interview below, parts of which were published on The Verge today, we touch on where he thinks Meta is in the AI race, open versus closed source, and the backstory to why he bought all those GPUs when he did…
The following conversation has been edited for length and clarity:
What were you pushing the team to achieve with Llama 3?
I always felt that if this is going to be very prominent within our apps, it needed to be something people can access very easily. So, I gave the team this thought experiment: I think this should be integrated into the search box at the top of basically all of our apps. What quality level would we have to achieve in order to feel ready to do that? With Llama 3, we basically feel like we’re there.
I don’t think that today, many people really think about Meta AI when they think about the main AI assistants that people use. But I think that this is the moment where we’re really going to start introducing it to a lot of people, and I expect it to be quite a major product.
I think Llama 3 is a big deal, but I think Llama 4 and Llama 5 will be big deals, too. The emergence of Meta AI as both the highest quality and probably, over some period of time, the most used AI assistant — I think that’s a very meaningful thing.
When Llama 2 came out, it was the best open-source model. It seems like Mistral has since led on open-source benchmarks. Are you going to be back at the top of the leaderboard again?
Yeah, I think so. The 8 billion and the 70 billion [Llama 3 models] are clearly better than anything else at their scale. But at this point, our goal is not to compete with the open-source models. It’s to compete with everything out there and to be the leading AI in the world.
Does that mean that there may be a point where you make a call to make a model closed-source if it means getting it out there and winning?
We lean toward wanting to open-source all of this. But as a matter of process, we can’t decide that we’re going to open-source a specific thing before it’s done training and we’ve taken it through all of our safety processes. In terms of all of the concerns around the more existential risks, I don’t think that anything at the level of what we or others in the field are working on in the next year is really in the ballpark of those types of risks. So, I believe that we will be able to open-source it.
Multimodality is one case where it may not end up making as much sense to open-source every modality. For example, image generation is one that we’re looking at closely. Especially in an election year, is that a net positive thing to do? I think that’s something that we’re still thinking through.
Are you concerned at all about the supply of data running out as you scale models? I know you’ve mentioned Meta’s user data being quite substantial, but I know there’s some red tape with actually being able to use that.
It’s really very use-case-specific. If you’re trying to teach a system how to reason, you don’t just want to give it knowledge — you want to give it examples of reasoning through things. There are big classes of information that these models just have not been trained on yet, video being the biggest one.
Then I think there’s going to be a lot in synthetic data, where you are having the models trying to churn on different problems and see which paths end up working, and then use that to reinforce. I don’t tend to think that we’re going to run out of data.
The thing that I think is going to be more valuable is the feedback loops rather than any kind of upfront corpus. Having a lot of people use it and then seeing how people use it and being able to improve from there is actually going to be a more differentiating thing over time.
I noticed that Google is now helping with real-time knowledge in Meta AI. I think you’re the only assistant that has Google and Bing. I’m surprised that Google especially is outsourcing search to another AI assistant when they have Gemini. Did it surprise you that they were willing to do that?
I guess I wouldn’t have been surprised if they didn’t want to do it. But it seems like they are building up a whole model around this, so it makes sense. It’s good for Google. It shows Google prominently and links to Google. They pay Apple a ton of money for distribution. They’re not paying us. So, I think it’s good for them on that.
Are you paying them then?
There’s not a ton of money flowing either way.
If OpenAI comes out with GPT-5 this year and it’s a meaningful step change, that resets the race, right? How fast do you see Meta closing the gap?
I think we’ve closed the gap in terms of being able to build leading models pretty substantially. When we came out with Llama 2, GPT-4 was already out. So, I feel pretty good about our velocity.
For Llama 3, the goal wasn’t to build something that was way ahead. It was to have Meta AI be the most intelligent assistant that people can use for free. I don’t think that people around the world are in a position to pay a ton to use this stuff yet.
I feel really good about where we are and also the cadence and roadmap and where we expect to be with Llama 4. I think that’s where we’ll start to do some stuff that’s more differentiated and probably ahead of where everyone else is.
The quarter where your stock fell to around $90 in the fall of 2022, it was around that time that you would have had to make the big Nvidia order for all the H100 GPUs to power your AI. Were you seeing this big AI wave coming before GPT-4’s release?
It was around then, yeah. I think the stock went down in retrospect because revenue was slowing for a lot of companies across the industry, too. While revenue is slowing, I think investors generally expect you to pull back on expenses. Instead, we doubled down on both AI infrastructure and the metaverse. I think, at the time, people thought that it was mostly metaverse. But now I think it’s pretty clear it was a combination of both.
To be honest, I wish I could claim more foresight on this than I had. The reality is that we were in the middle of building up Reels and the recommendation engine. We were constrained on how many GPUs we had.
It was going to take a while to get the Reels GPUs. I made the decision at that point that I didn’t want to be in this position again. We were going to get enough GPUs to do another Reels-sized service, even though I didn’t know yet what that service was going to be.
Notebook
My notes on what else is happening in tech right now:
- This week in Google: My colleague David Pierce has the scoop on a big reorg. Rick Osterloh is now overseeing a centralized “Platforms and Devices” group that will oversee “all of Google’s Pixel products, all of Android, Chrome, ChromeOS, Photos, and more.” Hiroshi Lockheimer, meanwhile, is taking on “other projects” at Google, though it’s hard not to see this as him being on the way out. Elsewhere, I reported that 28 employees were fired on Wednesday in connection with sit-in protests over Google Cloud’s work with the Israeli government. And more layoffs hit the finance department and YouTube.
- Reddit update: I caught up with chief product officer Pali Bhat this week, who told me that the company will have “additional stuff” to share on its data licensing business in the coming weeks. That sounds like another AI training deal may be getting lined up. He also said Reddit is working to let creators directly pay each other and also pay developers for the experiences they make on the platform. A big focus for Reddit this year is using AI models it has developed in-house to translate the service into more languages. I expect to hear more about all this when Reddit reports its first quarterly earnings as a public company on May 7th.
- Reviews don’t kill products: All the tech bro uproar at Marques Brownlee calling the Humane AI pin the worst product he has ever reviewed is ridiculous. Journalists don’t take the Hippocratic Oath. It’s actually a public service to say that an expensive, widely hyped product is bad. I was getting Magic Leap redux vibes about Humane before they even shipped anything. I mean, they came out of stealth via a TED Talk. There are good ideas in the pin; it’s just too early and a victim of its own hubris.
Interesting links
- Sam Altman and OpenAI COO Brad Lightcap do a podcast together.
- Telegram CEO Pavel Durov’s first extended interview in many years with… Tucker Carlson.
- Travis Kalanick has also resurfaced, though video of his first interview in years this week doesn’t appear to be online yet.
- Chamath Palihapitiya’s conversation with Groq founder Jonathan Ross in Paris.
- SignalFire’s annual state of tech talent report.
- Weekend read: my colleague Josh Dzieza’s well-written (and beautifully designed) feature on the “invisible seafaring industry that keeps the internet afloat.”
That’s it for this issue.
If you aren’t already subscribed to Command Line, don’t forget to sign up and get future issues delivered directly to your inbox.
As always, I appreciate your feedback and story tips. Respond to this email, and I’ll get back to you, or ping me on Signal.
Thanks for subscribing.