Perplexity is ready to take on Google
‘Factfulness and accuracy is what we care about,’ says the CEO of the AI search startup. ‘Google has many other cultural things that they care about, and that’s why they made their products that way.’


It’s hard to have a conversation about AI startups these days without Perplexity coming up. Nvidia CEO Jensen Huang professes to use the AI search engine “almost every day,” Shopify CEO Tobi Lütke says it has replaced Google for him, and I’ve heard Mark Zuckerberg is also a user. I’ve been testing Perplexity in place of Google for the past couple of months and have found it to be better for some searches, like ones where I’m looking for a very specific answer. But I’m not ready to switch completely.
Part of the buzz around the one-year-old startup can be attributed to its CEO Aravind Srinivas, who isn’t afraid to lean on his impressive investor list and dish out hot takes on social media. Perplexity has raised over $74 million to date and was last valued at over $500 million, making it one of the highest-profile names in consumer AI right now. The product has over 1 million daily users and continues to grow quickly, Srinivas told me in an interview from the back of an Uber last week.
We talked about the state of the AI industry, what it’s like competing for talent these days, his thoughts on the state of Google, and more. Srinivas is refreshingly candid for a tech CEO. I hope you enjoy our interview.
The following conversation has been edited for length and clarity:
What’s it like on the frontlines of the AI talent war right now?
I made mistakes in chasing the wrong people. Recently there was a really senior backend engineer who ended up joining X.AI. He was talking to us, too.
I was talking to Patrick Collison for advice on this, and he said, “Why are you even in this race? Why are you trying to compete with these people? Go after people who want to actually build the stuff that you’re building and don’t chase AI clout.”
There are a lot of good engineers who are applying to us and Anthropic and OpenAI and X.AI and Character.ai. These are the top five choices of AI startups. And people normally just go to the highest bidder. Whoever has the highest valuation will be able to win this race all the time because, on paper, you’re always going to be able to offer the same amount of shares but the dollar value is going to be much higher.
For Google employees, what we usually say is, “If you really just want money and comfort, stay back.” The only reason to come to Perplexity instead of staying at Google is if you want to actually ship things very regularly and own a big chunk of the code base yourself, versus building on top of tools that other people have already built inside Google.
Is OpenAI still setting the ceiling for compensation in the AI industry? Who is generally paying the most?
I think OpenAI was known to be the biggest payer until now. Now X.AI is also paying a lot. Whatever offer we make, the same offer made by X.AI is more valuable because they say on paper that they’re valued more.
We have a product that has more users than Grok. Subscriptions on X don’t make that much revenue. So we have the right to say, “You should consider us approximately in the same ballpark.” But then they say, “No, Elon Musk is different.” So that makes it difficult to compete with X.AI.
Character.ai used to make big offers, too, because even though they were valued at $1 billion in the previous funding round, they raised more money from Google. But that has not been priced in an actual round yet. They use more inflated numbers in their compensation for making offers.
Anthropic and OpenAI usually just match each other. OpenAI’s compensation is a lot more standardized. So I tell people, “That’s great, but the upside from $100 billion is pretty hard.”
Have you taken any kind of lesson away from the Gemini diversity scandal? I saw you recently integrated photo generation into Perplexity.
Factfulness and accuracy is what we care about. Google has many other cultural things that they care about, and that’s why they made their products that way. They should only prioritize one aspect, which is giving an accurate answer. They don’t do that for whatever reasons. They have all these other people in the room trying to make decisions.
If I learned one thing, it’s that it’s better to be neutral. Don’t try to have any values you inject into the product. If your product is an answer engine, where people can ask questions and get answers, it better respond in a scholarly way. There’s always a nerd in your classroom who’s just always right, but you don’t hate them for having a certain political value, because they are just going to give you facts. That’s what we want to be. And Google’s trying to be something different. That’s why they got into trouble.
What are you hearing generally about the state of Google from people there right now?
The researchers are still pretty excited about what they’re doing. But the product team messes up their releases. The Gemini product team was fine-tuning all these models to put in the product. There’s a lot of bureaucracy, basically.
I know Sergey Brin being there is making things faster and easier for them. You might have seen the video that was circulating of him being at some hackathon. He brushed it [the Gemini diversity scandal] off as just some kind of a small bug, right?
It’s not a small bug. It’s actually poor execution. The image generation thing is actually very easy to catch in testing. They should have caught it in testing. When you consider Google as the place for de facto information and correctness, when they make mistakes it changes the way you perceive the company.
What’s the origin story of Perplexity?
The first idea I pitched to my seed round investor, Elad Gil, was the idea of search via glasses. You would wear glasses and look at things and ask questions about them. The reason I pitched that was because I felt like Google was hard to disrupt in the text form factor.
He pushed us to think about a more narrow use case, like using large language models to search over internal databases. We prototyped ideas where you could search over tables, spreadsheets, and things like that. But no enterprise gave us their data. So we just started scraping data from the web and organized it into tables and powered searches on that. This was a cool combination of LLMs and search that got us a lot of these initial angel investors like Jeff Dean and Yann LeCun.
One day we built a tool that could scrape all the links [on the web] and summarize them, and we started using it ourselves internally for coding. We put it out with the hope that at least some people would notice and we could get some contracts from people who wanted this experience for their own data. But consumer adoption has just kept growing since the day we put it out. So we committed ourselves to the harder mission of making it a successful consumer product.
How much of your tech is in-house versus fine-tuning all these models that you work with? What’s your tech secret sauce?
In the beginning, we were just daisy-chaining GPT-3.5 and Bing. Now, we post-train all these open-source models ourselves. We also still use OpenAI’s model.
We are never going to do the full pre-training ourselves. It’s actually a fool’s errand at this point because it takes so much money to even get one good model by pre-training yourself. There are only four or five companies that are capable of doing that today. And when somebody puts out these open-source models, there’s no reason for you to go and recreate the whole thing.
There is a new term that has emerged in this field called post-training. It’s actually like fine-tuning but done at a much larger scale. We are able to do that and serve our models ourselves in the product. Our models are slightly better than GPT-3.5 Turbo but nowhere near GPT-4. Other than Anthropic and Gemini, nobody has actually gotten to that level yet.
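(A rough sketch, for the technically curious, of what the early “daisy-chaining GPT-3.5 and Bing” setup Srinivas describes might have looked like: fetch web results, pack the snippets into a prompt, and ask the model to answer with numbered citations. The endpoint wiring, prompt, and helper names here are my own illustrative guesses, not Perplexity’s actual code.)

```python
# Illustrative "daisy-chained" search-plus-LLM pipeline: web search API
# results are stuffed into the prompt, and the model is told to cite them.
import os
import requests
from openai import OpenAI

client = OpenAI(api_key=os.environ["OPENAI_API_KEY"])

def bing_search(query: str, count: int = 5) -> list[dict]:
    """Return top web results (title, url, snippet) from Bing's v7 search API."""
    resp = requests.get(
        "https://api.bing.microsoft.com/v7.0/search",
        headers={"Ocp-Apim-Subscription-Key": os.environ["BING_API_KEY"]},
        params={"q": query, "count": count},
        timeout=10,
    )
    resp.raise_for_status()
    return [
        {"title": p["name"], "url": p["url"], "snippet": p["snippet"]}
        for p in resp.json()["webPages"]["value"]
    ]

def answer(query: str) -> str:
    sources = bing_search(query)
    # Number each source so the model can cite it as [1], [2], ...
    context = "\n".join(
        f"[{i + 1}] {s['title']} ({s['url']}): {s['snippet']}"
        for i, s in enumerate(sources)
    )
    completion = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[
            {
                "role": "system",
                "content": (
                    "Answer using ONLY the numbered sources below, citing them "
                    "like [1]. If the sources are insufficient, say you don't "
                    "know.\n" + context
                ),
            },
            {"role": "user", "content": query},
        ],
    )
    return completion.choices[0].message.content

print(answer("Who is the CEO of Perplexity?"))
```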
How are you going to solve AI hallucination in your product? Can you?
The reason we even have sources at the top of the answer is that we want to make sure users have the power to go verify the answer. We tell you precisely which link to go to, versus showing ten blue links and leaving you unsure which one to read.
The other way is constantly improving the authority of which sources we use to cite the answer and then getting rid of the bad ones. When you don’t have sufficient information, it’s better to say you don’t know rather than saying something you made up.
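(Again purely illustrative: one crude way to picture the source-authority filtering and the “say you don’t know” fallback he describes. The domain lists and threshold below are hypothetical stand-ins, not anything Perplexity has disclosed.)

```python
from urllib.parse import urlparse

# Hypothetical allow/deny lists -- stand-ins, not Perplexity's real ones.
AUTHORITATIVE = {"reuters.com", "nature.com", "sec.gov"}
BLOCKED = {"content-farm.example"}

def filter_sources(sources: list[dict], min_sources: int = 2) -> list[dict] | None:
    """Keep trustworthy sources first; return None if evidence is too thin."""
    kept = []
    for s in sources:
        domain = urlparse(s["url"]).netloc.removeprefix("www.")
        if domain in BLOCKED:
            continue  # drop known-bad domains entirely
        if domain in AUTHORITATIVE:
            kept.insert(0, s)  # surface trusted domains first
        else:
            kept.append(s)
    # "Better to say you don't know than say something made up":
    # abstain when fewer than min_sources survive the filter.
    return kept if len(kept) >= min_sources else None
```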
Are most of your users on the free plan or are they paying?
We have mostly free users, but our conversion is very high in high-GDP countries. We only need to get the top users who care about their time. We don’t even need them to switch. We just need them to use Perplexity for 50 percent of the searches that require deeper research. One percent or 0.5 percent of Google’s market cap is a huge outcome for me and our company.
Does it make sense for you to ever do your own foundational models?
I think we need a lot of money for that. And my sense is it’s probably too late. In an ideal scenario, if you ask me, “Hey Aravind, do you want $1 billion to go try to train your models?” I would say yes. But would I succeed at it? I don’t know. I would not bet so high on the odds. And by the time I could even get to a good model, maybe OpenAI or Anthropic will already have next-generation models, and all the money I spent might not have been worth it.
Is your biggest company cost right now compute for post-training?
Yeah. It’s not as much [relatively] as pre-training compute. You can build a different kind of company that is a lot more compute efficient, cost efficient, and spends the money you raise on product and distribution and expansion. It’s a different way of company building than OpenAI.
We’re not going to give the entirety of our funding to Azure or AWS. We’re going to use it to distribute and get as many users as possible.
Notebook
My notes on what else is happening in tech right now:
- Speaking of X.AI: Devoted readers of this newsletter will remember that I was the first to note that Elon Musk was looking to raise a lot of money for his new AI company. Now, I’ve heard recruits are being told that their equity in X.AI, which exists so far to power the Grok chatbot on X.com, will be anchored to a valuation of $25 billion. As far as I’ve heard, the logic goes: Musk will finish his ongoing round of funding at roughly that valuation because… he is Elon Musk. Okay! (It remains to be seen how that approach will work for X.com employees who had their equity issued at $19 billion in November.)
- “Tic-tac-toe. A winner.” While most of the headlines this week about TikTok pretended otherwise, anyone plugged into the machinery of Capitol Hill will tell you that the ban bill has always had a near-zero chance of passing in the Senate. I’ve heard Majority Leader Chuck Schumer privately supports the bill but doesn’t want to waste time putting it to a vote. The Republicans will just fall in line with Trump, who recently flipped his stance because Facebook is the actual enemy or something. TikTok has to act concerned and resolute since the optics of the bill passing the House are still terrible. But make no mistake, this ban attempt will fail just like all the others. Sorry, Steven Mnuchin.
- OpenAI: I feel like I just need to keep saying this until it’s not true anymore: We still don’t know what actually happened at OpenAI! I’m not surprised that the full report from WilmerHale, which answered to Bret Taylor and Larry Summers during the investigation process, is being kept secret. But I expected a bit more to be shared publicly than the exact same “breakdown in trust” language that the board members who fired Sam Altman used. Meanwhile, the fate of Ilya Sutskever remains a huge open question…
Hello, Googlers
I’ve heard that my last issue on Google got passed around quite a bit on the inside. I appreciated this note from an anonymous employee: “I learn significantly more about my employer from you and this newsletter than I do from my own company. You’re doing the lord’s work.”
As always, I’d love to hear more about what life is like inside Google. Respond to this email and we can set up a secure thread, or ping me directly: @alexheath.96 on Signal. Also: I’ll see you at this year’s I/O in May.
Interesting links
- Stripe’s 2023 annual letter.
- OpenAI CTO Mira Murati tells Joanna Stern that the company’s text-to-video model, Sora, is being released later this year. (Just don’t ask about the training data. 🙃)
- Telegram CEO Pavel Durov’s first interview in years.
- Hugo Barra thinks the Vision Pro is an “over-engineered devkit.”
- More details on Meta’s generative AI infrastructure buildout.
- MacroPolo’s global AI talent tracker.
- Nvidia CEO Jensen Huang’s viral talk at Stanford business school.
- Apple spent the most ever lobbying the US government last year. (I’m hearing the Department of Justice’s antitrust lawsuit is probably hitting before the end of the month.)
That’s it for this issue. As always, I appreciate your feedback and tips. Thanks for subscribing.