He first pitched the idea of using Tesla’s fleet as inference compute for AI during the company’s most recent earnings call. “So there’s 100 hours of 100 gigawatts of inference compute, which I think we should use. Why not?” he says.
We asked some experts. They’re skeptical.
