More from Google I/O 2024: all the news from the developer conference
Obviously, someone noticed our video from last year that clipped every single AI mention at I/O 2023. Sundar Pichai closed the 2024 keynote by showing how AI can save us some work, using it to keep count this time. At that point, the tally was up to 121 AI mentions.
...by the time they were finished, it was probably more like 124.


Head over to Google’s Vertex AI Studio site and click “Try it in console” to goof around with some of the AI tools Google talked about at I/O today. The site is meant for developers who want to test the company’s models out while deciding what works best for their software, but anyone can play with it.
The scam detection feature Google just announced requires Android users to opt in, and Google claims it runs entirely on-device. Still, it's essentially listening to your every conversation for fraudulent-sounding language.
Are we really ready to swap scamming concerns with privacy-related ones?
Google teased the next generation of its small language model Gemma, including a larger version with 27 billion parameters (a rough measure of a model's size and capability).
The company also announced PaliGemma, an open-source model in the Gemma family for labeling photos and adding captions to images.
Google’s Sameer Samat just made very clear how much Google sees Gemini as a way to make Android a better operating system. Gemini app? On iOS? Yeah yeah sure sure. But AI on Android, and particularly on Pixel, is where the real stuff is for Google. The AI race is very much a smartphone race, friends.
Toward the end (maybe?) of the I/O keynote, Google threw in a cute little ditty about all the things you can do with Gemini prompts: generate photos of cats playing guitar, find smart things to say about Renoir, etc.
It includes the phrase “There’s no wrong way to prompt,” which, have you met people?
With more AI features coming to Google Workspace and other Google products, customers might be wondering if this means the next Gemini version learned from their emails. Google says it will not use user files on its platform to teach Gemini or other AI models.
When you “go live” — I guess that’s what we’re calling it — you can wave around your smartphone camera and ask about what’s around you in real time. Like OpenAI’s GPT-4o, you can even interrupt it. (It is not clear if it sounds like ScarJo.)
The company’s new Gemini voice chat feature will come out “later this year.”
Google added a new feature to Workspace that lets users ask an AI agent questions about meetings, emails, and everything else someone might need to know at work. The AI Teammate, named Chip in the I/O demo, pulls from company data to answer questions.
Here’s a quick look at Astra, the new multimodal AI project Google just announced, and how it can help you find misplaced glasses.
Note: this video was edited for length and clarity, but the original video was one single take.
Google added AI into Search so people who don’t like planning that much (aka me) can cosplay an organized person. It lets people use Google Search to find the best restaurants for their specific event, create a meal plan with food they actually like, or organize trips with friends.
You’ve long been able to search Google using still images, but now the company is bringing “ask with video” search to Google Lens.
In an example during the I/O keynote, Google’s Rose Yao asked Lens why her turntable’s tonearm wouldn’t stay still, recording a brief clip to demonstrate the issue. Not exactly mind-blowing, but certainly helpful!