

Onstage at Google I/O, the company is showing off its AI-enhanced Google Search, which can take a long, complex query and generate an AI Overview. In an example, Google searched “Find the best yoga or pilates studios in Boston and show details on their intro offers and walking time from Beacon Hill,” and Search returned all of that for the user.


CEO Sundar Pichai just announced new Trillium chips, coming later this year, that are 4.7 times faster than their predecessors, as Google competes with everyone else building new AI chips. Pichai also highlighted Axion, Google’s first ARM-based CPU, which the company announced last month.
Google will also be “one of the first” cloud companies to offer Nvidia’s Blackwell GPU starting in 2025.
Google has announced that the Gemini 1.5 Pro model is coming to NotebookLM, its AI-powered note-taking app. Soon, users will be able to create notebook guides out of student notes, complete with summaries, quizzes, and FAQs.
But the real star is the Audio Overviews feature, which can turn the material into an interactive discussion and answer students’ questions.
Gemini 1.5 Flash, the faster version of Google’s next-gen large language model, offers multimodal reasoning and long-context capabilities similar to Gemini 1.5 Pro (also announced today) but is optimized for low-latency responses and overall efficiency.
Google announced that Gemini 1.5 Flash is available for developers to try in Google AI Studio and Vertex AI today, with a 1 million token context window to start and 2 million tokens available upon request.
Google previewed Veo, a new text-to-video generator that lets filmmakers write prompts to build cinematic shots. Google DeepMind CEO Demis Hassabis says Veo will be available in preview via a waitlist.
Google says Project Astra is its new multimodal AI project, one that can interpret things you show it with your smartphone’s camera. The company just demoed it with an impressive video in which, in one unbroken shot, it identified several items correctly, recalled where it had seen the owner’s glasses (near a red apple on a desk), and explained code on a screen.
Google DeepMind CEO Demis Hassabis talked up AlphaFold 3 during Google I/O. This AI model uses generative AI to predict how molecules of DNA, RNA, and proteins would look based on a prompt from scientists. The idea is to make drug discovery easier and faster.
Google CEO Sundar Pichai announced that Gemini 1.5 Pro will get a 2 million token context window (the amount of information an AI model can take in at once), up from the current 1 million tokens.
Google CEO Sundar Pichai just announced that the AI-generated search summaries, now known as “AI Overviews,” will launch to everyone in the US “this week,” with more countries coming soon.

Rhymes with “high time.”
We’re in our seats at the Shoreline Amphitheatre and about 45 minutes out from the start of Google I/O’s keynote. There’s a DJ on stage and some trippy visuals getting our morning started. Not spotted yet: dancing ducks, like we were treated to last year. Let’s keep it that way, huh?
First, who holds their phone like this? Second, the Gemini AI demo Google teased after OpenAI’s announcements sure seems fast, natural, and accurate — as canned demos are prone to do.
We’ll learn more when Google I/O kicks off at 10AM PT / 1PM ET / 7PM CET.