
Google unveils Project Astra chatbot tech and brings ‘AI overview’ to search for all U.S. users

Google I/O livestream screenshot

Google showed off new AI chatbot technology dubbed Project Astra, along with a series of announcements infusing artificial intelligence throughout its catalogue of products, as company executives took the stage at its annual developers conference on Tuesday.

Alphabet CEO Sundar Pichai announced Tuesday that Google will roll out AI capabilities in its flagship search product to all U.S. users this week, while Demis Hassabis, the head of Google’s DeepMind AI unit, unveiled Project Astra, a “universal AI agent” that can understand the context of a user’s environment.

In a video demonstration of Astra, Google showed how users can point their phone camera at nearby objects and ask the AI agent relevant questions such as “What neighborhood am I in?” or “Did you see where I left my glasses?” Astra technology will come to the Gemini app later this year, the company said.

Speaking on stage near Alphabet’s headquarters in Mountain View, Calif., Pichai, Hassabis, and a parade of executives sought to show the company’s progress in the high-stakes AI competition against Big Tech rivals such as Microsoft, Meta, and Amazon, as well as richly funded startups like OpenAI, Anthropic, and Perplexity.

Google said that its Gemini technology is now incorporated, in one form or another, into all of Google’s key products that boast more than 2 billion users, including YouTube and Search. And Google unveiled a new, standalone Gemini app for users to play with the latest AI features.

Google Search has already answered billions of queries with Gemini technology, Pichai said. “We are encouraged not only to see an increase in search usage, but also in customer satisfaction.”

In the coming weeks, Google will add multi-step reasoning to Search, allowing Gemini to answer long, multi-part questions. For example, it can find the best-rated yoga studios in Los Angeles, calculate the walking distance to each, and list the cost per class, all from one search query.

“Google search is generative AI at the scale of human curiosity,” Pichai said.

Here are some of the other big Google I/O announcements:

*Trillium, the sixth generation of Google’s TPUs. Available to Google Cloud customers in late 2024, the new chips boast a 4.7x improvement in compute performance over the previous generation.

*Veo, a generative video model available through VideoFX. Some Veo features will become available to select developers soon, and the waitlist is open now.

*Music AI Sandbox, a suite of AI tools to help artists create music, which Google DeepMind and YouTube are building together.

*A new, dedicated Gemini smartphone app, which will offer all of the AI model’s features in one place. In the next few months, Gemini will also become available as an assistant on Android, appearing as an overlay on whatever app a user has open, so they don’t have to switch apps to use Gemini.

*A new way to customize Gemini, rolling out in the coming months. With what Google calls “Gems,” users can curate what they want to see from the AI model.

*Gemini for Workspace side panel, an AI helper that lives on the side of the screen in Google Workspace applications, will be available next month.

*Data Q&A, a feature in which Gemini helps users organize spreadsheets, rolling out to Labs users in September.
