At the annual I/O developer conference, Demis Hassabis, the executive spearheading Google's AI resurgence, unveiled Project Astra, a "next-generation AI assistant." Project Astra is a multimodal AI agent designed to respond instantly to user queries through text, audio, or video inputs. A demo video released by Google showed the assistant holding real-time conversations with users, mirroring capabilities demonstrated by OpenAI's just-launched GPT-4o model.


Project Astra's capabilities will be integrated into various Google products later this year, including the Gemini app through the Gemini Live interface.




A video clip showed Astra running as a smartphone app and on a prototype pair of smart glasses. The project makes good on a promise Hassabis made about Gemini's potential when the model was first introduced in December last year.


Astra demonstrated its ability to understand spoken commands, interpret objects and environments captured by the device's camera, and hold natural language conversations about them. It identified a computer speaker, answered questions about its features, recognised a London neighbourhood from an office window view, analysed code displayed on a computer screen, crafted a limerick about pencils, and recalled where a pair of misplaced glasses had been left.




According to Google, Project Astra speeds up information processing by continuously encoding video frames, combining the video and speech inputs into a chronological timeline of events, and caching that information for efficient recall. Google has also improved the assistant's voice, making it sound more natural, and lets users switch between a range of voices.
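Google has not published Astra's implementation, but the description above maps onto a familiar pattern: encode each input, merge the streams into one time-ordered event log, and keep a bounded cache the agent can query. The Python sketch below illustrates that pattern under those assumptions; every name in it (Event, ContextCache, encode_frame, merge_streams) is hypothetical and not part of any Google API.

```python
from dataclasses import dataclass, field
from collections import deque
from typing import Any

# Hypothetical sketch of the pipeline Google describes: frames are encoded,
# merged with speech into a chronological event stream, and cached so the
# agent can recall recent context. Astra's real implementation is not public.

@dataclass(order=True)
class Event:
    timestamp: float                      # seconds since session start
    modality: str = field(compare=False)  # "video" or "speech"
    payload: Any = field(compare=False)   # e.g. an embedding or transcript

def encode_frame(frame) -> list[float]:
    """Stand-in for a real vision encoder producing an embedding."""
    return [0.0]  # placeholder embedding

def merge_streams(frames, utterances) -> list[Event]:
    """Interleave encoded video frames and speech into one timeline."""
    events = [Event(t, "video", encode_frame(f)) for t, f in frames]
    events += [Event(t, "speech", text) for t, text in utterances]
    return sorted(events)  # chronological sequence of events

class ContextCache:
    """Bounded, time-ordered window of recent multimodal events."""

    def __init__(self, max_events: int = 512):
        self._events: deque[Event] = deque(maxlen=max_events)

    def add_all(self, events: list[Event]) -> None:
        for e in sorted(events):
            self._events.append(e)  # oldest entries fall off automatically

    def recall(self, since: float) -> list[Event]:
        """Retrieve cached events newer than `since`, oldest first."""
        return [e for e in self._events if e.timestamp >= since]

# Usage: two frames and one utterance become a single timeline the agent
# can later query, e.g. when asked about misplaced glasses.
cache = ContextCache()
cache.add_all(merge_streams(
    frames=[(0.0, "frame0"), (1.0, "frame1")],
    utterances=[(0.5, "where are my glasses?")],
))
print([(e.timestamp, e.modality) for e in cache.recall(since=0.5)])
```

A deque with a fixed maxlen gives the cache the bounded memory footprint a real-time agent needs; a production system would presumably evict by time window or summarise old events instead of simply dropping them.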


"To be truly useful, an agent needs to understand and respond to the complex and dynamic world just like people do -- and take in and remember what it sees and hears to understand context and take action. It also needs to be proactive, teachable and personal, so users can talk to it naturally and without lag or delay," Demis Hassabis, Google DeepMind CEO, said during the event.