The Gemini AI assistant for Android could reportedly get multimodal memory and better noise handling during Live chats.
The Gemini assistant is also said to get new Agent controls, hinting at DeepMind's Project Astra integration.
Google is reportedly working on new capabilities for its Gemini artificial intelligence (AI) assistant for Android devices. As per a report, the Mountain View-based tech giant could soon release multiple new features across Gemini Live, Deep Research, Thinking mode, and new agentic AI capabilities. The company has been aggressively integrating AI capabilities across its products, and one of its most ambitious projects has been to replace Google Assistant on all devices with the more capable Gemini assistant.
According to an Android Authority report, the tech giant is currently working on several new features for Android's default AI assistant. The publication reportedly found hints of these capabilities in code strings within the latest version of the Google app for Android. These features are said to be part of a new Gemini Labs section, a name that suggests it is similar to Google Labs, which focuses on building new and experimental AI features across the company's products.
The strings mention “assistant robin,” which has previously been noted to be the company's internal name for the Gemini AI assistant. One of the header strings reportedly reads “Live Experimental Features” and lists several capabilities, such as “multimodal memory, better noise handling, responding when it sees something, and personalised results based on your Google apps.”
These appear to be new upgrades for Gemini Live, which offers more conversational, human-like two-way voice interactions in real time. Multimodal memory will likely allow the mode to remember things it sees on the device or via the camera feed, even after they are no longer visible. Better noise handling appears to offer ambient noise reduction, whereas “responding when it sees something” could be a proactive capability. The last one is self-explanatory. However, these suggested functionalities are purely speculative, and we will have to wait for the official release to know what Google is planning.
Beyond this, there is a mention of “Live Thinking Mode,” which is described as “a version of Gemini Live that takes time to think and provide more detailed responses.” Thinking Mode is a mainstay in the Gemini app, but the company might be bringing it to the assistant's Live mode too. Additionally, a “Deep Research” string says, “Delegate complex research tasks.” It is not clear what the new capability in this mode will be.
Apart from this, a UI Control string mentions “Agent controls phone to complete tasks.” This is an entirely new capability, and it appears that the company might allow the Gemini assistant to complete certain on-device tasks on behalf of the user. It is not yet clear which task automations will be added to the voice assistant.
Do note that the above-mentioned capabilities were only referenced in code, which does not mean the company will definitely release them. Sometimes, developers add such strings as placeholders or for ideation, and these do not always materialise. We recommend taking this information with a pinch of salt and waiting until Google officially announces these features.