Google Titans AI Architecture Unveiled With Ability to Solve Long-Term Memory Issues in AI Models

Google claims that Titans can scale LLM context windows to more than two million tokens.

Written by Akash Dutta, Edited by David Delima | Updated: 20 January 2025 17:27 IST
Highlights
  • Google’s Titans ditches Transformer and RNN architectures
  • LLMs typically use the RAG system to replicate memory functions
  • Titans AI is said to memorise and forget context during test time

Google’s new architecture uses a surprise-based learning system (Photo Credit: Reuters)

Google researchers unveiled a new artificial intelligence (AI) architecture last week that can enable large language models (LLMs) to remember the long-term context of events and topics. The Mountain View-based tech giant published a paper on the topic, in which the researchers claim that AI models trained using this architecture displayed a more “human-like” memory retention capability. Notably, Google ditched the traditional Transformer and Recurrent Neural Network (RNN) architectures to develop a new method of teaching AI models how to remember contextual information.

Titans Can Scale AI Models' Context Window to More Than 2 Million Tokens

The lead researcher of the project, Ali Behrouz, posted about the new architecture on X (formerly known as Twitter). He claimed that the new architecture provides a meta in-context memory with attention that teaches AI models how to memorise information at test time.


According to Google's paper, which has been published on the pre-print server arXiv, the Titans architecture can scale the context window of AI models to more than two million tokens. Memory has been a tricky problem for AI developers to solve.

Humans remember information and events with context. If someone were asked what they wore last weekend, they would be able to recall additional contextual information, such as attending the birthday party of a person they have known for the last 12 years. This way, when asked a follow-up question about why they wore a brown jacket and denim jeans last weekend, the person would be able to contextualise the answer with all of this short-term and long-term information.


AI models, on the other hand, typically use retrieval-augmented generation (RAG) systems, modified for Transformer and RNN architectures. These store information as nodes in a retrieval index. When an AI model is asked a question, it accesses the particular node that contains the main information, as well as the nearby nodes that might contain additional or related information. However, once a query is answered, the retrieved information is discarded from the context to save processing power.
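To make the idea concrete, below is a minimal Python sketch of the embedding-based retrieval at the heart of a typical RAG pipeline. It is illustrative only: the embed() function and the stored documents are stand-ins invented for this example, not anything from Google's paper.

```python
# Minimal sketch of embedding-based retrieval, the core of a RAG
# pipeline. embed() is a toy stand-in for a real embedding model.
import numpy as np

def embed(text: str) -> np.ndarray:
    # Toy embedding: hash the bytes of the text into a fixed-size,
    # L2-normalised vector. A real system would use a neural encoder.
    vec = np.zeros(64)
    for i, ch in enumerate(text.encode("utf-8")):
        vec[i % 64] += ch
    return vec / (np.linalg.norm(vec) + 1e-9)

documents = [
    "The user attended a birthday party last weekend.",
    "The user wore a brown jacket and denim jeans.",
    "Transformers use self-attention over a fixed context window.",
]
doc_vectors = np.stack([embed(d) for d in documents])

def retrieve(query: str, k: int = 2) -> list[str]:
    # Score every stored chunk against the query and return the top-k.
    scores = doc_vectors @ embed(query)
    top = np.argsort(scores)[::-1][:k]
    return [documents[i] for i in top]

print(retrieve("What did the user wear last weekend?"))
```

The retrieved passages are stitched into the model's prompt for that query alone, which is why nothing persists once the session ends.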

This approach has two downsides. First, an AI model cannot remember information in the long run. If one wanted to ask a follow-up question after a session was over, one would have to provide the full context again (unlike how humans function). Second, AI models do a poor job of retrieving information involving long-term context.


With Titans AI, Behrouz and other Google researchers sought to build an architecture that enables AI models to develop a long-term memory that can run continually, while forgetting information it no longer needs so that memory remains computationally efficient.

To this end, the researchers designed an architecture that encodes history into the parameters of a neural network. Three variants were used — Memory as Context (MAC), Memory as Gating (MAG), and Memory as a Layer (MAL). Each of these variants is suited for particular tasks.
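Based on the paper's high-level descriptions, the simplified Python sketch below shows where each variant plugs the memory module into the computation. The memory(), attention() and gating functions here are toy stand-ins, not Google's actual implementation.

```python
# Toy illustration of the three Titans variants. memory() and
# attention() are simplified stand-ins for the real modules.
import numpy as np

rng = np.random.default_rng(0)
d = 8                         # toy embedding width
x = rng.normal(size=(4, d))   # a toy sequence of 4 token embeddings

def memory(x):
    # Stand-in for the long-term memory's retrieved summary.
    return np.tanh(x.mean(axis=0, keepdims=True)).repeat(len(x), axis=0)

def attention(x):
    # Stand-in for a short-term attention block over the sequence.
    w = x @ x.T / np.sqrt(d)
    w = np.exp(w - w.max(axis=-1, keepdims=True))
    return (w / w.sum(axis=-1, keepdims=True)) @ x

# MAC: the memory's output is prepended as extra context for attention.
mac = attention(np.concatenate([memory(x), x], axis=0))[-len(x):]

# MAG: memory and attention branches are blended by a gate
# (a fixed scalar here; learned in the real architecture).
gate = 0.5
mag = gate * attention(x) + (1 - gate) * memory(x)

# MAL: memory acts as a layer the sequence passes through before attention.
mal = attention(memory(x))
```

The three variants differ mainly in where the memory's output enters the computation, which is what makes each one suited to different tasks.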


Additionally, Titans uses a new surprise-based learning system, which tells AI models to remember unexpected or key information about a topic. These two changes allow the Titans architecture to showcase improved memory function in LLMs.
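The paper frames surprise as the gradient of the memory's recall loss on incoming data: inputs the memory predicts poorly trigger large updates, while a decay term gradually forgets stale content. The simplified Python sketch below applies such an update rule to a linear memory matrix; the hyperparameter values and the input stream are illustrative, not from the paper.

```python
# Simplified surprise-based update for a linear memory matrix M.
# Surprise is the gradient of the recall loss ||M @ k - v||^2.
import numpy as np

d = 8
M = np.zeros((d, d))   # long-term memory parameters
S = np.zeros((d, d))   # momentum carrying past surprise
eta, theta, alpha = 0.9, 0.1, 0.01  # momentum, step size, forget rate

def update_memory(M, S, k, v):
    # Gradient of ||M @ k - v||^2 with respect to M: the "surprise".
    grad = 2.0 * np.outer(M @ k - v, k)
    S = eta * S - theta * grad   # blend past and momentary surprise
    M = (1.0 - alpha) * M + S    # write to memory, slowly forgetting
    return M, S

rng = np.random.default_rng(0)
for _ in range(100):             # a test-time stream of key/value pairs
    k, v = rng.normal(size=d), rng.normal(size=d)
    M, S = update_memory(M, S, k, v)
```

Inputs the memory already predicts well produce a small gradient and barely change M; surprising inputs write strongly, and the decay term is what lets the model forget.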

In a separate post, Behrouz claimed that based on internal testing on the BABILong benchmark (a needle-in-a-haystack approach), Titans (MAC) models were able to outperform large AI models such as GPT-4, Llama 3 + RAG, and Llama 3 70B.
