Gemini Embedding 2 maps text, images, audio, and videos into the same embedding space for easier retrieval.
Photo Credit: Google
Gemini Embedding 2 can also understand interleaved input across multiple modalities
Google released its first fully multimodal embedding model on Tuesday. Dubbed Gemini Embedding 2, the artificial intelligence (AI) model maps text, images, audio, and videos into a single, unified embedding space. This means it uses one architecture to understand a concept whether it is written as words, spoken aloud, or shown in an image or a video. The Mountain View-based tech giant says this new system will simplify the way a large language model (LLM) understands information and will allow it to perform more complex actions.
In a blog post, the tech giant detailed the new AI model. It is the successor to the text-only embedding model that was released last year, and it captures semantic intent across more than 100 languages. Gemini Embedding 2 is currently available in public preview via the Gemini application programming interface (API) and Vertex AI.
AI models typically keep separate digital file cabinets for text, photos, videos, and audio files. Whenever a user requests information in a specific format, the model looks only inside that cabinet. As a result, an LLM usually treats a "cat" in a text document and a "cat" in a video as two completely different things. To make matters more complex, the method for retrieving information differs with each format.
Gemini Embedding 2 solves this problem with a new architecture that uses a single cabinet for all kinds of information. This allows it to process a document that contains both text and images at the same time, much as humans do. Google says this new system simplifies "complex pipelines and enhances a wide variety of multimodal downstream tasks." These include Retrieval-Augmented Generation (RAG), semantic search, sentiment analysis, and data clustering.
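To illustrate what a unified embedding space makes possible, the sketch below compares made-up vectors with cosine similarity, the standard measure used in semantic search. The vectors are invented for demonstration and do not come from the model; the point is that once text and images share one space, "cat" as a word and "cat" as a photo can land close together and be compared directly.

```python
from math import sqrt

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors (1.0 = identical direction)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = sqrt(sum(x * x for x in a))
    norm_b = sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Made-up vectors standing in for the embedding of the word "cat" (text),
# a photo of a cat (image), and the word "airplane" (text). In a unified
# space, related concepts cluster together regardless of modality.
text_cat = [0.9, 0.1, 0.3]
image_cat = [0.85, 0.15, 0.35]
text_airplane = [0.1, 0.9, 0.2]

print(cosine_similarity(text_cat, image_cat))      # high: same concept, different modality
print(cosine_similarity(text_cat, text_airplane))  # low: different concepts
```

A retrieval system built on such a space can rank images, videos, and documents against a single text query by sorting on this one score, which is what collapses the "one pipeline per format" problem the article describes.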
Coming to the AI model's capabilities, it has a text context window of up to 8,192 input tokens. It can also process up to six images per request in PNG and JPEG formats, and supports up to 120 seconds of video input in MP4 and MOV formats. Additionally, it can natively process and map audio data without needing text transcriptions. Further, it can embed PDFs of up to six pages.
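The limits above can be summarised as a simple pre-flight check. This is a hypothetical helper, not part of any official SDK; the limit values are taken from the figures Google has stated, while the function and field names are illustrative.

```python
# Limits as stated for Gemini Embedding 2 (values from Google's announcement;
# the structure of this check is purely illustrative).
LIMITS = {
    "max_text_tokens": 8192,
    "max_images": 6,
    "image_formats": {"png", "jpeg"},
    "max_video_seconds": 120,
    "video_formats": {"mp4", "mov"},
    "max_pdf_pages": 6,
}

def validate_request(text_tokens=0, image_formats=(), video_seconds=0,
                     video_format=None, pdf_pages=0):
    """Return a list of limit violations for a planned embedding request."""
    errors = []
    if text_tokens > LIMITS["max_text_tokens"]:
        errors.append("text exceeds 8,192 input tokens")
    if len(image_formats) > LIMITS["max_images"]:
        errors.append("more than six images per request")
    for fmt in image_formats:
        if fmt.lower() not in LIMITS["image_formats"]:
            errors.append(f"unsupported image format: {fmt}")
    if video_seconds > LIMITS["max_video_seconds"]:
        errors.append("video longer than 120 seconds")
    if video_format and video_format.lower() not in LIMITS["video_formats"]:
        errors.append(f"unsupported video format: {video_format}")
    if pdf_pages > LIMITS["max_pdf_pages"]:
        errors.append("PDF longer than six pages")
    return errors

print(validate_request(text_tokens=500, image_formats=["png", "jpeg"]))  # []
print(validate_request(video_seconds=180, video_format="avi"))
```

Checking inputs against published limits before sending a request avoids wasted round trips to the API; the real service would reject such requests server-side.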
Gemini Embedding 2 can also understand interleaved input, so users can send multiple modalities (such as text and images) in the same request. Google says this capability allows the model to gain a more accurate understanding of complex, real-world data.