Apple Researchers Working on MM1, a Family of Multimodal AI Models With Up to 30 Billion Parameters

Apple researchers said that the MM1 AI models are currently in the pre-training phase.

Written by Akash Dutta, Edited by Siddharth Suvarna | Updated: 18 March 2024 13:04 IST
Highlights
  • Apple researchers said MM1 has achieved competitive performance
  • The paper claims MM1 is capable of in-context learning
  • Recently, Apple acquired AI startup DarwinAI

Apple researchers said the MM1 family consists of both dense models and mixture-of-experts (MoE) variants

Photo Credit: Unsplash/Laurenz Heymann

Apple researchers have shared their work on building a multimodal artificial intelligence (AI) large language model (LLM) in a pre-print paper. Published on an online portal on March 14, the paper describes how the team achieved multimodal capabilities by training a foundation model on both text-only data and images. The AI advancements come after CEO Tim Cook's remarks during the Cupertino-based tech giant's earnings call, where he said that AI features could arrive later this year.

The pre-print version of the research paper has been published on arXiv, an open-access online repository of scholarly papers; papers posted there are not peer-reviewed. While the paper itself does not mention Apple, most of the listed researchers are affiliated with the company's machine learning (ML) division, which suggests the project is affiliated with the iPhone maker as well.

As per the researchers, they are working on MM1, a family of multimodal models containing up to 30 billion parameters. Calling it a “performant multimodal LLM (MLLM)”, the authors of the paper highlight that the image encoders, the vision-language connector, and other architecture and data choices were made to create an AI model capable of understanding both text-based and image-based inputs.
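To give a rough sense of the kind of architecture being described, the sketch below wires an image encoder, a vision-language connector, and a transformer backbone together in PyTorch. It is purely illustrative and not Apple's code; every module, size, and name here is invented for the example.

```python
# Illustrative sketch only (not Apple's code): a toy multimodal model with an
# image encoder, a vision-language connector, and a transformer backbone.
# All dimensions and module choices are invented for the example.
import torch
import torch.nn as nn

class ToyMultimodalLM(nn.Module):
    def __init__(self, vocab_size=1000, d_model=256, image_feat_dim=512):
        super().__init__()
        # Stand-in for a pretrained image encoder (a real model would use a ViT).
        self.image_encoder = nn.Linear(3 * 32 * 32, image_feat_dim)
        # Vision-language connector: projects image features into the LLM's
        # token-embedding space so they can sit alongside text tokens.
        self.connector = nn.Linear(image_feat_dim, d_model)
        self.token_embed = nn.Embedding(vocab_size, d_model)
        # Stand-in for the LLM backbone (a real decoder would use causal masking).
        layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.backbone = nn.TransformerEncoder(layer, num_layers=2)
        self.lm_head = nn.Linear(d_model, vocab_size)

    def forward(self, images, text_ids):
        # images: (batch, 3, 32, 32); text_ids: (batch, seq_len)
        img_feats = self.image_encoder(images.flatten(1))      # (batch, image_feat_dim)
        img_tokens = self.connector(img_feats).unsqueeze(1)    # (batch, 1, d_model)
        txt_tokens = self.token_embed(text_ids)                # (batch, seq_len, d_model)
        seq = torch.cat([img_tokens, txt_tokens], dim=1)       # image token prefixes the text
        return self.lm_head(self.backbone(seq))

model = ToyMultimodalLM()
logits = model(torch.rand(2, 3, 32, 32), torch.randint(0, 1000, (2, 8)))
print(logits.shape)  # torch.Size([2, 9, 1000]): 1 image token + 8 text tokens
```

In this sketch, the connector simply projects image features into the same embedding space as the text tokens so the transformer can attend over both; according to the authors, the actual work lies in carefully making these encoder, connector, and data choices.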


Giving an example, the paper stated, “We demonstrate that for large-scale multimodal pre-training using a careful mix of image-caption, interleaved image-text, and text-only data is crucial for achieving state-of-the-art (SOTA) few-shot results across multiple benchmarks, compared to other published pre-training results.”
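As a rough illustration of what such a data mix could look like in practice, the snippet below samples training examples from the three data types named in the quote. The sampling weights are invented for this example and are not taken from the paper.

```python
# Illustrative sketch only: sampling a pre-training batch from a weighted mix of
# the three data types named in the paper. The weights are made up for this
# example and are not taken from the paper.
import random

DATA_MIX = {
    "image_caption": 0.45,            # (image, caption) pairs
    "interleaved_image_text": 0.45,   # documents with images embedded in running text
    "text_only": 0.10,                # plain text corpora
}

def sample_batch_sources(batch_size, mix=DATA_MIX, seed=0):
    """Decide which data source each example in a batch is drawn from."""
    rng = random.Random(seed)
    sources, weights = zip(*mix.items())
    return rng.choices(sources, weights=weights, k=batch_size)

print(sample_batch_sources(8))
```

In a real pre-training run the mix would be applied across billions of examples; the paper's claim is that the proportions of these sources materially affect few-shot performance.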


To break it down, the AI model is currently in the pre-training phase, which means it has not yet been fine-tuned to reliably produce the desired outputs. This is the stage in which the architecture and large volumes of data are used to teach the model the general patterns it will eventually rely on to process inputs. The team of Apple researchers added computer vision to the model using image encoders and a vision-language connector. Then, when testing with a mix of image-caption, interleaved image-text, and text-only data sets, the team found that the results were competitive with existing models at the same stage.

While the breakthrough is significant, this research paper alone is not enough to ascertain that a multimodal AI chatbot will be added to Apple's operating systems. At this stage, it is also unclear whether the model is multimodal only in the inputs it accepts or in its outputs as well (that is, whether it can generate AI images). But if the results hold up after peer review, it can be said that the tech giant has taken another big step towards building a native generative AI foundation model.


