Anthropic first announced the Claude 3.5 Haiku model in October (Photo Credit: Anthropic)
Currently, the Claude 3.5 Haiku can only generate text
Anthropic has quietly released the Claude 3.5 Haiku artificial intelligence (AI) model to users. On Thursday, several netizens began posting about the model's availability in Claude's web interface and mobile apps. Anthropic says the new generation of Haiku is the fastest large language model it has developed so far. Further, in several benchmarks, the foundation model also outperforms the Claude 3 Opus, the most capable model of the previous generation. Notably, all Claude users will get access to the Claude 3.5 Haiku irrespective of their subscription tier.
While the AI firm did not make any announcement about the new Haiku model's release, several users on X (formerly known as Twitter) posted about its availability on both the website and the mobile apps. Gadgets 360 staff members were also independently able to verify that Claude 3.5 Haiku is now the default language model on the chatbot. Additionally, it is the sole model available to those on the free tier of Claude.
Anthropic first announced the Claude 3.5 Haiku in October, alongside an upgraded version of the Claude 3.5 Sonnet. At the time, the company highlighted that the 3.5 Haiku is its fastest model. Some of the upgrades in the new generation include lower latency (improved response time), improved instruction following, and more precise tool use.
For enterprises, the AI firm highlighted that Claude 3.5 Haiku excels at powering user-facing products, handling specialised sub-agent tasks, and generating personalised experiences from large volumes of data.
Coming to performance, the new Haiku model scored 40.6 percent on the SWE-bench Verified software engineering benchmark, outperforming the first iteration of the 3.5 Sonnet and OpenAI's GPT-4o. It also outperforms GPT-4o mini on the HumanEval coding benchmark and the Graduate-Level Google-Proof Q&A (GPQA) benchmark.
Notably, earlier this month, Anthropic optimised the Claude 3.5 Haiku for the AWS Trainium2 AI chipset and added support for latency-optimised inference in Amazon Bedrock. The company has yet to bring such support to Google Cloud's Vertex AI. The new AI model can only generate text but accepts both text and images as input.
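For developers, the model can also be reached programmatically through the Anthropic API. The snippet below is a minimal sketch of a text-only request using the official anthropic Python SDK; the model identifier shown ("claude-3-5-haiku-20241022") and the example prompt are assumptions for illustration and may differ depending on the platform used.

    import anthropic

    # Assumes the ANTHROPIC_API_KEY environment variable is set.
    client = anthropic.Anthropic()

    # The model identifier is an assumption and may vary by platform
    # (for example, Amazon Bedrock uses its own model IDs).
    response = client.messages.create(
        model="claude-3-5-haiku-20241022",
        max_tokens=512,
        messages=[
            {"role": "user", "content": "Summarise today's AI news in two sentences."}
        ],
    )

    # Claude 3.5 Haiku returns text output only.
    print(response.content[0].text)

The same request pattern works with image blocks added to the user message content, since the model accepts text and images as input while producing text output.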