Amazon Lens Live uses the device’s camera to detect an object and find similar products on the shopping platform.
Photo Credit: Amazon
Amazon Lens Live runs on AWS-managed Amazon OpenSearch and Amazon SageMaker services
Amazon introduced Lens Live, a new artificial intelligence (AI) feature for its shopping app, on Tuesday. It is an expansion of the company's AI-powered Amazon Lens feature, which lets users capture or upload a photo to find visually similar products on the platform. The new AI feature accesses the user's camera and scans objects in real time, eliminating the need to capture or upload images. Lens Live is currently available only to select users in the US on iOS devices.
The Seattle-based tech giant says the expanded Lens feature is now available to tens of millions of US users via the iOS version of the Amazon shopping app, and it will be rolled out to all US users in the coming months. The company has yet to announce any plans to introduce the feature in global markets.
Once the feature becomes available, iOS users can open the Amazon shopping app and tap the camera icon in the search bar to open Lens and activate the Lens Live capability. The feature begins processing the camera feed right away, and when the user points the camera at an object, it scans it and shows matching items instantly. The visually similar products appear on the same screen in a small swipeable carousel at the bottom.
Rufus in Amazon Lens Live
Photo Credit: Amazon
From there, users can add an item directly to their cart by tapping the + icon or save it to their wish list by tapping the heart icon. Amazon has also integrated Rufus, the platform's AI assistant, into the feature, allowing users to see suggested questions and quick summaries about the matched products without leaving the camera view. These prompts appear underneath the carousel.
Explaining the technology behind the feature, the company said Lens Live is powered by Amazon Web Services (AWS)-managed OpenSearch and SageMaker services, which connect the feature to cloud-hosted AI models. Additionally, a lightweight computer vision-based object detection model runs on the device to identify products in real time.
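Amazon has not disclosed which on-device model it uses, so the following is only a minimal sketch of what a lightweight, real-time object detector running on a camera frame could look like, using torchvision's SSDLite MobileNetV3 as a hypothetical stand-in. The function name, threshold, and input file are illustrative assumptions.

```python
# Illustrative only: Amazon has not published its on-device detector.
# SSDLite + MobileNetV3 is used here as a typical lightweight model
# to show what per-frame product detection might involve.
import torch
from torchvision.models import detection
from torchvision.transforms.functional import to_tensor
from PIL import Image

# Load a small, mobile-friendly detection model (hypothetical stand-in).
model = detection.ssdlite320_mobilenet_v3_large(weights="DEFAULT")
model.eval()

def detect_objects(frame: Image.Image, score_threshold: float = 0.5):
    """Return bounding boxes for objects found in a single camera frame."""
    with torch.no_grad():
        predictions = model([to_tensor(frame)])[0]
    keep = predictions["scores"] >= score_threshold
    return predictions["boxes"][keep]

# Example usage on one captured frame:
# boxes = detect_objects(Image.open("camera_frame.jpg"))
```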
On the server side, the feature is paired with a deep learning visual embedding model that matches the detected object against Amazon's product catalogue and retrieves exact or similar listings.
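To illustrate the general pattern, the sketch below pairs a stand-in visual embedding model (a ResNet-50 backbone with its classification head removed) with an OpenSearch k-NN query. The index name, field name, host, and embedding model are assumptions for illustration; Amazon has not published the details of its retrieval pipeline.

```python
# Illustrative only: index name, field names, and the embedding model are
# assumptions. The sketch shows the general pattern of matching an image
# embedding against a product catalogue via an OpenSearch k-NN query.
import torch
from torchvision.models import resnet50, ResNet50_Weights
from PIL import Image
from opensearchpy import OpenSearch

# Stand-in visual embedding model: ResNet-50 with the classification head
# replaced by an identity layer, so it outputs a 2048-d feature vector.
weights = ResNet50_Weights.DEFAULT
backbone = resnet50(weights=weights)
backbone.fc = torch.nn.Identity()
backbone.eval()
preprocess = weights.transforms()

def embed(image: Image.Image) -> list[float]:
    """Turn a cropped product image into an embedding vector."""
    with torch.no_grad():
        vector = backbone(preprocess(image).unsqueeze(0))[0]
    return vector.tolist()

# Hypothetical OpenSearch cluster holding product embeddings in a k-NN index.
client = OpenSearch(hosts=[{"host": "localhost", "port": 9200}])

def find_similar_products(image: Image.Image, k: int = 10):
    """Retrieve the k catalogue items whose embeddings are closest to the query image."""
    query = {
        "size": k,
        "query": {"knn": {"embedding": {"vector": embed(image), "k": k}}},
    }
    response = client.search(index="product-catalogue", body=query)
    return [hit["_source"] for hit in response["hits"]["hits"]]
```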