Thursday, February 22, 2024

Meta’s AI for Ray-Ban Smart Glasses Can Identify Objects and Translate Languages



Meta is finally going to let people try its splashiest AI features for the Meta Ray-Ban smart glasses, though in an early access test to start. Today, Meta announced that it’s going to start rolling out its multimodal AI features that can tell you about things Meta’s AI assistant can see and hear through the camera and microphones of the glasses.

Mark Zuckerberg demonstrated the update in an Instagram reel where he asked the glasses to suggest pants that would match a shirt he was holding.

It responded by describing the shirt and offering a couple of suggestions for pants that might complement it. He also had the glasses’ AI assistant translate text and show off a couple of image captions.

Zuckerberg first revealed multimodal AI features like these for the Ray-Ban glasses in a September Decoder interview with The Verge’s Alex Heath. Zuckerberg said that people would talk to the Meta AI assistant “throughout the day about different questions you have,” suggesting that it could answer questions about what wearers are looking at or where they are.

The AI assistant also accurately described a lit-up, California-shaped wall sculpture in a video from CTO Andrew Bosworth. He explained some of the other features, which include asking the assistant to caption photos you’ve taken, or to translate and summarize text — all fairly common AI features seen in other products from Microsoft and Google.

The test period will be limited in the US to “a small number of people who opt in,” Bosworth said. Instructions for opting in can be found here.

Source : The Verge

