
Meta Brings Multimodal AI to Ray-Ban Meta Smart Glasses

Meta has rolled out a new feature to its Ray-Ban Meta Smart Glasses: multimodal AI, which lets the built-in assistant process several kinds of input, including photos, audio, and text. When the glasses launched last fall, they were well received for their content-capture capabilities and quality headphones, but the absence of multimodal AI was a notable limitation.

Following an early access program, Meta has now made multimodal AI available to all users. The rollout comes at a time when the tech world is closely scrutinizing AI gadgets, especially after the disappointing reception of the Humane AI Pin. Despite initial skepticism, the Ray-Ban Meta Smart Glasses with the AI beta have shown promise, hinting at a brighter future for this category of device.

While the glasses do not promise boundless capabilities, they offer practical functionalities through voice commands. Users can prompt the AI with phrases like ‘Hey Meta, look and…’ to perform tasks such as identifying objects, translating text, creating Instagram captions, or providing information about landmarks. The glasses capture an image, send it to the cloud for processing, and deliver the response audibly, offering a seamless user experience.
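To make that capture-ask-respond loop concrete, here is a minimal, purely illustrative sketch in Python. Meta has not published a developer API for the glasses, so every function name, the payload shape, and the canned answer below are assumptions used only to show the general shape of the flow.

```python
# Illustrative sketch of the "Hey Meta, look and..." flow described above.
# All names and data shapes here are hypothetical; there is no public API.

import base64
import json


def capture_image() -> bytes:
    """Stand-in for the glasses' camera capture; returns placeholder JPEG bytes."""
    return b"\xff\xd8\xff\xe0 placeholder jpeg data"


def ask_multimodal_assistant(image: bytes, prompt: str) -> str:
    """Stand-in for the cloud round trip: the photo and the transcribed voice
    prompt are sent off for processing, and a short text answer comes back."""
    request = {
        "prompt": prompt,
        "image_b64": base64.b64encode(image).decode("ascii"),
    }
    # A real implementation would send `request` to a multimodal model endpoint here.
    _ = json.dumps(request)
    return "That looks like a 1960s convertible."  # canned answer for the sketch


def speak(text: str) -> None:
    """Stand-in for reading the response aloud through the glasses' speakers."""
    print(f"[spoken response] {text}")


if __name__ == "__main__":
    # e.g. "Hey Meta, look and tell me what kind of car this is."
    photo = capture_image()
    answer = ask_multimodal_assistant(photo, "look and tell me what kind of car this is")
    speak(answer)
```

The point of the sketch is simply that all of the heavy lifting happens off-device: the glasses capture and relay, the cloud model interprets, and the answer is spoken back.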

Although the AI is not infallible and occasionally makes mistakes, it adds an element of fun and engagement to daily activities. For instance, users have enjoyed testing the AI’s accuracy in identifying various objects, like cars, leading to amusing outcomes and memorable experiences.

Overall, the integration of multimodal AI in the Ray-Ban Meta Smart Glasses represents a step forward in wearable technology, enhancing the functionality and appeal of these innovative devices. As users explore the diverse capabilities of the AI assistant, they are discovering new ways to interact with their surroundings and simplify tasks on the go.
