

Ray-Ban Introduces Multimodal AI to Meta Smart Glasses

Meta has recently brought a new feature to its Ray-Ban Meta Smart Glasses: multimodal AI, which lets the built-in assistant process several kinds of input, including photos, audio, and text. When the glasses launched last fall, they were well received for their content-capture capabilities and audio quality, but the absence of multimodal AI was a notable limitation.

Following an early access program, Meta has now made multimodal AI available to all users. This development comes at a time when the tech world is closely scrutinizing AI gadgets, especially after the disappointing reception of the Humane AI Pin. Despite initial skepticism, the Ray-Ban Meta Smart Glasses with AI beta have shown promise, hinting at a brighter future for this type of technology.

While the glasses do not promise boundless capabilities, they offer practical functionality through voice commands. Users can prompt the assistant with phrases like "Hey Meta, look and…" to perform tasks such as identifying objects, translating text, writing Instagram captions, or describing landmarks. The glasses capture an image, send it to the cloud for processing, and deliver the response audibly, making the exchange feel seamless.

The AI is not infallible and occasionally makes mistakes, but it adds an element of fun and engagement to daily activities. Users have enjoyed testing its accuracy on everyday objects such as cars, with results that range from impressively correct to amusingly wrong.

Overall, the integration of multimodal AI in the Ray-Ban Meta Smart Glasses represents a step forward in wearable technology, enhancing the functionality and appeal of these innovative devices. As users explore the diverse capabilities of the AI assistant, they are discovering new ways to interact with their surroundings and simplify tasks on the go.
