It is rare for big technology companies to acknowledge their own setbacks, and rarer still in projects where they went all in. But the truth is that from the ashes of two of the most famous failures, Google Glass and the Meta metaverse, a device concept that seemed to have been buried and forgotten is being reborn: smart glasses, which both multinationals now want to revive thanks to artificial intelligence.
Both Google Glass and the Meta metaverse foundered on the disconnect between business vision and market expectations. Google’s glasses, launched with great fanfare in 2013, promised a revolution in wearable technology, but nobody understood what they were for. Something similar happened with the metaverse years later, with users trying to figure out what was so fun about the empty and boring virtual world that Meta invested billions of dollars in.
With the Ray-Ban Stories, glasses developed by Meta in 2021, it seemed the same thing was going to happen. They were advertised as the company’s first device with “augmented reality” ready for the metaverse, but they ran headlong into reality. About 300,000 units were sold worldwide, far fewer than expected, and by February 2023 only about 27,000 of them were still in regular use.
Despite Meta’s advertising, the Ray-Ban Stories were little more than glasses with a camera, a microphone and small speakers aimed at the wearer’s ear. In practice, they only let you take photos and record videos, listen to music and answer calls, functions that for many consumers did not justify their 300-euro price. But what seemed like a niche product for Instagrammers ended up taking off in 2024, a year that will close with more than one million units sold.
It’s a success story. In 60% of Ray-Ban stores in EMEA [Europe, the Middle East and Africa], the Ray-Ban Meta are the best-selling product
Stefano Grassi
— CFO of the EssilorLuxottica group (owner of Ray-Ban)
The turning point was ChatGPT, which came out a year after the glasses. Meta took advantage of its momentum to reformulate the product, dropping the “Stories” name to distance it from Instagram and selling it as a way to interact with artificial intelligence and “receive answers, information, recommendations and even creative inspiration in real time anywhere”.
Instead of using a keyboard and a screen, Meta’s AI, ChatGPT or other compatible models speak directly into the user’s ear through the temples of the glasses. Thanks to that and to synchronization with the phone, the device can be used to send messages, make calls, set up appointments or perform any other action that virtual assistants can carry out.
The result? “It is a success story. Not only in the US, where it is obvious, but also here in Europe. To give you an idea, in 60% of Ray-Ban stores in EMEA [Europe, the Middle East and Africa], the Ray-Ban Meta are the best-selling product. It is something that pleases us enormously,” said the chief financial officer of the EssilorLuxottica group (owner of Ray-Ban), Stefano Grassi, during the presentation of the company’s latest accounts in November.
From failure to “key segment”
Technology companies have fantasized about smart glasses for more than a decade. Google Glass was the first big attempt. The glasses could offer information in real time through an integrated screen, controlled by voice or gestures, and let you access maps and notifications, take photos and videos, or run applications. Google thought that being able to do all this hands-free, unlike with a mobile phone, would be valued by consumers. However, their unattractive design and limited functionality compared to smartphones dug their grave.
The Ray-Ban Meta do not have a screen, but the key is that they have managed to integrate AI in a “friendly and interactive” way, notes a recent report by the market analysis consultancy Counterpoint Research. Meta has delegated brand and design matters to the Italian firm, avoiding the crude, gadget-like look of Google’s attempt. In appearance they are normal glasses. “Another advantage is their lightness (49 grams), which makes them comfortable to wear, even for outdoor activities,” the report states.
Added to all this is the fact that they are not a prototype exploring a new market niche, but a product that is already bringing notable profits into the coffers of Meta and EssilorLuxottica. Their price ranges from 330 to 410 euros, depending on the Ray-Ban model they are based on, figures that leave a margin of more than 50%, since the total production cost of the basic edition is about $135, according to Counterpoint Research.
These numbers mean smart glasses will enter 2025 on the take-off runway. “The success of the Ray-Ban Meta is inspiring the industry to pay more attention to the development of lightweight smart glasses, while leveraging on-device and cloud-based AI to offer interactive services,” the consultancy continues. “There are reasons to believe that this type of design will gradually become a key segment of intelligent wearables.”
Devices of the AI era
The emergence of ChatGPT as the fastest-adopted digital service in history not only had a huge impact on the software sector, with more and more products integrating AI into everyday applications. It also got the hardware industry thinking about how to get that technology off desktop computers and allow more fluid interaction with the user.
The first attempt was a pin with ChatGPT that can see and hear everything that happens in front of its wearer. Created by former Apple executives and backed by OpenAI itself, its first model has flopped in sales. However, it pointed to the path the industry believes generative AI will follow: a technology that must free the user’s hands and break away from screens.
Meta isn’t the only one that thinks smart glasses may be the answer. Google announced at its latest developer conference that it is developing an experimental assistant that combines text, image, video and audio into a single channel, a “conversational entity” that will feel like a “real person.” The company calls it Project Astra, and in the latest video about it, published a couple of weeks ago, it showed how it would work built into glasses.
Google’s goal is for Project Astra to be able to recognize objects, read and interpret texts, grasp abstract concepts, geolocate and more, as well as remember previous conversations and draw on an extensive long-term memory. This is the line of innovation the industry now pursues: it has stopped proclaiming that general artificial intelligence (self-aware and capable of self-evolution) is just around the corner and is focusing instead on “AI agents”, smaller models that are highly specialized in the needs of a person or organization.
The development of these agents runs in parallel with that of the devices that house them. The success of the Ray-Ban Meta marks a turning point: it is no longer about replicating the functions of the smartphone in glasses, but about creating a new way of interacting with artificial intelligence that is more natural and integrated into the user’s life. With Google also moving in this direction with its Project Astra, 2025 could be the year that smart glasses finally deliver on the promise they made a decade ago, albeit in a way that no one expected back then.