Cupertino's artificial intelligence will run directly on the device and offline, more deeply integrated into the operating system than Google's Gemini.
So far, Apple has not shared any major artificial intelligence features with the public, falling behind giants like Google and OpenAI in this area.
However, the company is focusing its efforts on the next big iPhone update, which will introduce AI to the ecosystem.
There are rumors that Apple intends to integrate its LLM, or large language model, to perform processing directly on the device; this would provide a number of advantages over traditional cloud-based processing.
According to Bloomberg's Mark Gurman, Apple is expected to present this fully integrated, offline-capable technology for iPhones running iOS 18 at WWDC 2024 on June 10th.
And if the analyst's words weren't enough, the conference's own tagline, “Absolutely Incredible”, sheds further light on its focus: not coincidentally, it shares its initials with “AI”.
An AI in your pocket?
How Apple will develop this technology is still under discussion; it remains to be seen how the company's offering will stand out from the competition.
It seems that Cupertino intends to highlight how AI can be useful in daily life, rather than focusing solely on the power of chatbots and other generative AI tools.
LLMs designed to run entirely on mobile devices trade raw power for efficiency: they can run with less RAM and do not depend on communicating with cloud servers. As a result, on-device processing will make the technology less powerful than a cloud-based solution, but future features will not be gated by demanding hardware requirements. The company could thus ship high-end features regardless of whether a user owns the latest model.
Furthermore, it could compensate for the reduced capability with greater privacy and security, since all operations take place on the device.
Solutions and decisions
A further significant advantage will be the speed of operations, since with this configuration the LLM models will be able to respond in real time. For this reason, on-device AI technology is very likely to be used within apps.
Local and offline operation, without the use of cloud-based AI models, also reflects Apple's usual commitment to user privacy, as all calculations will be performed directly on the device.
However, it cannot be ruled out that Apple will rely on the cloud for more intensive tasks, such as creating images or generating long texts. Even for “open-ended” features such as a dedicated chatbot à la ChatGPT, one might expect the company to rely on cloud-based solutions.
Apple had reportedly planned to fill the gap by licensing technology from Google, OpenAI and Baidu, but no further details are available on any deals it may have struck.