At its WWDC 2024 keynote, streamed online, Apple unveiled its AI strategy, introducing a set of features it describes as powerful, intuitive, integrated, personal, and private. Collectively called Apple Intelligence, these features will arrive this fall on iPhone, iPad, and Mac, initially in English only, and rely on a combination of on-device and server-based generative AI models. Apple Intelligence includes a variety of tools designed to improve the user experience, among them smarter notification management on iPhone (so the most relevant alerts appear first), generative writing tools across applications, image generation, and much more. What distinguishes Apple’s generative AI is its ability to personalize results based on the user’s individual context while maintaining privacy.
A practical example of Apple Intelligence is reading email: the AI can analyze a message, identify related contacts, offer guidance, and find relevant files. Another interesting feature is image generation, which lets you create caricatures resembling contacts in your address book and send them via Messages. Because most Apple Intelligence features run directly on the device, they will be available only on the iPhone 15 Pro with its A17 Pro chip and on iPads and Macs equipped with Apple Silicon from the M1 onward. For more demanding AI models, Apple has introduced a new server infrastructure called Private Cloud Compute, which lets generative models process user data without compromising privacy, ensuring that the data is never stored or made accessible to Apple.
One of the most anticipated announcements is a renewed version of Siri, which promises more natural and useful interaction. The new Siri, powered by Apple Intelligence generative models, features a revamped user interface with a glow around the device bezel and responses displayed in detailed cards. Apple has announced that this version of Siri will roll out gradually over the next year. The new Siri can understand the context of requests, eliminating the need to repeat information in follow-up requests. For example, it will be able to find and interpret information that was previously inaccessible, such as locating and extracting a driver’s license number from a photo to fill out a form automatically. Siri will also work deeply with Apple and third-party apps, and it will have broader knowledge of Apple products, allowing it to assist with specific features such as the scheduled sending of messages in iOS 18.
With iOS 18, Apple will give users access to ChatGPT models. Users will be able to choose ChatGPT for writing tools and other features, extending the (always free) capabilities of Apple Intelligence built into the operating system. If Siri is unable to answer a question, it can hand the request off to ChatGPT. Users will be able to use ChatGPT’s free tier or link a paid subscription to get the benefits of ChatGPT Plus. Apple has also indicated that it is working on partnerships with other AI model makers to offer users more options in the future: while OpenAI’s ChatGPT will initially be the only choice, other models such as Google Gemini are expected to follow. Apple Intelligence will debut in the summer with the iOS 18 beta, in English only for now; we will have to wait until next year to see it in other languages.