
Apple Intelligence: AI, features, research, and supported devices

Updated Oct 7th, 2024 2:28PM EDT
Siri on the Vision Pro headset. Image source: Apple Inc.


Apple Intelligence is the name of Apple’s artificial intelligence effort. The company says it “draws on your personal context while setting a brand-new standard for privacy in AI.” It was introduced during the WWDC 2024 keynote, and it’s a central part of Apple’s iPhone, iPad, and Mac devices, starting with iOS 18, iPadOS 18, and macOS Sequoia.

Release date

Apple Intelligence is expected to debut with iOS 18.1’s official release in October; Bloomberg‘s Mark Gurman says it could arrive on October 28. So far, Apple is limiting which features appear in the beta, and it teased more features coming at a later date during the iPhone 16 event.

In July, Apple seeded the first beta of iOS 18.1, iPadOS 18.1, and macOS 15.1. So far, developers and public beta testers with an iPhone 15 Pro, an M1 Mac, or an M1 iPad (or newer) can take advantage of the platform.

Features

Apple Intelligence feature summary. Image source: Apple Inc.

These are some of the Apple Intelligence features we’ll see on iPhone, iPad, and Mac:

  • Writing Tools: Users can rewrite, proofread, and summarize text nearly everywhere they write, including Mail, Notes, Pages, and third-party apps.
  • Image Playground: Users can create playful images in seconds, choosing from Animation, Illustration, or Sketch styles. The feature is built right into apps like Messages and is also available as a dedicated app.
  • Memories in Photos: Users can create the stories they want to see just by typing a description. Apple Intelligence will pick out the best photos and videos based on the description, craft a storyline with chapters based on themes identified in the photos, and arrange them into a movie with its own narrative arc.
  • Clean Up tool: This Photos app feature can identify and remove distracting objects in the background of a photo without accidentally altering the subject.
  • Siri: Users can type to Siri and switch between text and voice, communicating in whatever way feels right for the moment.
  • ChatGPT integration: When Apple Intelligence isn’t enough, users can allow ChatGPT to access Writing Tools and other features for a better response (a minimal sketch of this consent flow appears after this list).
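
Apple has stressed that the ChatGPT handoff is opt-in on a per-request basis: nothing is sent to OpenAI until the user approves. Here is a minimal Swift sketch of that consent flow; `askChatGPT` is a hypothetical stand-in, since the real integration lives inside the OS and is not a public API:

```swift
// Hypothetical stand-in for the OpenAI handoff; the real integration is
// built into the OS, not a public function like this.
func askChatGPT(_ prompt: String) -> String {
    "ChatGPT's answer to: \(prompt)"
}

enum ConsentDecision { case allowOnce, deny }

// Apple Intelligence asks before each handoff; the decision is injected
// here so the sketch stays self-contained and testable.
func handleRequest(_ prompt: String,
                   consent: (String) -> ConsentDecision) -> String {
    switch consent(prompt) {
    case .allowOnce:
        return askChatGPT(prompt)  // sent to OpenAI only after approval
    case .deny:
        return "Handled on-device, without ChatGPT."
    }
}

// Usage: simulate a user who approves this one request.
print(handleRequest("Plan a five-course dinner menu") { _ in .allowOnce })
```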

iOS 18.1, iPadOS 18.1, and macOS Sequoia 15.1 start Apple Intelligence beta testing

Apple Intelligence running on the M4 iPad Pro. Image source: José Adorno for BGR

In July, Apple released the first beta of iOS 18.1, iPadOS 18.1, and macOS Sequoia 15.1, finally beta-testing its AI platform on iPhone 15 Pro and M1 (or newer) devices. While many features are still unavailable, the company highlighted what developers can already try:

  • Writing Tools: Proofread your text, rewrite different versions until the tone and wording are right, and summarize the selected text with a tap.
  • Improved Siri: With a new design, Siri can maintain context between requests. Even if you stumble over words or shift what you’re saying mid-sentence, Siri can understand what you actually want.
  • Priority notifications: They appear at the top of the stack, letting you know what to pay attention to at a glance, and they are summarized so you can scan them faster (a toy ranking sketch follows this list).
  • Priority messages in Mail: Elevate time-sensitive messages to the top of your inbox, like an invitation that has a deadline today or a check-in reminder for your flight this afternoon.
  • Record and transcribe calls in the Notes app: Just hit record in the Notes or Phone apps to capture audio recordings and transcripts. Apple Intelligence generates summaries of your transcripts, so you can get to the most important information at a glance.
  • Reduce interruptions: With iOS 18.1 beta 1, an all-new Focus Mode understands the content of your notifications and shows you the ones that might need immediate attention, like a text about picking up your child from daycare later today.
  • Smart Reply in Mail: Quickly draft an email response with all the right details. Apple Intelligence can identify the question you were asked in an email and offer relevant selections to include in your response.
  • Clean Up: This Photos app feature can identify and remove distracting objects in the background of a photo without accidentally altering the subject.
  • Summarization: Apple Intelligence can now summarize more than just Messages and Mail notifications.
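
Conceptually, priority notifications are a ranking-plus-summarization pass over the notification stack. A toy Swift sketch under invented rules (the real feature uses on-device models, not keyword matching, and an LLM rather than a first-sentence summarizer):

```swift
struct AppNotification {
    let text: String
    let timeSensitive: Bool
}

// Invented priority rule; the real system scores urgency with a model.
func priorityScore(_ n: AppNotification) -> Int {
    var score = n.timeSensitive ? 2 : 0
    if n.text.lowercased().contains("today") { score += 1 }
    return score
}

// Toy summarizer (first sentence only); Apple Intelligence uses an LLM here.
func summarize(_ text: String) -> String {
    text.split(separator: ".").first.map(String.init) ?? text
}

let stack = [
    AppNotification(text: "Your package shipped.", timeSensitive: false),
    AppNotification(text: "Flight check-in closes today. Gate assignment to follow.",
                    timeSensitive: true),
]
for n in stack.sorted(by: { priorityScore($0) > priorityScore($1) }) {
    print(summarize(n.text))
}
// Flight check-in closes today
// Your package shipped
```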

Here’s how to use Apple Intelligence.

Apple Intelligence schedule

After iOS 18.1 is released, Apple will continue to work on Apple Intelligence features. This is what we expect for each major iOS 18 launch:

  • iOS 18.2: Expected to launch later this year, it should get Image Playground, Genmoji, and ChatGPT integration;
  • iOS 18.3: Expected to launch early next year, it should be available with Siri upgrades;
  • iOS 18.4: Expected to launch in March, this update should finally bring the revamped Siri that can control apps, find information inside them, and more.

iPhone 16 to feature exclusive Apple Intelligence features

Using the Camera Control button to find out information about a restaurant with Visual Intelligence. Image source: Apple Inc.

While we first thought all Apple Intelligence features would be available on every compatible Apple device, the company revealed that some will be exclusive to iPhone 16 users thanks to the new Camera Control.

Cupertino explains that later this year, Camera Control will unlock visual intelligence to help users learn about objects and places faster than ever before. Users can click and hold Camera Control to pull up the hours or ratings for a restaurant they pass, add an event from a flyer to their calendar, quickly identify a dog by breed, and more.

Camera Control will also serve as a gateway into third-party tools with specific domain expertise, like when users want to search on Google to find where they can buy an item, or to benefit from ChatGPT’s problem-solving skills.

Apple Intelligence set to expand in 2025 to more languages

Apple Intelligence expansion. Image source: Apple Inc.

During the iPhone 16 event, Apple announced it would expand Apple Intelligence to more languages in 2025.

According to the company, Apple Intelligence will first launch in U.S. English and will quickly expand to include localized English in Australia, Canada, New Zealand, South Africa, and the U.K. in December.

In 2025, Apple will add the following languages:

  • Chinese
  • French
  • Japanese
  • Spanish

Tim Cook explains Apple and OpenAI’s ChatGPT partnership

Apple AI: Tim Cook explains in an interview. Image source: YouTube/MKBHD

The rumors were true: Apple has partnered with OpenAI. According to the company, the two systems work together seamlessly, but core differences separate them.

With Apple Intelligence, the company ensures that all data stays private through Private Cloud Compute, while OpenAI’s ChatGPT typically collects user data. In an interview with YouTuber Marques Brownlee, Apple CEO Tim Cook explained the core difference between Apple Intelligence and the ChatGPT partnership.

“There’s Private Cloud Computing, and there’s the arrangement with OpenAI,” says Tim Cook. “These two things are different. So, if you look at Private Cloud Compute, we’re utilizing the same basic architecture as the silicon that’s in the iPhone 15. We’re using the same software, and so we believe that we’ve done it in such a way that it’s as safe and secure and private in the Private Cloud Compute as in the device.”

That means Apple won’t collect users’ data, build profiles of them, or sell their data elsewhere. Cupertino’s aim was to take the iPhone’s on-device processing to the next level while preserving the security people are used to on their iPhones.

Tim Cook continues: “So we really, we really worked on this on a lot and put a lot of work behind that arrow to be sure that if you’re working on something that requires world knowledge, so you’re out of the domain of personal context and so forth, then you may want to go and use one of the large language models that are on the market, and we will be selected what we feel is the best one with OpenAI and ChatGPT.”

That said, all personal requests related to Apple’s built-in apps, such as Messages, Mail, and Calendar, will use the company’s own intelligence. In contrast, “world knowledge” requests can be routed to OpenAI’s ChatGPT, and later to other large language models.
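
Taken together, Cook’s description implies a three-tier routing decision: on-device models, Private Cloud Compute for heavier personal-context work, and ChatGPT for world knowledge. This Swift sketch makes that triage concrete; the classification rule is invented for illustration, since Apple hasn’t published how requests are actually triaged:

```swift
enum Destination { case onDevice, privateCloudCompute, chatGPT }

struct Request {
    let text: String
    let needsPersonalContext: Bool  // touches Mail, Messages, Calendar, etc.
    let needsWorldKnowledge: Bool   // facts outside the user's own data
    let fitsOnDevice: Bool          // small enough for the local model
}

// Invented triage rule, loosely following Cook's description: personal
// context stays inside Apple's stack; world knowledge may go to an
// external LLM, and only after user consent (omitted here for brevity).
func route(_ r: Request) -> Destination {
    if r.needsWorldKnowledge { return .chatGPT }
    return r.fitsOnDevice ? .onDevice : .privateCloudCompute
}

let summary = Request(text: "Summarize my mail from Mom",
                      needsPersonalContext: true,
                      needsWorldKnowledge: false,
                      fitsOnDevice: false)
print(route(summary))  // privateCloudCompute
```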

New LLMs can join the party later

While Apple will integrate with OpenAI first, the company plans to work with other LLMs as well. For example, Cupertino is reportedly in talks with Google to license Gemini.

A report also claims Apple will use Baidu for its generative AI functions in China. Baidu’s Ernie Bot is a ChatGPT rival and one of the more than 40 AI models from China that local regulators have approved. A partnership with Apple would be a big win for Baidu, considering the growing competition in the region. 

Apple Intelligence compatible devices

iPhone 15 Pro. Image source: José Adorno for BGR

During the WWDC 2024 keynote, Apple announced which devices will be compatible with Apple Intelligence:

  • iPhone 15 Pro models or newer
  • M1 iPad models or newer (such as the M4 iPad Pro)
  • Apple Silicon Macs running macOS Sequoia

Apple papers suggest where its AI efforts stand

Image source: Pixelmator

AI model for instruction-based image editing

In February, Apple researchers published a paper on an AI model for instruction-based image editing. According to the paper, instruction-based image editing improves the controllability and flexibility of image manipulation via natural commands, without elaborate descriptions or regional masks. The study shows “promising capabilities in cross-modal understanding and visual-aware response generation” as the researchers investigated how multimodal large language models (MLLMs) facilitate edit instructions and MLLM-guided image editing.

This image-editing model can produce concise and clear instructions for the editing process, make Photoshop-style modifications, optimize photo quality, and edit specific elements of a picture, such as faces, eyes, hair, clothes, and accessories.
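
The instruction-to-edit idea can be miniaturized to show what “natural commands” means in practice: parse a plain-English instruction into a concrete image operation. In this toy Swift/Core Image sketch, a keyword lookup stands in for the multimodal LLM the paper’s model actually uses:

```swift
import CoreImage

// Map a plain-English instruction to a Core Image filter. In the paper,
// a multimodal LLM learns this mapping; here it is a keyword lookup.
func edit(_ image: CIImage, instruction: String) -> CIImage {
    let filter: CIFilter?
    if instruction.contains("brighter") {
        filter = CIFilter(name: "CIColorControls",
                          parameters: [kCIInputImageKey: image,
                                       kCIInputBrightnessKey: 0.2])
    } else if instruction.contains("black and white") {
        filter = CIFilter(name: "CIPhotoEffectMono",
                          parameters: [kCIInputImageKey: image])
    } else {
        filter = nil  // unknown instruction: leave the image unchanged
    }
    return filter?.outputImage ?? image
}

let input = CIImage(color: .gray)
    .cropped(to: CGRect(x: 0, y: 0, width: 64, height: 64))
let output = edit(input, instruction: "make it brighter")
print(output.extent)  // (0.0, 0.0, 64.0, 64.0)
```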

MM1: Apple’s AI model

In March, Apple researchers published a paper highlighting how they’re training a new large language model (LLM).

Called MM1, this LLM can integrate text and visual information simultaneously. The paper offers an interesting look at the importance of various architectural components and data choices. The researchers say they were able to “demonstrate that for large-scale multimodal pre-training using a careful mix of image-caption, interleaved image-text, and text-only data is crucial for achieving state-of-the-art (SOTA) few-shot results across multiple benchmarks, compared to other published pre-training results.”

In addition, they showed that “the image encoder together with image resolution and the image token count has a substantial impact, while the vision-language connector design is of comparatively negligible importance.”

Apple’s MM1 AI model uses a family of multimodal models with up to 30 billion parameters, consisting of both dense models and mixture-of-experts (MoE) variants, that are state-of-the-art in pre-training metrics and achieve competitive performance after supervised fine-tuning on a range of established multimodal benchmarks.

ReALM could be better than OpenAI’s GPT-4

iOS 18.1 Apple Intelligence on iPhone 15 Pro: the all-new Siri design. Image source: José Adorno for BGR

Apple researchers have published a paper about another new AI model. According to the company, ReALM is a language model that can understand and successfully handle context of different kinds. With it, users can ask about something on the screen or running in the background, and the model can still understand the context and give the proper answer.

This is the third AI paper Apple has published in the past few months. These studies tease the upcoming AI features of iOS 18, macOS 15, and Apple’s other new operating systems. In the paper, Apple researchers say, “Reference resolution is an important problem, one that is essential to understand and successfully handle context of different kinds.”

One example is a user asking for nearby pharmacies. After a list is presented (something Siri can already do), the user could say, “Call the one on Rainbow Rd.,” “Call the bottom one,” or “Call this number (present onscreen).” Siri can’t perform this second step today, but ReALM could understand the context by analyzing on-device data and complete the query.
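
The paper’s core move is to flatten what is on screen into tagged text that a language model can read, which is what makes a reference like “the bottom one” resolvable. A toy Swift sketch of that encoding, with a rule-based stand-in where ReALM would use its LLM:

```swift
struct OnScreenEntity {
    let label: String   // e.g. a pharmacy name
    let detail: String  // e.g. its phone number
    let y: Int          // vertical position; larger means lower on screen
}

// ReALM-style idea: serialize the screen into tagged text an LLM can read.
func screenAsText(_ entities: [OnScreenEntity]) -> String {
    entities.sorted { $0.y < $1.y }
        .enumerated()
        .map { "[\($0.offset)] \($0.element.label): \($0.element.detail)" }
        .joined(separator: "\n")
}

// Rule-based stand-in for the model's reference resolution.
func resolve(_ reference: String, in entities: [OnScreenEntity]) -> OnScreenEntity? {
    let ordered = entities.sorted { $0.y < $1.y }
    if reference.contains("bottom") { return ordered.last }
    return ordered.first { reference.contains($0.label) }
}

let screen = [
    OnScreenEntity(label: "Rainbow Rd. Pharmacy", detail: "555-0134", y: 120),
    OnScreenEntity(label: "Main St. Pharmacy", detail: "555-0188", y: 480),
]
print(screenAsText(screen))
print(resolve("Call the bottom one", in: screen)?.detail ?? "not found")  // 555-0188
```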

Ferret LLM

This paper explains how a multimodal large language model can understand the user interfaces of mobile displays. The researchers note that while MLLMs have advanced, they still “fall short in their ability to comprehend and interact effectively with user interface (UI) screens.”

This assistant is still far from being released, but once Apple masters the technology, it could be integrated alongside the ReALM model.

BGR will update this guide as we learn more about Apple’s AI efforts.

José Adorno Tech News Reporter

José is a Tech News Reporter at BGR. He has previously covered Apple and iPhone news for 9to5Mac, and was a producer and web editor for Latin America broadcaster TV Globo. He is based out of Brazil.