• 2024-06-22

Apple Launches a Developer Preview and Reveals Its Use of Google Processors to Train AI Models

On July 29th local time, Apple Inc. officially unveiled the developer preview of its highly anticipated suite of artificial intelligence features, known as Apple Intelligence.

It is important to note that currently, this feature is part of the developer beta updates for iOS 18.1, iPadOS 18.1, and macOS 15.1, and is not yet open for public testing. It is only available to registered Apple developers, with an annual fee for the developer program set at $99.

Even after updating the system, developers must join a waitlist within the Settings app to gain access, because some complex requests must be processed on Apple's servers.

Apple has previously said that the official version of Apple Intelligence is expected to be released to the public later this year, but the 18.1 version number suggests it may not launch in sync with the new iPhone hardware expected this fall.


Compatibility has also raised concerns: the feature currently supports only the iPhone 15 Pro and iPhone 15 Pro Max and newer models, and on iPad and Mac it requires devices with Apple Silicon chips.

Investors also have high expectations for Apple Intelligence, believing that the close integration of AI with Apple's operating systems may stimulate a massive wave of device upgrades in the coming years.

 

On the same day, Apple also released a technical document called Apple Intelligence Foundation Language Models, which introduces the two core AI models behind Apple Intelligence from a technical perspective.


Overview of Apple Intelligence's Features

 

As summarized by the media, the Apple Intelligence features currently available in the developer preview include:

 

1. A brand-new Siri design that makes the screen edges glow.

2. Enhanced Siri comprehension, so that commands are understood even when the user speaks haltingly.

3. Siri can answer questions related to troubleshooting issues with Apple products.

4. Enhanced photo search and short film (movie) creation features.

5. AI-generated summaries are provided for emails, messages, and voicemail transcriptions.

6. Writing Tools, Apple's text generation service.

However, some features showcased at the developer conference in June are not yet included; Apple has indicated that these will be rolled out gradually over the next year:

1. Image generation

2. Emoji generation

3. Automatic photo organization

4. Other Siri improvements, including the ability to use personal information and to perform actions within apps

5. Integration with OpenAI's ChatGPT

 

AI Model Training: Apple Chooses Google's Tensor Processing Units (TPUs)

In a technical paper, Apple revealed that the AI models supporting Apple Intelligence were pre-trained on processors designed by Google.

This choice indicates that tech giants are seeking alternatives to Nvidia's graphics processing units (GPUs) for training cutting-edge AI models.

The Apple Foundation Model (AFM) comes in two variants, an on-device model (AFM-on-device) and a server model (AFM-server), both trained on "cloud TPU clusters." In other words, Apple rents servers from a cloud provider to perform the computation.

Apple states that AFM-on-device is a model of approximately 3 billion parameters designed to run efficiently on device, while AFM-server is a large server-side language model designed for private cloud computing.

Specifically, AFM-on-device was trained on a single "slice" of 2048 TPU v5p chips, while AFM-server was trained on 8192 TPU v4 chips configured to work together as 8 "slices" across the data center network.

Google's latest TPU costs less than $2 per hour of use when booked for three years. Google first built the TPU in 2015 for internal workloads and made it publicly available in 2017; today it is among the most mature custom chips designed specifically for artificial intelligence.
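Combining the cluster sizes and pricing quoted above gives a rough sense of scale. A minimal back-of-envelope sketch, assuming the sub-$2-per-chip-hour rate applies uniformly to both clusters (the article quotes this rate only for Google's latest TPU, and does not state training duration, so only an hourly cost can be estimated):

```python
# Back-of-envelope hourly rental cost for the TPU clusters described above.
# Assumption: the quoted ~$2/chip-hour (three-year booking) applies to both
# TPU v5p and TPU v4; actual pricing likely differs by generation.

AFM_ON_DEVICE_CHIPS = 2048      # one slice of TPU v5p
AFM_SERVER_CHIPS = 8 * 1024     # 8192 TPU v4 chips across 8 slices
RATE_PER_CHIP_HOUR = 2.0        # USD, upper bound quoted in the article

for name, chips in [("AFM-on-device", AFM_ON_DEVICE_CHIPS),
                    ("AFM-server", AFM_SERVER_CHIPS)]:
    cost = chips * RATE_PER_CHIP_HOUR
    print(f"{name}: {chips} chips -> up to ${cost:,.0f} per hour")
```

At these rates, the AFM-server cluster alone would cost up to about $16,384 per hour of training, which illustrates why frontier-model training is dominated by compute rental costs.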

It is worth noting that Google remains an important Nvidia customer: it uses both Nvidia's GPUs and its own TPUs to train AI systems, and it also offers Nvidia's technology on its cloud platform.

Apple has previously stated publicly that inference (running pre-trained AI models to generate content or make predictions) will be carried out in part on Apple's own data center chips.

This is Apple's second technical paper on its AI system. A more general paper released in June had already mentioned that Apple was developing its AI models using TPUs.

Looking ahead, with the release of the developer betas of iOS 18.1, iPadOS 18.1, and macOS Sequoia 15.1, developers finally have the opportunity to try the features of Apple Intelligence.

To participate in testing, developers need to open "Apple Intelligence & Siri" in the Settings app and tap "Join the Apple Intelligence Waitlist." The waitlist exists to prevent server overload when developers first attempt to use the features.

Although a specific date has not been announced, the public release of Apple Intelligence is expected this fall. Apple typically also releases a public beta for new operating system updates (the developer beta requires a developer account), but no details have been announced at this time.

Overall, the introduction of Apple Intelligence will mark a significant advancement for Apple in the field of AI. By deeply integrating AI into its ecosystem, Apple hopes to further enhance the user experience, strengthen its ecosystem, and lay the foundation for the development of future AI technologies.
