Apple avoids “AI” hype at WWDC keynote by baking ML into products

Amid splashy new products like the Apple Silicon Mac Pro and the Apple Vision Pro revealed at Monday’s WWDC 2023 keynote event, Apple presenters never once mentioned the term “AI,” a notable omission given that competitors like Microsoft and Google have been focusing heavily on generative AI at the moment. Still, AI was a part of Apple’s presentation, just by other names.
While “AI” is a very ambiguous term these days, surrounded by both astounding advancements and extreme hype, Apple chose to avoid that association and instead focused on terms like “machine learning” and “ML.” For example, during the iOS 17 demo, SVP of Software Engineering Craig Federighi talked about improvements to autocorrect and dictation:
Autocorrect is powered by on-device machine learning, and over the years, we’ve continued to advance these models. The keyboard now leverages a transformer language model, which is state of the art for word prediction, making autocorrect more accurate than ever. And with the power of Apple Silicon, iPhone can run this model every time you tap a key.
Notably, Apple mentioned the AI term “transformer” in an Apple keynote. The company specifically talked about a “transformer language model,” which means its AI model uses the transformer architecture that has been powering many recent generative AI innovations, such as the DALL-E image generator and the ChatGPT chatbot.
A transformer model (a concept first introduced in 2017) is a type of neural network architecture used in natural language processing (NLP) that employs a self-attention mechanism, allowing it to weigh the importance of different words or elements in a sequence. Its ability to process inputs in parallel has led to significant efficiency improvements and powered breakthroughs in NLP tasks such as translation, summarization, and question answering.
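To make “self-attention” a little more concrete, here’s a minimal sketch that assumes nothing about Apple’s actual model: every token’s vector is compared against every other token’s vector, and the comparison scores become mixing weights. All of those comparisons happen in a single matrix multiplication, which is the parallelism mentioned above.

```python
# Minimal scaled dot-product self-attention, the core operation of a
# transformer. Illustrative only: real models add learned query/key/value
# projections, multiple heads, positional encodings, and stacked layers.
import numpy as np

def self_attention(x: np.ndarray) -> np.ndarray:
    """x: (sequence_length, model_dim) array of token embeddings."""
    d = x.shape[-1]
    # In a trained model, q, k, and v come from learned projections of x;
    # using x directly keeps the sketch short.
    q, k, v = x, x, x
    scores = q @ k.T / np.sqrt(d)  # every token scored against every other token
    scores -= scores.max(axis=-1, keepdims=True)  # for numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax: each row sums to 1
    return weights @ v  # each output row is a weighted mix of all value vectors

tokens = np.random.default_rng(0).normal(size=(5, 8))  # 5 tokens, 8-dim embeddings
print(self_attention(tokens).shape)  # (5, 8): one context-aware vector per token
```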
Interestingly, Apple’s new transformer model in iOS 17 allows sentence-level autocorrections that can finish either a word or an entire sentence when you press the space bar. It also learns from your writing style, which guides its suggestions.
All this on-device AI processing is fairly easy for Apple thanks to a special portion of Apple Silicon chips (and earlier Apple chips, starting with the A11 in 2017) called the Neural Engine, which is designed to accelerate machine learning applications. Apple also said that dictation “gets a new transformer-based speech recognition model that leverages the Neural Engine to make dictation even more accurate.”
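Apple didn’t show any code in the keynote, but for a sense of how third-party developers tap the same hardware: Apple’s coremltools package converts a trained model to Core ML format, and a compute-units setting lets the runtime schedule work on the CPU, GPU, or Neural Engine. The tiny model below is our own placeholder, not anything from Apple.

```python
# Speculative sketch: converting a (placeholder) PyTorch model to Core ML so
# the runtime can dispatch it to the Neural Engine where supported.
import coremltools as ct
import torch

class TinyModel(torch.nn.Module):
    def forward(self, x):
        return torch.relu(x)

traced = torch.jit.trace(TinyModel().eval(), torch.zeros(1, 8))

mlmodel = ct.convert(
    traced,
    inputs=[ct.TensorType(shape=(1, 8))],
    convert_to="mlprogram",
    # ALL lets Core ML pick the CPU, GPU, or Neural Engine per layer;
    # CPU_AND_NE would restrict it to the CPU and Neural Engine.
    compute_units=ct.ComputeUnit.ALL,
)
mlmodel.save("tiny.mlpackage")
```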

During the keynote, Apple also mentioned “machine learning” several other times: while describing a new iPad lock screen feature (“When you select a Live Photo, we use an advanced machine learning model to synthesize additional frames”); iPadOS PDF features (“Thanks to new machine learning models, iPadOS can identify the fields in a PDF so you can use AutoFill to quickly fill them out with information like names, addresses, and emails from your contacts.”); an AirPods Adaptive Audio feature (“With Personalized Volume, we use machine learning to understand your listening preferences over time”); and an Apple Watch widget feature called the Smart Stack (“Smart Stack uses machine learning to show you relevant information right when you need it”).
Apple also debuted a new app called Journal that allows personal text and image journaling (kind of like an interactive diary), locked and encrypted on your iPhone. Apple said that AI plays a part, but it did not use the term “AI.”
“Using on-device machine learning, your iPhone can create personalized suggestions of moments to inspire your writing,” Apple said. “Suggestions will be intelligently curated from information on your iPhone, like your photos, location, music, workouts, and more. And you control what to include when you enable Suggestions and which ones to save to your Journal.”
Finally, during the demo for the new Apple Vision Pro, the company revealed that the moving image of a user’s eyes on the front of the goggles comes from a special 3D avatar created by scanning your face using, you guessed it, machine learning.
“Using our most advanced machine learning techniques, we created a novel solution,” Apple said. “After a quick enrollment process using the front sensors on Vision Pro, the system uses an advanced encoder-decoder neural network to create your digital Persona.”

An encoder-decoder neural network is a type of neural network that first compresses an input into a compact numerical form called a “latent-space representation” (the encoder), then reconstructs the data from that representation (the decoder). We’re speculating, but the encoder part might analyze and compress facial data captured during the scanning process into a more manageable, lower-dimensional latent representation. The decoder part might then use that condensed information to generate its 3D model of the face.
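Continuing that speculation, the toy autoencoder below shows the basic shape of the idea. The layer sizes and the use of plain linear layers are our invention; Apple’s actual Persona network is surely far more elaborate.

```python
# Toy encoder-decoder (autoencoder): the encoder squeezes the input into a
# small latent vector; the decoder reconstructs the input from that vector.
import torch
from torch import nn

class TinyAutoencoder(nn.Module):
    def __init__(self, input_dim: int = 64, latent_dim: int = 8):
        super().__init__()
        # Encoder: compress input_dim features down to a latent_dim code.
        self.encoder = nn.Sequential(
            nn.Linear(input_dim, 32), nn.ReLU(), nn.Linear(32, latent_dim)
        )
        # Decoder: rebuild the original features from the code.
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 32), nn.ReLU(), nn.Linear(32, input_dim)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        latent = self.encoder(x)  # the "latent-space representation"
        return self.decoder(latent)

model = TinyAutoencoder()
face_scan = torch.randn(1, 64)  # stand-in for features from a face scan
print(model(face_scan).shape)   # torch.Size([1, 64]): reconstructed features
```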
The M2 Ultra: an AI powerhouse?

During the WWDC keynote, Apple unveiled its most powerful Apple Silicon chip yet, the M2 Ultra, which features up to 24 CPU cores, 76 GPU cores, and a 32-core Neural Engine that reportedly delivers 31.6 trillion operations per second, which Apple says represents 40 percent faster performance than the M1 Ultra.
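That claim is easy to sanity-check against the M1 Ultra’s published figure of 22 trillion operations per second (our number, not one cited in this keynote):

```python
# Back-of-the-envelope check of the Neural Engine speedup claim, assuming
# the M1 Ultra's published 22 TOPS figure.
m2_ultra_tops = 31.6
m1_ultra_tops = 22.0
speedup = m2_ultra_tops / m1_ultra_tops - 1
print(f"{speedup:.0%} faster")  # ~44%, in the ballpark of Apple's "40 percent"
```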
Interestingly, Apple directly mentioned that this power might come in handy for training “large transformer models,” which to our knowledge is the most prominent mention of AI in an Apple keynote (albeit only in passing):
And M2 Ultra can support an enormous 192GB of unified memory, which is 50% more than M1 Ultra, enabling it to do things other chips just can’t do. For example, in a single system, it can train massive ML workloads, like large transformer models that the most powerful discrete GPU can’t even process because it runs out of memory.
This development has some AI experts excited. On Twitter, frequent AI pundit Perry E. Metzger wrote, “Whether by accident or intent, the Apple Silicon unified memory architecture means that high end Macs are now really amazing machines for running big AI models and doing AI research. There really aren’t many other systems at this price point that offer 192GB of GPU accessible RAM.”
Here, larger RAM means that bigger and ostensibly more capable AI models can fit in memory. The systems in question are the new Mac Studio (starting at $1,999) and the new Mac Pro (starting at $6,999), which could potentially put AI training within reach of many new people, and in the form factor of desktop and tower-sized machines.
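As rough, speculative arithmetic on why that memory ceiling matters: a model’s weights alone claim two bytes per parameter at half precision, so 192GB bounds how large a model can even be loaded, before counting activations, optimizer state, or the OS itself.

```python
# Speculative back-of-the-envelope: how many fp16 parameters fit in 192GB?
# Ignores activations, optimizer state, and OS overhead, which all matter
# greatly for real training runs (training needs far more than the weights).
unified_memory_bytes = 192e9
bytes_per_param_fp16 = 2
max_params = unified_memory_bytes / bytes_per_param_fp16
print(f"~{max_params / 1e9:.0f}B parameters")  # ~96B parameters of raw weights
```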
Only rigorous benchmarks will tell how the performance of these new M2 Ultra-powered machines stacks up against AI-tuned Nvidia GPUs like the H100. For now, it looks like Apple has openly thrown its hat into the generative AI training hardware ring.