One of the new features of iOS 16, and something that was highlighted again during Apple’s event on Wednesday, is personalized spatial audio. Once you’ve installed the update, which rolls out on September 12, you’ll be able to create a custom sound profile that should improve the sense of immersion and overall spatial audio experience you get from AirPods.
To produce this personalized setting, Apple uses the iPhone’s front-facing TrueDepth camera to scan your ears. The process, which involves holding your iPhone about 10 to 20 centimeters from the side of your head, takes less than a minute, and the resulting data is then used to optimize spatial sound for your unique ear shape. “The way we all perceive sound is unique, based on the size and shape of our heads and ears,” said Apple’s Mary-Ann Rau during the keynote. “Personalized Spatial Sound will deliver the most immersive listening experience by precisely placing sounds in the room adjusted for you.”
But Apple is not the first company to go this route. Sony has offered “personalized 360 Reality Audio” since 2019 for supported music services such as Amazon Music, Tidal, Deezer and Nugs.net. Conceptually, the two are very similar: both Sony and Apple try to determine your ear structure and adjust spatial sound processing to account for the unique folds and contours of your ears. The goal is to preserve the 3D sound experience by compensating for the individual acoustic quirks that would otherwise weaken the effect.
Here’s how Sony explained the benefits to me in June, courtesy of Kaz Makiyama, vice president of video and audio at Sony Electronics:
Humans are able to recognize spatial sound sources by the subtle changes in the intensity and timing of sound entering the left and right ears from the sound source. In addition, the sound can depend on our head and ear shape. So by analyzing and reproducing the characteristics of both ears by taking pictures of the ears, this technology enables reproduction of the sound field while wearing headphones.
However, Sony’s approach is a bit more involved than Apple’s. Apple’s ear scan is built right into iOS settings, but to build a personal sound field with Sony’s products, you need to take an actual photo of each ear with the Headphones Connect app and your phone’s camera.
These images are uploaded to Sony’s servers for analysis, and Sony then keeps them for an additional 30 days so they can be used for internal research and feature improvements. The company says the ear images are not personally linked to you during this window.
That’s not to say Apple has the ear scanning procedure completely figured out, either. Throughout the iOS 16 beta period, some users on social media and Reddit have mentioned that the process can feel tedious and sometimes fails to detect an ear. The truth of the matter is that there’s no easy way to simplify the process while also getting a good, accurate reading of your ear shape.
The consensus seems to be that it’s worth the effort: these personalized profiles often make a noticeable difference and can improve your perception of spatial sound. And Apple doesn’t take actual photos: the TrueDepth camera captures a depth map of your head and ears, in much the same way that Face ID learns your facial features.
Apple’s website notes that once you’ve created a personal spatial audio profile from an iPhone, it will be synced across your other Apple devices, including Macs and iPads, to maintain a consistent experience. At least that will be true starting in October: you need upcoming updates to macOS and iPadOS for the sync to work. Personalized spatial audio will be supported on third-generation AirPods, both generations of AirPods Pro, and AirPods Max.
Apple has never claimed to achieve any firsts with personalized spatial audio. The company’s executives have routinely stated that their goal is to come up with the best execution of meaningful features, even though others—in this case, Sony—were already pushing in that direction.