iPhone Product Line Manager Francesca Sweet and Apple Camera Software Engineering Vice President Jon McCormack have provided a new interview to PetaPixel, offering insight into Apple’s approach to camera design and development. Crucially, the executives explained that Apple sees camera technology as a holistic union of hardware and software.
The interview reveals that Apple sees its main goal for smartphone photography as allowing users to “stay in the moment, take a great photo, and get back to what they’re doing,” without being distracted by the technology behind it.
McCormack explained that while professional photographers go through a process to fine-tune and edit their photos, Apple is attempting to refine that process down to the single action of capturing a frame.
“We replicate as much as we can of what the photographer will do in post,” McCormack said. “There are two sides to taking a photo: the exposure, and how you develop it afterwards. We use a lot of computational photography in exposure, but more and more in post, doing that automatically for you. The goal of this is to make photographs that look more true to life, to replicate what it was like to actually be there.”
He went on to describe how Apple uses machine learning to break down scenes into natural parts for computational image processing.
“The background, foreground, eyes, lips, hair, skin, clothing, skies. We process all these independently like you would in Lightroom with a bunch of local adjustments,” McCormack continued. “We adjust everything from exposure, contrast, and saturation, and combine them all together… We understand what food looks like, and we can optimize the color and saturation accordingly to render it much more faithfully.
Skies are notoriously hard to really get right, and Smart HDR 3 allows us to segment out the sky and treat it completely independently and then blend it back in to more faithfully recreate what it was like to actually be there.”
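Apple has not published its pipeline, but the segment-adjust-blend idea McCormack describes can be sketched in a few lines of NumPy. Everything below is hypothetical: the masks, the per-segment exposure and saturation parameters, and the simple luminance-based saturation tweak are illustrative stand-ins, not Apple's actual processing.

```python
import numpy as np

def adjust(region, exposure=1.0, saturation=1.0):
    """Apply simple per-region exposure and saturation tweaks (illustrative)."""
    region = region * exposure                    # exposure gain
    gray = region.mean(axis=-1, keepdims=True)    # per-pixel luminance
    region = gray + (region - gray) * saturation  # scale color away from gray
    return np.clip(region, 0.0, 1.0)

def blend_segments(image, masks, params):
    """Process each segment independently, then blend back using its mask."""
    out = np.zeros_like(image)
    for name, mask in masks.items():
        exposure, saturation = params[name]
        out += mask[..., None] * adjust(image, exposure, saturation)
    return out

# Toy 2x2 RGB image: top row is "sky", bottom row is "foreground".
image = np.full((2, 2, 3), 0.5)
masks = {
    "sky":        np.array([[1.0, 1.0], [0.0, 0.0]]),
    "foreground": np.array([[0.0, 0.0], [1.0, 1.0]]),
}
params = {"sky": (0.8, 1.2), "foreground": (1.1, 1.0)}
result = blend_segments(image, masks, params)
```

On the toy image, the "sky" rows are darkened (exposure 0.8) while the foreground rows are brightened, and the masked results sum back into a single frame, mirroring the segment-independently-then-blend approach described in the quote.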
McCormack also discussed the iPhone 12 lineup’s support for Dolby Vision HDR video recording:

“Apple wants to untangle the tangled industry that is HDR, and how they do that is leading with really great content creation tools. It goes from producing HDR video that was niche and complicated because it needed giant expensive cameras and a video suite to do, to now my 15-year-old daughter can create full Dolby Vision HDR video. So, there will be a lot more Dolby Vision content around. It’s in the interest of the industry to now go and create more support.”
The executives also discussed the camera hardware improvements of the iPhone 12 and iPhone 12 Pro, commenting that “the new wide camera and improved image fusion algorithms make for lower noise and better detail.” The specific camera advancements of the iPhone 12 Pro Max were also an area of interest:
“With the Pro Max we can extend that even further because the bigger sensor allows us to capture more light in less time, which makes for better motion freezing at night.”
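As a back-of-the-envelope illustration of that trade-off: the light a sensor gathers scales with its area, so the iPhone 12 Pro Max’s 47% larger wide-camera sensor (Apple’s stated figure) can collect the same amount of light in roughly 1/1.47 of the time. The exposure value below is a hypothetical example, not a figure from the interview.

```python
# Photons collected scale with sensor area, so a larger sensor
# reaches the same total light in proportionally less time.
area_gain = 1.47          # ~47% larger sensor area (Apple's stated figure)
base_exposure_ms = 33.0   # hypothetical exposure on the smaller sensor

equivalent_exposure_ms = base_exposure_ms / area_gain
print(f"{equivalent_exposure_ms:.1f} ms")  # prints "22.4 ms"
```

A shorter exposure for the same gathered light is what enables the “better motion freezing at night” McCormack mentions: less time for the subject to move while the shutter is open.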
When asked why Apple has chosen to increase the sensor size only now, with the iPhone 12 Pro Max, McCormack revealed Apple’s perspective:
“It’s not as meaningful to us anymore to talk about one particular speed and feed of an image, or camera system,” he said. “As we create a camera system we think about all those things, and then we think about everything we can do on the software side… You could of course go for a bigger sensor, which has form factor issues, or you can look at it from an entire system to ask if there are other ways to accomplish that. We think about what the goal is, and the goal is not to have a bigger sensor that we can brag about. The goal is to ask how we can take more beautiful photos in more conditions that people are in. It was this thinking that brought about deep fusion, night mode, and temporal image signal processing.”
Read the full interview with Sweet and McCormack at PetaPixel.