The Google Pixel 9’s AI Camera Features Let You Reshape Reality


Google’s Pixel smartphones have long been known for their great camera systems, but in recent years the company has taken to juicing its imaging platform with artificial intelligence features that expand its capabilities. Taking a photo is no longer just about tapping a shutter button and getting a nice picture. Now there are ways to erase undesirable objects, move subjects in your photos and let AI fill in the background, and even remove annoying sounds like sirens from videos. This year’s Pixel 9 series goes even further with more generative AI capabilities that can alter, improve, and generally goose your photos.

Google says it has completely rebuilt the Pixel 9 series’ HDR+ pipeline—the image processing algorithm that ensures your photos have the right levels of contrast, exposure, colors, and shadows. But new features like Add Me, Reimagine, Autoframe, and Zoom Enhance go past the capture stage and make it easier for anyone to perform tasks that previously required a little technical know-how in a photo-editing app. Here’s the breakdown.

And for a deep dive on how these features were developed, read my exclusive interview with members of Google’s Pixel camera team.


Add Me

Photograph: Joel Chokkattu

You’ve probably been in a situation where you want to take a photo with your partner or family in front of a subject, like the Eiffel Tower, but someone has to take the picture, right? Instead of handing your $1,000 phone to a stranger, you can use Add Me to accomplish the same task.

This is a special mode in the Pixel 9 phones that first asks you to briefly scan the surrounding area. You then snap a picture of your loved one in front of the subject and swap places. When they take over photo-capture duties, they’ll see a faded-out image of themselves in the camera preview, and the camera app will suggest a place for the second person to stand. Once they press the shutter button, it superimposes the images so it appears as if both people were standing right next to each other, even though they weren’t.

It worked well in my brief testing, and naturally, I tried to see if I could duplicate myself. This worked once, but every other attempt failed. That’s because, according to Google, the feature wasn’t designed for the same person to show up twice. Maybe if you change your shirt or try to look different enough, it might do the trick. I’ll need to do more testing to see how well it works when you want to put your arm around another person’s shoulder.

Reimagine in Magic Editor

Photograph: Joel Chokkattu

Reimagine is the latest addition to Google’s Magic Editor, which currently lets you move subjects around a photo or erase objects. This new tool lets you select an area of a photo, after which a text prompt pops up where you can type in what you want to see as your end result. This can be anything from turning the photo from daytime to nighttime, to adding stormy clouds, to, as I tried, adding a UFO over the Empire State Building.

The more descriptive you are, the better the results. However, Google says it works best with backgrounds and objects instead of people—there are guardrails in place so that you don’t alter how someone looks. It’s similar to Samsung’s Sketch to Image feature in its latest folding phones, except Samsung asks you to sketch what you want to see rather than using text.

Reimagine isn’t perfect—sometimes it didn’t produce results matching what I typed in, and sometimes the results were just plain bad. But you do get four results to choose from, and you can always try again and be more descriptive.

Autoframe

Photograph: Joel Chokkattu

Composition is important in photography, and if adding grid lines in your camera app doesn’t help you line things up (yes, most smartphone cameras offer this feature in the camera settings), Google thinks this is another task generative AI can help with.

Autoframe lives in Magic Editor much like Reimagine. Once you’re editing a photo, you’ll see the option to select Autoframe. Tap this and it will generate four images with different framing. For example, I intentionally took a photo where I was standing very close to the edge of the frame. Not great compositionally! I used Autoframe, and it generated pixels above and to the right side of me, pushing me closer to the center, following the classic rule of thirds. It even gave me a vertical crop of an originally horizontal photo.

These “generated pixels” essentially understand the context of the photo and expand the edges of the frame so that it looks natural, even if it’s all artificial. In the images I tested it with, it did not know how much of the tree was really to my left, or how far the fence went, so it made some assumptions. If you look closely you can probably find some mistakes, but most people will never notice the difference.

Zoom Enhance Is Finally Here

Photograph: Joel Chokkattu

Google first announced Zoom Enhance with the Pixel 8, but it never shipped because it wasn’t ready. Now, it’s finally launching in the Pixel 9 series (and will arrive on Pixel 8 phones at a later date). Currently, if you zoom into a photo pre-capture, Google uses its Super Res Zoom algorithm to ensure the image is sharper than what you’d get with typical digitally zoomed-in photos. Zoom Enhance, however, is a post-capture feature.

In the Google Photos app, select the photo you want to zoom in on, tap the Edit button, and then go to Tools to find Zoom Enhance. You’ll have to zoom to the area you want, then tap Zoom Enhance, and just like in the early-2000s CSI shows, it’ll enhance the photo by generating pixels to make it appear sharper. I tried it on some faraway buildings, and the results delivered sharper lines that looked much cleaner than the previously pixelated image.

Other Camera Improvements

Photograph: Julian Chokkattu

There are a few other notable camera upgrades in the Pixel 9 series, including the Pixel 9 Pro Fold.

Video Boost Gets a Boost

Last year, Google’s Pixel 8 Pro introduced Video Boost. When enabled, your video would be sent to Google’s cloud servers to be processed, improving the video quality and brightness while reducing noise and upgrading the stabilization. Sometime later, you’d get a notification saying your video is ready, and you’d be able to share it. (You still get access to the original, non-boosted video to share immediately too.)

This has been upgraded on the Pixel 9 Pro series to process twice as fast as last year’s phones. Now, when you move to the Night Sight tab in the Video mode, Video Boost will automatically turn on (so you don’t have to remember). It works with the telephoto camera, even when you zoom up to 20X with Super Res Zoom, and can upscale your footage to 8K resolution. Google says it should deliver the “best smartphone video,” though I’ll have to test that claim.

Even Better Panoramas

Panorama is one of those features you use sparingly, like when you find yourself at the top of a mountain at the end of a hike. It hasn’t gotten much love, but Google says the feature has been completely rebuilt with the latest HDR+ and Night Sight pipeline, and there’s also a new way to capture the photos. When you start a panorama, you choose a direction to move (left or right), and then you’ll find spots along the horizon where you’ll have to pause for the phone to capture a photo. The capture process is very similar to Google’s Photosphere feature.