Google Photos will now let you know if an image you’re looking at has been modified or wholly created with generative AI.
The new feature from the Google service will also let you know if an image has been created by stitching together multiple images using AI.
The initiative is part of efforts by Google and other organizations to be “as transparent as possible” (according to reporting from The Verge) about the sometimes subtle influence of AI rendering technologies in today’s photos and images.
Google has announced that the feature will roll out to its Photos service next week, notifying users of AI manipulation in the images they view there.
John Fisher, engineering director at Google Photos, writes in a recent blog post:
“Photos edited with tools like Magic Editor, Magic Eraser and Zoom Enhance already include metadata based on technical standards from The International Press Telecommunications Council (IPTC) to indicate that they’ve been edited using generative AI.”
He continues: “Now we’re taking it a step further, making this information visible alongside information like the file name, location and backup status in the Photos app.”
Users who want to see the “AI info” section for an image can find it in the image details view of Google Photos, both on its web platform and in the app version.
The new labels won’t only apply to generative AI. According to Google, they’ll also specify if a photo is made of or contains elements from several images stitched together through AI technology.
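For readers curious what those IPTC signals actually look like inside a file, here is a minimal, unofficial sketch: it scans a JPEG’s embedded XMP packet for the “digital source type” values the IPTC vocabulary defines for AI-generated and AI-composited content. The exact fields Google Photos writes are an assumption here, and the file names are placeholders; only the IPTC URIs themselves come from the public standard.

```python
"""Rough check (not Google's implementation) for IPTC digital source
type values that signal generative-AI involvement in a JPEG."""
import sys

# Terms from the public IPTC "digital source type" vocabulary that
# relate to generative AI.
AI_SOURCE_TYPES = {
    "trainedAlgorithmicMedia": "created with generative AI",
    "compositeWithTrainedAlgorithmicMedia": "composite containing AI-generated elements",
}


def describe_ai_metadata(path: str) -> str:
    # The XMP packet embedded in a JPEG is plain UTF-8 XML, so a crude
    # substring search is enough for a rough, read-only inspection.
    text = open(path, "rb").read().decode("utf-8", errors="ignore")
    for term, meaning in AI_SOURCE_TYPES.items():
        if f"digitalsourcetype/{term}" in text:
            return meaning
    return "no AI-related digital source type found in metadata"


if __name__ == "__main__":
    for image_path in sys.argv[1:]:  # e.g. python check_ai_info.py photo.jpg
        print(f"{image_path}: {describe_ai_metadata(image_path)}")
```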
That said, because at least part of this detection depends on Google Photos reading the metadata embedded in an image, there will be ways to circumvent it.
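To illustrate how fragile metadata-based labeling is: a JPEG is just a sequence of marker segments, and the EXIF/XMP data carrying the AI-edit signal lives in APP1 segments. Any re-encode or tool that drops those segments removes the label while leaving the pixels untouched. The sketch below (standard library only, file paths are placeholders) shows the general idea; it is an illustration of the limitation, not a description of how Google’s detection works.

```python
# Copy a JPEG's segments while dropping APP1 (0xFFE1), where EXIF/XMP
# metadata -- including IPTC AI-edit markers -- is stored.
import struct
import sys


def strip_app1_segments(src_path: str, dst_path: str) -> None:
    data = open(src_path, "rb").read()
    assert data[:2] == b"\xff\xd8", "not a JPEG"
    out = bytearray(b"\xff\xd8")
    i = 2
    while i < len(data):
        if data[i] != 0xFF or data[i + 1] == 0xDA:
            # Start of scan / image data: copy the remainder verbatim.
            out += data[i:]
            break
        marker = data[i + 1]
        length = struct.unpack(">H", data[i + 2 : i + 4])[0]
        segment = data[i : i + 2 + length]
        if marker != 0xE1:  # keep every segment except APP1
            out += segment
        i += 2 + length
    open(dst_path, "wb").write(bytes(out))


if __name__ == "__main__":
    strip_app1_segments(sys.argv[1], sys.argv[2])  # e.g. in.jpg out.jpg
```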
However, Fisher adds, “This work is not done, and we’ll continue gathering feedback and evaluating additional solutions to add more transparency around AI edits.”
It’s also worth noting that while Google will use IPTC metadata, it won’t yet adopt the standard created by the Coalition for Content Provenance and Authenticity (C2PA) from the Adobe-led Content Authenticity Initiative.
This is a bit of an odd omission, especially considering Google already integrates C2PA into Search, Ads, and even YouTube.
Implementation gaps aside, it’s good to see a major tech platform making further efforts to keep AI imagery detectable. The sheer saturation of digital media platforms with generative AI “photos” has, predictably, reached firehose proportions, and the flood shows no signs of abating.