Published in AI

Google starts identifying AI fakes

18 September 2024


Fed up with AI fakes crowding out real images

Online search outfit Google will begin labelling AI-generated and AI-edited image search results in the coming months.

In recent months, Google has seen many AI-generated images appear in search results, crowding out legitimate results and making it harder for users to find what they’re actually looking for.

The company will flag such content through the “About this image” window, which will be applied to Search, Google Lens, and Android’s Circle to Search features.

The search outfit said it will apply the technology to its ad services and is considering adding a similar flag to YouTube videos, but according to the announcement post, it will “have more updates on that later in the year.”

Google will not do the detection work itself but will use Coalition for Content Provenance and Authenticity (C2PA) metadata to identify AI-generated images. For those not in the know, the C2PA is an industry group that Google joined as a steering committee member earlier in the year.

This C2PA metadata will be used to track an image’s provenance, identifying when and where it was created, as well as the equipment and software used in its generation.
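To give a flavour of what such provenance data carries, here is a minimal Python sketch of a simplified, manifest-style record and a check for an AI-generation marker. The field names loosely follow the C2PA “actions” assertion and the IPTC digital source type vocabulary, but this is an illustration of the idea only; real C2PA manifests are cryptographically signed structures embedded in the file, not plain dictionaries.

```python
# Illustrative sketch only: a simplified, hypothetical stand-in for the kind
# of provenance record C2PA metadata carries. Real manifests are signed
# binary structures; field names here are loosely modelled on C2PA's
# "c2pa.actions" assertion and IPTC's digital source type terms.

def is_ai_generated(manifest: dict) -> bool:
    """Return True if any action in the simplified manifest marks the
    image as produced by a trained algorithmic model."""
    for assertion in manifest.get("assertions", []):
        if assertion.get("label") != "c2pa.actions":
            continue
        for action in assertion.get("data", {}).get("actions", []):
            if "trainedAlgorithmicMedia" in action.get("digitalSourceType", ""):
                return True
    return False


# Example manifest-like record for an AI-generated image (values are made up).
example = {
    "claim_generator": "ExampleImageTool/1.0",  # hypothetical tool name
    "assertions": [
        {
            "label": "c2pa.actions",
            "data": {
                "actions": [
                    {
                        "action": "c2pa.created",
                        "when": "2024-09-18T12:00:00Z",
                        "softwareAgent": "Example Diffusion Model",
                        "digitalSourceType": (
                            "http://cv.iptc.org/newscodes/"
                            "digitalsourcetype/trainedAlgorithmicMedia"
                        ),
                    }
                ]
            },
        }
    ],
}

print(is_ai_generated(example))  # prints True for this example record
print(is_ai_generated({}))       # prints False: no provenance data at all
```

A search engine doing this at scale would verify the manifest’s signature before trusting any of these fields; an unsigned or tampered record is worthless as provenance, which is why the standard pairs the metadata with cryptographic attestation.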

Amazon, Microsoft, OpenAI, and Adobe have also joined. Still, the standard has received little attention from hardware manufacturers and can currently only be found on a handful of Sony and Leica camera models.

A few prominent AI-generation tool developers have also declined to adopt the standard, such as Black Forest Labs, which makes the Flux model that Elon [look at me] Musk’s Grok uses for its image generation.

Last modified on 18 September 2024