Dubbed Nightshade, the tool fights back against AI companies that use artists' work to train their models without the creators' permission. Using it to "poison" this training data could damage future iterations of image-generating AI models such as DALL-E, Midjourney, and Stable Diffusion by rendering some of their outputs useless, turning objects into something else entirely. A dog might become a cat, cars might turn into cows, and a laughing cavalier into a miserable bus driver on the 7.30 pm train to Nottingham.
Ben Zhao, a professor at the University of Chicago who led the team that created Nightshade, says the hope is that it will help tip the power balance back from AI companies towards artists by creating a powerful deterrent against disrespecting artists' copyright and intellectual property.
Zhao's team also developed Glaze, a tool that allows artists to "mask" their personal style to prevent it from being scraped by AI companies. It works in a similar way to Nightshade: by changing the pixels of images in subtle ways that are invisible to the human eye but manipulate machine-learning models into interpreting the image as something different from what it actually shows.
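To give a sense of what "invisible pixel changes that mislead a model" means in practice, here is a minimal toy sketch of a targeted adversarial perturbation against a small image classifier. It is not Nightshade's or Glaze's actual algorithm; the stand-in model, the class labels, the step size, and the pixel budget (epsilon) are all assumptions chosen only to make the idea runnable.

```python
# Illustrative sketch only: an imperceptible, targeted pixel perturbation that
# nudges a classifier toward a different label. NOT the Nightshade/Glaze method.
import torch
import torch.nn as nn

# A stand-in classifier (untrained, for demonstration only).
model = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(16, 10),            # 10 hypothetical classes, e.g. 0 = "dog", 1 = "cat"
)
model.eval()

image = torch.rand(1, 3, 64, 64)   # placeholder for an artist's image
target_class = torch.tensor([1])   # label the model should be nudged toward
epsilon = 4 / 255                  # max per-pixel change: too small to notice by eye

# Projected gradient descent toward the target class, keeping every pixel
# within +/- epsilon of the original so the image looks unchanged to a human.
perturbed = image.clone()
for _ in range(20):
    perturbed.requires_grad_(True)
    loss = nn.functional.cross_entropy(model(perturbed), target_class)
    grad, = torch.autograd.grad(loss, perturbed)
    with torch.no_grad():
        perturbed = perturbed - 0.01 * grad.sign()                    # step toward the target label
        perturbed = image + (perturbed - image).clamp(-epsilon, epsilon)  # stay within the pixel budget
        perturbed = perturbed.clamp(0, 1)                             # remain a valid image

print("model now predicts class:", model(perturbed).argmax(dim=1).item())
```

The key point the sketch illustrates is the asymmetry: the perturbation is bounded so tightly that a viewer sees the same picture, yet it is computed directly against the model's gradients, which is why the model's interpretation can shift while the artwork appears untouched.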
The team intends to integrate Nightshade into Glaze, and artists can choose whether to use the data-poisoning tool. The team is also making Nightshade open source, which would allow others to tinker with it and make their own versions.
The more people use it and make their own versions of it, the more powerful the tool becomes, Zhao says. The data sets for large AI models can consist of billions of images, so the more poisoned images make it into a model's training data, the more damage the technique will cause.