Artists and computer scientists are testing out a new way to stop artificial intelligence from ripping off copyrighted images: “poison” the AI models with visions of cats.
A tool called Nightshade, released in January by University of Chicago researchers, changes images in small ways that are nearly invisible to the human eye but look dramatically different to AI platforms that ingest them. Artists like Karla Ortiz are now “nightshading” their artworks to protect them from being scanned and replicated by text-to-image programs like DeviantArt’s DreamUp, Stability AI’s Stable Diffusion and others.
“It struck me that a lot of it is basically the entirety of my work, the entirety of my peers’ work, the entirety of almost every artist I know’s work,” said Ortiz, a concept artist and illustrator whose portfolio has landed her jobs designing visuals for film, TV and video game projects like “Star Wars,” “Black Panther” and “Final Fantasy XVI.”
“And all was done without anyone’s consent — no credit, no compensation, no nothing,” she said.
Nightshade capitalizes on the fact that AI models don’t “see” the way people do, research lead Shawn Shan said.
“Machines, they only see a big array of numbers, right? These are pixel values from zero to 255, and to the model, that’s all they see,” he said. So Nightshade alters thousands of pixels, a drop in the bucket for standard images that contain millions of pixels, but enough to trick the model into seeing “something that’s completely different,” said Shan, a fourth-year doctoral student at the University of Chicago.

In a paper set to be presented in May, the team describes how Nightshade automatically chooses a concept that it intends to confuse an AI program responding to a given prompt, embedding “dog”…
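The general idea can be sketched in a few lines of code. The snippet below is a minimal illustration of this style of perturbation attack, not the Nightshade algorithm itself: the `encoder` stand-in, the `poison` helper and the `budget` parameter are all hypothetical. It searches for a change that stays within a small per-pixel budget (here 8 of the 255 intensity levels Shan mentions) while pulling the image’s features toward a different concept’s features.

```python
import torch
import torch.nn.functional as F

# Hypothetical stand-in for the image encoder inside a text-to-image model.
# Nightshade targets real encoders; any differentiable feature extractor
# works to illustrate the idea.
encoder = torch.nn.Sequential(
    torch.nn.Conv2d(3, 16, 3, stride=2, padding=1), torch.nn.ReLU(),
    torch.nn.Conv2d(16, 32, 3, stride=2, padding=1), torch.nn.ReLU(),
    torch.nn.AdaptiveAvgPool2d(1), torch.nn.Flatten(),
)
encoder.eval()
for p in encoder.parameters():
    p.requires_grad_(False)  # only the perturbation is optimized

def poison(image, target_image, budget=8 / 255, steps=200, lr=0.01):
    """Nudge `image` so its features resemble `target_image`'s, while keeping
    every pixel within `budget` of the original (nearly invisible to people)."""
    with torch.no_grad():
        target_feat = encoder(target_image)
    delta = torch.zeros_like(image, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)
    for _ in range(steps):
        feat = encoder((image + delta).clamp(0, 1))
        loss = F.mse_loss(feat, target_feat)  # pull features toward the target concept
        opt.zero_grad()
        loss.backward()
        opt.step()
        with torch.no_grad():                 # project back into the per-pixel budget
            delta.clamp_(-budget, budget)
    return (image + delta).detach().clamp(0, 1)

# Toy usage: a "dog" photo perturbed toward "cat" features (random tensors here).
dog = torch.rand(1, 3, 64, 64)
cat = torch.rand(1, 3, 64, 64)
shaded = poison(dog, cat)
print("max per-pixel change:", (shaded - dog).abs().max().item())  # stays under 8/255
```

The projected-gradient loop shown here is a common way to keep a perturbation imperceptible: to a person the image still reads as a dog, but a model trained on many such images learns to associate dog prompts with cat-like features.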