Nightshade: The Latest Defense Against AI Art Scraping

In the past year, AI image generators such as Stable Diffusion and Midjourney have become increasingly prevalent in artistic spaces. These tools are built by scraping artwork posted across the internet into the datasets on which their models are trained, which allows users to replicate specific artists' styles without ever asking the artists for permission. The lack of protection for artists' work online, combined with the growing commercial use of AI-generated art, has raised serious concerns about artists' rights within the industry. Despite various calls for lawsuits against companies developing AI tools, such as Meta, OpenAI, and Google, no concrete legislation has been passed to protect the work and personal information of artists online. However, the creators of Nightshade, a data poisoning tool aimed at AI scraping, may have a solution.

Nightshade is a tool developed by researchers at the University of Chicago with the aim of giving power back to artists. It "poisons" data by letting artists add subtle, pixel-level changes to their art that are invisible to the human eye but impair an AI model's ability to label the work correctly. For example, a model may process a poisoned picture of a car as a cow, and after training on enough such images it may generate cows when a user asks for a car. According to the research team, poisoned images were effective at making applications like Stable Diffusion produce increasingly distorted output as training progressed. Ben Zhao, one of the creators of Nightshade, notes that "we show that a moderate number of Nightshade attacks can destabilize general features in a text-to-image generative model, effectively disabling its ability to generate meaningful images". In other words, with enough poisoned samples, Nightshade can heavily degrade the model being trained to generate images.
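To make the mechanism concrete, the sketch below illustrates the general idea behind this kind of feature-space data poisoning: optimizing a tiny, bounded perturbation so that an image's features resemble those of a different concept. This is a conceptual illustration only, not Nightshade's actual code; the small stand-in encoder, the perturbation budget, and the optimization loop are illustrative assumptions, whereas the real attack targets the generative model's own feature extractor.

```python
# Conceptual sketch of feature-space data poisoning (NOT Nightshade's actual code).
# Idea: nudge an image's pixels within a small, near-invisible budget so that an
# encoder "sees" it as a different concept (e.g., a car that encodes like a cow).
import torch
import torch.nn as nn

torch.manual_seed(0)

# Stand-in encoder (assumption): any differentiable image-to-feature network.
feature_extractor = nn.Sequential(
    nn.Conv2d(3, 8, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
)

original = torch.rand(1, 3, 64, 64)   # the artist's image ("car")
anchor = torch.rand(1, 3, 64, 64)     # an image of the target concept ("cow")
target_features = feature_extractor(anchor).detach()

delta = torch.zeros_like(original, requires_grad=True)  # near-invisible perturbation
optimizer = torch.optim.Adam([delta], lr=1e-2)
epsilon = 4 / 255                                        # perceptibility budget

for step in range(200):
    optimizer.zero_grad()
    poisoned = (original + delta).clamp(0, 1)
    # Pull the poisoned image's features toward the "cow" anchor's features.
    loss = nn.functional.mse_loss(feature_extractor(poisoned), target_features)
    loss.backward()
    optimizer.step()
    with torch.no_grad():
        delta.clamp_(-epsilon, epsilon)                  # keep the change tiny

poisoned_image = (original + delta).detach().clamp(0, 1)
print(f"final feature distance: {loss.item():.4f}")
```

A model trained on many such mislabeled-looking images gradually learns the wrong association between the prompt text and the visual concept, which is the effect the researchers report.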

Nightshade is effective because it exploits a security vulnerability in the way models like Stable Diffusion are trained. Because AI image generators rely on enormous amounts of data scraped from across the internet, "poisoned" images have a real chance of slipping into training sets and manipulating the model. Moreover, once a model has been trained on many poisoned images over time, the distortions it has learned are difficult to undo. Zhao acknowledges that this vulnerability carries risks of its own, since people could abuse the data poisoning technique for malicious purposes. However, others, such as Junfeng Yang, a computer scientist at Columbia University who has studied the security of deep-learning systems, believe Nightshade may be a necessary force in making AI companies respect the rights of artists and creators. Yang agrees that Nightshade, if used on a widespread scale, could seriously change the way these companies operate their applications.

The tool shows great potential for protecting artists who want to keep posting online without the added risk of their art being used to train an AI model against them. After months without recourse through copyright or intellectual-property claims, many believe Nightshade may be a powerful force in bringing power back to creators. Professional illustrator Autumn Beverly told MIT Technology Review that tools like Nightshade and Glaze have given her the confidence to post her work online again. The morale boost provided by Nightshade may be the push artists need to mobilize against the growing use of their work by AI image-generating companies.

Works Cited

David, Emilia. “Artists Can Use a Data Poisoning Tool to Confuse Dall-E and Corrupt AI Scraping.” The Verge, 25 Oct. 2023, www.theverge.com/2023/10/25/23931592/generative-ai-art-poison-midjourney.

Heikkilä, Melissa. “This New Data Poisoning Tool Lets Artists Fight Back against Generative AI.” MIT Technology Review, 24 Oct. 2023, www.technologyreview.com/2023/10/23/1082189/data-poisoning-artists-fight-generative-ai/.

“Artists Can Use This Tool to Protect Their Work from A.I. Scraping.” Smithsonian Magazine, Smithsonian Institution, 3 Nov. 2023, www.smithsonianmag.com/smart-news/this-tool-uses-poison-to-help-artists-protect-their-work-from-ai-scraping-180983183/.