Artists can now use this data ‘poisoning’ tool to fight back against AI scrapers.
The University of Chicago’s Glaze Project has released Nightshade v1.0, which enables artists to sabotage generative AI models that ingest their work for training.
Nightshade makes invisible pixel-level changes to images that trick AI models into reading them as something else, corrupting the models' image output — for example, causing a model to misidentify a cubist style as a cartoon one.
It’s out now for Windows PC and Apple Silicon Macs.
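To give a sense of the general idea, here is a minimal, purely illustrative sketch of a pixel-level perturbation: a small random nudge to each pixel that is invisible to the eye but changes the data a scraper ingests. This is not Nightshade's actual method — Nightshade optimizes its perturbations specifically to shift what a model learns about a concept — and the `poison_image` function and its `epsilon` bound are hypothetical names chosen for this example.

```python
import numpy as np

def poison_image(image: np.ndarray, epsilon: float = 2.0, seed: int = 0) -> np.ndarray:
    """Toy illustration only: add a small, visually imperceptible
    perturbation (at most +/- epsilon per channel, on a 0-255 scale)
    to an image array. Nightshade's real perturbations are optimized,
    not random."""
    rng = np.random.default_rng(seed)
    delta = rng.uniform(-epsilon, epsilon, size=image.shape)
    poisoned = np.clip(image.astype(np.float64) + delta, 0, 255)
    return poisoned.astype(np.uint8)

# A flat gray 8x8 RGB "image" stands in for an artwork.
img = np.full((8, 8, 3), 128, dtype=np.uint8)
out = poison_image(img)

# Per-pixel change stays within the small epsilon budget,
# so a human viewer sees essentially the same picture.
diff = np.abs(out.astype(int) - img.astype(int))
print(diff.max())
```

The point of the sketch is only that tiny per-pixel changes can alter the training data without visibly altering the art; the hard part, which Nightshade solves, is choosing those changes so the model learns the wrong concept.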