April 20, 2024
AI

Artists continue to protest against creative companies for using AI

Popular graphics tablet maker Wacom is the latest target of the digital art community after it appeared to use artificial intelligence-generated images in its advertisements. Over the weekend, creatives across X (formerly Twitter) and TikTok noted that Wacom was promoting its Intuos pen tablet with an illustration of a dragon that featured telltale marks of AI generation, such as questionable scale designs and fur that blended unnaturally into other sections of the image.

Wacom removed the images without explanation, sparking speculation that an industry-standard brand for artists was using widely criticized tools to replace them. And it wasn’t the only AI controversy this weekend. Wizards of the Coast (WotC), the publisher behind Magic: The Gathering and Dungeons & Dragons, also issued an apology on Sunday for running an ad with AI-generated elements. The controversies have deepened mistrust around an already complicated question: How can creatives and the companies that work with them navigate an avalanche of images that are easy to create and difficult to conclusively detect?

Many artists have already spoken out against companies that use generative AI. They fear it could affect job security in a wide range of creative professions such as graphic design, illustration, animation and voice acting. But there is a particular sense of betrayal around brands like Wacom, whose core audience is artists. The company has long been considered the industry’s standard supplier of graphics tablets, thanks to the success of products like its Cintiq professional line. At the same time, it faces more competition than ever from rival brands such as XPPen and Xencelabs.

Initially, Wacom did not respond to the artists’ complaints. Several days later, and after the publication of this article, the company issued a contrite statement saying that the images in question had been purchased from a third-party vendor and had not been flagged by the online AI detection tools it used to vet them.

“We want to assure you that using AI-generated images in these assets was not our intention,” Wacom wrote. The company said it can no longer be sure whether AI was used in the images’ creation. “For this reason, we immediately discontinued its use.”

WotC’s problems are simpler, but in some ways they point to a more difficult issue. The company announced in August that it would ban AI-generated images in its products after confirming that AI was used to create some artwork for the Bigby Presents: Glory of the Giants D&D sourcebook. In December, WotC also denied claims that AI-generated images were included in the upcoming 2024 Player’s Handbook for D&D.

Despite this, the company shared a new marketing campaign for its Magic: The Gathering card game on January 4 that was quickly scrutinized for containing strangely deformed elements commonly associated with AI-generated images. The company initially denied that AI was involved, insisting that the image was made by a human artist, only to backtrack three days later and acknowledge that it did indeed contain AI-generated components. In the apology that followed, WotC implied that the issue stemmed from the increasing prevalence of generative AI integrations in widely used creative software, such as Adobe Photoshop’s Generative Fill feature.

“We can’t promise to be perfect in such a rapidly evolving space, especially now that generative AI is becoming standard in tools like Photoshop,” WotC said in its online statement. “But our goal is to always be on the side of human-made art and artists.”

WotC says it is examining how it can work with vendors to better detect unauthorized use of AI in any marketing materials they deliver, but these days that’s easier said than done. Currently, there is no truly reliable means of verifying whether a given image was generated using AI. AI detectors are notoriously unreliable and regularly produce false positives, and other methods, such as Adobe-backed Content Credentials metadata, can only provide information for images created with specific software or platforms.
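To see why metadata-based approaches like Content Credentials only cover images that actually carry the metadata, consider this minimal sketch (in Python, with a hypothetical helper name) that merely checks whether a file embeds the `c2pa` label used by Content Credentials manifests. This is a presence check, not verification — validating a manifest’s cryptographic signatures would require the real C2PA tooling:

```python
from pathlib import Path

def has_c2pa_marker(path: str) -> bool:
    """Naively check whether a file's raw bytes contain the 'c2pa'
    label that Content Credentials manifests embed.

    A hit proves nothing about the manifest's validity, and a miss
    only means no embedded manifest was found -- metadata is routinely
    stripped when images are re-saved, screenshotted, or re-uploaded.
    """
    return b"c2pa" in Path(path).read_bytes()
```

Because re-saving an image silently discards this metadata, a check like this can only ever vouch for images whose provenance chain was never broken — which is exactly the limitation the paragraph above describes.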

Even defining AI-generated content is becoming more difficult. Tools like Firefly, the AI model built into Photoshop and Illustrator, allow users to make quick edits to individual layers. Some creative professionals maintain that these are simply tools artists can benefit from, but others believe any generative AI feature is exploitative because such models are often trained on large amounts of content collected without the knowledge or consent of its creators. Adobe has assured users that its Firefly-powered AI tools train only on content Adobe owns, but that doesn’t mean they pass everyone’s ethical sniff test.

The situation has left onlookers visually scanning projects for telltale signs of inhuman origins, and organizations trusting that artists are not lying about how their content was created. Neither option is exactly foolproof.

That uncertainty has driven a wedge of paranoia and anxiety throughout the online creative community as artists desperately try to avoid contributing to or being exploited by the growing AI infestation. The rapid deployment of generative AI technology has made it incredibly difficult to avoid. The inability to trust artists or brands to reveal how their content is produced has also sparked AI “witch hunts” led by creatives. The goal is to hold companies accountable for using the technology rather than paying designers, but in some cases, the accusations are entirely speculative and actually harm human artists.

Even when companies insist that AI was not involved, creatives are understandably skeptical. A poster for Marvel’s Loki Disney Plus TV series also came under fire last year after some creatives claimed it contained an AI-generated stock asset from Shutterstock. Following its own investigation, Shutterstock said the stock image was created by humans and that a “software tool” had instead been used to create the “subtle creative imperfections most often associated with AI-generated art.” However, Shutterstock declined to say what the software tool in question was.

The headache of trying to avoid generative AI entirely — without unknowingly promoting it or feeding online portfolios into its training data — has proven too much for some creatives. Some artists have cited it as a reason to leave the industry or to contemplate abandoning their studies. At least one art contest, the annual Self-Published Fantasy Blog-Off (SPFBO) cover contest, was shut down entirely after last year’s winner admitted under pressure to using banned AI tools.

But even creatives who don’t expect to stop the development of generative AI want something more substantial from “trust me, bro” companies, especially when those companies depend on their patronage. Artists value honesty and accountability over evasion and equivocation about whether AI tools are being used.

Wacom and WotC ultimately gave similar answers to their respective situations: the offending images came from a third-party vendor, the companies were unaware that AI had been used to create them, and they promise to do better in the future. That has not reassured some artists, who question how apparent hallucinations within the images went unnoticed and why these creative companies weren’t hiring artists directly.

Both cases suggest that pressure campaigns against AI will continue to be a constant force in the creative world. Generative AI may not be going anywhere, but for many companies, using it has become a public relations nightmare.

Update January 9 at 5:11 pm ET: Added Wacom’s response to accusations that it used AI-generated images.
