April 20, 2024
A.I.

Fury as extremely graphic AI images of Taylor Swift go viral and outraged fans call out image creators for harassment and predatory behavior.

Graphic AI images of Taylor Swift are sweeping the internet, showing the singer in a series of explicit acts related to the Kansas City Chiefs, in the latest example of the disturbing rise of deepfake porn.

DailyMail.com has seen the images in question but will not publish them.

They are hosted on Celeb Jihad, one of many deepfake porn websites that continue to outpace cybercrime investigators.

Swift has regularly attended Chiefs games for the past six months to support her boyfriend, star player Travis Kelce.

The images are the latest to emerge from Celeb Jihad, a site previously embroiled in a series of indecent-image scandals. In 2017, the website was sued by celebrities after posting explicit photos that had been hacked from their phones and iCloud accounts.

The abominable sites hide in plain sight, seemingly cloaked by proxy IP addresses.

According to an analysis by independent researcher Genevieve Oh that was shared with The Associated Press in December, more than 143,000 new deepfake videos were posted online in 2023, a number that surpasses all previous years combined.

There are increasing calls for the website to be removed and the owners to be criminally investigated.

Swift photographed leaving Nobu restaurant after having dinner with Brittany Mahomes, wife of Kansas City Chiefs quarterback Patrick Mahomes.

Brittany Mahomes, Jason Kelce and Taylor Swift react during the second half of the AFC Divisional Playoff game between the Kansas City Chiefs and the Buffalo Bills at Highmark Stadium.

The images have sparked outrage from Taylor Swift fans around the world.

On Thursday morning, X began suspending some of the accounts that had shared the images, but others quickly emerged in their place. The images have also been posted on Instagram, Reddit and 4Chan.

Swift has yet to comment on the site or the spread of the images, but her loyal and distraught fans have waged war.

‘How is this not considered sexual assault? I can’t be the only one who finds this strange and uncomfortable.

‘We’re talking about a woman’s body/face being used for something she would probably never allow or be comfortable with. How come there are no regulations or laws preventing this?’ one fan tweeted.

Nonconsensual deepfake pornography is illegal in Texas, Minnesota, New York, Virginia, Hawaii, and Georgia. In Illinois and California, victims can sue porn creators in court for defamation.

“I’m going to need the entire adult Swiftie community to log on to Twitter, search the term ‘Taylor Swift AI’, click on the media tab, and report every AI-generated Taylor porn photo they can see, because I’m fucking done with this nonsense. Get it together, Elon,” wrote one enraged Swift fan.

The raunchy images are themed around Swift’s Kansas City Chiefs fandom, which began when she started dating star player Travis Kelce.

“Man this is so inappropriate,” wrote another. While another said: “Whoever is taking those AI photos of Taylor Swift is going to hell.”

‘Whoever is doing this rubbish should be arrested. What I saw is absolutely repulsive, and this type of shit should be illegal… WE NEED to protect women from things like this,’ another person added.

Explicit AI-generated material that overwhelmingly harms women and children is proliferating online at an unprecedented rate.

Desperate for solutions, affected families are pressing lawmakers to implement strong safeguards for victims whose images are manipulated using new artificial intelligence models or the large number of apps and websites that openly advertise their services.

Advocates and some legal experts are also calling for federal regulation that could provide uniform protections across the country and send a strong message to current and potential perpetrators.

The problem with deepfakes is not new, but experts say it is getting worse as the technology to produce them becomes more available and easier to use.

Biden speaks before signing an executive order to regulate artificial intelligence (AI) in October 2023.

Researchers have been sounding the alarm this year about the explosion of child sexual abuse material generated by AI using representations of real victims or virtual characters.

In June 2023, the FBI warned that it was continuing to receive reports of victims, both minors and adults, whose photos or videos were used to create explicit content that was shared online.

In addition to states that already have laws in place, others are considering their own legislation, including New Jersey, where a bill is currently being drafted to ban deepfake porn and impose penalties (jail time, a fine, or both) on those who spread it.

President Joe Biden signed an executive order in October that, among other things, called for banning the use of generative AI to produce child sexual abuse material or non-consensual “intimate images of real individuals.”

The order also directs the federal government to issue guidelines for labeling and watermarking AI-generated content to help differentiate between authentic and software-created material.

Some groups urge caution, including the American Civil Liberties Union, the Electronic Frontier Foundation and The Media Coalition, an organization that works for trade groups representing publishers, movie studios and others, saying careful consideration is needed to avoid proposals that may run afoul of the First Amendment.

“Some concerns about abusive deepfakes can be addressed by existing cyberstalking laws,” said Joe Johnson, an attorney with the ACLU of New Jersey.

“Whether federal or state, there needs to be substantial conversation and input from stakeholders to ensure that any bill is not overly broad and addresses the issue raised.”

Mani said her daughter has created a website and started a charity with the goal of helping victims of AI abuse. The two have also been in talks with state lawmakers pushing the New Jersey bill and are planning a trip to Washington to advocate for more protections.

“Not every child, boy or girl, will have the support system to deal with this problem,” Mani said. “And they may not see the light at the end of the tunnel.”
