March 4, 2024

When the Internet becomes unknowable

At the beginning of the horrible war in Israel and Gaza, a new media reality became clear: real-time information on social media is less reliable than ever. X, the social network formerly known as Twitter and the most popular platform for breaking news, apparently no longer has the ability or the will to combat misinformation. Images of fireworks celebrations in Algeria have been presented as evidence of Israeli attacks on Hamas, video game graphics have been passed off as reality, and a clip from the Syrian war has been recycled and amplified on X as if it were new.

Recent decisions made by the platform’s owner, Elon Musk, are complicating the problem. On Twitter, a blue tick meant that a user’s identity had been validated. It wasn’t a perfect system, but it helped find reliable sources. Under Musk’s direction, the platform removed blue ticks from journalists’ accounts and offered them to virtually anyone willing to pay $8 a month for a premium subscription. These account holders share revenue with X when their content goes viral, incentivizing them to share engaging content whether true or not, and the algorithm gives their posts more weight in users’ feeds.

In addition, under Musk the team responsible for combating misinformation was reportedly reduced from 230 employees to about 20. While a voluntary system called Community Notes allows X users to flag and potentially debunk inaccurate content, users have complained that these notes can take days to appear, if they appear at all.

While X’s performance has been so poor that European commissioner Thierry Breton announced an investigation into the platform’s handling of misinformation during the war between Israel and Hamas, an even bigger misinformation crisis is unfolding. Simply put, journalists, activists, and academics who study misinformation on social platforms no longer have the tools to do their jobs or a safe environment to work in.

Researchers began taking digital misinformation and disinformation seriously as a political force in 2016, when the Brexit campaign in the UK and the Trump campaign in the US featured prominent hoaxes in digital spaces. Studying the 2016 US campaigns, a Harvard team led by my colleague Yochai Benkler concluded that the most influential disinformation was not always stories made up from scratch, but rather propaganda that amplified some facts and frames at the expense of others. While stories about Eastern European teenagers writing pro-Trump political fiction gained widespread coverage, more important were stories from right-wing blogs and social media accounts, amplified within a right-wing media ecosystem and ultimately by the mainstream media, if only to refute them.

Benkler’s analysis, and that of many others, was based on data from Twitter’s API (Application Programming Interface), a stream of platform data accessible to academics, journalists, activists and any other interested party. In March 2023, looking for a new revenue stream, Twitter announced that research access to the API would start at $42,000 per month, putting it out of reach for most researchers. Other platforms, notably Reddit, which was also popular among academic researchers, followed suit.
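To make concrete what researchers lost, here is a minimal sketch of the kind of query they once ran against Twitter’s API. The endpoint and parameter names follow the public Twitter API v2 documentation; the search terms and the helper function are illustrative assumptions, not anything described in this article.

```python
import urllib.parse

def build_recent_search_url(query, max_results=100):
    """Build a Twitter API v2 recent-search URL for a research query.

    This only constructs the request; actually running it requires a
    bearer token, which researchers could obtain cheaply before 2023.
    """
    base = "https://api.twitter.com/2/tweets/search/recent"
    params = {
        "query": query,
        "max_results": max_results,  # the API allows 10-100 per page
        "tweet.fields": "created_at,public_metrics",
    }
    return base + "?" + urllib.parse.urlencode(params)

# Hypothetical research query: English-language posts about fact-checking,
# excluding retweets, as a disinformation researcher might phrase it.
url = build_recent_search_url('("misinformation" OR "fact check") lang:en -is:retweet')
print(url.split("?")[0])  # prints the endpoint the request would hit
```

A researcher would then issue a GET request for this URL with a bearer token and page through results via the `next_token` field in each response; at scale, that pipeline is what the $42,000-per-month pricing put out of reach.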

Historically, Facebook and Instagram have been much more protective of their APIs, but CrowdTangle, a tool developed by activists to see how their content performs on social media, provided insights into the content on these platforms. Facebook (whose parent company, Meta, owns Instagram) bought the tool in 2016, and it was led by one of its founders, Brandon Silverman, until he left in 2021 amid an acrimonious atmosphere. In 2022, Meta stopped accepting new applications to use the tool, and users reported that the project appeared under-resourced, with bugs going unfixed.

Losing the tools to study social media (leaving outside researchers unable to determine, for example, whether X is doing an adequate job of removing misinformation) would be problematic enough on its own. But another set of barriers has made researchers’ work even more difficult.

In 2023, X sued the Center for Countering Digital Hate (CCDH), a nonprofit that had reported in July of that year that hate speech targeting minority communities had increased since Musk bought the platform in October 2022. X CEO Linda Yaccarino called CCDH’s allegations false, and the lawsuit seeks unspecified damages. It is difficult to see the action as anything other than an attempt to silence research into the platform. When the world’s richest man makes it clear that he will sue, the risk of criticizing his favorite toy increases substantially.

But an angry Musk isn’t the only powerful individual disinformation researchers are facing. The Judiciary Chairman of the US House of Representatives, Republican Congressman Jim Jordan, has been seeking information from academics who have studied the amplification of falsehoods on digital platforms. These requests, addressed to university professors, seek years of communications in the expectation of exposing a “censorship regime” involving these researchers and the United States government. The requests are costly for institutions to answer and add to the emotional burden on academics, who face harassment once their alleged role in social media “censorship” is publicized.


This constellation of factors (the rise of misinformation on some platforms, the shutdown of tools used to study social media, lawsuits against disinformation researchers) suggests we may face an uphill battle to understand what happens in the digital public sphere in the near future. That is very bad news as we approach 2024, a year that features key elections in countries including the United Kingdom, Mexico, Pakistan, Taiwan, India and the United States.

The elections in Taiwan are of particular interest to China, and journalists report that Taiwan has been inundated with disinformation portraying the United States as a threat to the territory. One story claimed that the Taiwanese government would send 150,000 blood samples to the United States so that the United States could design a virus that would kill the Chinese. The goal of these stories is to encourage Taiwanese voters to oppose alliances with the United States and push for closer ties with mainland China. Taiwanese NGOs are developing fact-checking initiatives to combat false narratives, but are also affected by reduced access to information on social media.

Indian Prime Minister Narendra Modi has enacted legislation to combat fake news on social media, and it seems likely that these new laws will target government critics more effectively than Modi supporters. Meanwhile, the 2024 US presidential election is shaping up to be a battle of disinformation artists. Serial liar Donald Trump, who made more than 30,000 false or misleading claims in his four years in office, is running not only against incumbent Joe Biden but also against anti-vaccine crusader Robert F. Kennedy, who was banned from Instagram for medical misinformation before his account was restored when he became a presidential candidate.

If there is any hope for our ability to understand what will really happen on social media next year, it may come from the European Union, where the Digital Services Act requires transparency from platforms operating on the continent. But law enforcement actions are slow, and wars and elections are fast by comparison. The rise of misinformation around Israel and Gaza may indicate a future in which what happens online is literally unknowable.
