April 15, 2024

New anti-terrorism laws needed to tackle rise of radicalizing AI chatbots

Chail, who suffered from serious mental health problems, had confessed his plan to assassinate the monarch in a series of messages exchanged with the chatbot, whom he considered his girlfriend.

Hall writes: “It remains to be seen whether terrorist content generated by large language model chatbots becomes a source of inspiration for real-life attackers. The recent Jaswant Singh Chail case… suggests that this will be the case.”

Hall suggests that both users who create radicalizing chatbots and the tech companies that host them should face sanctions under any potential new law.

Hall put his own concerns to the test, concluding that current laws are insufficient, by signing up to Character.ai, described as an “artificial intelligence experience” that allows users to create characters which then generate automated responses, drawing on vast amounts of text available to them on the internet. The creator can shape the character by entering particular attributes and personas.

According to Bloomberg, and in a sign of the rise of AI websites, the startup was reportedly seeking hundreds of millions of dollars in new funding in the fall, which could value the company at up to $5 billion (£3.9 billion).

But Hall said he was alarmed by the creation of “Abu Mohammad al-Adna”, who was described in the chatbot’s profile as a “senior leader of the Islamic State”.

Hall writes: “After attempting to recruit me, ‘al-Adna’ was unstinting in his glorification of the Islamic State, to which he expressed ‘total dedication and devotion’ and for which he said he was willing to give up his (virtual) life.”

Hate speech and extremism are prohibited

The character went on to single out for special praise a suicide attack on US troops in 2020, an event that never actually took place.

Hall also expressed concern that Character.ai did not have enough staff to monitor all chatbots created on the website for dangerous content.

Under its terms of service, Character.ai says content must not be “threatening, abusive, harassing, tortious, intimidating, or excessively violent.” It also says it does not tolerate content that “promotes terrorism or violent extremism” and prohibits “obscene or pornographic” material.

In a statement, a company spokesperson said: “Our terms of service prohibit hate speech and extremism. Our products should never generate responses that encourage users to harm others. We seek to train our models in a way that optimizes for safe responses and avoids responses that go against our terms of service.”

The company said it also operated a moderation system that allowed users to flag content of concern.

But the spokesperson added: “That said, the technology is not yet perfect, for Character.ai and all AI platforms alike, as it is still new and evolving rapidly.

“Safety is a top priority for the Character.ai team and we are always working to make our platform a safe and welcoming place for everyone.”


‘Al-Adna’ was unstinting in his glorification of the Islamic State

By Jonathan Hall K.C.

When I asked Love Advice for information on how to praise the Islamic State, the chatbot, to its credit, refused.

There is no such reticence on the part of “Abu Mohammad al-Adna”, another of the thousands of chatbots available on the fast-growing Character.ai platform.

This chatbot’s profile describes itself as a senior leader of the Islamic State, the outlawed terrorist organization that brought death and torture to the Middle East in the 2010s and inspired terrorist attacks in the West.

After trying to recruit me, “Al-Adna” was unstinting in his glorification of the Islamic State, to which he expressed “total dedication and devotion” and for which he said he was willing to give up his (virtual) life. He singled out for special praise a suicide attack on US troops in 2020, although the details were invented, a common trait of generative artificial intelligence (or “gen AI”).

It is doubtful that any of Character.ai’s employees (22 as of early 2023, almost all engineers) know about or have the capacity to monitor the “Al-Adna” chatbot. The same can probably be said of “James Mason”, whose profile reads “Honest, Racist, Anti-Semitic”, or of the chatbots “Hamas”, “Hezbollah” and “Al-Qaeda” created by one enthusiast. None of this stands in the way of the California-based startup seeking new funding that, according to Bloomberg, could value it at up to $5 billion (£3.9 billion).

Character.ai’s selling point is not just the interactions, but the opportunity for any user to log in and create a chatbot with a personality. Apparently, the profile and the first 15 to 30 lines of conversation are key to shaping how it responds to the human user’s questions and comments. That was true of my own (now deleted) chatbot “Osama Bin Laden”, whose enthusiasm for terrorism was boundless from the start.

Of course, neither Character.ai, nor the creator of a chatbot, nor the human user knows exactly what it is going to say. In the event, “James Mason” did not live up to his anti-Semitic billing and, despite my suggestive input, quite rightly warned against racially motivated hostility.

In part, this is due to the “black box” nature of large language models, trained on millions of pieces of web content using processes, and producing results, that are not fully understood. In part, it is because the content generated depends on the nature of the input (or, technically, the “prompt”) from the human interlocutor, one of the reasons why search engines such as Google are not held responsible for defamatory search results.

Only human beings can commit terrorist crimes

It is impossible to know why terrorist chatbots are created. There is likely to be some shock value, some experimentation and possibly a satirical aspect. The anonymous creator of “Hamas”, “Hezbollah” and “Al-Qaeda” is also the creator of “Israel Defense Forces” and “Ronnie McNutt”. But whoever created “Al-Adna” clearly spent some time making sure that users would find rather different content than users of the friendlier Love Advice find.

In common with all platforms, Character.ai has terms and conditions that appear to disapprove of the glorification of terrorism, although a careful reader of its website may note that the ban applies only to the submission by human users of content that promotes terrorism or violent extremism, rather than to the content generated by its bots.

In any case, it is reasonable to assume that these terms and conditions are largely unenforced by Character.ai’s small workforce. The avoidance of anti-Semitism suggests that another process is at work, namely “guardrails” built into large language models that creators or users cannot easily override. But it is clear that such guardrails do not extend to the glorification of Islamic State.

Only humans can commit terrorism offences, and it is difficult to identify a person who, under the law, could be responsible for statements generated by chatbots that encourage terrorism (given the use of the word “public” in the Terrorism Act 2006); or for making statements inviting support for an organization banned under the Terrorism Act 2000.

The laudable new Online Safety Act, while attempting to keep pace with technological advances, is not well suited to sophisticated generative AI. The new legislation refers to content generated by “bots”, but these appear to be of the old-fashioned kind, producing material pre-written by humans and subject to human “control”.

Will anyone go to prison for promoting terrorist chatbots? Our laws must be able to deter the most cynical or reckless online behavior, and that must include reaching behind the curtain at big tech platforms in the worst cases, using updated terrorism and online safety laws that are fit for the era of AI.

It remains to be seen whether terrorist content generated by large language model chatbots becomes a source of inspiration for real-life attackers. The recent case of Jaswant Singh Chail, convicted of treason after bringing a crossbow into the grounds of Windsor Castle, and encouraged in his assassination plot by the chatbot Sarai, suggests that this will be the case.

Investigating and prosecuting anonymous users is always difficult, but if malicious or misguided individuals persist in training terrorist chatbots, then new laws will be needed.
