March 4, 2024

There is already a way to share technology freely and prevent its misuse.

There are many proposed ways to put limits on artificial intelligence (AI), given its potential to harm society as well as to benefit it.

For example, the EU AI Act places greater restrictions on systems depending on whether they are classed as general-purpose and generative AI, or are considered to pose limited risk, high risk or unacceptable risk.

This is a bold new approach to mitigating any harmful effects. But what if we could adapt some tools that already exist? Software licensing is a well-known model that could be adapted to meet the challenges posed by advanced AI systems.

Open Responsible AI Licenses (OpenRAILs) could be part of the answer. OpenRAIL-licensed AI is similar to open source software: a developer can make their system public under the license, meaning anyone is free to use, adapt and re-share it under the original license terms.

The difference with OpenRAIL is the addition of conditions on the responsible use of AI. These include not breaking the law, not impersonating people without their consent, and not discriminating against people.

In addition to these mandatory conditions, an OpenRAIL can be adapted to include other conditions directly relevant to the specific technology. For example, if an AI system was created to categorize apples, the developer could specify that it should never be used to categorize oranges, as doing so would be irresponsible.

The reason this model can be useful is that many AI technologies are so general-purpose they could be used for almost anything, which makes it genuinely difficult to predict all the harmful ways in which they might be exploited.

This model therefore allows developers to help drive open innovation while reducing the risk of their ideas being used irresponsibly.

Open but responsible

In contrast, proprietary licenses are more restrictive in how the software can be used and adapted. They are designed to protect the interests of creators and investors and have helped tech giants like Microsoft build vast empires by charging for access to their systems.

Because of its broad scope, AI arguably demands a different, more nuanced approach that could promote the openness that drives progress. Many large companies currently operate private (closed) AI systems. But this could change, as there are several examples of companies using an open source approach.

Meta’s generative AI model Llama 2 and the image generator Stable Diffusion are open source. French AI startup Mistral, established in 2023 and now valued at $2bn (£1.6bn), is soon to openly release its latest model, which is rumored to have performance comparable to GPT-4 (the model behind ChatGPT).

However, openness must be tempered with a sense of responsibility towards society, due to the potential risks associated with AI. These include the potential for algorithms to discriminate against people, replace jobs, and even pose existential threats to humanity.

[Image: HuggingFace is the world’s largest AI developer hub. Jesse Joshua Benjamin, provided by the author]

We should also consider the more mundane, everyday uses of AI. The technology will increasingly become part of our social infrastructure: a central part of how we access information, form opinions and express ourselves culturally.

Such a universally important technology carries its own kind of risk, different from the robot apocalypse, but still very much worthy of consideration.

One way to think about this risk is to compare what AI may do in the future with what free speech does now. The free exchange of ideas is not only crucial for defending democratic values but is also the driving force of culture. It facilitates innovation, encourages diversity and allows us to discern truth from falsehood.

The AI models being developed today are likely to become a primary means of accessing information. They will shape what we say, what we see, what we hear and, by extension, how we think.

In other words, they will shape our culture in the same way that freedom of speech has. For this reason, there is a good argument that the fruits of AI innovation should be free, shared and open. And it turns out that much of this innovation already is.

Limits are needed

On the HuggingFace platform, the world’s largest AI development hub, there are currently more than 81,000 models published under “permissive open source” licenses. Just as the right to speak freely overwhelmingly benefits society, this open AI exchange is an engine for progress.
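As a rough illustration, figures like these can be checked programmatically. Below is a minimal sketch using the huggingface_hub Python client to list models by license tag; the specific tag names used for filtering (“license:openrail”, “license:apache-2.0”) are assumptions about how the Hub labels licenses, not verified values.

```python
# Minimal sketch: querying HuggingFace Hub models by license tag.
# Assumes license information is exposed as "license:<name>" filter tags;
# the exact tag names used here are illustrative assumptions.
from huggingface_hub import HfApi

api = HfApi()

# A few models published under an OpenRAIL-style license.
for model in api.list_models(filter="license:openrail", limit=5):
    print("OpenRAIL:", model.id)

# For comparison, models under a common permissive open source license.
for model in api.list_models(filter="license:apache-2.0", limit=5):
    print("Permissive:", model.id)
```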

However, freedom of expression has necessary ethical and legal limits. False statements that harm others, or expressions of hatred based on ethnicity, religion or disability, are widely accepted limitations. OpenRAILs provide innovators with a means of finding this balance in the realm of AI.

For example, deep learning technology is applied in many valuable domains, but it also underpins deepfake videos. Its developers presumably did not want their work used to spread misinformation or create non-consensual pornography.

An OpenRAIL would have given them the ability to share their work with restrictions prohibiting, for example, anything that breaks the law, causes harm or results in discrimination.

Legally enforceable

Can OpenRAIL licenses help us avoid the ethical dilemmas AI will inevitably pose? Licensing can only go so far: a license is only as good as the ability to enforce it.

Currently, enforcement would likely resemble that used against music copying and software piracy: sending cease-and-desist letters with the prospect of possible court action. While such measures do not stop piracy, they do discourage it.

Despite these limitations, there are many practical benefits: licenses are well understood by the technology community, scale easily and can be adopted with little effort. Developers have recognized this, and to date more than 35,000 models hosted on HuggingFace have adopted OpenRAILs.

Ironically, given the company’s name, OpenAI, the company behind ChatGPT, does not openly license its most powerful AI models. Instead, with its flagship language models, the company takes a closed approach that grants access to anyone willing to pay, while preventing others from developing or adapting the underlying technology.

As with the free speech analogy, the freedom to openly share AI is one we should value, but perhaps not treat as absolute. While not a panacea, license-based approaches such as OpenRAIL look like a promising piece of the puzzle.
