April 20, 2024

OpenAI moves to reduce EU regulatory risk around data privacy

Image credits: David Paul Morris/Bloomberg/Getty Images

While most of Europe was still deep in the Christmas chocolate selection box late last month, OpenAI, the maker of ChatGPT, was busy sending out an email with details of an upcoming update to its terms that looks set to reduce its regulatory risk in the European Union.

The AI giant’s technology has come under early scrutiny in the region over ChatGPT’s impact on people’s privacy, with a number of open investigations into data protection concerns related to how the chatbot processes people’s information and the data it can generate about individuals, including from watchdogs in Italy and Poland. (Italy’s intervention even led to a temporary suspension of ChatGPT in the country until OpenAI revised the information and controls it provides to users.)

“We have changed the OpenAI entity that provides services such as ChatGPT to EEA and Swiss residents to our Irish entity, OpenAI Ireland Limited,” OpenAI wrote in an email to users sent on December 28.

A parallel update to the OpenAI Privacy Policy for Europe further states:

If you live in the European Economic Area (EEA) or Switzerland, OpenAI Ireland Limited, with registered office at 1st Floor, The Liffey Trust Centre, 117-126 Sheriff Street Upper, Dublin 1, D01 YC43, Ireland, is the data controller and is responsible for the processing of your Personal Data as described in this Privacy Policy.

The new terms of use, which list its newly established Dublin-based subsidiary as the data controller for users in the European Economic Area (EEA) and Switzerland, where the bloc’s General Data Protection Regulation (GDPR) is in force, take effect on February 15, 2024.

Users are informed that if they do not agree with the new OpenAI terms they can delete their account.

The GDPR’s one-stop-shop (OSS) mechanism allows companies that process Europeans’ data to streamline privacy oversight under a single lead data supervisor located in the EU Member State where they have their “main establishment,” as the regulatory jargon has it.

Gaining this status effectively reduces the ability of privacy watchdogs located elsewhere in the bloc to act unilaterally on concerns. Instead, they would typically refer complaints to the company’s lead supervisor for consideration.

Other GDPR regulators still retain powers to intervene locally if they see urgent risks. But these types of interventions are usually temporary. They are also exceptional in nature, as most GDPR oversight is channeled through one primary authority. That’s why the status has been so attractive to big tech companies, as it allows the most powerful platforms to streamline privacy oversight of their cross-border processing of personal data.

When asked whether OpenAI is working with Ireland’s privacy watchdog to obtain main establishment status for its Dublin-based entity under the GDPR OSS, a spokeswoman for the Irish Data Protection Commission (DPC) told TechCrunch: “I can confirm that Open AI has been engaged with the DPC and other EU DPAs [data protection authorities] on this matter.”

OpenAI has also been contacted for comment.

The artificial intelligence giant opened an office in Dublin in September, initially hiring a handful of policy, legal and privacy staff, plus some administrative functions.

At the time of writing, it is advertising only five Dublin-based vacancies out of a total of 100 listed on its careers page, so local hiring still appears limited. An EU Member States policy and partnerships lead role based in Brussels, which it is also currently recruiting for, asks applicants to specify whether they are available to work from the Dublin office three days a week. But the vast majority of the AI giant’s open positions are listed as based in San Francisco/US.

One of the five Dublin-based roles OpenAI is advertising is Privacy Software Engineer. The other four are for: account director, platform; international payroll specialist; media relations lead, Europe; and sales engineer.

Who, and how many people, OpenAI hires in Dublin will be relevant to obtaining main establishment status under the GDPR, as it is not simply a matter of filing some legal paperwork and ticking a box. The company will have to convince the bloc’s privacy regulators that the Member State-based entity it names as legally responsible for Europeans’ data is genuinely capable of influencing decision-making in this area.

That means having the right expertise and legal structures to influence and apply meaningful privacy controls to a U.S. parent company.

Put another way, opening a main office in Dublin that simply approves product decisions made in San Francisco shouldn’t be enough.

That said, OpenAI may be looking with interest at the example of X (formerly Twitter), which has kept its main establishment in Ireland since Elon Musk took over, despite the erratic billionaire owner taking an ax to X’s regional staff, ousting relevant experts, and making what appeared to be extremely unilateral product decisions. (So, well, go figure.)

If OpenAI is granted main establishment status in Ireland under the GDPR, gaining lead oversight from the Irish DPC, it would join the likes of Apple, Google, Meta, TikTok and X, to name a few of the multinationals that have chosen to make Dublin their EU home.

Meanwhile, the DPC continues to attract substantial criticism over the pace and cadence of its GDPR oversight of local tech giants. And while recent years have seen a series of headline-grabbing sanctions against big tech companies finally emerge from Ireland, critics point out that the regulator often advocates for substantially lower penalties than its peers. Other criticisms include the glacial pace and/or unusual trajectory of the DPC’s investigations, or cases where it decides not to investigate a complaint at all, or chooses to reframe it in a way that sidesteps the key concern (on the latter, see, for example, this Google ad tech complaint).

Any existing GDPR investigations into ChatGPT, such as those by regulators in Italy and Poland, may still have consequences in terms of shaping regional regulation of OpenAI’s generative AI chatbot, since the investigations, which concern data processing that predates any future main establishment status the AI giant may gain, are likely to be allowed to run their course. But it’s less clear how much impact they will have.

As a refresher, Italy’s privacy regulator has been looking at a long list of concerns about ChatGPT, including the legal basis OpenAI relies on to process people’s data to train its AIs. Meanwhile, Poland’s watchdog opened an investigation following a detailed complaint about ChatGPT, including how the AI bot hallucinates (i.e. fabricates) personal data.

Notably, OpenAI’s updated European privacy policy also includes more detail on the legal bases it claims for processing people’s data, with new wording framing its claimed reliance on a legitimate interests legal basis for processing people’s data to train its AI models as “necessary for our legitimate interests and those of third parties and society in general” [emphasis ours].

OpenAI’s current privacy policy, by contrast, contains a much drier line on this element of its claimed legal basis: “Our legitimate interests in protecting our Services from abuse, fraud, or security risks, or in developing, improving, or promoting our Services, including when we train our models.”

This suggests that OpenAI may intend to defend its vast, non-consensual harvesting of internet users’ personal data for generative AI profit to concerned European privacy regulators by making some kind of public interest argument for the activity, in addition to its own (commercial) interests. However, the GDPR has a strictly limited set of (six) valid legal bases for processing personal data; data controllers cannot simply pick and mix bits from this list to invent their own bespoke justification.

It’s also worth noting that GDPR watchdogs have already been trying to find common ground on how to tackle the tricky intersection of data protection law and big data-driven AI, through a working group set up within the European Data Protection Board last year, although it remains to be seen whether any consensus will emerge from that process. And given OpenAI’s move to establish a Dublin-based legal entity as the controller of European users’ data now, Ireland could well end up with the deciding say in the direction of travel when it comes to generative AI and privacy rights.

If the DPC were to become OpenAI’s lead overseer, it would have the ability, for example, to slow the pace of any GDPR enforcement against the rapidly advancing technology.

Back in April, in the wake of the Italian intervention on ChatGPT, the DPC’s current commissioner, Helen Dixon, warned against privacy watchdogs rushing to ban the technology over data concerns, saying regulators should take time to figure out how to enforce the bloc’s data protection law on AI.

Note: UK users are excluded from OpenAI’s legal basis switch to Ireland; the company specifies that they fall under the purview of its US corporate entity, based in Delaware. (Since Brexit, the EU’s GDPR no longer applies in the UK, although the country retains its own UK GDPR in national law, a data protection regime still historically based on the European framework. That will change as the UK diverges from the bloc’s gold standard in data protection via the rights-diluting ‘data reform’ bill currently making its way through parliament.)


