Regulating the future: A look at the EU’s plan to reboot product liability rules for AI

  • 11/4/2022 - 12:02

A recently presented European Union plan to update long-standing product liability rules for the digital age — including addressing the rising use of artificial intelligence (AI) and automation — took some instant flak from the European consumer organization BEUC, which framed the update as something of a downgrade, arguing EU consumers will be left less well protected from harms caused by AI services than from those caused by other types of products.

For a flavor of the sorts of AI-driven harms and risks that may be fuelling demands for robust liability protections, only last month the U.K.’s data protection watchdog issued a blanket warning over pseudoscientific AI systems that claim to perform ’emotional analysis’ — urging that such tech not be used for anything other than pure entertainment. While, on the public sector side, back in 2020 a Dutch court found an algorithmic welfare risk assessment for social security claimants breached human rights law. And, in recent years, the UN has also warned over the human rights risks of automating public service delivery. Additionally, U.S. courts’ use of black-box AI systems to inform sentencing decisions — which critics say opaquely bakes in bias and discrimination — has drawn scrutiny for years.

BEUC, an umbrella consumer group which represents 46 independent consumer organisations from 32 countries, had been calling for years for an update to EU liability laws to take account of growing applications of AI and ensure consumer protection laws are not outpaced. But its view of the EU’s proposed policy package — which consists of tweaks to the existing Product Liability Directive (PLD) so that it covers software and AI systems (among other changes); and a new AI Liability Directive (AILD), which aims to address a broader swathe of potential harms stemming from automation — is that it falls short of the more comprehensive reform package it was advocating for.

“The new rules provide progress in some areas, do not go far enough in others, and are too weak for AI-driven services,” it warned in a first response to the Commission proposal back in September. “Contrary to traditional product liability rules, if a consumer gets harmed by an AI service operator, they will need to prove the fault lies with the operator. Considering how opaque and complex AI systems are, these conditions will make it de facto impossible for consumers to use their right to compensation for damages.”

“It is essential that liability rules catch up with the fact we are increasingly surrounded by digital and AI-driven products and services like home assistants or insurance policies based on personalised pricing. However, consumers are going to be less well protected when it comes to AI services, because they will have to prove the operator was at fault or negligent in order to claim compensation for damages,” added deputy director general Ursula Pachl in an accompanying statement responding to the Commission proposal.

“Asking consumers to do this is a real let down. In a world of highly complex and obscure 'black box' AI systems, it will be practically impossible for the consumer to use the new rules. As a result, consumers will be better protected if a lawnmower shreds their shoes in the garden than if they are unfairly discriminated against through a credit scoring system.”

Given the continued, fast-paced spread of AI — via features such as ‘personalized pricing’ or even the recent explosion of AI-generated imagery — there could come a time when some form of automation is the rule, not the exception, for products and services — with the risk, if BEUC’s fears are well-founded, of a mass downgrading of product liability protections for the bloc’s ~447 million citizens.


Discussing its objections to the proposals, a further wrinkle raised by Frederico Oliveira Da Silva, a senior legal officer at BEUC, relates to how the AILD makes explicit reference to an earlier Commission proposal for a risk-based framework to regulate applications of artificial intelligence — aka, the AI Act — implying a need for consumers to, essentially, prove a breach of that regulation in order to bring a case under the AILD.

Europe lays out plan for risk-based AI rules to boost trust and uptake

Despite this connection, the two pieces of draft legislation were not presented simultaneously by the Commission — there’s around 1.5 years between their introductions — creating, BEUC worries, disjointed legislative tracks that could bake in inconsistencies and dial up the complexity.

For example, it points out that the AI Act is geared towards regulators, not consumers — which could limit the utility of the proposed new information disclosure powers in the AI Liability Directive, given that the EU rules determining how AI makers are supposed to document their systems for regulatory compliance are contained in the AI Act. In other words, consumers may struggle to understand the technical documents they can obtain under the AILD’s disclosure powers, since the information was written for submission to regulators, not an average user.

When presenting the liability package, the EU’s justice commissioner also made direct reference to “high risk” AI systems — using a specific classification contained in the AI Act, which appeared to imply that only a subset of AI systems would attract liability. However, when queried whether liability under the AILD would be limited to the ‘high risk’ AI systems defined in the AI Act (which represent a small subset of potential applications for AI), Didier Reynders said that’s not the Commission’s intention. So, well, confusing much?

BEUC argues a disjointed policy package has the potential to — at the least — introduce inconsistencies between rules that are supposed to slot together and function as one. It could also undermine the application of liability rules and access to redress by creating a more complicated track for consumers to exercise their rights. Meanwhile, the different legislative timings suggest one piece of a linked package for regulating AI will be adopted in advance of the other — potentially opening up a gap in consumers’ ability to obtain redress for AI-driven harms in the interim.

As it stands, both the AI Act and the liability package are still working their way through the EU’s co-legislation process, so much could still be subject to change prior to adoption as EU law.

AI services blind spots?

BEUC sums up its concerns over the Commission’s starting point for modernizing long-standing EU liability rules by warning the proposal creates an “AI services blind spot” for consumers and fails to “go far enough” to ensure robust protections in all scenarios — since certain types of AI harms will entail a higher bar for consumers to achieve redress, as they do not fall under the broader PLD. (Notably ‘non-physical’ harms attached to fundamental rights — such as discrimination or data loss — which will be brought in under the AILD.)

For its part, the Commission robustly defends the package against this critique of an AI “blind spot”. Although whether the EU’s co-legislators, the Council and parliament, will seek to make changes to the package — or even further tweak the AI Act with an eye on improving alignment — remains to be seen.

In its press conference presenting the proposals for amending EU product liability rules, the Commission focused on foregrounding measures it claimed would support consumers to successfully circumvent the ‘black box’ AI explainability issue — specifically the introduction of novel disclosure requirements (enabling consumers to obtain data to make a case for liability); and a rebuttable presumption of causality (lowering the bar for making a case). Its pitch is that, taken together, the package addresses “the specific difficulties of proof linked with AI and ensures that justified claims are not hindered”.

And while the EU’s executive did not dwell on why it did not propose the same strict liability regime as the PLD for the full sweep of AI liability — instead opting for a system in which consumers will still have to prove a failure of compliance — it’s clear that EU liability law isn’t the easiest file to reopen or achieve consensus on across the bloc’s 27 member states (the PLD itself dates back to 1985). So it may be that the Commission felt this was the least disruptive way to modernize product liability rules without opening up the knottier Pandora’s box of national laws which would have been needed to expand the types of harm allowed for in the PLD.

“The AI Liability Directive does not propose a fault-based liability system but harmonises in a targeted way certain provisions of the existing national fault-based liability regimes, in order to ensure that victims of damage caused by AI systems are not less protected than any other victims of damage,” a Commission spokesperson told us when we put BEUC’s criticisms to it. “At a later stage, the Commission will assess the effect of these measures on victim protection and uptake of AI.”

“The new Product Liability Directive establishes a strict liability regime for all products, meaning that there is no need to show that someone is at fault in order to get compensation,” it went on. “The Commission did not propose a lower level of protection for people harmed by AI systems: All products will be covered under the new Product Liability Directive, including all types of software, applications and AI systems. Whereas the [proposed updated] Product Liability Directive does not cover the defective provision of services as such, just like the current Product Liability Directive, it will still apply to all products when they cause a material damage to a natural person, irrespective of whether they are used in the course of providing a service or not.

“Therefore, the Commission looks holistically at both liability pillars and aims to ensure the same level of protection of victims of AI as if damage was caused for any other reason.”

The Commission also emphasizes that the AI Liability Directive covers a broader swathe of damages — by both AI-enabled products and services “such as credit scoring, insurance ranking, recruitment services etc., where such activities are conducted on the basis of AI solutions”.

“As regards the Product Liability Directive, it has always had a clear purpose: to lay down compensation rules to address risks in the production of products,” it added, defending maintaining the PLD’s focus on tangible harms.

Asked how European consumers can be expected to understand what’s likely to be highly technical data on AI systems they might obtain using disclosure powers in the AILD, the Commission suggested a victim who receives information on an AI system from a potential defendant — after making a request for a court order for “disclosure or preservation of relevant evidence” — should seek out a relevant expert to assist them.

“If the disclosed documents are too complex for the consumer to understand, the consumer will be able, like in any other court case, to benefit from the help of an expert in a court case. If the liability claim is justified, the defendant will bear the costs of the expert, according to national rules on cost distribution in civil procedure,” it told us.

“Under the Product Liability Directive, victims can request access to information from manufacturers concerning any product that has caused damage covered under the Product Liability Directive. This information, for example data logs preceding a road accident, could prove very useful to the victim's legal team to establish if a vehicle was defective,” the Commission spokesperson added.

On the decision to create separate legislative tracks — one containing the AILD + PLD update package, the other the earlier AI Act proposal — the Commission said it was acting on a European Parliament resolution asking it to prepare the two liability proposals together “in order to adapt liability rules for AI in a coherent way”, adding: “The same request was also made in discussions with Member States and stakeholders. Therefore, the Commission decided to propose a liability legislative package, putting both proposals together, and not link the adoption of the AI Liability Directive proposal to the launch of the AI Act proposal.”

“The fact that the negotiations on the AI Act are more advanced can only be beneficial, because the AI Liability Directive makes reference to provisions of the AI Act,” the Commission further argued.

It also emphasized that the AI Act falls under the PLD regime — again denying any risks of “loopholes or inconsistencies”.

“The PLD was adopted in 1985, before most EU safety legislation was even adopted. In any event, the PLD does not refer to a specific provision of the AI Act since the whole legislation falls under its regime, it is not subject and does not rely on the negotiation of the AI Act per se and therefore there are no risks of loopholes or inconsistencies with the PLD. In fact, under the PLD, the consumer does not need to prove the breach of the AI Act to get redress for a damage caused by an AI system, it just needs to establish that the damage resulted from a defect in the system,” it said.

Ultimately, the truth of whether the Commission’s approach to updating EU product liability rules to respond to fast-scaling automation is fundamentally flawed or perfectly balanced probably lies somewhere between the two positions. But the bloc is ahead of the curve in trying to regulate any of this stuff — so landing somewhere in the middle may be the soundest strategy for now.

Regulating the future

It’s absolutely true that EU lawmakers are taking on the challenge of regulating a fast-unfolding future. So just by proposing rules for AI the bloc is notably further ahead than other jurisdictions — which of course brings its own pitfalls, but also, arguably, allows lawmakers some wiggle room to figure things out (and iterate) in the application. How the laws get applied will also, after all, be a matter for European courts.

It’s also fair to say the Commission looks to be trying to strike a balance between going in too hard and chilling the development of new AI-driven services — while putting up eye-catching enough warning signs to make technologists pay attention to consumer risks and try to prevent an accountability ‘black hole’ letting harms scale out of control.

The AI Act itself is clearly intended as a core preventative framework here — shrinking risks and harms attached to certain applications of cutting edge technologies by forcing system developers to consider trust and safety issues up front, with the threat of penalties for non-compliance. But the liability regime proposes a further toughening up of that framework by increasing exposure to damages actions for those that fail to play by the rules. And doing so in a way that could even encourage over-compliance with the AI Act — given ‘low risk’ applications typically won’t face any specific regulation under that framework (yet could, potentially, face liability under broader AI liability provisions).

So AI system makers and appliers may feel pushed towards adopting the EU’s regulatory ‘best practice’ on AI to defend against the risk of being sued by consumers armed with new powers to pull data on their systems — and a rebuttable presumption of causality that puts the onus on them to prove otherwise.

Also incoming next year: Enforcement of the EU’s new Collective Redress Directive, providing for collective consumer lawsuits to be filed across the bloc. The directive has been several years in the making but EU Member States need to have adopted and published the necessary laws and provisions by late December — with enforcement slated to start in the middle of 2023.

That means an uptick in consumer litigation is on the cards across the EU, which will surely also concentrate minds on regulatory compliance.

Discussing the EU’s updated liability package, Katie Chandler, head of product liability & product safety for international law firm TaylorWessing, highlights the disclosure obligations contained in the AILD as a “really significant” development for consumers — while noting the package as a whole will require consumers to do some leg work to “understand which route they’re going and who they’re going after”; i.e. whether they’re suing an AI system under the PLD for being defective or suing an AI system under the AILD for a breach of fundamental rights, say. (And, well, one thing looks certain: There will be more work for lawyers to help consumers get a handle on the expanding redress options for obtaining damages from dodgy tech.)

“This new disclosure obligation is really significant and really new — and essentially, if the manufacturer or the software developer can’t prove they’re complying with safety regulations — and, I think, presumably, that will mean the requirements under the AI Act — then causation is presumed under those circumstances, which I would have thought is a real move forward towards trying to help the consumers make it easier to bring a claim,” Chandler told TechCrunch.

“And then in the AILD I think it’s broader — because it attaches to operators of AI systems [e.g. operators of an autonomous delivery car/drone etc] — the user/operator who may well not have applied reasonable skill and care, followed the instructions carefully, or operated it correctly; you’d then be able to go after them under the AILD.”

“My view so far is that the packages taken as a whole do, I think, provide for different recourse for different types of damage. The strict liability harm under the PLD is more straightforward — because of the no fault regime — but does cover software and AI systems and does cover [certain types of damage] but if you’ve got this other type of harm [such as a breach of fundamental rights] their aim is to say that those will be covered by the AILD and then to get round the concerns about proving that the damage is caused by the system those rebuttable presumptions come into play,” she added.

“I honestly do think this is a really significant move forward for consumers because — once this is implemented — tech companies will now be firmly in the framework of needing to recompense consumers in the event of particular types of damage and loss. And they won’t be able to argue that they don’t sort of fit in these regimes now — which I think is a major change.

“Any sensible tech company operating in Europe, on the back of this, will look carefully at these and plan for them and have to get to grips with the AI Act for sure.”

Whether the EU’s two proposed routes for supporting consumer redress for different types of AI harms will be effective in practice will obviously depend on the application. So a full analysis of efficacy is likely to require several years of the regime operating to assess how it’s working and whether there are AI blind spots or not.

But Dr Philipp Behrendt, a partner at TaylorWessing’s Hamburg office, also gave an upbeat assessment of how the reforms extend liability to cover faulty software and AI.

“Under current product liability laws, software is not regarded as a product. That means, if a consumer suffers damages caused by software he or she cannot recover damages under product liability laws. However, if the software is used in, for example, a car and the car causes damages to the consumer this is covered by product liability laws, and that would also be the case if AI software is used. That means it may be more difficult for the consumer to make a claim for AI products but that is because of the general exception for software under the product liability directive,” he told TechCrunch.

“Under the future rules, the product liability rules shall cover software as well and, in this case, AI is not treated differently at all. What is important is that the AI directive does not establish claims but only helps consumers by introducing an assumption of causality, establishing a causal link between the failure of an AI system and the damage caused, and disclosure obligations about specific high-risk AI systems. Therefore BEUC’s criticism that the regime proposed by the Commission will mean that European consumers have a lower level of protection for products that use AI vs non-AI products seems to be a misunderstanding of the product liability regime.”

“Having the two approaches in the way that they’ve proposed will — subject to seeing if these rebuttable presumptions and disclosure requirements are enough to hold those responsible to account — probably give a route to the different types of damage in a reasonable way,” Chandler also predicted. “But I think it’s all in the application. It’s all in seeing how the courts interpret this, how the courts apply things like the disclosure obligations and how these rebuttable presumptions actually do assist.”

“That is all legally sound, really, in my view because there are different types of damage… and [the AILD] catches other types of scenarios — how you’re going to deal with a breach of my fundamental rights when it comes to loss of data for example,” she added. “I struggle to see how that could come within the PLD because that’s just not what the PLD is designed to do. But the AILD gives this route and includes similar presumptions — rebuttable presumptions — so it does go some way.”

She also spoke up in favor of the need for EU lawmakers to strike a balance. “Of course the other side of the coin is innovation and the need to strike that balance between consumer protection and innovation — and how might bringing [AI] into the strict liability regime in a more formalized way, how would that impact on startups? Or how would that impact on iterations of AI systems — it’s perhaps, I think, the challenge as well [for the Commission],” she said, adding: “I would have thought most people would agree there needs to be a careful balance.”

While the U.K. is no longer a member of the EU, she suggested local lawmakers will be keen to promote a similar balance between bolstering consumer protections and encouraging technology development for any U.K. liability reforms, suggesting: “I’d be surprised if [the U.K.] did anything that was significantly different and, say, more difficult for the parties involved — behind the development of the AI and the potential defendants — because I would have thought they want to get the same balance.”

In the meanwhile, the EU continues leading the charge on regulating tech globally — now keenly pressing ahead with rebooting product liability rules for the age of AI, with Chandler noting, for example, the relatively short feedback period it’s provided for responding to the Commission proposal (which she suggests means critiques like BEUC’s may not generate much pause for thought in the short term). She also emphasized the length of time it’s taken for the EU to get a draft proposal on updating liability out there — a factor which is likely providing added impetus for getting the package moving now it’s out on the table.

“I’m not sure that the BEUC are going to get what they want here. I think they might have to just wait to see how this is applied,” she suggested, adding: “I presume the Commission’s strategy will be to put these packages in place — obviously you’ve got the Collective Redress Directive in the background which is also connected, because you could well see group actions in relation to failing AI systems and product liability — and generally see how that satisfies the need for consumers to get the compensation that they need. And then at that point — however many years down the line — they’ll then review it and look at it again.”

Further along the horizon — as AI services become more deeply embedded into, well, everything — the EU could decide it needs to look at deeper reforms by broadening the strict liability regime to include AI systems. But that’s being left to a process of future iteration to allow for more interplay between us humans and the cutting edge. “That would be years down the line,” predicted Chandler. “I think that is going to require some experience of how this is all applied in practice — to identify the gaps, identify where there might be some weaknesses.”

Bad robot: Europe plans product liability changes to make it easier to sue AIs

European parliament backs ‘historic’ reboot to EU’s digital rulebook
