
Forecasting and profiling, or bias and discrimination?



Artificial intelligence has already brought down a government in Europe.

In 2019, it emerged that the Dutch tax administration had been using self-learning algorithms to create risk profiles to detect fraud involving childcare benefits. As it became clear that families, mostly from ethnic minority communities, had been flagged as suspected fraudsters and then penalized on the basis of algorithm-generated profiles, a huge political scandal brought down the government of Dutch Prime Minister Mark Rutte.

Former UN human rights chief Michelle Bachelet has warned that AI systems could carry ‘negative, even catastrophic’ risks (Picture: Wikimedia)

Rutte survived. But thousands of ordinary lives have been ruined.

As artificial intelligence becomes an essential, albeit invisible, feature of our daily interactions, prepare for even more EU-wide political wrangling, centered on the European Commission’s proposals to regulate the technology.

The proposals, made in 2021, are now being debated in the European Parliament – where political groups have tabled more than 3,000 amendments – and are expected to receive the green light from the Council in December. The EU AI law would then be adopted next year, after inter-institutional negotiations.

However, expect a confrontation between the parliament and the council. MEPs are expected to push for tougher regulation and better protection of rights, while governments are likely to argue for fewer rules, citing the need for competitiveness and security concerns.

“The success of the text will lie in the balance we find between the need to protect the interests and rights of our citizens and the interest of stimulating innovation and encouraging the adoption and development of AI,” said Romanian liberal MEP Dragoș Tudorache, one of the European lawmakers in charge of the file. “The real political discussions are yet to come.”

Bulgarian Socialist MEP Petar Vitanov, one of the negotiators on the dossier, says the focus must be on ensuring that “fundamental rights and freedoms are safeguarded”, because “there can be no innovation without fundamental rights”.

Key issues include the governance of the law and the definition of risk in the legislation.

Lawmakers want to give the commission the power to expand the list of “high-risk areas” and increase fines for non-compliance to 40 million euros or 7% of annual turnover.

Some EU governments are asking for exemptions for the use of AI by migration authorities and law enforcement, which could lead to increased surveillance of communities, including ethnic minorities, that are already more closely watched than others.

Some critics, such as former United Nations human rights chief Michelle Bachelet, say governments should go further and impose a moratorium on the sale and use of AI systems until the “negative, even catastrophic” risks they pose can be addressed.

A UN report on the use of AI as a forecasting and profiling tool warns that the technology could impact “the rights to privacy, fair trial, freedom from arbitrary arrest and detention, and the right to life”.

Bachelet acknowledged that AI “can be a force for good, helping societies overcome some of the great challenges of our time,” but suggested the harms it could cause outweigh the benefits. “The higher the risk to human rights, the stricter the legal requirements for the use of AI technology must be,” she said.

The commission’s proposal calls for a ban on AI applications that manipulate human behavior (such as toys that use voice assistance to encourage dangerous activities in children) or systems that enable a Chinese-style “social score”. The use of biometric identification systems, such as facial recognition, for law enforcement in public spaces would also be prohibited.

Exceptions would be allowed for tracing victims of abduction, identifying a perpetrator or suspect of a criminal offence, or preventing imminent threats, such as a terrorist attack.

Digital rights activists warn, however, that there are “loopholes” that allow for mass surveillance.

Any exemption allowing governments and businesses to use an AI system on the grounds that it is ‘purely incidental’ to a minor matter could, in fact, ‘undermine the whole law’, warns Sarah Chander, who leads policy work on artificial intelligence for European Digital Rights (EDRi), a network of non-profit organizations working to defend digital rights in the EU.

High risk

The Commission’s proposal focuses on so-called ‘high-risk’ AI systems that may compromise people’s safety or fundamental rights in areas such as education (e.g. exam grading), employment (e.g. CV-screening software for recruitment) or public services (e.g. credit scoring that can deny people a loan).

Companies wishing to deploy AI systems in this “high-risk” category would need to meet EU requirements, such as explainability, risk assessment and human oversight.

Some fear, however, that these requirements could discourage start-ups and companies from investing in Europe in such AI systems, thus giving a competitive advantage to the United States or China.

Companies that fail to comply with the legislation could face fines of up to €30 million or 6% of their global turnover.

Chander pointed out that some of the greatest harm from AI systems could come from public service delivery, such as social services, policing (where predictive policing based on mass surveillance is a major concern), and migration. Basing such decisions on AI is dangerous, she argues, because AI systems make assumptions look like facts.

Chander says the commission’s proposal doesn’t go far enough to restrict the use of facial recognition. Her organization also wants to ban the use of AI for predictive policing and migration, as well as for predicting emotions.

Rights advocates argue that companies should be required to carry out fundamental rights impact assessments and provide information on where and how an AI system would be used and its impact on individuals.

They also want the public to be informed in a clear way, and citizens to be able to ask public authorities or companies for explanations. Citizens should also be able to seek redress if a company or authority has breached the AI law, or if an individual has been affected by a prohibited system.

Chander said there is a common misunderstanding that AI systems can be “perfected,” and policymakers often ask her how to make these systems less prone to bias. But that’s the wrong question, she argues, because the problem is that AI systems replicate an already discriminatory system.

“Some systems can’t be improved,” she says, adding, “We don’t want to create a perfect predictive policing system, or a perfect lie detector.”

This article first appeared in EUobserver’s magazine, Digital EU: the Good, the Bad — and the Ugly, which you can now read in full online.
