Opinion: How to Better Regulate Automated Decision-Making

Artificial intelligence (AI) decision making is ubiquitous. This suite of technologies is often praised for its ability to reduce friction and simplify manual and time-consuming processes. Whether in criminal justice or in the welfare state, algorithms are considered indispensable for increasing efficiency and productivity.

There is no question that this is at least partly true. For example, by automating medical diagnostics, complex services like cancer screening and MRI scans could, in principle, function as simple walk-in services. This would allow dangerous diseases to be diagnosed on a larger scale and much earlier. In finance and lending, big data and machine learning (ML) seem to have found a new solution to the problem of information asymmetries between small businesses and credit intermediaries, a problem often associated with the lack of funding for good investment opportunities and the development of promising entrepreneurial projects. ML now enables large companies to use credit scoring techniques to provide (or deny) credit to customers operating on their business platforms.

In recent years, however, this potential for great benefit has been somewhat eclipsed by a growing awareness of the “softwareization” of real-world discrimination and inequality into the very systems tasked with making decisions about our future. Too often, automated decision-making is adopted thoughtlessly and by default, rather than being examined for how technology could genuinely advance progress on what ought to be done.

There has been a real awakening in recent times, thanks in particular to the tireless and powerful work of activists and academics – often women of color – in raising public awareness of the risks of blind and unrestricted use of data and algorithms.

Politicians are also grappling with it, and in Europe the European Union’s Artificial Intelligence Act (EU AIA) is set to become the most far-reaching AI legislation the world has ever produced. In the EU AIA, risk is determined by the impact that AI products have on people’s rights, including the fundamental rights that underpin the EU legal system, such as the rights to privacy and fairness. The EU AIA overlaps with, and to some extent extends, existing data protection laws, as it covers systems that may not use personal data directly but nonetheless have an impact on individuals and their livelihoods.

The debate on how to “regulate” automated decision-making has been non-stop lately, a good sign that the world is grappling with the risks that could end up thwarting the technology’s many championed benefits. China, for example, has introduced regulations aimed at limiting the power of algorithms to restrict consumer choice and autonomy, and at making the internet safer for its youngest users. A plethora of state-level initiatives are emerging in the United States, particularly when it comes to the use of algorithms in the public sector and the administrative state.

The Dutch government recently launched the Fundamental Rights Algorithm Impact Assessment (FRAIA) to help organizations understand the human rights risks posed by algorithms and take action to address those risks. FRAIA creates a deeper dialogue between professionals working on the development or deployment of algorithmic systems and aims to reduce cases of negligence, ineffectiveness or violations of civil rights.

All these initiatives and legal developments are of crucial importance. Specific areas such as predictive policing are particularly important to me, and I very much welcome the draft report published on 22 April by the European Parliament’s Internal Market and Consumer Protection (IMCO) and Civil Liberties, Justice and Home Affairs committees. Led by lawmakers Dragos Tudorache and Brando Benifei, the report states very clearly that such predictions in the criminal justice system pose an unacceptable risk to one of the most important tenets of our public life: the presumption of innocence.

Trust-Building Innovation

All these discussions, and the efforts to consolidate and build on them, are welcome, and they do not harm innovation in any way. In fact, they encourage it: for innovation to be sustainable, it needs to be rooted in the trust of citizens, earned through the equitable distribution of its benefits and dividends to society as a whole.

What worries me, however, is that environmental considerations in particular are all too often overlooked when an AI product is assessed or rated, as they are in so many other areas. “Technosolutionism” is still trending, and its deafening PR mantra that technology can solve all our problems tends to make us deaf to technology’s negative effects on the environment.

Take cryptocurrencies.

During his leadership campaign in Canada, the Conservative MP Pierre Poilievre showed how he used crypto to buy shawarma at a Middle Eastern restaurant in London, Ontario (one that invests all of its profits in bitcoin). When the transaction finally settled, the MP declared, to loud applause, “We did it!” Needless to say, he never mentioned how much electricity his bitcoin payment, worth over $100, required to process.

Once again, we are being sold the false promise that technology will somehow deliver social benefits without addressing the root causes of societal problems. There is plenty of evidence that crypto thrives on minimal regulation while recreating aspects of the existing financial industry that should have been consigned to history.

Here, as in AI, the technology has been wrapped in the promise of freedom, efficiency, and solutions to all the problems we face in modern times. In fact, however, it is potentially very dangerous, and technology enthusiasts should steer clear of this narrative.

Dancing on Lighter Feet

AI can have a significant negative impact on the planet through its carbon footprint, an externality that unfortunately continues to be overlooked. Training even one large AI system has a huge impact on the environment: hundreds of thousands of pounds of carbon dioxide are emitted, an amount comparable to the lifetime CO2 emissions of several cars.

In the Green AI research paper, the authors note that “the computations required for deep learning research have doubled every few months, leading to an estimated 300,000-fold increase from 2012 to 2018” and that, ironically, “deep learning was inspired by the human brain, which is remarkably energy efficient.” Green AI is one of the new initiatives proposing a shift from focusing solely on AI for sustainability (namely, how AI can contribute to sustainability) towards sustainable AI, i.e. the sustainability of AI itself, which means making energy efficiency an evaluation criterion for research that is just as important as accuracy and other related measures.

They suggest specifying the financial costs, or “price tags,” of developing, training, and operating models to provide a basis for studying increasingly efficient methods. Others, like Abhishek Gupta, propose the introduction of SECure certificates which, if properly implemented and perhaps leveraged through sound procurement rules, would encourage compliance with certain elements of environmental sustainability. This includes, for example, the use of federated learning, which, in addition to huge potential benefits from a privacy and data protection perspective, also has “the second-order advantage that calculations can be performed locally, potentially reducing greenhouse gas emissions where electricity is generated from clean energy sources.”
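To make the idea of a “price tag” concrete, here is a minimal Python sketch of what such an estimate might look like. It is purely my own illustration, not a method prescribed by the Green AI authors or the SECure proposal, and every parameter (GPU power draw, data-center overhead, grid carbon intensity, electricity price) is an assumed, hypothetical figure.

```python
# Minimal sketch of a training-run "price tag": energy, CO2, and electricity cost.
# All parameters below are illustrative assumptions, not figures from Green AI or SECure.

def training_price_tag(gpu_count: int,
                       gpu_power_watts: float,
                       hours: float,
                       pue: float = 1.5,                 # assumed data-center power usage effectiveness
                       kg_co2_per_kwh: float = 0.4,      # assumed grid carbon intensity
                       usd_per_kwh: float = 0.12) -> dict:  # assumed electricity price
    """Return a rough estimate of energy (kWh), emissions (kg CO2), and cost (USD)."""
    energy_kwh = gpu_count * gpu_power_watts * hours / 1000 * pue
    return {
        "energy_kwh": round(energy_kwh, 1),
        "kg_co2": round(energy_kwh * kg_co2_per_kwh, 1),
        "usd": round(energy_kwh * usd_per_kwh, 2),
    }

# Example: 64 GPUs drawing ~300 W each, training for two weeks.
print(training_price_tag(gpu_count=64, gpu_power_watts=300, hours=14 * 24))
```

Even a back-of-the-envelope estimate like this makes the trade-off visible and comparable across models, which is precisely what reporting price tags alongside accuracy is meant to achieve.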

Conclusion

Of course, none of this would be possible without strong direction from the institutions that need to oversee the framework for responsible AI investments. There is no doubt that responsible AI requires shared resources such as data commons and the public cloud, and access to data through data cooperatives. Most importantly, there is a need to explore how AI can work with less but smarter data, to avoid over-reliance on costly (albeit useful) privacy-enhancing technologies and the “extractive” model that underlies them.

Ivana Bartoletti is the Global Chief Privacy Officer at Wipro and a Visiting Policy Fellow at the Oxford Internet Institute. She is the author of “An Artificial Revolution: On Power, Politics and AI” and founder of the network “Women Leading in AI”.
