How to ensure fairness in machine learning models for diagnosing illness

Physicians and medical experts are beginning to integrate algorithms and machine learning into many parts of the healthcare system, including experimental models for analyzing X-rays and brain scans.

The aim is to use computers to improve the recognition and diagnosis of patient complaints. Such models are trained to identify tumors, skin lesions and more using databases full of reference scans or images.

But there are also potential biases in the data that could lead to biased diagnoses from these machine learning models.

Marketplace’s Kimberly Adams spoke to María Agustina Ricci, a biomedical engineer who earned a Ph.D. at the Hospital Italiano de Buenos Aires in Argentina. She has examined how differences between low-income and high-income countries might create or exacerbate these biases.

The following is an edited transcript of their conversation.

María Agustina Ricci: Databases developed in high-income countries tend to underrepresent Black individuals or patients. This is a topic that concerns us very much because we are Latin American researchers. When models trained on public databases from high-income countries are evaluated on our population, they tend to underperform. There are also more structural barriers, for low-income countries and for certain population groups, in accessing healthcare. In some countries there are profound economic inequalities, a lack of research funding, or prohibitive fees for publishing open-access articles or databases.

Kimberly Adams: What are the implications of these differences in terms of who is represented in these databases and feeding the algorithms that are shaping the future of medical technology?

Agustina Ricci: The impact is that these algorithms can perform worse for underrepresented groups. For example, a patient who is underdiagnosed can leave the hospital without a correct diagnosis, or there can be a false positive result, [which] means that the algorithm says the subject is sick when he is healthy. Both types of errors can have a very important impact on the patient.
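
To make those two error types concrete, here is a minimal sketch, in Python, of the kind of per-group audit used to surface such disparities: it compares false negative (missed diagnosis) and false positive rates across patient groups. The function, labels, predictions, and group names are all hypothetical and for illustration only; they are not from Ricci’s research.

```python
# A minimal per-group error audit: hypothetical data, illustrative only.
import numpy as np

def error_rates_by_group(y_true, y_pred, groups):
    """Return {group: (false_negative_rate, false_positive_rate)}."""
    rates = {}
    for g in np.unique(groups):
        mask = groups == g
        yt, yp = y_true[mask], y_pred[mask]
        # False negative rate: sick patients the model calls healthy.
        fnr = np.mean(yp[yt == 1] == 0) if np.any(yt == 1) else float("nan")
        # False positive rate: healthy patients the model calls sick.
        fpr = np.mean(yp[yt == 0] == 1) if np.any(yt == 0) else float("nan")
        rates[g] = (fnr, fpr)
    return rates

# Hypothetical labels (1 = sick), predictions, and a demographic attribute.
y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0])
y_pred = np.array([1, 0, 0, 1, 1, 0, 0, 0])
groups = np.array(["A", "A", "A", "A", "B", "B", "B", "B"])

for group, (fnr, fpr) in error_rates_by_group(y_true, y_pred, groups).items():
    print(f"group {group}: FNR={fnr:.2f}, FPR={fpr:.2f}")
```

A large gap in these rates between groups is exactly the underperformance Ricci describes when a model trained on one population is evaluated on another.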

Adams: What can be done to mitigate or prevent these biases?

Agustina Ricci: Well, one of the options is to create diverse international databases. This is a big challenge and requires a lot of ethical and legal considerations, for example in relation to data sharing. We can also generate synthetic data through machine learning methods to compensate for the underrepresentation of minorities in these databases. This is an ever-growing field, so I’m sure new methods will emerge in the near future. And indeed, our future work includes developing methods to mitigate these biases.
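
As one simplified illustration of the synthetic-data idea Ricci mentions, the sketch below uses SMOTE-style interpolation: it creates new samples for an underrepresented group by blending pairs of real samples. Real medical imaging pipelines would typically rely on more sophisticated generative models; the feature vectors and sample counts here are invented for the example.

```python
# SMOTE-style oversampling sketch: hypothetical feature vectors, not real data.
import numpy as np

rng = np.random.default_rng(0)

def smote_like_oversample(X_minority, n_new):
    """Create n_new synthetic rows by interpolating random pairs of real rows."""
    idx_a = rng.integers(0, len(X_minority), n_new)
    idx_b = rng.integers(0, len(X_minority), n_new)
    t = rng.random((n_new, 1))  # interpolation weight per synthetic sample
    return X_minority[idx_a] + t * (X_minority[idx_b] - X_minority[idx_a])

# Hypothetical: 20 samples from an underrepresented group, 5 features each.
X_minority = rng.normal(size=(20, 5))
X_synthetic = smote_like_oversample(X_minority, n_new=80)
X_balanced = np.vstack([X_minority, X_synthetic])
print(X_balanced.shape)  # (100, 5): the group is now better represented
```

Because each synthetic sample lies between two real ones, the method fills out an underrepresented group without inventing values far outside what was actually observed.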

Read Agustina Ricci’s research on the subject here.

As of Wednesday, the Food and Drug Administration had reviewed and, to varying degrees, approved more than 170 medical devices that use algorithms or machine learning, including some that focus on imaging.

The list includes a kidney test, brain imaging software, a heart rate monitor that can detect an irregular heart rhythm, and software that examines images of the heart to help doctors make more informed diagnoses.
