Use of AI to fight COVID-19 risks harming 'disadvantaged groups', experts warn

Rapid deployment of artificial intelligence and machine learning to tackle coronavirus must still go through ethical checks and balances, or we risk harming already disadvantaged communities in the rush to defeat the disease.

Image: COVID-19 world map. Credit: Martin Sanchez

This is according to researchers at the University of Cambridge's Leverhulme Centre for the Future of Intelligence (CFI), writing in two articles published in the British Medical Journal that caution against the blinkered use of AI for data-gathering and medical decision-making as we fight to regain some normalcy in 2021.

"Relaxing ethical requirements in a crisis could have unintended harmful consequences that last well beyond the life of the pandemic," said Dr Stephen Cave, Director of CFI and lead author of one of the articles.

"The sudden introduction of complex and opaque AI, automating judgments once made by humans and sucking in personal information, could undermine the health of disadvantaged groups as well as long-term public trust in technology."

In the second article, co-authored by CFI's Dr Alexa Hagerty, researchers highlight the potential consequences if the AI now making clinical decisions at scale - predicting which patients are likely to deteriorate and need ventilation, for example - does so on the basis of biased data.

Datasets used to "train" and refine machine-learning algorithms are inevitably skewed against groups that access health services less frequently, such as minority ethnic communities and those of "lower socioeconomic status".
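To make that mechanism concrete, the following is a minimal, purely illustrative Python sketch (all data is synthetic, and the names make_group, shift, and the group labels are hypothetical). It shows how a model trained on data dominated by one group can score noticeably worse for an under-represented group whose feature-outcome relationship differs - the pattern the researchers warn about.

# Illustrative sketch only: synthetic data, hypothetical names.
# Demonstrates how under-representation of one group in training data
# can degrade a model's accuracy for that group.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

def make_group(n, shift):
    # Generate synthetic patient features and outcomes for one group.
    # `shift` stands in for a group-specific feature-outcome relationship.
    X = rng.normal(size=(n, 3))
    logits = X @ np.array([1.0, -1.0, 0.5]) + shift * X[:, 0]
    y = (logits + rng.normal(scale=0.5, size=n) > 0).astype(int)
    return X, y

# Group A is well represented in the training data; group B is not.
Xa_train, ya_train = make_group(5000, shift=0.0)
Xb_train, yb_train = make_group(100, shift=1.5)   # skewed: few records

X_train = np.vstack([Xa_train, Xb_train])
y_train = np.concatenate([ya_train, yb_train])

model = LogisticRegression().fit(X_train, y_train)

# Evaluate on balanced held-out sets for each group.
Xa_test, ya_test = make_group(2000, shift=0.0)
Xb_test, yb_test = make_group(2000, shift=1.5)

print("Accuracy, group A:", accuracy_score(ya_test, model.predict(Xa_test)))
print("Accuracy, group B:", accuracy_score(yb_test, model.predict(Xb_test)))
# The model typically scores worse on group B, because its training
# signal was dominated by group A's feature-outcome pattern.

Aggregate accuracy can look acceptable in such a setting even while performance for the under-represented group lags, which is why the researchers argue for per-group evaluation rather than headline metrics alone.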

"COVID-19 has already had a disproportionate impact on vulnerable communities. We know these systems can discriminate, and any algorithmic bias in treating the disease could land a further brutal punch," Hagerty said.


Reproduced courtesy of the University of Cambridge
