Making an Ethical Artificial Intelligence

The future of medicine could come down to automation, with artificial intelligence increasingly touted as a substitute for human professionals. But can a machine accurately diagnose and treat disease while also addressing the ethical, legal and social implications?

For Professor Stacy Carter, the issue of ethics in artificial intelligence first came to her attention in a context one might not expect: breast screening.

“My colleague Professor Nehmat Houssami from the University of Sydney, who has worked for a long time in mammography, expressed her concern that AI might soon be used in breast screening programs. Her main worry was that people were focusing on it only as a technical problem: whether it can be done.”

“She said, ‘Someone needs to think about whether this is also the right thing to do. Is this a good or bad thing?’”

Professor Carter is the director of the new Australian Centre for Health Engagement, Evidence and Values (ACHEEV) at the University of Wollongong. Her speciality in public health ethics led her to question the future of artificial intelligence (AI) in healthcare: what are the technology’s ethical, legal and social implications, and how might the public be brought into the conversation?

“It’s one thing to critique AI, it’s quite another to build better AI,” Carter explains.

“A lot of people are talking about AI and what it can do, but not many people are talking to members of the public about the potential upsides and downsides.”

“We need to bring the public into the conversation about AI, take their viewpoints seriously and give them an opportunity to think about what these technological changes might mean in the future,” she said.

As part of Carter’s Global Challenges-supported project, she is leading an interdisciplinary team of researchers from across four UOW faculties that aims to do just that. The project, “The ethical, legal and social implications (ELSI) of using artificial intelligence (AI) in health and social care”, will develop the first academic Australian survey about artificial intelligence.

It aims to reveal the knowledge, opinions and attitudes of everyday Australians surrounding AI in health and social care. 

Currently, approximately 75 per cent of Australians say they know about artificial intelligence, but only 33 per cent know it can be used for diagnosis in healthcare. Significant concerns have been raised about the effects of automation on Australians receiving payments such as Youth Allowance and Newstart.

The Global Challenges project involves an interdisciplinary team, including Dr Scarlet Wilcock (from LHA, who researches automation in social services), A/Prof Khin Win (from EIS, who heads the Centre for Persuasive Technology and Society), Senior Professor David Steel (who will lead on data analysis) and Professor Nina Reynolds (who will lead on survey design).

Developed in collaboration with the Social Research Centre (SRC) at the Australian National University, the online survey will ask 2,000 respondents a range of questions on whether they consider automation and AI acceptable for use in healthcare and social services. It will also present a number of scenarios designed to encourage deliberation and discussion about the topic.

The methodology will allow the SRC to weight the responses so they better represent the whole Australian population, providing insights into how the technology can best be adapted to benefit society. This is the first in a series of planned projects, with the long-term goal of developing AI algorithms that are attuned to ethical, legal and social responsibilities.
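
To make this concrete, weighting of this kind is often done by post-stratification: each respondent’s answer is scaled by the ratio of their group’s share of the population to its share of the sample. The Python sketch below is illustrative only; the age bands, benchmark figures and function names are hypothetical, not the SRC’s actual methodology.

    # Minimal post-stratification sketch. All groups and figures are
    # illustrative, not the Social Research Centre's actual benchmarks.

    # Hypothetical population shares for one stratifying variable (age band).
    population_share = {"18-34": 0.30, "35-54": 0.34, "55+": 0.36}

    # Hypothetical shares observed among the survey's respondents.
    sample_share = {"18-34": 0.40, "35-54": 0.35, "55+": 0.25}

    # A respondent's weight is their group's population share / sample share,
    # so over-represented groups count for less, under-represented for more.
    weights = {g: population_share[g] / sample_share[g] for g in population_share}

    def weighted_support(responses):
        """Estimate population-level support from (group, supports_ai) pairs."""
        total = sum(weights[group] for group, _ in responses)
        agree = sum(weights[group] for group, supports in responses if supports)
        return agree / total

    # Four toy responses: unweighted support is 3/4, but weighting shifts it.
    responses = [("18-34", True), ("55+", False), ("35-54", True), ("55+", True)]
    print(f"Weighted support: {weighted_support(responses):.2f}")  # ~0.69

Real survey weighting uses more variables and established population benchmarks, but the principle is the same.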

AI in healthcare: What should we be thinking about?

Artificial intelligence (AI) is an area of computer technology that uses data to build machines that can work, react and make decisions, completing tasks that previously could only be performed by humans.

There is intense interest in using the technology in healthcare to screen for disease, diagnose conditions, plan treatments and predict prognoses. In social care services, AI can also be used to deliver services, provide advice and support decision-making, particularly in welfare administration.

By 2021, total public and private healthcare sector investment in AI is estimated to reach $6.6 billion. Beyond this, there are predictions that by 2026, leading AI applications could deliver annual savings of $150 billion.

With such promised economic potential, the possible benefits of the technology are well discussed: reduced human error, increased accuracy and the potential for preventative diagnosis, as well as further economic benefits such as lower wage costs.

Yet these potential benefits are accompanied by a number of ethical, legal and social risks, and not all AI research pays attention to them. For Carter, attending to these ethical responsibilities is imperative to using the technology successfully, and it will take an open debate with the public to ensure that happens.

“People are tending to ask if the algorithm can do a better job than a doctor, at say identifying disease or predicting an outcome,” she says.

“Not enough people are asking if including AI in the health system is going to make this better or worse for patients. Will this make patients more likely to get better? Or will patients live longer if AI becomes commonplace?”

In future, AI may be able to successfully screen and diagnose a disease faster than a human can, but it’s not as simple as that, Carter explains.

“If we are able to develop artificial intelligence to replace a clinician, it raises more complicated questions, such as ‘What will a doctor be?’ and ‘Who is responsible for the decisions the AI machine will make, and who takes responsibility if they are wrong?’”

Another identified risk is bias. A 2019 study, “Artificial Intelligence: American Attitudes and Trends”, conducted by the University of Oxford, found that support for developing AI is “greater among those who are wealthy, educated, male or have experience with technology”.

And a combination of data bias and human bias often results in biased AI, which can misclassify certain people, says Carter.

“AI tends to be less good at dealing with different types of people, because often the data is gained from a certain kind of person, typically a wealthy, white person in only one location.

“We need to think about how this may reinforce existing prejudices, inequities and unfairness in systems, and look at how AI can be developed to address these ethical implications.”
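
As a toy illustration of Carter’s point (entirely synthetic numbers, not data from any real screening program), a decision threshold tuned on one well-represented group can systematically misclassify a group whose baseline measurements differ:

    # Toy sketch with synthetic data: a threshold chosen for group A
    # misclassifies group B, whose healthy baseline is shifted.
    import random

    random.seed(0)

    def simulate(mean_healthy, mean_sick, n=200):
        """Generate (measurement, is_sick) pairs for one hypothetical group."""
        data = [(random.gauss(mean_healthy, 1.0), False) for _ in range(n)]
        data += [(random.gauss(mean_sick, 1.0), True) for _ in range(n)]
        return data

    # The threshold is the midpoint of group A's healthy (0.0) and sick (3.0)
    # means, because group A supplied nearly all of the training data.
    threshold = (0.0 + 3.0) / 2

    def accuracy(data):
        return sum((x > threshold) == sick for x, sick in data) / len(data)

    print(f"Group A: {accuracy(simulate(0.0, 3.0)):.2f}")  # high accuracy
    # Group B carries the same disease signal but a shifted baseline, so many
    # of its healthy members fall above group A's threshold and are
    # misclassified as sick.
    print(f"Group B: {accuracy(simulate(1.5, 4.5)):.2f}")  # noticeably lower

Real systems are far more complex, but the failure mode is the one Carter describes: performance measured on the majority group can mask errors on everyone else.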

Alongside these concerns, other ethical issues at the forefront of AI include data privacy and confidentiality, with the potential for leaks of sensitive patient and doctor information.

The Montreal Declaration: Developing Responsible AI

Carter’s project highlights the increasing need for the public to be aware of, and involved in, the rollout of AI, opening the issue up for deliberation across all of society.

The project draws on the ideas discussed in the Montreal Declaration, a significant statement on the responsible and ethical use of AI. It is a landmark piece: a statement developed not by experts alone, but in consultation with the public.

Developed in collaboration with more than 500 citizens, experts, public policy makers, industry stakeholders and professional associations, the declaration sets out 10 principles for the ethical development of AI, with the aim of “supporting the common good, and guiding social change by making recommendations with a strong democratic legitimacy.”

These values include well-being, autonomy, intimacy and privacy, solidarity, democracy, equity, inclusion, caution, responsibility and environmental sustainability: all factors that should be considered when engaging with AI if it is to become a beneficial technology for society.

The collaborative, deliberative process behind this declaration is what Professor Carter hopes to replicate with the ‘Ethical AI’ project.

“We are asking: how do we have a big conversation with all of society about issues that matter?

“How do we give people opportunities to deliberate and talk between themselves about what matters, and what is the right thing to do?” 

As for Carter’s own opinion on the use of AI in healthcare, she’s still learning, too.

“I don’t have a strong opinion yet,” she said.

“I can certainly see that it could go very wrong and I can also see that there’s ways that it could be beneficial.”

“We’re trying to be prospective about it, trying to think forwards, not just look at where we are now or look at the past.”

So, can artificial intelligence ever replace the work of humans?

“That’s the $60 million question, isn’t it,” Carter laughs.

“There’s no denying there are some aspects of health care where AI may one day be able to perform better, such as pattern recognition tasks. 

“But I’d like to think that there are things about interacting with humans in health and social care that are irreplaceable. If we take humans out, we need to make sure it’s for the better, not worse for people.”