The UOW Knowledge Series showcases University of Wollongong thought leaders in various locations, discussing a range of engaging topics.
Artificial Intelligence in health
Stacey Carter in conversation with Sarah Vickery
[00:00:04] We're here with Professor Stacey Carter, who is director of the Australian Centre for Health Engagement, Evidence and Values, also known as ACHEEV, part of the Faculty of Arts, Humanities and Social Sciences. She's also one of several people working in New South Wales on the allocation of resources, especially ventilators and ICU beds, during the COVID-19 pandemic. The research undertaken at ACHEEV addresses contentious, controversial or challenging issues in public health and health services, some of which we are going to hear about in our conversation today.
[00:00:38] I understand you mentioned earlier that one of your research areas is artificial intelligence and how it's used in the health system. What are some examples of this?
[00:00:46] So there's been a big explosion in the development of artificial intelligence in the last decade or two, and in the last decade health has been an area of increasing development. So there's lots of areas where it's being used. But examples include chatbots in health. So there's quite a lot of chatbots for things like counselling or coaching, and also for triage. So there's a famous example in the UK where, if you have symptoms and you're at home, you can contact the chatbot via the Internet, and you can talk to the chatbot and the chatbot will tell you whether you should go to the emergency room or go to a GP, or will direct you to health information online. So it's a triage system. A second area where it's being used a lot is around the management of health care systems. And that sounds really boring, but actually it has a huge impact on the care that people receive. So increasingly there's algorithms to do things like decide how many episodes of rehabilitation someone should get after they have an accident, or decide how many days someone should have in hospital and when they should be discharged, and perhaps how to discharge them sooner. So those sorts of decision-making algorithms or decision-support algorithms really affect the way that people receive care. And then finally, an area that we're very interested in is screening and diagnosis. So, less so in Australia, but increasingly around the world, there are systems that are ready for use in screening and diagnosis, and they're just starting to trickle into health care systems here. And so we think this is the time for us to engage and talk to people about what they value around the use of these systems, and to try to influence the way that artificial intelligence trickles into health care in Australia.
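The triage chatbot described above can be sketched as a simple rule-based routing decision. This is a minimal, hypothetical illustration only: the symptom lists, thresholds and routing messages are invented, and are not taken from the UK system or any real product (real triage chatbots use far richer models and safety checks).

```python
# Hypothetical sketch of rule-based symptom triage: route a user to the
# emergency room, a GP, or online health information. All symptom lists
# and rules here are invented for illustration.

EMERGENCY = {"chest pain", "severe bleeding", "difficulty breathing"}
SEE_GP = {"persistent cough", "fever", "rash"}

def triage(symptoms):
    """Return a routing decision for a set of reported symptoms."""
    reported = {s.lower() for s in symptoms}
    if reported & EMERGENCY:           # any red-flag symptom wins
        return "go to the emergency room"
    if reported & SEE_GP:              # otherwise, symptoms a GP should see
        return "book a GP appointment"
    return "see online health information"

print(triage(["Fever", "rash"]))  # -> book a GP appointment
```

The point of the sketch is only the routing structure: the hard part of a real system is the model behind the rules, not the rules themselves.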
[00:02:37] Do you think this technology is providing us with better outcomes, from what we've seen so far, than what's previously been offered by humans?
[00:02:44] That is the sixty-four-million-dollar question, actually. So with any new technology, the question is always: is this actually making things better for patients, or better for members of the public? And it's surprising how often the producers of shiny new technology get away with not really answering that question. So AI in health is a massive market. It's grown exponentially. From, I think, 2014 until next year, the estimate is that the market globally is going to go from about 600 million to about six point six billion. So a massive expansion.
[00:03:20] So whenever there's a big, growing commercial market like that, there's immediately a lot of people that have invested a lot of money. They have a lot of skin in the game, and so they have a lot of interest in telling a story about this technology that says that it's going to transform the world, that it's going to solve problems that we've always wanted to solve, that it's the thing that everyone else should invest in, and particularly that everyone should purchase. So there's a lot of claims being made at the moment that AI is going to greatly improve outcomes for patients. Whether that will actually occur is the big question. And it's a question that I think we really have to keep asking and demand answers to.
[00:04:04] Absolutely. And just touching on an earlier point you made around humans being replaced by AI: in the fields of counselling and coaching that you mentioned earlier, is that something that is really possible? Can a robot predict a response, or know the right response to give someone, would you say, in your opinion?
[00:04:26] It's a really interesting question. I think we still need more evidence about the effectiveness of counselling chatbots. There's some really interesting anecdotal stories about coaching chatbots. So the way that AI works now has the potential to be quite responsive to you as a person. There are now coaching chatbots that can fashion themselves after your communication style, so that your chatbot slowly becomes more and more like you. So if you make rude jokes, it makes jokes back; if you're very formal and polite, it will be very formal and polite. And some large health maintenance organisations, health insurers in the US, are starting to buy into the provision of these chatbots to the people that are insured with them, as a way of getting them to engage in health behaviours in the way that they would like them to. And generally speaking, I'm told anecdotally, people love them, because they become something like a best friend. So there are certainly elements of the emotional connection that people can have with AI that suggest that people might be able to experience them as supportive in the way that they might experience a human counsellor. But I think the jury's still out on the degree to which counselling could ever be outsourced entirely to an artificial intelligence.
[00:05:49] Yes, I never imagined a time would come when that would happen. But certainly, you know, let's wait and see what happens. So what are some of the scenarios where ethical boundaries can be challenged?
[00:06:00] So there's lots of challenging ethical issues in relation to artificial intelligence and perhaps the one that people are most often worried about at the moment is the question of justice and bias and prejudice and discrimination, that kind of cluster of problems. So the thing about contemporary artificial intelligence is that it's built on data.
[00:06:24] It's built on lots and lots of data that is gathered from the world as it exists right now, the social world as it exists now. So if that social world is already prejudiced, systematically discriminatory in various ways, then that will be reflected in the data that gets fed into the artificial intelligence, that makes the AI the way that it is. And in fact, over time, that is likely to amplify that prejudice, to make it worse. So that can't be fixed without really deliberate, intentional action. The other way that AI can amplify discrimination and prejudice is if it's only built by one kind of person, by very homogeneous developers. And if that happens, it's likely to reflect their perspective on the world. So other ethical issues that get raised a lot are around data again: data and privacy and confidentiality. So there have been some spectacular examples of governments selling, or sometimes even giving, huge quantities of patient data from public health systems to developers, in exchange for what some people say is not much. And there are real questions around whether, when artificial intelligence is developed from those data, that AI will actually benefit the people that supplied the data, and whether the people that supplied the data should have had some say in the sharing of that data with those developers. So that issue is really important. Another issue that's raised a lot is the trustworthiness of artificial intelligence, and whether people will continue to trust health systems in the same way if they know that AI has a significant role in them. And we don't really yet know the answer to that question, but there's a lot of people working on what it would mean to have a trustworthy AI, and how we can ensure that health care remains trustworthy if AI becomes a big part of it.
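The amplification dynamic described above, where a system trained on skewed data produces outputs that then skew the next round of data collection, can be shown with a toy feedback loop. Every number here is invented purely for illustration; no real AI system reduces to a single scalar like this.

```python
# Toy model of bias amplification through retraining. One group's share of
# the training data starts slightly skewed; each retraining round, the
# system's outputs push future data collection further toward the group
# that is already over-represented. Numbers are invented for illustration.

def retrain(share, reinforcement=0.2):
    """One retraining round: drift further from a balanced 50/50 split."""
    return share + reinforcement * (share - 0.5)

share = 0.55  # group A starts with 55% of the training data
history = [share]
for _ in range(5):
    share = retrain(share)
    history.append(share)

# The initial 5-point skew grows every round rather than washing out.
print([round(s, 3) for s in history])
```

The mechanism is the point: without deliberate correction, the feedback loop compounds a small initial imbalance instead of averaging it away.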
[00:08:23] And so how does the research that you're doing at ACHEEV connect to those scenarios that you've just outlined?
[00:08:29] So we've been really fortunate: in the last 12 months or so, we've received a number of grants to do work particularly on AI for screening and diagnosis, and within that, especially for breast screening and for cardiovascular disease. So one of the things that's happened in AI ethics is lots of people have made lots of lists of lots of principles, and they're very abstract and broad. But not many people are engaging in a lot of detail with actual applications, actual use cases of AI. So what we're going to be doing in those grants is not just engaging with these quite specific use cases, using AI for breast screening and for diagnosing cardiovascular disease, heart disease and other cardiac disease, but also engaging with all of the stakeholders around that: with developers and investors, and with regulators, and with clinicians, and with the people who would actually be the patients in the health systems, and also with citizens, with members of the public, using lots of different methods to understand what matters to all of those stakeholders, how they imagine the future of AI for screening and diagnosis, and how they want that future to be, how it should be. And that will give us really fine-grained consideration and analysis of what AI should look like in those applications, which is something that's not being done very much around the world. That kind of really detailed, case-based analysis will give us knowledge that we can use to guide the development of those technologies in future.
[00:10:00] That's fantastic. And it's good to get rounded insights from all stakeholders associated with AI itself. And so what would you say some of the risks are that are associated with relying on AI in managing our health?
[00:10:14] So there's lots of potential risks. There's not a lot of AI being used in Australian health care at the moment, I need to emphasise that. So in breast screening in Australia, for example, there's a lot of interest and there's a lot of algorithm development, but as yet it's not being actively used in the public breast screening programme. But there are certainly some risks that I think need to be taken seriously before we implement those systems. So, for example, some of those include a problem that's referred to as explainability or interpretability. So with the way that contemporary algorithms work, it's not always possible to know exactly how they're doing what they're doing, because basically they're given a goal: find breast cancer on these images. They're given lots of data, and they're not programmed to carry out a series of steps to achieve that goal; they're programmed to learn from the data. So they see patterns in the data, they work out ways of using the data to identify the breast cancer on the screen, but you don't necessarily know exactly how they've done it. So one of the things that people worry about, one of the risks, is that we'll end up with systems in health care that are doing really important tasks where we're not quite sure how. A number of people are working on technical solutions for that. And then that goes to a related set of problems, risks around what a doctor is exactly, if artificial intelligence is doing a lot of the things that doctors used to do, and what effect it will have on clinicians if AIs become really important in health care. So, for example, there's a problem that people are concerned about called automation bias, which is that, generally speaking, if humans are told that a computer has produced the right answer and they think it's not the right answer, they tend to think that they're wrong and that the computer is correct.
So obviously, in a health care setting, if the computer's making a bad decision, if the algorithm makes a bad decision, you want the human to be able to push back. And a related problem is deskilling. So if you outsource a component of health care to an automated system, we know that humans forget how to do that thing pretty quickly. So if, in future, the automated system was to fail, then you've got a problem, because you have a lot of humans who don't actually know how to do that thing anymore. Now, that might not matter if the system's really reliable; if it does it better than the people, then no problem. But if actually it's a critical process, and if the algorithm's not completely dependable, then deskilling is potentially a real risk.
[00:13:04] So from what you've said, AI is, to me, only as good as the data that is put into the system. Where do you think the liability sits, I guess, when things go wrong with AI in health?
[00:13:17] Yeah, that's another really important question, actually. So in our National Health and Medical Research Council funded grant, we have a whole bunch of work on regulation and law that's being led by my colleague Bernadette Richards from the University of Adelaide. And the responsibility intuitively feels like it should probably be shared: the clinician probably has some responsibility; the developer that's selling the algorithm probably has some responsibility; and maybe the health system that's buying the algorithm should also do some due diligence. So there's some shared responsibility there.
[00:13:51] There's a little bit of precedent around algorithms that have received regulatory approval having built into them explicit allocation of the legal liability, to some extent, with the developer, because it's an autonomous system that's doing diagnosis of a certain disease without any human intervention. So there's some precedent for the responsibility lying with the developer. But it's definitely an active area that people are trying to find good answers in at the moment, and it's changing all the time.
[00:14:25] Stacey, as I mentioned in my introduction, you've been working on the ethics surrounding the allocation of ventilators in New South Wales during the COVID-19 pandemic. What are some of the guidelines being considered around this?
[00:14:38] So when the pandemic first started, everyone who works in health ethics, I think, for a little while, for maybe two months or more, many people were working really hard on this difficult problem of allocating ventilators. And as you can imagine, being a clinician on the front line and having to decide who gets a ventilator and who doesn't is an unbearable choice, really. So that felt very compelling, and people were very motivated to try to find good answers to support those clinicians that were really in such a difficult situation.
[00:15:18] So many people that work in health ethics all around the world generated guidelines, and there are scores of guidelines now for allocation during COVID. And they have a reasonable amount of overlap.
[00:15:30] So generally, they agree that it shouldn't actually be left to the clinician who's responsible for caring for patients to have to make all those life-and-death decisions; that there should be a committee that's properly constituted, a small group that has clinical expertise but that sits outside of the care team; and that there should be really clear rules for how they make decisions, and that they should be applied in a very consistent way.
[00:15:58] And that's to make sure that people who are in a similar situation get similar treatment, that it's fair and consistent and transparent. But interestingly, just in the last little while, the evidence has kind of shifted. So when we were all doing all that work around the world (and I'm relying here on work from my colleague Angela Valentine at the University of Otago), we were really thinking of ventilators as the silver bullet, as the thing that would stand between people and death. And so how you allocate that resource seemed like the most important question. But increasingly, as the evidence emerges about what happens when people very sadly end up so sick that they're on a ventilator, it turns out that the outcomes really aren't very good in the context of COVID. So between 50 and 85 per cent of people who need to be ventilated don't survive that experience. And we also know that being ventilated has harms in itself. It's very traumatic, it does all kinds of physical damage to people, and often people will have long-term physical and mental effects afterwards. So increasingly the conversation is shifting, actually, and people are starting to say, you know what, maybe we shouldn't be looking at ventilators as a silver bullet, particularly now in New South Wales, where, hopefully, if we continue to manage community transmission well, we might not have a terrible surge like they have had in some countries. And we also have much more capacity; we have a lot more ventilators now than we used to.
Maybe actually what's really important is making sure that all Australian communities have really good information and support to prevent themselves getting COVID-19 in the first place, and that there's really good support for everyone who ends up very, very ill with COVID, to make sure that they have excellent access to palliative care and really good quality communication, and that they or their families are involved in end-of-life decisions in a meaningful way. So, in fact, perhaps prevention and really good end-of-life care are more important than how we allocate ventilators. But back in March and April, everyone felt like we really had to try to work out this terrible ventilator problem.
[00:18:21] So back in March and April, when it wasn't known what the long-term outcomes of the use of ventilators were, what kind of ethics did you need to take into consideration when, I guess, weighing up who does get a ventilator versus who doesn't?
[00:18:36] So the question that most ethicists were talking through was about how we balance two things that are really important. Often, difficult ethical problems are problems of having to trade off things that we don't actually want to have to trade off, but we do. And the trade-off here was really between saving the greatest number of lives, saving the most people, versus trying to be responsive to the fact that there's lots of inequity in society. So the people that are most likely to benefit from a ventilator will be the people that are the least sick, the people that have the fewest underlying conditions, for example, and they're probably more likely to be people that are more privileged in society. So if you allocate ventilators to the people most likely to benefit, it's probably statistically likely that you're going to be helping more people who are already advantaged. And that was a terrible tension that's very difficult to solve. But in the end, most people that were thinking about the problem came down on the side of: really, when you're in this crisis situation, and you have very limited resources, and you think that that resource is the thing that's going to save people, you really do have to allocate it in a way that saves the most lives. And so most of the guidelines that were developed were about trying to produce a standardised system for doing that. So there are lots of medical systems that use lots of criteria for grading, essentially, how sick people are and how likely they are to benefit from an extreme treatment like ventilation. So it was about trying to put processes in place, because clinicians are humans, just like all of us, and they have cognitive biases like all of us, to make sure that people weren't automatically disregarding people because they were older, or automatically disregarding people because they had a disability, or because they came from a minority population.
So the aim was to make sure that there were supports in place to help clinicians make decisions just on clinical criteria that were about survival and benefit, rather than on what might be some background prejudices that people might have without realising it.
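The standardised, criteria-based allocation described above can be sketched in code. This is a hypothetical illustration only: the criteria, weights and scoring here are invented, and real triage protocols use validated clinical scores, committee oversight and much more nuance.

```python
from dataclasses import dataclass

# Hypothetical sketch of a standardised allocation rule: every candidate is
# scored on the same clinical criteria and ranked by expected benefit, with
# non-clinical attributes (age group, disability, background) excluded by
# construction. The criteria and weights are invented for illustration.

@dataclass
class Candidate:
    patient_id: str
    organ_failure_score: int   # higher = sicker (in the spirit of a SOFA-style score)
    comorbidity_count: int     # number of serious underlying conditions

def benefit_rank(c: Candidate) -> int:
    """Lower value = higher expected benefit from ventilation."""
    return c.organ_failure_score + 2 * c.comorbidity_count

def allocate(candidates, ventilators_available):
    """Allocate scarce ventilators to those most likely to benefit."""
    ranked = sorted(candidates, key=benefit_rank)
    return [c.patient_id for c in ranked[:ventilators_available]]
```

Note that the sketch makes the tension in the text concrete: ranking purely on likelihood of benefit is consistent and transparent, but it systematically favours the least sick, and therefore often the already advantaged.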
[00:20:51] So when we have unprecedented events such as covid-19, how do you manage to implement such important overarching ethical factors so quickly?
[00:21:00] Yeah, partly through doing a lot of work really fast.
[00:21:05] So a lot of people all around the world were really spending a lot of time thinking about this problem, and certainly myself, and I know many other people, set everything else aside for a good few weeks while all this work was going on. But also there's always the thing that happens in academic life, which is kind of standing on the shoulders of giants. You know, there's always a background to this. So there have been pandemics before. There have been terrible emergency situations before. There have been crises where there's not enough resources to go around before. And it's also the nature of society and the nature of health systems that often there's not as many resources as we would like, for whatever reason, and decisions have to be made about how to allocate those resources. So all of the work that was done about COVID was built on a platform of all of the work that had been done before in any other pandemic or crisis or emergency situation, and that helped the work to be done as fast as it needed to be done.
[00:22:04] Absolutely. Well, Stacey, thank you so much for joining us today. I'm sure everybody's found Stacey's insights extremely interesting. And I know that AI in particular is such a developing area, so we look forward to hearing more about your research into the ethics around this area as time goes on. Thank you once again.
Creating cooperative kids & connected families
Mark Donovan in conversation with Leanne Newsham
Remote learning ready: UOW's response to COVID-19
Professor Theo Farrell in conversation with Monique Harper-Richardson
How people respond to law and governance in a crisis
Associate Professor Cassandra Sharp in conversation with Sarah Vickery
Outsmarting cancer: overcoming the barriers to therapy
Presented by Dr Kara Vine-Perrow
The role of virtual reality in transforming organisations
The panel features Rick Martin, Molly McCarthy, Ken Dion and Patricia Davidson.
Proteins: a story about the stuff of life
Presented by Professor Justin Yerbury
The disrupted digital frontier
A panel discussion featuring Professor Katina Michael, Dr Shahriar Akter, Dr Alex Badran, Kylie Cameron, Dr Thomas Birtchnell and Dane Sharp
A Gut Feeling: Does bacteria in our gut influence the brain & behaviour?
Presented by Dr Katrina Green
Fighting disease, one molecule at a time
Presented by Distinguished Professor Antoine van Oijen
Our story: the epic human and natural history of Australia
Presented by Distinguished Professor Bert Roberts
Leadership, now and into the future and talent spotting for Google
Presented by Brendan Castle
A measure of pleasure: building the next generation condom
Presented by Associate Professor Robert Gorkin III