Molecular Horizons Seminar - Dr Yves Saint James Aquino


The event will be held both in 32.G01 and online via Zoom.

Non-UOW staff: please email molecular-horizons@uow.edu.au for the Zoom passcode prior to the event.

Details

Research on healthcare applications of machine learning (ML), a type of artificial intelligence (AI), has proliferated across clinical processes such as diagnosis and screening of diseases, allocation of healthcare resources, and the development of personalised treatments. Given the increasingly complex processes behind ML systems, explainability has been considered a major challenge for their adoption in healthcare. This presentation reports the preliminary findings of a qualitative investigation of the perspectives of professional stakeholders (e.g. clinicians, data scientists, entrepreneurs and regulators) working on ML algorithms for diagnosis and screening. All participants were unified on the qualities that diagnosis should have: it should proceed in a way that enables human oversight, promotes critical thinking among clinicians, and ensures patient safety. However, participants were divided on whether explanation was an important means to achieve this end. Broadly, some participants proposed ‘Outcome-assured’ diagnostic practices, while others proposed ‘Explanation-assured’ diagnostic practices, a distinction that applied with or without the use of AI. The two approaches differed in the significance they attributed to explanation, partly because they conceptualised explanation differently: not only what an explanation is, but also what level of explanation is required and who might be owed an explanation.