Posted on 2025-09-18, 01:32, authored by Tamara Marwood
<p dir="ltr">This study examined the application of AI ethics frameworks in healthcare. Its aim was to identify facilitators of and barriers to AI ethics frameworks in Australian healthcare settings. The thesis comprised two phases: Phase 1, a scoping review undertaken in accordance with PRISMA-ScR; and Phase 2, a case study examining the application of an AI assurance framework using an existing conceptual model for studying work practices in healthcare, the Systems Engineering Initiative for Patient Safety (SEIPS). </p><p dir="ltr">The Phase 2 case study examined the application of the NSW AI assurance framework to a wound care AI system. It found that internal and external governance effectively addressed the AI ethics principles of community benefit, privacy, security, and transparency, while fairness and accountability were only partially addressed. Facilitators included specialist knowledge, multidisciplinary collaboration, and governance processes; barriers included time constraints, societal perceptions of AI, and managing accountability across multiple stakeholders. </p><p dir="ltr">The findings underscore the need for continued research to improve the practical application of AI ethics frameworks in healthcare settings. Insights from this research will inform sector leaders and future research on: (1) how AI ethics frameworks should be integrated into existing healthcare governance and processes, so that ethics risks are identified and effectively mitigated for better patient outcomes; (2) how to address the barriers of time, expertise, and resources required to apply AI ethics frameworks to healthcare AI systems; and (3) the sectoral leadership required to navigate the many health sector ‘accountability’ stakeholders and ensure clear, meaningful risk assessment and risk mitigation throughout the AI system lifecycle, spanning design, development, and deployment.</p>