Posted on 2022-11-04, 04:28, authored by Ambreen Hanif
<p>Artificial intelligence (AI) enables machines to learn from human experience, adjust to new inputs, and perform human-like tasks. AI is progressing rapidly and is transforming the way businesses operate, from process automation to cognitive augmentation of tasks and intelligent process and data analytics. The main challenge for human users, however, is to understand and appropriately trust the results of AI algorithms and methods. To address this challenge, this thesis studies and analyses recent work on Explainable Artificial Intelligence (XAI) methods and tools. We introduce a novel XAI process that facilitates producing explainable models while maintaining a high level of learning performance, and we present an interactive, evidence-based approach to assist human users in comprehending and trusting the results produced by AI-enabled algorithms. We adopt a typical scenario in the banking domain, the analysis of customer transactions, develop a digital dashboard to facilitate interacting with the algorithm results, and discuss how the proposed XAI method can significantly improve the confidence of data scientists in understanding the results of AI-enabled algorithms.</p>
Table of Contents
1 Introduction
2 Background and State-of-the-Art
3 Proposed Model
4 Experiments and Evaluations
5 Conclusion and Future Directions
A Appendix
List of Symbols
References
Notes
A thesis submitted to Macquarie University for the degree of Master of Research
Awarding Institution
Macquarie University
Degree Type
Thesis MRes
Degree
Thesis (MRes), Macquarie University, Faculty of Science and Engineering, 2021
Department, Centre or School
Department of Computing
Year of Award
2021
Principal Supervisor
Amin Beheshti
Additional Supervisor 1
Xuyun Zhang
Additional Supervisor 2
Noman Javed
Rights
Copyright: The Author
Copyright disclaimer: https://www.mq.edu.au/copyright-disclaimer