Explainable Embodied Conversational Agent Using User-specific Reason Explanation to Encourage Behaviour Change
Management of chronic diseases requires adherence to the treatment plan, yet adherence rates average only around 50%. The therapist-patient working alliance (WA) is a good predictor of treatment adherence and is regarded as an effective means of bringing about the recommended behaviour change. As in the human therapist scenario, embodied conversational agents (ECAs) have shown evidence of their ability to build an agent-patient WA, offering a potential route to improved treatment adherence. Building a WA requires positive communication, in which the ECA and the user develop mutual understanding through reason explanation that refers to the patient’s mental state (beliefs and/or goals) used in the agent’s reasoning and the patient’s decision-making. Existing explainable agents (XAs) commonly rely on their own mental state to explain their behaviour. However, when the XA is a personal health assistant and the behaviour is an action to be performed by the user rather than the agent, it is more reasonable for the explanation to refer to the user’s mental state than to the agent’s. Building on this view, this thesis investigates how the ECA explaining its recommended advice with user-specific explanations affects the user-agent WA and the user’s intention to change behaviour.
To evaluate the proposed approach, a scenario was created in which an ECA helps undergraduate students change their behaviours to manage their study-related stress. The ECA recommends several coping behaviours intended to reduce study-related stress and improve the students’ mental health and physical wellbeing. To implement the ECA, the cognitive agent architecture FAtiMA (FearNot! Affective Mind Architecture) was extended into explainable FAtiMA (XFAtiMA), which adds an explanation engine and a user model. Four experiments were conducted to investigate the role of 1) an empathic explainable ECA vs a neutral explainable ECA; 2) an explainable ECA vs an unexplainable ECA; 3) voice anthropomorphism and its implications for the need for real-time explanation tailoring; and 4) different user-specific explanation patterns (belief-based, goal-based and belief&goal-based). The analyses confirmed the potential of user-specific explanation for building a user-agent WA and encouraging behaviour change, regardless of the ECA’s empathy or anthropomorphism. They further indicate that additional factors, such as the user’s context and current stage of behaviour change, should be considered in the tailoring process to encourage behaviour change. Finally, to evaluate the generalisability of the proposed solution, we implemented the approach with a different population (undergraduate students in India) and in a different context (paediatric sleep disorders).