Personalised empathic virtual agent dialogue for the user
In this thesis, we have conducted experiments on two different cohorts: high school and undergraduate students. First, among high school students, we examined the students' willingness to disclose their emotions concerning their studies to intelligent virtual agents (IVAs) in a virtual world (VW) designed for learning scientific inquiry skills, and whether the question 'how are you?' was perceived as disruptive or annoying. The students were willing to disclose both negative and positive emotions to IVAs. The empathic feedback provided by our characters was mostly acceptable; however, the agent needed to be able to adapt its dialogue to individual differences, such as personality.
Second, we created a scenario to reduce study stress among undergraduate university students and examined two approaches (machine learning and cold start) to creating a personalised (adaptive) virtual adviser. The machine learning approach aims to create a user model and expertise module that the agent can reason over and adapt accordingly. For this approach, we conducted experiments with two IVAs: one that uses relational cues in its dialogue (empathic Sarah) and one that does not use any relational cues (neutral Sarah). The data were captured to identify possible relationships between the users' characteristics, emotional states (i.e., stress levels) and perceived levels of rapport with the IVA, in order to generate rules for predicting users' preferences regarding the relational cues and so adapt the dialogue. Our work extended the Fearnot Affective Mind Architecture (FAtiMA) cognitive agent architecture by adding a personalised engine comprising a user model and an expertise module. We used machine learning techniques to capture a collection of user profiles into the user model and derived the agent's expertise from that model. The results showed a significant reduction in stress in the empathic and neutral groups but not in the personalised group. Analyses of the rule accuracies, participants' dialogue preferences and individual differences revealed that the three groups had different needs for empathic dialogue, highlighting both the importance and the challenges of accurate tailoring.
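The user-model-plus-rules idea above can be sketched in a few lines. Everything here is illustrative: the feature names, thresholds and cue labels are assumptions for exposition, not the rules actually learned in the thesis.

```python
# Hypothetical sketch: a user profile drives rule-based selection of
# relational cues, which in turn adapts the adviser's utterance.
# Features, thresholds and cue names are illustrative assumptions.

def predict_cue_preferences(profile):
    """Return the set of relational cues predicted to suit this user."""
    preferred = set()
    if profile["stress"] >= 7:           # self-reported stress, 1-10 scale
        preferred.add("empathy")
    if profile["extraversion"] >= 0.5:   # normalised personality score, 0-1
        preferred.add("small_talk")
    if profile["rapport"] >= 5:          # perceived rapport so far, 1-7 scale
        preferred.add("praise")
    return preferred

def adapt_utterance(base, cues):
    """Wrap a task-oriented utterance with the selected relational cues."""
    parts = []
    if "empathy" in cues:
        parts.append("I can see this has been hard for you.")
    parts.append(base)
    if "praise" in cues:
        parts.append("You are doing well to tackle this.")
    return " ".join(parts)

profile = {"stress": 8, "extraversion": 0.3, "rapport": 6}
cues = predict_cue_preferences(profile)
print(adapt_utterance("Let's plan your study week.", cues))
```

In the thesis, the rules were induced from captured data rather than hand-written as here; the sketch only shows where such rules sit between the user model and the dialogue.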
Finally, in the cold start approach, we explored training the agent to personalise its dialogue based on a first-time user's responses to a single example of each relational cue prior to the session with the virtual adviser. The results showed that the rapport scores for the empathic (all relational cues) and personalised (personalised cues) groups were significantly higher than those for the neutral (no relational cues) group; there were no significant differences between the personalised and empathic groups. Further, the study stress scores were significantly reduced for the personalised and empathic groups. In the personalised group, the students received what they found helpful more often than in the other two groups. The discrepancy between what the users found helpful and what they received was lowest in the personalised group and highest in the neutral group. This indicates the effectiveness of this approach in addressing the cold start problem.
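The cold-start procedure described above can be sketched as a short pre-session elicitation step: show the first-time user one example of each relational cue, and enable only the cues they rate as helpful. The cue names and example lines below are illustrative assumptions, not the actual dialogue used in the study.

```python
# Minimal sketch of the cold-start elicitation step. Before the advising
# session, the user rates one example of each relational cue; the adviser
# then uses only the cues rated helpful. Cue names and example utterances
# are illustrative assumptions.

CUE_EXAMPLES = {
    "empathy":    "I understand exam weeks can feel overwhelming.",
    "small_talk": "By the way, how has your week been?",
    "praise":     "Coming here to work on your stress is a great step.",
    "humour":     "Don't worry, my circuits get overloaded too!",
}

def elicit_preferences(rate):
    """rate(cue, example) -> True if the user found the example helpful.

    Returns the set of cues the personalised adviser should use.
    """
    return {cue for cue, example in CUE_EXAMPLES.items() if rate(cue, example)}

# Simulated first-time user who rates empathy and praise as helpful.
liked = elicit_preferences(lambda cue, example: cue in {"empathy", "praise"})
print(sorted(liked))
```

Because the elicitation needs only one rating per cue, it avoids the data-collection burden of the machine learning approach while still tailoring the dialogue from the very first session.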