
769A12 Intelligent Virtual Agents and Social Robots

Course information

The aim of this course is to give an introduction and overview of the state of the art in research on intelligent virtual agents and social robots. The course covers theories, methods and technology at the research front of the area. It aims to provide an understanding of current theoretical issues as well as practical knowledge of implementation and/or evaluation of agent- or robot-based interactive systems, with a focus on the interaction between humans and such systems.

Intended learning outcomes

After completion of the course, the student should at an advanced level be able to:
  • account for and critically discuss current research questions, results and theories about intelligent virtual agents and social robots.
  • discuss limitations and possibilities in the technologies used to develop and implement intelligent virtual agents and robots.
  • reflect on how the development and use of such technologies affect humans' interactions with intelligent virtual agents and social robots.
  • perform a project to develop or evaluate intelligent virtual agents or social robots.

Course content

The course covers the following theoretical areas:
  • Interaction: natural language, body language
  • Emotion
  • Embodiment
  • Visual appearance: gender, ethnicity, anthropomorphism
  • Application areas: learning, training, health, entertainment
  • The project focuses on design, implementation or evaluation of an interactive application with a virtual embodied agent or a robot.

Teaching and working methods

The course consists of an introductory lecture, several student-led literature seminars, practical project work, and seminars for presentation and discussion of the project work.


Participation in the seminars is mandatory. The project work is examined through a written report and an oral presentation.

For more details, see the page Examination.


Feedback from the course leader and course participants is provided regularly throughout the course: in the seminars, in group supervision meetings, and during the presentation of the individual assignment.


The literature is divided into mandatory articles, which all participants should read before the seminars, and extra material that can be used for the individual assignment or by those who want to go deeper into some areas of the course.

Revisions for HT21 may occur!

Articles and demos that give overviews and provide examples.
Sem1: Interaction: natural language, body language, emotions
  • Luger, E., & Sellen, A. (2016). “Like having a really bad PA”: The gulf between user expectation and experience of conversational agents. Conference on Human Factors in Computing Systems - Proceedings, 5286–5297.
  • Mavridis, N. (2015). A review of verbal and non-verbal human-robot interactive communication. Robotics and Autonomous Systems, 63(P1), 22–35.
  • Nooraei, B., Rich, C., & Sidner, C. (2014). A Real-Time Architecture for Embodied Conversational Agents: Beyond Turn-Taking. ACHI 2014, The Seventh International Conference on Advances in Computer-Human Interactions, 1, 381–388.
  • C. Pelachaud, Modelling Multimodal Expression of Emotion in a Virtual Agent, Philosophical Transactions of Royal Society B Biological Science, B 2009 364, 3539-3548.
  • Albert Rizzo et al. Detection and Computational Analysis of Psychological Signals Using a Virtual Human Interviewing Agent. In Proceedings of ICDVRAT 2014, International Journal of Disability and Human Development, 2014.
Extra material:
  • Michael McTear. Conversational Modelling for ChatBots: Current Approaches and Future Directions. Technical report, Ulster University, Ireland, 2018.
  • Grudin, J., & Jacques, R. (2019). Chatbots, humbots, and the quest for artificial general intelligence. Conference on Human Factors in Computing Systems - Proceedings, 1–11.
  • Fabrizio Morbini et al. A Demonstration of Dialogue Processing in SimSensei Kiosk. In 15th Annual Meeting of the Special Interest Group on Discourse and Dialogue, 2014.
  • Fabrizio Morbini et al. FLoReS: A Forward Looking, Reward Seeking, Dialogue Manager. In 4th International Workshop on Spoken Dialog Systems, 2012.
  • Zhao, R., Papangelis, A., Cassell, J. (2014), Towards a Dyadic Computational Model of Rapport Management for Human-Virtual Agent Interaction. In Bickmore, Sidner & Marsella (eds.) Intelligent Virtual Agents (IVA) 2014. Lecture Notes in Computer Science Volume 8637, pp. 514-527.
  • Romero, O., Zhao, R., Cassell, J. (2017). Cognitive-Inspired Conversational-Strategy Reasoner for Socially-Aware Agents. In Proceedings of the 26th International Joint Conference on Artificial Intelligence (IJCAI ’17)
  • M. Mancini, C. Pelachaud, Generating distinctive behavior for Embodied Conversational Agents, Journal on Multimodal User Interfaces, Springer Berlin / Heidelberg, ISSN 1783-7677, Volume 3, Number 4, 249-261
  • Gupta, S. Walker, M. a. & Romano, D. M. (2007). How rude are you?: Evaluating politeness and affect in interaction. Affective Computing and Intelligent Interaction, Second International Conference, ACII 2007 Lisbon, Portugal, September 12-14, 2007 Proceedings, 203–217.
Sem2: Visual appearance and personality
  • Gulz, A. & Haake, M. (2006). Design of animated pedagogical agents - A look at their look. International Journal of Human-Computer Studies, vol. 64, no. 4, pp. 322-339.
  • Kätsyri, J., Förger, K., Mäkäräinen, M., & Takala, T. (2015). A review of empirical evidence on different uncanny valley hypotheses: Support for perceptual mismatch as one road to the valley of eeriness. Frontiers in Psychology, 6(MAR), 1–16. https://doi.org/10.3389/fpsyg.2015.00390
  • Walker, M. (2008). A Personality-based Framework for Utterance Generation in Dialogue Applications. 2008 AAAI Spring Symposium, Technical Report SS-08-04, Stanford, California, USA, March 26-28, 2008, (January 2008).
  • Susana Castillo, Philipp Hahn, Katharina Legde, Douglas W. Cunningham: Personality Analysis of Embodied Conversational Agents. 227-232
Extra material:
  • Straßmann, C., & Krämer, N. C. (2017). A categorization of virtual agent appearances and a qualitative study on age-related user preferences. Lecture Notes in Computer Science (Including Subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), 10498 LNAI, 413–422. https://doi.org/10.1007/978-3-319-67401-8_51
  • Baylor, A. L. (2009). Promoting motivation with virtual agents and avatars: role of visual presence and appearance. Philosophical Transactions of the Royal Society B: Biological Sciences, 364(1535), 3559–3565. https://doi.org/10.1098/rstb.2009.0148
  • Navigating a social world with robot partners: A quantitative cartography of the Uncanny Valley. Maya B. Mathur, David B. Reichling. Cognition, 2016
  • Rushforth, M., Gandhe, S., Artstein, R., Roque, A., Ali, S., Whitman, N., & Traum, D. (2009). Varying personality in spoken dialogue with a virtual human. In Lecture Notes in Computer Science Vol. 5773 LNAI, pp. 541–542)
Sem3: Robots, embodiment, anthropomorphism, and presence
  • Ziemke, T. (2003). What’s that Thing Called Embodiment? Proceedings of the 25th Annual Meeting of the Cognitive Science Society.
  • Zlotowski, J., Proudfoot, D., Yogeeswaran, K., & Bartneck, C. (2014). Anthropomorphism: Opportunities and Challenges in Human–Robot Interaction.
  • Li, J. (2015). The benefit of being physically present: A survey of experimental works comparing copresent robots, telepresent robots and virtual agents. International Journal of Human-Computer Studies, 77, 23–37.
  • Breazeal, Cynthia. (2003) Emotion and sociable humanoid robots. International Journal of Human-Computer Studies, Volume 59, Issues 1–2, Pages 119-155
Extra material:
  • Lindsey Byom, Bilge Mutlu (2013) Theory of Mind: Mechanisms, Methods, and New Directions, Frontiers in Human Neuroscience.
  • Agnieszka Wykowska, Thierry Chaminade and Gordon Cheng (2016) Embodied artificial agents for understanding human social cognition.
  • Airenti, G. (2018). The development of anthropomorphism in interaction: Intersubjectivity, imagination, and theory of mind. Frontiers in Psychology, 9(NOV), 1–13.
  • Araujo, T. (2018). Living up to the chatbot hype: The influence of anthropomorphic design cues and communicative agency framing on conversational agent and company perceptions. Computers in Human Behavior, 85. https://doi.org/10.1016/j.chb.2018.03.051
  • Abubshait, A., & Wiese, E. (2017). You look human, but act like a machine: Agent appearance and behavior modulate different aspects of human-robot interaction. Frontiers in Psychology, 8(AUG).
  • Nowak, K. L., & Biocca, F. (2003). The Effect of the Agency and Anthropomorphism on Users’ Sense of Telepresence, Copresence, and Social Presence in Virtual Environments. Presence: Teleoperators and Virtual Environments, 12(5), 481–494.
Sem4: Applications (Norms, Ethics)
  • Johnson, W. L., & Lester, J. C. (2016). Face-to-Face Interaction with Pedagogical Agents, Twenty Years Later. International Journal of Artificial Intelligence in Education, 26(1), 25–36.
  • Laranjo, L., Dunn, A. G., Tong, H. L., Kocaballi, A. B., Chen, J., Bashir, R., … Coiera, E. (2018). Conversational agents in healthcare: a systematic review. 25(July), 1248–1258.
  • Kachouie, R., Sedighadeli, S., Khosla, R., Chu, M.T.: Socially Assistive Robots in Elderly Care: A Mixed-Method Systematic Literature Review. International Journal of Human-Computer Interaction 30(5), 369–393 (2014)
  • Scheutz (2017) The Case for Explicit Ethical Agents
  • Malle & Scheutz (2014) Moral Competence in Social Robots.
  • Cercas Curry, A., & Rieser, V. (2018). #MeToo Alexa: How Conversational Systems Respond to Sexual Harassment. (January), 7–14. https://doi.org/10.18653/v1/w18-0802
Extra material:
  • Watch the demo videos:
  • Esteban, P. G., Baxter, P., Belpaeme, T., Billing, E., Cai, H., Cao, H., … Yu, H. (2017). How to Build a Supervised Autonomous System for Robot-Enhanced Therapy for Children with Autism Spectrum Disorder. 18–38. https://doi.org/10.1515/pjbr-2017-0002
  • Sandewall, E. (2019). Ethics, Human Rights, the Intelligent Robot, and its Subsystem for Moral Beliefs. International Journal of Social Robotics. https://doi.org/10.1007/s12369-019-00540-z
  • Malle, B. F., Scheutz, M., & Austerweil, J. L. (2017). Networks of social and moral norms in human and robot agents. Intelligent Systems, Control and Automation: Science and Engineering, 84, 3–17. https://doi.org/10.1007/978-3-319-46667-5_1
Sem5: Methods
  • Amershi, S., Weld, D., Vorvoreanu, M., Fourney, A., Nushi, B., Collisson, P., … Horvitz, E. (2019). Guidelines for human-AI interaction. Conference on Human Factors in Computing Systems - Proceedings. https://doi.org/10.1145/3290605.3300233
  • Riek, L. (2012). Wizard of Oz Studies in HRI: A Systematic Review and New Reporting Guidelines. Journal of Human-Robot Interaction, 1(1), 119–136. https://doi.org/10.5898/jhri.1.1.riek
  • Weiss, B., Wechsung, I., Kühnel, C., & Möller, S. (2015). Evaluating embodied conversational agents in multimodal interfaces. Computational Cognitive Science, 1(1), 1–21. https://doi.org/10.1186/s40469-015-0006-9
  • Vilhjálmsson, H. H. (2018). When a Virtual Agent is a Flawed Stimulus. 8(November), 2018.
Extra material:
  • Baylor, A. L., & Ryu, J. (2003). The API (Agent Persona Instrument) for Assessing Pedagogical Agent Persona. In ED-MEDIA.
  • Hone, K. S., & Graham, R. (2001). Towards a tool for the subjective assessment of speech system interfaces (SASSI). Natural Language Engineering, 6(March 2001), 1–35.
  • Bartneck, C., Kulić, D., Croft, E., & Zoghbi, S. (2009). Measurement instruments for the anthropomorphism, animacy, likeability, perceived intelligence, and perceived safety of robots. International Journal of Social Robotics, 1(1), 71–81.
  • Gratch, J. (2017). Understanding the mind by simulating the body: virtual humans as a tool for cognitive science research. (S. E. F. Chipman, Ed.)The Oxford Handbook of Cognitive Science (Vol. 1). Oxford University Press.
  • Matthew Marge, Claire Bonial, Kimberly A. Pollard, Ron Artstein, Brendan Byrne, Susan G. Hill, Clare Voss, and David Traum (2016) Assessing Agreement in Human-Robot Dialogue Strategies: A Tale of Two Wizards.
  • Siska Fitrianie, Merijn Bruijnes, Deborah Richards, Amal Abdulrahman, and Willem-Paul Brinkman. 2019. What are We Measuring Anyway?: - A Literature Survey of Questionnaires Used in Studies Reported in the Intelligent Virtual Agent Conferences. In Proceedings of the 19th ACM International Conference on Intelligent Virtual Agents (IVA '19). ACM, New York, NY, USA, 159-161
  • Foster, M. E., Giuliani, M., & Knoll, A. (2009). Comparing Objective and Subjective Measures of Usability in a Human-Robot Dialogue System. Proceedings of the 47th Annual Meeting of the Association for Computational Linguistics and the 4th International Joint Conference on Natural Language Processing of the Asian Federation of Natural Language Processing (ACL-IJCNLP 2009), (August), 879–887.
Sem6: Technologies
Demo and discussion by those who have tried a technical platform for design of virtual agents, chatbots or programming of robots. Code and instructions are available in Lisam.
  • Virtual human toolkit
    Hartholt, A., Traum, D., Marsella, S., Shapiro, A., Stratou, G., Morency, L., & Gratch, J. (n.d.). All Together Now Introducing the Virtual Human Toolkit.
    Scherer, S., Marsella, S., Stratou, G., Xu, Y., Morbini, F., Egan, A., … Morency, L. (2012). Perception Markup Language : Towards a Standardized Representation of Perceived Nonverbal Behaviors. 455–463.
  • FAtiMA toolkit
  • DialogFlow. Deconstructing Chatbots: Getting started with Dialogflow
  • AIML
  • IrisTK (Furhat)
  • Unity
  • Choregraphe (Nao, Pepper)
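Several of the platforms above (AIML in particular, but also intent-based tools like Dialogflow) build on the same underlying idea: matching user input against rule patterns and filling in a response template. The following is a minimal sketch of that idea in plain Python, for discussion purposes only; the rules and function names are invented for illustration and do not reflect any platform's actual API.

```python
import re

# Each rule pairs a regular-expression pattern with a response template;
# captured groups from the pattern can be reused in the response.
# This mirrors the category/pattern/template structure of AIML in spirit.
RULES = [
    (re.compile(r"\bmy name is (\w+)", re.I), "Nice to meet you, {0}!"),
    (re.compile(r"\bhello\b|\bhi\b", re.I), "Hello! How can I help you?"),
    (re.compile(r"\bbye\b", re.I), "Goodbye!"),
]

FALLBACK = "Sorry, I did not understand that."

def respond(utterance: str) -> str:
    """Return the response of the first matching rule, or a fallback."""
    for pattern, template in RULES:
        match = pattern.search(utterance)
        if match:
            return template.format(*match.groups())
    return FALLBACK
```

Real platforms add much more on top of this (context, wildcards, machine-learned intent classification, multimodal behaviour), which is exactly what the seminar demos are meant to explore.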

Page responsible: Annika Silvervarg
Last updated: 2021-10-22