Robot Storytelling with Engagement Enhancements

This is a mini paper I wrote with my teammates while attending the Applied Social Robotics summer school at Hogeschool Utrecht. Our hypothesis concerned storytelling with and without enhancements such as sound effects, gestures, and questions. We ran both versions of the storytelling experience with participants from the university and reported our findings. The robot we used was the Alpha Mini, though we also practiced with the Nao – the Naos were quite old, and their voice recognition was worse than the Alpha Mini's. The summer school was a lovely experience, with theory lectures in the morning and practical experimentation and paper writing in the afternoon. On the last day we prepared a presentation of our results.

An example of the experiment
Setting up the storytelling experience

Authors: Marie Schwed Shenker; Thosha Moodley; Busra Dogan

Introduction

Social robots hold significant potential for enhancing everyday household tasks, particularly in assisting parents by delegating responsibilities. One promising application is storytelling, where robots can entertain and educate children, offering a positive alternative to screen time (Papakostas et al., 2021). We used the Alpha-Mini humanoid robot to narrate two short stories generated by ChatGPT, assessing participants' interest, engagement, and recall. The study focused on the robot's use of iconic gestures (Bartneck et al., 2024), movements, and descriptive language. Results indicated that descriptive language and questions were most engaging, while movements were less effective, especially in interactive scenarios (Striepe et al., 2021). Memory recall was stronger in the non-interactive setting (Belpaeme et al., 2018). Some participants displayed high engagement, though non-native English speakers struggled with the robot's speech pace.

Method

The experiment at Hogeschool Utrecht aimed to explore the use of a humanoid robot for entertaining and educating children through storytelling (Fridin, 2014; Manwani & Guruprasad, 2022), addressing parents' desire for alternatives to screen time. The objectives were to assess the robot's ability to engage children, compare interactive and non-interactive storytelling (Gomez et al., 2021), and test whether children remain engaged until the story's end. Because the experiment was conducted at a university, only adults participated. The experiment involved coding the robot with scripts for both storytelling modules, using two stories created by ChatGPT: "Bibo in the Forest" and "Bibo on the Beach". These stories were designed to evoke the positive emotions typically associated with childhood memories, and presenting new narratives minimized bias from familiarity. Each story was broken into three parts (Ligthart et al., 2020): Bibo getting to the beach/forest, finding/seeing an object, and interacting with the object. Both stories ended with Bibo feeling happy and continuing his journey.
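The two storytelling modules described above could be scripted along these lines. This is a minimal illustrative sketch, not the actual experiment code: the `say`/`ask` callbacks stand in for whatever the Alpha Mini SDK provides, and the story text (including the seashell) is invented for illustration.

```python
# Sketch of a three-part story script with an interactive/non-interactive toggle.
# The robot interface (say, ask) is a hypothetical stand-in, NOT the Alpha Mini API.
from dataclasses import dataclass
from typing import Callable, List, Optional

@dataclass
class Segment:
    text: str                       # narration for one part of the story
    question: Optional[str] = None  # asked only in the interactive module

def run_story(segments: List[Segment], interactive: bool,
              say: Callable[[str], None],
              ask: Optional[Callable[[str], None]] = None) -> None:
    """Narrate the story; questions fire only in the interactive module."""
    for seg in segments:
        say(seg.text)
        if interactive and seg.question and ask:
            ask(seg.question)

# Illustrative three-part structure for "Bibo on the Beach"
# (arrival, finding an object, interacting with it).
beach_story = [
    Segment("Bibo walked down to the sunny beach."),
    Segment("There he found a shiny seashell.",
            question="What do you think Bibo found?"),
    Segment("Bibo played with the seashell, felt happy, "
            "and continued his journey."),
]
```

Running the same segment list with `interactive=False` yields the non-interactive module (narration only), so both conditions share one script and differ only in the toggle.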


Eleven participants took part in the experiment: six performed the non-interactive version and five the interactive version. After completing the module, each participant filled in a questionnaire composed of three questions:

  1. What engaged/disengaged you?
  • Gestures
  • Movements
  • Descriptive language
  • Questions
  • The robot itself
  • Speech pattern – fast/slow
  • Other: ____________________________________________
  2. What kind of goals can we achieve with this application?

__________________________________________________________

__________________________________________________________

  3. What did Bibo find on the beach OR what did Bibo see in the forest?

__________________________________________________________

Results

The results of the first question were compiled into a table (see Table 1). Most participants found the robot's descriptive language the most engaging and its movements the most disengaging. Overall, descriptive language, questions, the robot itself, and its speech pattern were the most engaging elements. The robot itself and its speech pattern were more engaging in the non-interactive scenario (4 out of 6), while descriptive language and questions were most engaging in the interactive module. Movements were disengaging in the interactive module (5 out of 5) but hardly so in the non-interactive one (1 out of 6).

Secondly, the results of the second question were compiled into a word cloud (see Table 2). "Children", "kids", and "storytelling" were the most frequently used words.

Lastly, the answers to the third question were compiled into a table (see Table 3). To further measure engagement, each participant was asked to recall what the robot found in the story scenario. Most participants remembered correctly, reinforcing the potential of humanoid robots in storytelling. In the interactive scenario, correct and incorrect recall were equally frequent; in the non-interactive scenario, most participants recalled the answer correctly.

Some general observations were made during the experiment. One participant was very impressed with the robot and felt compelled to speak to it during pauses in its speech. Two people we approached were too afraid to interact with the robot and declined to participate. Non-native English speakers had trouble following the robot's speech pace (Shimada & Kanda, 2012). The robot froze at the end of one session (Cifuentes et al., 2020).

Conclusion

Social robots can assist with everyday tasks such as storytelling, offering children an educational alternative to screen time. Our study found that the Alpha-Mini robot's descriptive language was the most engaging element, while its movements were the least effective. Memory recall was better in the non-interactive setting, and some participants struggled with the robot's speech pace.

When asked what this application could be used for, most participants mentioned children, kids, or storytelling in their answers, suggesting the intended use case of the experiment came across clearly. Moreover, correct and incorrect recall were equally frequent in the interactive scenario, while the non-interactive scenario saw higher correct recall. This suggests that our engagement strategies did not significantly enhance recall.

References

Bartneck, C., Belpaeme, T., Eyssel, F., Kanda, T., Keijsers, M., & Sabanovic, S. (2024). Human-Robot Interaction: An Introduction (2nd ed.). Cambridge: Cambridge University Press.

Belpaeme, T., Vogt, P., Van den Berghe, R., Bergmann, K., Göksun, T., De Haas, M., … & Pandey, A. K. (2018). Guidelines for designing social robots as second language tutors. International Journal of Social Robotics, 10, 325-341.

Cifuentes, C. A., Pinto, M. J., Céspedes, N., & Múnera, M. (2020). Social robots in therapy and care. Current Robotics Reports, 1, 59-74.

Fridin, M. (2014). Storytelling by a kindergarten social assistive robot: A tool for constructive learning in preschool education. Computers & Education, 70, 53-64.

Gomez, R., Szapiro, D., Galindo, K., Merino, L., Brock, H., Nakamura, K., … & Nichols, E. (2021, August). Exploring affective storytelling with an embodied agent. In 2021 30th IEEE International Conference on Robot & Human Interactive Communication (RO-MAN) (pp. 1249-1255). IEEE.

Ligthart, M. E., Neerincx, M. A., & Hindriks, K. V. (2020, March). Design patterns for an interactive storytelling robot to support children’s engagement and agency. In Proceedings of the 2020 ACM/IEEE international conference on human-robot interaction (pp. 409-418).

Manwani, R. K., & Guruprasad, B. G. (2022). An empirical study on using storytelling as a learning tool for online and offline education. Journal of Positive School Psychology, 6(3), 7442-7450.

Papakostas, G. A., Sidiropoulos, G. K., Papadopoulou, C. I., Vrochidou, E., Kaburlasos, V. G., Papadopoulou, M. T., … & Dalivigkas, N. (2021). Social robots in special education: A systematic review. Electronics, 10(12), 1398.

Shimada, M., & Kanda, T. (2012). What is the appropriate speech rate for a communication robot? Interaction Studies, 13(3), 406-433.

Striepe, H., Donnermann, M., Lein, M., & Lugrin, B. (2021). Modeling and evaluating emotion, contextual head movement and voices for a social robot storyteller. International Journal of Social Robotics, 13, 441-457.
