Research involves reading, thinking, programming, experimenting, writing, traveling, and presenting. Here are the fruits of that labour. ★ denotes recommended reading.
Abstract: In this thesis, we design and implement a multimodal emotion system for robots. The overarching goal is to advance the fundamentals of robot primary emotions by using clues from infant development. Our approach has three principal characteristics that set it apart from current robot emotion systems. Firstly, it is multimodal. Humans express and recognize emotions through a variety of dynamic channels, such as voice, movement and music. Our paradigm uses speed, intensity, regularity and extent (SIRE) to colour a robot’s voice, gesture and gait with emotion, using a simple 4-dimensional representation. Secondly, it models emotion statistically. Many emotion models are hand-defined based on a priori rules, yet humans are known to be statistical learning machines. Our MEI (multimodal emotional intelligence) module, once trained, can recognize emotion in a context it has never encountered and generate statistically probable emotion expressions. Finally, it is developed through a social process found in caregiver-infant interactions. Emotions are thought to be innate, but according to evidence in developmental psychology, much emotional development happens between the ages of zero and one. In this thesis, we model this first year of life, in which emotional intelligence grows rapidly, possibly due to a universal phenomenon called motherese.
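To make the statistical angle concrete, here is a minimal Python sketch of the idea under assumed toy data: a model fit on SIRE features from familiar modalities can both label a SIRE vector from an unseen modality and sample new expressions. The diagonal-Gaussian model, the training vectors and every name below are illustrative assumptions, not the MEI implementation.

```python
# Illustrative sketch only: a diagonal-Gaussian emotion model over the
# 4-dimensional SIRE space (speed, intensity, regularity, extent).
# Training data and all names are assumptions for this example.
import numpy as np

# Toy SIRE observations (each dimension scaled to [0, 1]) gathered from
# "familiar" modalities such as voice and gesture, labelled by emotion.
TRAIN = {
    "happiness": np.array([[0.8, 0.7, 0.6, 0.8],
                           [0.9, 0.6, 0.5, 0.9]]),
    "sadness":   np.array([[0.2, 0.3, 0.7, 0.3],
                           [0.1, 0.2, 0.8, 0.2]]),
}

def fit(train):
    """Fit one diagonal Gaussian (mean, variance) per emotion."""
    return {e: (x.mean(axis=0), x.var(axis=0) + 1e-3)  # variance floor
            for e, x in train.items()}

def log_likelihood(x, mean, var):
    """Log-density of SIRE vector x under a diagonal Gaussian."""
    return -0.5 * np.sum(np.log(2 * np.pi * var) + (x - mean) ** 2 / var)

def recognize(x, models):
    """Label x with the emotion whose Gaussian explains it best."""
    return max(models, key=lambda e: log_likelihood(x, *models[e]))

def generate(emotion, models, rng):
    """Sample a statistically probable SIRE expression for an emotion."""
    mean, var = models[emotion]
    return np.clip(rng.normal(mean, np.sqrt(var)), 0.0, 1.0)

models = fit(TRAIN)
# A SIRE vector extracted from gait -- a modality absent from training.
print(recognize(np.array([0.85, 0.65, 0.55, 0.85]), models))  # happiness
print(generate("sadness", models, np.random.default_rng(0)))  # slow, small
```

Training on further modalities just adds rows to the toy table; nothing in recognition or generation is modality-specific, which is the property the abstract is claiming.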
My Master’s thesis was entitled Design and Implementation of Emotions for Humanoid Robots based on the Modality-independent DESIRE Model (31 MB). It discusses how we can generate and analyze emotions in the same way whether they appear in voice, music or movement. My model is based on four parameters: speed, intensity, regularity and extent (SIRE). You can use this approach to add emotional colour to humanoids such as HRP-2 and NAO, and potentially even to robots without any arms or legs!
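As a small illustration of modality independence, here is a Python sketch in which one SIRE vector is rendered through separate per-modality adapters. The parameter names and scaling ranges are invented for this example and are not the DESIRE model’s actual mappings; the sketch only shows the shape of the idea: set SIRE once, render it anywhere.

```python
# Illustrative sketch only: one SIRE vector rendered through separate
# per-modality adapters. Parameter names and ranges are invented here;
# they are not the DESIRE model's actual mappings.
from dataclasses import dataclass

@dataclass
class SIRE:
    speed: float       # 0 = slow, 1 = fast
    intensity: float   # 0 = soft, 1 = strong
    regularity: float  # 0 = erratic, 1 = even
    extent: float      # 0 = small, 1 = large

def to_voice(p: SIRE) -> dict:
    """Map SIRE onto hypothetical speech-synthesis controls."""
    return {
        "speech_rate": 0.5 + p.speed,               # multiple of baseline
        "volume": p.intensity,
        "pitch_jitter": 1.0 - p.regularity,
        "pitch_range_semitones": 2 + 10 * p.extent,
    }

def to_gesture(p: SIRE) -> dict:
    """Map the same vector onto hypothetical arm-motion controls."""
    return {
        "joint_velocity": p.speed,
        "acceleration": p.intensity,
        "path_noise": 1.0 - p.regularity,
        "amplitude": p.extent,
    }

# The same emotional intent drives both channels.
sad = SIRE(speed=0.2, intensity=0.2, regularity=0.8, extent=0.2)
print(to_voice(sad))    # slow, quiet, steady, narrow-pitched voice
print(to_gesture(sad))  # slow, small, smooth movement
```

A robot without arms or legs would simply get another adapter over whatever actuators it has, which is what makes the representation modality-independent.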
- ★ A Recipe for Empathy: Integrating the Mirror System, Insula, Somatosensory Cortex and Motherese
Angelica Lim, Hiroshi G. Okuno. Springer International Journal of Social Robotics, DOI 10.1007/s12369-014-0262-y (2014)
- ★ The MEI Robot: Towards Using Motherese to Develop Multimodal Emotional Intelligence
Angelica Lim, Hiroshi G. Okuno. IEEE Transactions on Autonomous Mental Development, DOI 10.1109/TAMD.2014.2317513 (2014)
- ★ Towards expressive musical robots: A cross-modal framework for emotional gesture, voice and music
Angelica Lim, Tetsuya Ogata, Hiroshi G. Okuno. EURASIP Journal on Audio, Speech, and Music Processing, 2012:3 (2012)
- A multimodal tempo and beat-tracking system based on audiovisual information from live guitar performances
Tatsuhiko Itohara, Takuma Otsuka, Takeshi Mizumoto, Angelica Lim, Tetsuya Ogata, Hiroshi G. Okuno. EURASIP Journal on Audio, Speech, and Music Processing, 2012:6 (2012)
- A musical robot that synchronizes with a co-player using non-verbal cues
Angelica Lim, Takeshi Mizumoto, Tetsuya Ogata, Hiroshi G. Okuno. Advanced Robotics, Special Issue on Cutting Edge of Robotics in Japan, Vol.26, pp.363-381 (2012)
- Music co-player robot: Real-time synchronization with a human flutist through visual recognition of start and end cues (in Japanese)
Angelica Lim, Takeshi Mizumoto, Takuma Otsuka, Louis-Kenzo Cahier, Tetsuya Ogata, Hiroshi G. Okuno. IPSJ Journal (情報処理学会論文誌), Vol.52, No.12 (2011)
- ★ Developing Robot Emotions through Interaction with Caregivers, Angelica Lim, Hiroshi G. Okuno. Synthesizing Human Emotion in Intelligent Systems and Robotics, Ed. Jordi Vallverdú (2014)
- Musical Robots and Interactive Multimodal Systems
Angelica Lim. Review article, Special Issue on Robots, Music and Emotions, International Journal of Synthetic Emotions, Vol. 3, Issue 2 (2012)
- Gaze and Filled Pause Detection for Smooth Human-Robot Conversations
Miriam Bilac, Marine Chamoux, Angelica Lim. Humanoids, Birmingham, UK (2017)
- UE-HRI: A New Dataset for the Study of User Engagement in Spontaneous Human-Robot Interactions
Atef Ben Youssef, Miriam Bilac, Slim Essid, Chloé Clavel, Marine Chamoux, Angelica Lim. International Conference on Multimodal Interaction, Glasgow (2017)
- Habit detection within a long-term interaction with a social robot: an exploratory study
Claire Rivoire, Angelica Lim. Proceedings of the International Workshop on Social Learning and Multimodal Interaction for Designing Artificial Agents, Article 4. (2016) pdf
- ★ Using speech data to recognize emotion in human gait
Angelica Lim, Hiroshi G. Okuno. IEEE/RSJ HBU Workshop, Portugal, LNCS Vol.7559, pp.52-64 (2012) Acceptance rate 42%.
- ★ Converting emotional voice to motion for robot telepresence [Video]
Angelica Lim, Tetsuya Ogata, Hiroshi G. Okuno. Humanoids, Bled, Slovenia (2011) Oral presentation. Acceptance rate 17%.
- More cowbell! A musical ensemble with the NAO thereminist [Video]
Angelica Lim, Takeshi Mizumoto, Takuma Otsuka, Tatsuhiko Itohara, Kazuhiro Nakadai, Tetsuya Ogata, Hiroshi G. Okuno. IROS, Standard Platform Demo, San Francisco (2011)
- ★ Robot Musical Accompaniment: Integrating Audio and Visual Cues for Real-time Synchronization with a Human Flutist [Video]
Angelica Lim, Takeshi Mizumoto, Louis-Kenzo Cahier, Takuma Otsuka, Toru Takahashi, Kazunori Komatani, Tetsuya Ogata, Hiroshi G. Okuno. IROS, Taipei, Taiwan (2010) NTF Award for Entertainment Robots and Systems (1/832 papers)
- Programming by Playing and Approaches for Expressive Robot Performances [Poster] [Videos]
Angelica Lim, Takeshi Mizumoto, Toru Takahashi, Tetsuya Ogata, Hiroshi G. Okuno. IEEE/RSJ Workshop on Robots and Musical Expression, Taipei, Taiwan (2010)
- Integration of flutist gesture recognition and beat tracking for human-robot ensemble
Takeshi Mizumoto, Angelica Lim, Takuma Otsuka, Kazuhiro Nakadai, Toru Takahashi, Tetsuya Ogata, Hiroshi G. Okuno. IEEE/RSJ Workshop on Robots and Musical Expression, Taipei, Taiwan (2010)
International Poster Presentations
- A model for human empathy based on a neuroscience-inspired emotional robot: Motherese, mirror neurons, the insula and somatosensory cortices. HRI: a bridge between Robotics and Neuroscience, HRI Workshop 2014. CITEC Award for Excellence in Doctoral HRI Research.
- Could a robot be moved by music? Using Motherese to Develop Multimodal Emotional Intelligence for Robots. Cortona International Summer School on Agent-based Models of Creativity. Best Poster Award.
Editorial
- Guest Editor, Special Issue on Robots, Music and Emotions, International Journal of Synthetic Emotions, Vol. 3, Issue 2 (2012)
Domestic Conference Presentations (Japan)
- The DESIRE Model: Cross-modal emotion analysis and expression for robots
Angelica Lim, Tetsuya Ogata, Hiroshi G. Okuno. 74th National Convention of IPSJ (情報処理学会第74回全国大会), 5ZA-4, Nagoya Institute of Technology, 8 Mar. 2012. Presented in Japanese. Student award.
- Improving social telepresence by converting emotional voice to robot gesture
Angelica Lim, Tetsuya Ogata, Hiroshi G. Okuno. 29th Annual Conference of the Robotics Society of Japan (日本ロボット学会第29回学術講演会), Shibaura Institute of Technology, Sep. 7-9, 2011.
- ★ Audio-visual musical instrument recognition
Angelica Lim, Keisuke Nakamura, Kazuhiro Nakadai, Tetsuya Ogata, Hiroshi G. Okuno. 73rd National Convention of IPSJ (情報処理学会第73回全国大会), 2-309-310, 5R-9, March 3, 2011.
- Multimodal gesture recognition for robot musical accompaniment
Angelica Lim, Takeshi Mizumoto, Louis-Kenzo Cahier, Takuma Otsuka, Toru Takahashi, Tetsuya Ogata, Hiroshi G. Okuno. 28th Annual Conference of the Robotics Society of Japan (日本ロボット学会第28回学術講演会), Nagoya Institute of Technology, Sep. 2010. pdf
- Robot Musical Accompaniment: Real-time Synchronization using Visual Cue Recognition
Angelica Lim, Takeshi Mizumoto, Takuma Otsuka, Toru Takahashi, Kazunori Komatani, Tetsuya Ogata, Hiroshi G. Okuno. 72nd National Convention of IPSJ (情報処理学会第72回全国大会), 6T-7, Mar. 2010