AffectiveHMD
Recognition and Mapping of Facial Expressions to Avatar by Embedded Photo Reflective Sensors in Head Mounted Display
2016
Katsuhiro Suzuki, Fumihiko Nakamura, Jiu Otsuka, Katsutoshi Masai, Yuta Itoh, Yuta Sugiura, and Maki Sugimoto

[Reference]
Katsuhiro Suzuki, Fumihiko Nakamura, Jiu Otsuka, Katsutoshi Masai, Yuta Itoh, Yuta Sugiura, and Maki Sugimoto, Recognition and Mapping of Facial Expressions to Avatar by Embedded Photo Reflective Sensors in Head Mounted Display, In Proceedings of IEEE Virtual Reality (VR '17), pp. 177-185, March 18-22, 2017, Los Angeles, CA, USA. [DOI]

We propose a facial expression recognition technique that remains usable while an HMD is worn. A photo reflective sensor emits infrared light and receives its reflection, which allows it to measure the distance to an object in front of it. Because these sensors are small, lightweight, and low-power, they can easily be mounted inside an HMD. When the wearer changes facial expression with the sensors mounted inside the HMD, the distance between the facial surface and each sensor changes, so each expression produces a distinct pattern of sensor values. We estimate the wearer's expression by training a neural-network classifier, a form of machine learning, on these values. By using two kinds of neural networks, one for multiclass classification and one for regression, we attempt both expression identification and expression intensity estimation.
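The two-stage neural-network pipeline described above can be summarized in code. The following is a minimal sketch, assuming a scikit-learn implementation; the sensor count (16), network sizes, and the placeholder training data are illustrative assumptions, not values from the paper.

import numpy as np
from sklearn.neural_network import MLPClassifier, MLPRegressor

N_SENSORS = 16  # assumed number of photo reflective sensors inside the HMD
EXPRESSIONS = ["Neutral", "Happy", "Angry", "Surprised", "Sad"]

# X: one row of reflected-light readings per frame; y: expression labels.
rng = np.random.default_rng(0)
X = rng.uniform(0.0, 1.0, size=(500, N_SENSORS))  # placeholder sensor data
y = rng.integers(0, len(EXPRESSIONS), size=500)   # placeholder labels

# Stage 1: multiclass classification decides which expression is shown.
clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=1000, random_state=0)
clf.fit(X, y)

# Stage 2: regression estimates a continuous expression intensity in [0, 1].
intensity = rng.uniform(0.0, 1.0, size=500)       # placeholder intensities
reg = MLPRegressor(hidden_layer_sizes=(32,), max_iter=1000, random_state=0)
reg.fit(X, intensity)

frame = rng.uniform(0.0, 1.0, size=(1, N_SENSORS))  # one incoming sensor frame
print(EXPRESSIONS[int(clf.predict(frame)[0])], float(reg.predict(frame)[0]))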

We propose a facial expression mapping technology between virtual avatars and Head-Mounted Display (HMD) users. HMDs allow people to enjoy an immersive Virtual Reality (VR) experience, and a virtual avatar can represent the user in the virtual environment. However, synchronizing the virtual avatar's expressions with those of the HMD user is limited: the HMD occludes a large portion of the user's face, making facial recognition difficult in an HMD-based virtual environment. To overcome this problem, we propose a facial expression mapping technology using photo reflective sensors. The sensors attached inside the HMD measure the distance between the sensors and the user's face. The distance values of five basic facial expressions (Neutral, Happy, Angry, Surprised, and Sad) are used to train a neural network that estimates the facial expression of the user. We achieved an overall accuracy of 88% in recognizing the facial expressions. Our system can also reproduce facial expression changes in real time on an existing avatar using regression. Consequently, our system enables estimation and reconstruction of facial expressions that correspond to the user's emotional changes.
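As a rough illustration of the real-time avatar mapping, the sketch below regresses each incoming sensor frame to blendshape weights on an avatar rig. read_sensors() and set_blendshapes() are hypothetical stand-ins for the HMD sensor interface and the avatar API, and the calibration data are placeholders, not the paper's dataset.

import numpy as np
from sklearn.neural_network import MLPRegressor

BLENDSHAPES = ["happy", "angry", "surprised", "sad"]  # illustrative rig targets

def read_sensors() -> np.ndarray:
    # Placeholder: return one frame of photo reflective sensor values.
    return np.random.uniform(0.0, 1.0, size=(1, 16))

def set_blendshapes(weights: np.ndarray) -> None:
    # Placeholder: push blendshape weights (0..1) to the rendered avatar.
    print({name: round(float(w), 2) for name, w in zip(BLENDSHAPES, weights)})

# Fit a multi-output regressor offline on calibration pairs of
# (sensor frame, blendshape weights); random placeholders stand in here.
X_cal = np.random.uniform(0.0, 1.0, size=(300, 16))
W_cal = np.random.uniform(0.0, 1.0, size=(300, len(BLENDSHAPES)))
reg = MLPRegressor(hidden_layer_sizes=(32,), max_iter=1000, random_state=0)
reg.fit(X_cal, W_cal)

# Per-frame loop: one regression per frame drives the avatar's expression.
for _ in range(3):
    weights = np.clip(reg.predict(read_sensors())[0], 0.0, 1.0)
    set_blendshapes(weights)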