VirSen1.0: Toward sensor configuration recommendation in an interactive optical sensor simulator for human gesture recognition
2023
松尾佳奈,Chengshuo Xia,杉浦裕太
Kana Matsuo, Chengshuo Xia, Yuta Sugiura
[Reference / Cite as] Kana Matsuo, Chengshuo Xia, Yuta Sugiura, VirSen1.0: Toward sensor configuration recommendation in an interactive optical sensor simulator for human gesture recognition, International Journal of the Digital Human. [DOI]
Research is underway on the use of sensor simulation to generate sensor data for designing real-world human gesture recognition systems. The overall development process suffers from poor interactivity, because developers lack an efficient tool to support the sensor configuration, result checking, and trial and error involved in designing a machine learning system. We have therefore developed VirSen1.0, a virtual environment with a user interface that supports the design of a sensor-based human gesture recognition system. In this environment, a simulator produces lightness data, combining it with an avatar's motion to train a classifier. The interface then visualises the importance of the features used by the model via permutation feature importance, providing feedback on the effect of each sensor on the classifier. This paper proposes a complete development process, from acquisition of training data to creation of a learning model, within a single software tool. Additionally, a user study confirmed that by visualising the importance of the features used in the model, users can create learning models that achieve a certain level of accuracy.
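As a rough illustration of the feedback idea described above, the sketch below shows how per-sensor permutation feature importance can be computed with scikit-learn. This is not the paper's implementation: the random forest classifier, the eight simulated sensors, the four gesture classes, and the synthetic lightness data are all placeholder assumptions.

```python
# Minimal sketch (not VirSen1.0's code): estimate how much a classifier
# relies on each simulated optical sensor via permutation feature importance.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_samples, n_sensors = 600, 8            # hypothetical: 8 optical sensors
X = rng.random((n_samples, n_sensors))   # placeholder simulated lightness features
y = rng.integers(0, 4, n_samples)        # hypothetical: 4 gesture classes

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each sensor's column and measure the drop in test accuracy;
# a large drop indicates the classifier depends heavily on that sensor.
result = permutation_importance(clf, X_test, y_test, n_repeats=20, random_state=0)
for sensor, score in enumerate(result.importances_mean):
    print(f"sensor {sensor}: importance = {score:.3f}")
```

In an interactive tool, importances like these could be mapped back to the sensors in the virtual scene, giving the developer a cue for which sensor placements contribute most to recognition accuracy.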