April 27, 2022
Keio University
A research group has developed a new method that uses facial photos taken with a smartphone's front camera to estimate, through machine learning, how the device is being held. The group includes Associate Professor Yuta Sugiura (Faculty of Science and Technology, Keio University), graduate student Xiang Zhang (Graduate School of Science and Technology, Keio University), Senior Researcher Kaori Ikematsu (Yahoo! JAPAN Research Centers and Institutes, Yahoo Japan Corporation), and Assistant Professor Kunitaku Kato (School of Media Science, Tokyo University of Technology).
Many smartphone applications lay out their screens on the assumption that the device is operated with the right thumb. Estimating the smartphone's grip posture allows the screen display to be adapted to how the device is actually being held.
The research group focused on the fact that when a person operates a smartphone, light from the screen reflects off the cornea, creating a corneal reflection image shaped like the screen, and that this image differs depending on the grip posture. They therefore developed a system that takes a facial photo with the smartphone's front camera, crops the corneal reflection image from the photo, and classifies it with machine learning to estimate the grip posture.
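The pipeline described above can be sketched roughly as follows. This is an illustrative assumption, not the authors' actual implementation: the crop coordinates, the grayscale image representation, and the nearest-centroid classifier are stand-ins for the eye detection and machine-learning model used in the research, whose details are not given in this release.

```python
# Hypothetical sketch of the described pipeline: crop the corneal
# reflection region from a face photo, then classify the grip posture.
# All names and the nearest-centroid classifier are illustrative
# assumptions, not the authors' actual model.

from dataclasses import dataclass

Image = list[list[float]]  # grayscale pixel grid, rows of intensities


def crop(image: Image, top: int, left: int, h: int, w: int) -> Image:
    """Cut out the region assumed to contain the corneal reflection."""
    return [row[left:left + w] for row in image[top:top + h]]


def flatten(patch: Image) -> list[float]:
    """Turn a 2-D patch into a 1-D feature vector."""
    return [p for row in patch for p in row]


@dataclass
class NearestCentroid:
    """Toy classifier: map each grip posture to a mean feature vector."""
    centroids: dict[str, list[float]]

    def classify(self, feature: list[float]) -> str:
        def dist(centroid: list[float]) -> float:
            return sum((a - b) ** 2 for a, b in zip(feature, centroid))
        return min(self.centroids, key=lambda k: dist(self.centroids[k]))


# Usage: a 4x4 "photo" whose central 2x2 region is bright, as a
# reflection of the screen might be, classified against two postures.
photo = [[1.0 if 1 <= r <= 2 and 1 <= c <= 2 else 0.0
          for c in range(4)] for r in range(4)]
clf = NearestCentroid({
    "right_thumb": [1.0, 1.0, 1.0, 1.0],  # assumed reference vectors
    "left_thumb": [0.0, 0.0, 0.0, 0.0],
})
posture = clf.classify(flatten(crop(photo, 1, 1, 2, 2)))
print(posture)  # -> right_thumb
```

In practice, the reflection region would first be located with an eye detector, and the classifier would be a trained model rather than fixed reference vectors; the sketch only shows how a cropped reflection image maps to a grip-posture label.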
Since this new method uses the front camera built into smartphones, it can be applied to a wide range of smartphone models. Incorporating it into smartphone applications could improve their usability. It may also help prevent medical conditions caused by holding a smartphone in the same grip posture for extended periods.
The results of this research will be presented on May 4, 2022 (US Eastern Daylight Time), at the international conference "CHI'22: ACM CHI Conference on Human Factors in Computing Systems."
For the full press release, please see below.