Zuo, Zheming, Yang, Longzhi, Peng, Yonghong (ORCID: https://orcid.org/0000-0002-5508-1819), Chao, Fei and Qu, Yanpeng (2018) Gaze-Informed Egocentric Action Recognition for Memory Aid Systems. IEEE Access, 6, pp. 12894-12904. ISSN 2169-3536
Abstract
Egocentric action recognition has been intensively studied in computer vision and clinical science, with applications in pervasive health care. Most existing egocentric action recognition techniques use features extracted either from the entire content of a video frame or from regions of interest (ROI) within it as the input to an action classifier. The former can suffer from the moving backgrounds and irrelevant foregrounds that are common in egocentric action videos, while the latter may be impaired by a mismatch between the calculated ROI and the ground-truth ROI. This paper proposes a new gaze-informed feature extraction approach in which features are extracted from the regions around the gaze points, which represent the genuine ROI from the first-person point of view. Activities of daily living can then be classified based only on the identified regions, using the extracted gaze-informed features. The proposed approach is further applied to a memory support system for people with poor memory, such as those with amnesia or dementia, and for their carers. The experimental results demonstrate the efficacy of the proposed approach in egocentric action recognition, and thus the potential of the memory support tool in health care.
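To illustrate the core idea of gaze-informed feature extraction, the sketch below crops a square patch around a gaze point and computes a feature vector from that patch only. This is a minimal illustration, not the paper's actual pipeline: the patch size, the simple colour-histogram descriptor, and the function names are assumptions introduced here for clarity, whereas the published method uses its own feature descriptors and classifier.

```python
import numpy as np

def crop_gaze_patch(frame, gaze_xy, patch_size=128):
    """Crop a square patch centred on the gaze point, clamped to frame bounds.

    frame      : H x W x 3 uint8 array (a single egocentric video frame)
    gaze_xy    : (x, y) gaze coordinates in pixel units
    patch_size : side length of the square region of interest (assumed value)
    """
    h, w = frame.shape[:2]
    half = patch_size // 2
    x, y = int(round(gaze_xy[0])), int(round(gaze_xy[1]))
    # Clamp the patch so it stays inside the frame.
    x0, x1 = max(0, x - half), min(w, x + half)
    y0, y1 = max(0, y - half), min(h, y + half)
    return frame[y0:y1, x0:x1]

def colour_histogram_feature(patch, bins=8):
    """A simple per-channel colour histogram as a stand-in feature vector."""
    feats = [np.histogram(patch[..., c], bins=bins, range=(0, 255), density=True)[0]
             for c in range(patch.shape[-1])]
    return np.concatenate(feats)

# Example: one synthetic frame with a gaze point near the centre.
frame = np.random.randint(0, 256, size=(480, 640, 3), dtype=np.uint8)
patch = crop_gaze_patch(frame, gaze_xy=(320, 240), patch_size=128)
feature = colour_histogram_feature(patch)
print(patch.shape, feature.shape)  # (128, 128, 3) (24,)
```

In a full system, per-frame features of this kind would be aggregated over a video clip and fed to an action classifier, so that only the gaze-attended regions contribute to the recognition of the activity.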