Wang, Yuchen ORCID: https://orcid.org/0000-0002-8697-365X, Qing, Linbo ORCID: https://orcid.org/0000-0003-3555-0005, Wang, Zhengyong, Cheng, Yongqiang ORCID: https://orcid.org/0000-0001-7282-7638 and Peng, Yonghong ORCID: https://orcid.org/0000-0002-5508-1819 (2022) Multi-level transformer-based social relation recognition. Sensors, 22 (15). 5749. ISSN 1424-8220
Published Version. Available under License Creative Commons Attribution.
Abstract
Social relationships refer to the connections that exist between people and indicate how people interact in society. The effective recognition of social relationships is conducive to further understanding human behavioral patterns and can thus be vital for more complex socially intelligent systems, such as interactive robots and health self-management systems. Existing works on social relation recognition (SRR) focus on extracting features at different scales but lack a comprehensive mechanism to orchestrate the various features, which differ in importance. In this paper, we propose a new SRR framework, namely Multi-level Transformer-Based Social Relation Recognition (MT-SRR), for better orchestrating features at different scales. Specifically, a vision transformer (ViT) is first employed as a feature extraction module for its advantage in exploiting global features. An intra-relation transformer (Intra-TRM) is then introduced to dynamically fuse the extracted features and generate more rational social relation representations. Next, an inter-relation transformer (Inter-TRM) is adopted to further enhance the social relation representations by attentionally exploiting the logical constraints among relationships. In addition, a new margin related to inter-class similarity and sample number is added to alleviate the challenge of data imbalance. Extensive experiments demonstrate that MT-SRR can better fuse features at different scales and mitigate the adverse effects of data imbalance. The results on the benchmark datasets show that our proposed model outperforms state-of-the-art methods by a significant margin.
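The following is a minimal sketch of the pipeline the abstract describes: per-pair features from a ViT backbone are fused by an intra-relation transformer, all pair representations in an image are then refined jointly by an inter-relation transformer, and a frequency-dependent margin is applied to the classification loss. All module names, dimensions, and the concrete margin formula here are illustrative assumptions, not the paper's implementation (the paper's margin also uses inter-class similarity, which is omitted in this sketch).

```python
# Illustrative sketch of an MT-SRR-style pipeline (assumed structure, not the authors' code).
import torch
import torch.nn as nn
import torch.nn.functional as F

class IntraRelationTransformer(nn.Module):
    """Fuses the multi-scale features of one person pair (Intra-TRM, assumed form)."""
    def __init__(self, dim=768, heads=8, layers=2):
        super().__init__()
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=layers)

    def forward(self, feats):           # feats: (B, num_feature_types, dim)
        fused = self.encoder(feats)     # attention weighs each feature's importance
        return fused.mean(dim=1)        # (B, dim) relation representation

class InterRelationTransformer(nn.Module):
    """Refines all pair representations in an image jointly (Inter-TRM, assumed form)."""
    def __init__(self, dim=768, heads=8, layers=2):
        super().__init__()
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=layers)

    def forward(self, pair_reprs):      # (B, num_pairs, dim)
        return self.encoder(pair_reprs) # pairs attend to each other (relation-level constraints)

class MTSRR(nn.Module):
    def __init__(self, dim=768, num_classes=6):
        super().__init__()
        self.intra = IntraRelationTransformer(dim)
        self.inter = InterRelationTransformer(dim)
        self.head = nn.Linear(dim, num_classes)

    def forward(self, pair_feats):      # (B, num_pairs, num_feature_types, dim) ViT features
        B, P, T, D = pair_feats.shape
        reprs = self.intra(pair_feats.view(B * P, T, D)).view(B, P, D)
        reprs = self.inter(reprs)
        return self.head(reprs)         # (B, num_pairs, num_classes)

def margin_ce_loss(logits, targets, class_counts, scale=0.5):
    """Assumed LDAM-style margin: rarer classes receive a larger margin on their logit."""
    margins = scale / class_counts.float().pow(0.25)                      # ~ n_c^{-1/4}
    adjusted = logits - margins.unsqueeze(0) * F.one_hot(targets, logits.size(-1))
    return F.cross_entropy(adjusted, targets)

# Toy usage: 2 pairs per image, 3 feature types per pair (e.g., two persons + union region).
model = MTSRR()
logits = model(torch.randn(4, 2, 3, 768))
print(logits.shape)  # torch.Size([4, 2, 6])
```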