Abstract

In this work, we propose to use attributes and parts for recognizing human actions in still images. We define action attributes as the verbs that describe the properties of human actions, while the parts of actions are objects and poselets that are closely related to the actions. We jointly model the attributes and parts by learning a set of sparse bases that are shown to carry rich semantic meaning. The attributes and parts of an action image can then be reconstructed from sparse coefficients with respect to the learned bases. This dual sparsity provides a theoretical guarantee for our basis-learning and feature-reconstruction approach. On the PASCAL action dataset and a new "Stanford 40 Actions" dataset, we show that our method extracts meaningful high-order interactions between attributes and parts in human actions while achieving state-of-the-art classification performance.
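The abstract describes reconstructing an image's attribute/part representation as a sparse combination of learned bases. As a rough illustration of that sparse-coding step (not the paper's actual pipeline), the sketch below solves the standard l1-regularized reconstruction with ISTA in NumPy; the dictionary `D`, the weight `lam`, and all data are hypothetical stand-ins.

```python
import numpy as np

def ista_sparse_code(x, D, lam=0.05, n_iter=500):
    """Solve min_a 0.5*||x - D a||^2 + lam*||a||_1 via ISTA.

    x: (d,) feature vector; D: (d, k) dictionary whose columns are bases.
    Returns the sparse coefficient vector a of shape (k,).
    """
    L = np.linalg.norm(D, 2) ** 2          # Lipschitz constant of the gradient
    a = np.zeros(D.shape[1])
    for _ in range(n_iter):
        grad = D.T @ (D @ a - x)           # gradient of the quadratic term
        z = a - grad / L                   # gradient step
        a = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold
    return a

# Toy demo: a signal built from 3 of 50 random unit-norm bases.
rng = np.random.default_rng(0)
D = rng.standard_normal((20, 50))
D /= np.linalg.norm(D, axis=0)
a_true = np.zeros(50)
a_true[[3, 17, 42]] = [1.5, -2.0, 1.0]
x = D @ a_true
a_hat = ista_sparse_code(x, D)
# Most coefficients in a_hat end up (near) zero: only a few bases are active.
```

In the paper's setting, `x` would be the concatenated attribute/part scores of an image and the columns of `D` the jointly learned bases; the nonzero entries of `a_hat` then indicate which attribute-part combinations explain the action.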

Keywords

Action (physics), Action recognition, Computer science, Artificial intelligence, Action learning, Human–computer interaction, Psychology, Mathematics education

Publication Info

Year: 2011
Type: Article
Pages: 1331-1338
Citations: 623 (OpenAlex)
Access: Closed

Cite This

Bangpeng Yao, Xiaoye Jiang, Aditya Khosla et al. (2011). Human action recognition by learning bases of action attributes and parts. 2011 International Conference on Computer Vision (ICCV), 1331-1338. https://doi.org/10.1109/iccv.2011.6126386

Identifiers

DOI
10.1109/iccv.2011.6126386