
Modeling Long-Term Interactions to Enhance Action Recognition

Conferences
ICPR 2020 Main Conference, PS T3.11: Computer Vision from Unconventional Sensors (2021)
Available as
Online
Summary

In this paper, we propose a new approach to understanding actions in egocentric videos that exploits the semantics of object interactions at both the frame and temporal levels. At the frame level, we use a region-based approach that takes as input a primary region roughly corresponding to the user's hands and a set of secondary regions potentially corresponding to the interacting objects, and computes action scores through a CNN formulation. This information is then fed to a Hierarchical Long Short-Term Memory Network (HLSTM) that captures temporal dependencies between actions within and across shots. Ablation studies thoroughly validate the proposed approach, showing in particular that both levels of the HLSTM architecture contribute to the performance improvement. Furthermore, quantitative comparisons show that the proposed approach outperforms the state of the art in action recognition on standard benchmarks, without relying on motion information.
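To make the two-stage pipeline concrete, below is a minimal PyTorch sketch of how a frame-level region-based scorer could feed a two-level (hierarchical) LSTM. All module names, feature dimensions, the pairwise max-pooling over hand-object pairs, and the use of the last hidden state as a shot summary are illustrative assumptions, not the paper's exact formulation.

```python
# Hedged sketch: frame-level region scoring + hierarchical LSTM.
# Dimensions and aggregation choices are placeholders for illustration.
import torch
import torch.nn as nn


class RegionBasedScorer(nn.Module):
    """Frame level: pairs a primary (hand) region feature with each
    secondary (candidate object) region feature and produces per-frame
    action scores. Region features are assumed to come from a CNN
    backbone with ROI pooling (not shown here)."""

    def __init__(self, feat_dim=2048, hidden_dim=512, num_actions=64):
        super().__init__()
        self.pair_mlp = nn.Sequential(
            nn.Linear(2 * feat_dim, hidden_dim),
            nn.ReLU(),
        )
        self.classifier = nn.Linear(hidden_dim, num_actions)

    def forward(self, primary, secondary):
        # primary:   (B, D)    hand-region feature per frame
        # secondary: (B, K, D) K candidate object-region features per frame
        B, K, D = secondary.shape
        # Concatenate the primary region with each secondary region.
        pairs = torch.cat(
            [primary.unsqueeze(1).expand(B, K, D), secondary], dim=-1
        )
        # Max-pool over the K hand-object pairs so the most informative
        # interacting object dominates the frame representation.
        frame_feat = self.pair_mlp(pairs).max(dim=1).values  # (B, H)
        return self.classifier(frame_feat)                   # (B, num_actions)


class HierarchicalLSTM(nn.Module):
    """Temporal level: a lower LSTM runs over frames within each shot,
    and an upper LSTM runs over per-shot summaries, capturing action
    dependencies both within and across shots."""

    def __init__(self, in_dim=64, hidden_dim=256, num_actions=64):
        super().__init__()
        self.frame_lstm = nn.LSTM(in_dim, hidden_dim, batch_first=True)
        self.shot_lstm = nn.LSTM(hidden_dim, hidden_dim, batch_first=True)
        self.classifier = nn.Linear(hidden_dim, num_actions)

    def forward(self, frame_scores):
        # frame_scores: (B, S, T, C) — B videos, S shots, T frames per
        # shot, C frame-level action scores from RegionBasedScorer.
        B, S, T, C = frame_scores.shape
        lower_out, _ = self.frame_lstm(frame_scores.view(B * S, T, C))
        # Use the last hidden state of each shot as its summary.
        shot_feats = lower_out[:, -1].reshape(B, S, -1)
        upper_out, _ = self.shot_lstm(shot_feats)
        return self.classifier(upper_out)  # (B, S, num_actions) per shot


if __name__ == "__main__":
    B, S, T, K, D, C = 2, 3, 8, 5, 2048, 64
    scorer = RegionBasedScorer(feat_dim=D, num_actions=C)
    hlstm = HierarchicalLSTM(in_dim=C, num_actions=C)
    primary = torch.randn(B * S * T, D)       # hand-region features
    secondary = torch.randn(B * S * T, K, D)  # candidate object regions
    frame_scores = scorer(primary, secondary).view(B, S, T, C)
    shot_scores = hlstm(frame_scores)         # (B, S, C)
    print(shot_scores.shape)
```

Max-pooling over hand-object pairs is one plausible way to let the most discriminative interacting object drive the frame score; the paper's actual region aggregation and shot-summary mechanism may differ.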
