OpenTAD: A Unified Framework and Comprehensive Study of Temporal Action Detection

March 2, 2025
Shuming Liu*, Chen Zhao*, Fatimah Zohra, Mattia Soldan, Alejandro Pardo, Mengmeng Xu, Lama Alssum, Merey Ramazanova, Juan León Alcázar, Anthony Cioppa, Silvio Giancola, Carlos Hinojosa, Bernard Ghanem
Abstract
Temporal action detection (TAD) is a fundamental video understanding task that aims to identify human actions and localize their temporal boundaries in videos. Although the field has achieved remarkable progress in recent years, further advances and real-world applications are impeded by the absence of a standardized framework. Currently, different methods are compared under different implementation settings and evaluation protocols, making it difficult to assess the real effectiveness of a specific technique. To address this issue, we propose OpenTAD, a unified TAD framework consolidating 16 different TAD methods and 9 standard datasets into a modular codebase. In OpenTAD, minimal effort is required to replace one module with a different design, train a feature-based TAD model in end-to-end mode, or switch between the two. OpenTAD also facilitates straightforward benchmarking across various datasets and enables fair and in-depth comparisons among different methods. With OpenTAD, we comprehensively study how innovations in different network components affect detection performance and identify the most effective design choices through extensive experiments. This study has led to a new state-of-the-art TAD method built upon existing techniques for each component. Our code and models are available at https://github.com/sming256/OpenTAD.
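The abstract's claim that one module can be swapped with minimal effort is characteristic of registry-plus-config designs common in modular detection codebases. The sketch below illustrates that general pattern in Python; all names in it (`NECKS`, `register_neck`, `TemporalFPN`, `build_neck`) are hypothetical stand-ins for exposition, not OpenTAD's actual API, which is documented in the repository linked above.

```python
# A minimal, self-contained sketch of the registry pattern that modular
# detection codebases commonly use to make components swappable. All names
# below are illustrative assumptions, not OpenTAD's actual API.

NECKS = {}

def register_neck(cls):
    """Register a neck class under its class name so configs can refer to it."""
    NECKS[cls.__name__] = cls
    return cls

@register_neck
class TemporalFPN:
    """Toy stand-in for a temporal feature-pyramid neck."""
    def __init__(self, num_levels=5):
        self.num_levels = num_levels

@register_neck
class IdentityNeck:
    """Toy stand-in for a pass-through neck."""
    def __init__(self):
        pass

def build_neck(cfg):
    """Instantiate a neck from a config dict of the form {'type': name, ...}."""
    cfg = dict(cfg)  # copy so the caller's config is not mutated
    return NECKS[cfg.pop("type")](**cfg)

# Replacing one module with a different design is a one-line config change:
neck = build_neck({"type": "TemporalFPN", "num_levels": 5})
# neck = build_neck({"type": "IdentityNeck"})
```

Under such a design, switching between feature-based and end-to-end training would likewise reduce to a config choice, e.g., whether the backbone entry names a frozen feature extractor or a trainable video network.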
Publication
IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), 2025.