TY - GEN
T1 - AnnChor
T2 - 27th International Conference on Pattern Recognition, ICPR 2024
AU - Bowditch, Margaux
AU - van der Haar, Dustin
N1 - Publisher Copyright:
© The Author(s), under exclusive license to Springer Nature Switzerland AG 2025.
PY - 2025
AB - Video action understanding is a rapidly growing field that has achieved excellent results in application areas such as sports and lifestyle. However, research that combines computer vision action understanding techniques with the artistic domain of classical ballet choreography is still in its infancy. Publicly available ballet video datasets are few in number and lack the richness needed to properly explore this specialized field and its extensive collection of actions. Recordings of ballet rehearsals, performances, and competitions have become more readily available on public platforms in recent years, making a substantial amount of data available in this discipline. We propose a novel video dataset, AnnChor, for temporal action localization in ballet choreography. The dataset is notable for its quality and for the diversity of ballet actions found in its videos of solo ballet performances. The full dataset comprises 1020 videos with over 25 000 temporal annotations across 11 action classes. We evaluate and provide baseline results for temporal action localization using the Coarse-Fine Network and TriDet models. There is considerable opportunity to advance computer vision technology in aid of the classical dance domain. We hope this dataset will benefit the computer vision community and enable researchers to explore the challenges of action localization, especially in the context of fine-grained ballet movements. The dataset can be found at https://github.com/dvanderhaar/UJAnnChor.
KW - Ballet Dataset
KW - Fine-grained Temporal Action Localization
KW - Video Understanding
UR - http://www.scopus.com/inward/record.url?scp=85211816014&partnerID=8YFLogxK
DO - 10.1007/978-3-031-78341-8_13
M3 - Conference contribution
AN - SCOPUS:85211816014
SN - 9783031783401
T3 - Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
SP - 194
EP - 209
BT - Pattern Recognition - 27th International Conference, ICPR 2024, Proceedings
A2 - Antonacopoulos, Apostolos
A2 - Chaudhuri, Subhasis
A2 - Chellappa, Rama
A2 - Liu, Cheng-Lin
A2 - Bhattacharya, Saumik
A2 - Pal, Umapada
PB - Springer Science and Business Media Deutschland GmbH
Y2 - 1 December 2024 through 5 December 2024
ER -