VidHarm: A Clip Based Dataset for Harmful Content Detection
2022 (English) In: 2022 26th International Conference on Pattern Recognition (ICPR), IEEE, 2022, p. 1543-1549. Conference paper, Published paper (Refereed)
Abstract [en]
Automatically identifying harmful content in video is an important task with a wide range of applications. However, there is a lack of professionally labeled open datasets available. In this work, VidHarm, an open dataset of 3,589 video clips from film trailers annotated by professionals, is presented. An analysis of the dataset is performed, revealing, among other things, the relation between clip-level and trailer-level annotations. Audiovisual models are trained on the dataset, and an in-depth study of modeling choices is conducted. The results show that performance is greatly improved by combining the visual and audio modalities, pre-training on large-scale video recognition datasets, and class-balanced sampling. Lastly, biases of the trained models are investigated using discrimination probing. VidHarm is openly available, and further details can be found at https://vidharm.github.io/
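As a concrete illustration of one of the modeling choices mentioned in the abstract, the sketch below shows class-balanced sampling using PyTorch's WeightedRandomSampler. This is a minimal sketch under assumed names and label layout; it is not taken from the paper or the released VidHarm code.

```python
# Minimal sketch of class-balanced sampling with PyTorch.
# Dataset, label layout, and all names here are illustrative assumptions,
# not the authors' implementation.
from collections import Counter

import torch
from torch.utils.data import DataLoader, TensorDataset, WeightedRandomSampler

# Toy stand-in for clip-level labels (e.g. a small set of harmfulness classes).
labels = torch.randint(0, 4, (1000,))
dataset = TensorDataset(torch.randn(1000, 16), labels)

# Weight each sample by the inverse frequency of its class, so every class
# is drawn roughly equally often during training.
counts = Counter(labels.tolist())
weights = torch.tensor([1.0 / counts[int(y)] for y in labels], dtype=torch.double)

sampler = WeightedRandomSampler(weights, num_samples=len(weights), replacement=True)
loader = DataLoader(dataset, batch_size=32, sampler=sampler)

for features, y in loader:
    pass  # training step would go here
```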
Place, publisher, year, edition, pages
IEEE, 2022. p. 1543-1549
Series
International Conference on Pattern Recognition, ISSN 1051-4651
National Category
Computer Vision and Robotics (Autonomous Systems)
Identifiers
URN: urn:nbn:se:liu:diva-191876
DOI: 10.1109/ICPR56361.2022.9956148
ISI: 000897707601077
ISBN: 9781665490627 (electronic)
ISBN: 9781665490634 (print)
OAI: oai:DiVA.org:liu-191876
DiVA, id: diva2:1738691
Conference
26th International Conference on Pattern Recognition (ICPR) / 8th International Workshop on Image Mining - Theory and Applications (IMTA), Montreal, Canada, August 21-25, 2022
Note
Funding Agencies|ELLIIT; Strategic Area for ICT research - Swedish Government; Vinnova [2020-04057]; Wallenberg Artificial Intelligence, Autonomous Systems and Software Program (WASP) - Knut and Alice Wallenberg Foundation