Conference paper, Year: 2019

Language Modelling for Sound Event Detection with Teacher Forcing and Scheduled Sampling

Abstract

A sound event detection (SED) method typically takes as input a sequence of audio frames and predicts the activities of sound events in each frame. In real-life recordings, sound events exhibit some temporal structure: for instance, a "car horn" will likely be followed by a "car passing by". While this temporal structure is widely exploited in sequence prediction tasks (e.g., in machine translation) through language models (LMs), it is not satisfactorily modelled in SED. In this work we propose a method that allows a recurrent neural network (RNN) to learn an LM for the SED task. The method conditions the input of the RNN on the activities of classes at the previous time step. We evaluate our method using F1 score and error rate (ER) on three publicly available datasets: TUT-SED Synthetic 2016, TUT Sound Events 2016, and TUT Sound Events 2017. With our method, the results show an increase of 9% and 2% in F1 score (higher is better) and a decrease of 7% and 2% in ER (lower is better) for the TUT Sound Events 2016 and 2017 datasets, respectively. Conversely, on the TUT-SED Synthetic 2016 dataset our method yields a decrease of 4% in F1 score and an increase of 7% in ER.
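To make the idea concrete, the following is a minimal sketch (in PyTorch) of conditioning an RNN-based SED model on the previous time step's class activities, combined with teacher forcing and scheduled sampling. It is an illustration under assumptions, not the authors' implementation: the class name, the GRU architecture, the 0.5 binarisation threshold, and the parameter `teacher_forcing_prob` are all hypothetical choices.

```python
# Sketch: RNN input at step t is the audio feature vector concatenated with the
# class activities of step t-1. During training, those activities come either
# from the ground truth (teacher forcing) or from the model's own previous
# prediction, chosen at random (scheduled sampling).
import torch
import torch.nn as nn


class FrameLevelSED(nn.Module):
    def __init__(self, n_features: int, n_classes: int, hidden: int = 128):
        super().__init__()
        # RNN input = audio features + previous-step class activities.
        self.rnn = nn.GRUCell(n_features + n_classes, hidden)
        self.classifier = nn.Linear(hidden, n_classes)
        self.n_classes = n_classes
        self.hidden = hidden

    def forward(self, x, targets=None, teacher_forcing_prob: float = 1.0):
        # x: (batch, time, n_features); targets: (batch, time, n_classes) or None.
        batch, time, _ = x.shape
        h = x.new_zeros(batch, self.hidden)
        prev_activities = x.new_zeros(batch, self.n_classes)
        outputs = []
        for t in range(time):
            rnn_in = torch.cat([x[:, t, :], prev_activities], dim=-1)
            h = self.rnn(rnn_in, h)
            probs = torch.sigmoid(self.classifier(h))
            outputs.append(probs)
            # Scheduled sampling: with probability teacher_forcing_prob feed the
            # ground-truth activities to the next step, otherwise feed the
            # model's own binarised predictions.
            if targets is not None and torch.rand(1).item() < teacher_forcing_prob:
                prev_activities = targets[:, t, :]
            else:
                prev_activities = (probs > 0.5).float().detach()
        return torch.stack(outputs, dim=1)  # (batch, time, n_classes)
```

In a typical scheduled-sampling setup, `teacher_forcing_prob` would be decayed over training epochs, and at inference the model is called with `targets=None` so that only its own predictions are fed back.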

Dates and versions

hal-03132165, version 1 (04-02-2021)

Identifiers

Cite

Konstantinos Drossos, Shayan Gharib, Paul Magron, Tuomas Virtanen. Language Modelling for Sound Event Detection with Teacher Forcing and Scheduled Sampling. IEEE AASP Challenge on Detection and Classification of Acoustic Scenes and Events Workshops (DCASE 2019), Oct 2019, New York, United States. ⟨hal-03132165⟩