Learning to Detect Novel and Fine-Grained Acoustic Sequences Using Pretrained Audio Representations

This work investigates pre-trained audio representations for few-shot Sound Event Detection. We specifically address the task of few-shot detection of novel acoustic sequences, or sound events with semantically meaningful temporal structure, without assuming access to non-target audio. We develop procedures for pre-training suitable representations, and methods which transfer them to our few-shot learning scenario. Our experiments evaluate the general-purpose utility of our pre-trained representations on AudioSet, and the utility of proposed few-shot methods via tasks constructed from…

Source: Apple Machine Learning Research
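The abstract describes detecting a novel sound event from only a few labeled examples, without access to non-target audio. One common way to realize this setting, sketched below as an assumption (the paper's actual method is not specified in this excerpt), is nearest-prototype matching: average the pretrained embeddings of the few support clips into a prototype, then flag query frames whose embedding is sufficiently similar to it. The function name `few_shot_detect` and the fixed similarity threshold are illustrative, not from the paper.

```python
import numpy as np

def few_shot_detect(support: np.ndarray, query_frames: np.ndarray,
                    threshold: float = 0.5) -> np.ndarray:
    """Hypothetical few-shot detector over pretrained embeddings.

    support: (k, d) embeddings of the k labeled examples of the target event.
    query_frames: (n, d) per-frame embeddings of the audio to scan.
    Returns a boolean mask of shape (n,) marking detected frames.
    Uses only positive examples, mirroring the "no non-target audio" setting.
    """
    # Prototype = mean of the support embeddings.
    prototype = support.mean(axis=0)

    # Cosine similarity of each query frame to the prototype.
    def unit(x: np.ndarray) -> np.ndarray:
        return x / np.linalg.norm(x, axis=-1, keepdims=True)

    sims = unit(query_frames) @ unit(prototype)
    return sims >= threshold
```

A detector like this needs no negative class: the decision is a thresholded distance to the target prototype, which is why the quality of the pretrained embedding space matters so much in this scenario.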
