
Single Channel Source Separation in the Wild - Conversational Speech in Realistic Environments

EasyChair Preprint no. 10685

5 pages · Date: August 7, 2023

Abstract

Recent progress in Single Channel Source Separation (SCSS) using deep neural networks has led to impressive performance gains, while also increasing model sizes and requiring tremendous data resources. This demand is covered by artificially composed speech and noise mixtures that do not capture the real-life characteristics of conversations taking place in noisy environments. This paper introduces a new dataset containing task-oriented dialogues spoken in a realistic environment and presents experimental results for two SCSS architectures: the Conv-TasNet and the transformer-based MossFormer. Overall, we observe a severe drop in performance of up to 4.3 dB (SI-SDR improvement) for the 8 kHz variant of the Conv-TasNet. For same-sex speaker pairs, the difference is even larger, up to 6 dB. Only the model using a 16 kHz sample rate performs on a comparable level for mixed-sex speaker pairs. Our findings illustrate the need for realistic data in both training and evaluation.
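For context, the SI-SDR metric reported above measures separation quality in dB after optimally rescaling the reference signal. A minimal pure-Python sketch of the standard definition follows; the function name `si_sdr` and the `eps` stabilizer are illustrative choices, not taken from the paper:

```python
import math

def si_sdr(estimate, reference, eps=1e-8):
    """Scale-Invariant Signal-to-Distortion Ratio in dB (higher is better)."""
    # Zero-mean both signals, as is conventional for SI-SDR.
    me = sum(estimate) / len(estimate)
    mr = sum(reference) / len(reference)
    e = [x - me for x in estimate]
    r = [x - mr for x in reference]
    # Project the estimate onto the reference to obtain the scaled target.
    alpha = sum(a * b for a, b in zip(e, r)) / (sum(b * b for b in r) + eps)
    target = [alpha * b for b in r]
    noise = [a - t for a, t in zip(e, target)]
    return 10 * math.log10((sum(t * t for t in target) + eps) /
                           (sum(n * n for n in noise) + eps))
```

Because of the optimal scaling step, a perfectly separated signal that differs from the reference only by gain still scores very high, which is why SI-SDR (and its improvement over the unprocessed mixture) is a common yardstick for separation systems.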

Keyphrases: conversational speech, GRASS Corpus, Mask-based Separation, realistic environment, Single Channel Source Separation

BibTeX entry
BibTeX does not have the right entry for preprints. This is a hack for producing the correct reference:
@booklet{EasyChair:10685,
  author = {Emil Berger and Barbara Schuppler and Martin Hagmüller and Franz Pernkopf},
  title = {Single Channel Source Separation in the Wild - Conversational Speech in Realistic Environments},
  howpublished = {EasyChair Preprint no. 10685},
  year = {EasyChair, 2023}}