TY - JOUR
T1 - Online, Loudness-Invariant Vocal Detection in Mixed Music Signals
AU - Lehner, Bernhard
AU - Schlüter, Jan
AU - Widmer, Gerhard
PY - 2018/8/1
Y1 - 2018/8/1
N2 - Singing voice detection, also referred to as vocal detection (VD), aims to automatically identify the regions in a music recording where at least one person sings. It is highly challenging due to the timbral and expressive richness of the human singing voice, as well as the practically endless variety of interfering instrumental accompaniment. Additionally, certain instruments risk being misclassified as vocals because of similarities in their sound production mechanisms. In this paper, we present a machine learning approach, based on our previous work on VD, that is specifically designed to deal with these challenging conditions. The contribution of this paper is threefold. First, we present a new VD method that passes a compact set of features to a long short-term memory recurrent neural network classifier and obtains state-of-the-art results. Second, we thoroughly evaluate the proposed method along with related approaches in order to probe the methods' weaknesses; to enable such a thorough evaluation, we make a curated collection of datasets available to the research community. Finally, we focus on a problem that had not previously been discussed in the literature, precisely because limited evaluations had not revealed it: the lack of loudness invariance. We discuss the implications of using loudness-related features and show that our method successfully handles this problem owing to the specific set of features it uses.
KW - Instruments
KW - Feature extraction
KW - Speech
KW - Speech processing
KW - Spectrogram
KW - Task analysis
KW - Hidden Markov models
UR - https://ieeexplore.ieee.org/document/8334252/
U2 - 10.1109/TASLP.2018.2825108
DO - 10.1109/TASLP.2018.2825108
M3 - Article
SN - 2329-9304
VL - 26
SP - 1369
EP - 1380
JO - IEEE/ACM Transactions on Audio, Speech, and Language Processing
JF - IEEE/ACM Transactions on Audio, Speech, and Language Processing
IS - 8
M1 - 8334252
ER -