CN-Celeb
A large-scale multi-genre speaker recognition dataset
CN-Celeb is a multi-genre dataset covering 11 different genres of real-world speech, collected from multiple Chinese open media sources.

3,000

Speakers

CN-Celeb contains speech from Chinese celebrities.

600,000+

Utterances

CN-Celeb covers multiple genres of speech, including entertainment, interview, singing, play, movie, vlog, live broadcast, speech, drama, recitation and advertisement.

1,200+

Hours

CN-Celeb poses a challenging mix of long and short utterances, matching the conditions of most real applications.

Download

The dataset consists of two subsets, CN-Celeb1 and CN-Celeb2, with no speaker overlap between them. For each subset, we provide audio files and speaker metadata. CN-Celeb1 contains more than 125,000 utterances from 997 Chinese celebrities, and CN-Celeb2 contains more than 520,000 utterances from 1,996 Chinese celebrities.
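Each utterance in the release is identified by its speaker ID and a name that encodes its genre. As a minimal sketch (the exact path layout `speaker/genre-session-index.ext` is an assumption here; consult the metadata files shipped with the download), one might parse a relative path like this:

```python
from pathlib import PurePosixPath

def parse_utterance(rel_path: str):
    """Split a CN-Celeb-style relative path, e.g. 'id10001/singing-01-003.flac',
    into (speaker_id, genre, session, index).

    NOTE: this naming convention is an assumption for illustration;
    verify it against the metadata in the actual release.
    """
    p = PurePosixPath(rel_path)
    speaker = p.parent.name               # directory name is the speaker ID
    genre, session, index = p.stem.split("-")  # filename encodes genre/session/index
    return speaker, genre, session, index
```

A parser like this makes it easy to tally utterances per genre or to build genre-balanced trial lists.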

License

All the resources contained in the dataset are free for research institutions and individuals. The copyright remains with the original owners of the audio/video. No commercial usage is permitted.

Publications

Publications based on the dataset are welcome to cite the following papers:

Y. Fan, J. W. Kang, L. T. Li, K. C. Li, H. L. Chen, S. T. Cheng, P. Y. Zhang, Z. Y. Zhou, Y. Q. Cai, D. Wang*

CN-Celeb: A Challenging Chinese Speaker Recognition Dataset, ICASSP, 2020



L. T. Li, R. Q. Liu, J. W. Kang, Y. Fan, H. Cui, Y. Q. Cai, R. Vipperla, T. F. Zheng, D. Wang*

CN-Celeb: multi-genre speaker recognition, Speech Communication, 2022



Challenge

We are hosting the first CN-Celeb Speaker Recognition Challenge (CNSRC) at Odyssey 2022 (The Speaker and Language Recognition Workshop). CNSRC aims to evaluate how well current speaker recognition methods work in real-world scenarios, which typically involve in-the-wild complexity and real-time processing requirements. CNSRC consists of two parts: an evaluation challenge and an accompanying workshop. The challenge website can be found here, and the workshop website can be found here.

Acknowledgements

This work is supported by the National Natural Science Foundation of China (NSFC) under Grants No. 61633013 and No. 62171250.