
# Voiceprint Recognition System Based on PaddlePaddle


This branch is version 1.0; to use the previous 0.3 release, switch to the 0.x branch. The project implements several state-of-the-art speaker verification models, including EcapaTdnn, ResNetSE, ERes2Net, and CAM++, and more models may be added later. It also supports multiple data preprocessing methods such as MelSpectrogram, Spectrogram, MFCC, and Fbank. The default loss is ArcFace Loss (Additive Angular Margin Loss), called AAMLoss in this project: it normalizes both the feature vectors and the classifier weights, then adds an angular margin m to the angle θ; compared with a cosine margin, an angular margin penalizes the angle more directly. Beyond that, AMLoss, ARMLoss, CELoss, and other loss functions are also supported.
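For reference, the additive angular margin objective described above is commonly written as follows, where $s$ is the scale factor, $m$ the angular margin (matching the `scale` and `margin` entries under `loss_conf` in the training log below), and $\theta_{y_i}$ the angle between the normalized embedding and the weight vector of the true speaker class:

$$
\mathcal{L}_{\text{AAM}} = -\frac{1}{N}\sum_{i=1}^{N} \log \frac{e^{s\,\cos(\theta_{y_i} + m)}}{e^{s\,\cos(\theta_{y_i} + m)} + \sum_{j \neq y_i} e^{s\,\cos\theta_j}}
$$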

You are welcome to scan the QR codes to join the Knowledge Planet (知识星球) or the QQ group for discussion. The Knowledge Planet provides the model files for this project and for the author's other related projects, as well as other resources.

(Knowledge Planet and QQ group QR codes)

## Environment

- Anaconda 3
- Python 3.11
- PaddlePaddle 2.5.1
- Windows 10 or Ubuntu 18.04

## Project Features

1. Supported models: EcapaTdnn, TDNN, Res2Net, ResNetSE, ERes2Net, CAM++
2. Supported pooling layers: AttentiveStatsPool (ASP), SelfAttentivePooling (SAP), TemporalStatisticsPooling (TSP), TemporalAveragePooling (TAP), TemporalStatsPool (TSTP)
3. Supported loss functions: AAMLoss, SphereFace2, AMLoss, ARMLoss, CELoss
4. Supported preprocessing methods: MelSpectrogram, Spectrogram, MFCC, Fbank

## Model Downloads

Trained on the CN-Celeb data, with 2,796 speakers in total:

| Model | Params (M) | Preprocessing | Dataset | Train Speakers | Threshold | EER | MinDCF | Download |
|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
| CAM++ | 6.8 | Fbank | CN-Celeb | 2796 | 0.25 | 0.09485 | 0.56214 | Available in the Knowledge Planet |
| ERes2Net | 6.6 | Fbank | CN-Celeb | 2796 | 0.22 | 0.09637 | 0.52627 | Available in the Knowledge Planet |
| ResNetSE | 7.8 | Fbank | CN-Celeb | 2796 | 0.19 | 0.10222 | 0.57981 | Available in the Knowledge Planet |
| EcapaTdnn | 6.1 | Fbank | CN-Celeb | 2796 | 0.25 | 0.10465 | 0.58521 | Available in the Knowledge Planet |
| TDNN | 2.6 | Fbank | CN-Celeb | 2796 | 0.23 | 0.11804 | 0.61070 | Available in the Knowledge Planet |
| Res2Net | 5.0 | Fbank | CN-Celeb | 2796 | 0.18 | 0.14126 | 0.68511 | Available in the Knowledge Planet |
| CAM++ | 6.8 | Fbank | Larger dataset | 20k+ | 0.34 | 0.07884 | 0.52738 | Available in the Knowledge Planet |
| CAM++ | 6.8 | Fbank | Other dataset | 200k | 0.29 | 0.04768 | 0.31429 | Available in the Knowledge Planet |
| ERes2Net | 55.1 | Fbank | Other dataset | 200k | 0.36 | 0.02939 | 0.18355 | Available in the Knowledge Planet |

Notes:

1. The evaluation test set is the CN-Celeb test set, containing 196 speakers.
2. Speed-perturbation augmentation that triples the number of classes was enabled (speed_perturb_3_class: True); see the configuration sketch after this list.
3. The parameter counts do not include the classifier's parameters.
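For illustration, the speed-perturbation option above lives in the dataset configuration; here is a minimal sketch, with key names taken from this README and the training log below, and with the exact placement in the YAML assumed:

```yaml
dataset_conf:
  speed_perturb: True
  # Assumed semantics: treat each speaker's 0.9x/1.0x/1.1x speed-perturbed
  # copies as three separate classes, tripling the number of training classes.
  speed_perturb_3_class: True
```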

Trained on the VoxCeleb1&2 data, with 7,205 speakers in total:

| Model | Params (M) | Preprocessing | Dataset | Train Speakers | Threshold | EER | MinDCF | Download |
|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
| CAM++ | 6.8 | Fbank | VoxCeleb1&2 | 7205 | 0.27 | 0.03350 | 0.25726 | Available in the Knowledge Planet |
| ERes2Net | 6.6 | Fbank | VoxCeleb1&2 | 7205 | 0.21 | 0.03997 | 0.30614 | Available in the Knowledge Planet |
| ResNetSE | 7.8 | Fbank | VoxCeleb1&2 | 7205 | 0.21 | 0.03758 | 0.27625 | Available in the Knowledge Planet |
| EcapaTdnn | 6.1 | Fbank | VoxCeleb1&2 | 7205 | 0.27 | 0.02852 | 0.19432 | Available in the Knowledge Planet |
| TDNN | 2.6 | Fbank | VoxCeleb1&2 | 7205 | 0.25 | 0.03541 | 0.28130 | Available in the Knowledge Planet |
| Res2Net | 5.0 | Fbank | VoxCeleb1&2 | 7205 | 0.21 | 0.04749 | 0.44950 | Available in the Knowledge Planet |
| CAM++ | 6.8 | Fbank | Larger dataset | 20k+ | 0.28 | 0.03192 | 0.22032 | Available in the Knowledge Planet |
| CAM++ | 6.8 | Fbank | Other dataset | 200k+ | 0.49 | 0.10331 | 0.71206 | Available in the Knowledge Planet |
| ERes2Net | 55.1 | Fbank | Other dataset | 200k+ | 0.53 | 0.08903 | 0.62131 | Available in the Knowledge Planet |

Notes:

1. The evaluation test set is the VoxCeleb1&2 test set, containing 158 speakers.
2. Speed-perturbation augmentation that triples the number of classes was enabled (speed_perturb_3_class: True).
3. The parameter counts do not include the classifier's parameters.

## Installation

- First install the GPU version of PaddlePaddle. Skip this step if it is already installed.

```shell
conda install paddlepaddle-gpu==2.4.1 cudatoolkit=10.2 --channel https://mirrors.tuna.tsinghua.edu.cn/anaconda/cloud/Paddle/
```

- Then install the ppvector library.

Install with pip using the following command:

```shell
python -m pip install ppvector -U -i https://pypi.tuna.tsinghua.edu.cn/simple
```

Installing from source is recommended, since it guarantees you are using the latest code:

```shell
git clone https://github.com/yeyupiaoling/VoiceprintRecognition-PaddlePaddle.git
cd VoiceprintRecognition-PaddlePaddle/
pip install .
```

## Preparing Data

This tutorial uses CN-Celeb, a dataset with about 3,000 speakers and more than 650,000 utterances. After downloading, extract the dataset into the dataset directory; if you want to run evaluation, also download the CN-Celeb test set. If you have other, better datasets, you can mix them in, but it is best to first process the audio with the Python toolkit aukit for denoising and silence removal.

The first step is to create a data list in the format <audio file path>\t<speaker label>. This list makes subsequent reading easier and also makes it convenient to plug in other speech datasets. The speaker label is the speaker's unique ID; for each dataset you can write a corresponding list-generation function and merge everything into a single data list.

Run create_data.py to complete the data preparation:

```shell
python create_data.py
```

Running the program above produces data in the following format. To use custom data, follow this list format: each line contains the audio file's relative path followed by the speaker label for that audio, just like a classification task. One note for custom datasets: the IDs in the test list do not have to match the training IDs, i.e. test speakers do not need to appear in the training set, as long as each test speaker consistently maps to the same ID within the test list.

```
dataset/CN-Celeb2_flac/data/id11999/recitation-03-019.flac      2795
dataset/CN-Celeb2_flac/data/id11999/recitation-10-023.flac      2795
dataset/CN-Celeb2_flac/data/id11999/recitation-06-025.flac      2795
dataset/CN-Celeb2_flac/data/id11999/recitation-04-014.flac      2795
dataset/CN-Celeb2_flac/data/id11999/recitation-06-030.flac      2795
dataset/CN-Celeb2_flac/data/id11999/recitation-10-032.flac      2795
dataset/CN-Celeb2_flac/data/id11999/recitation-06-028.flac      2795
dataset/CN-Celeb2_flac/data/id11999/recitation-10-031.flac      2795
dataset/CN-Celeb2_flac/data/id11999/recitation-05-003.flac      2795
dataset/CN-Celeb2_flac/data/id11999/recitation-04-017.flac      2795
dataset/CN-Celeb2_flac/data/id11999/recitation-10-016.flac      2795
dataset/CN-Celeb2_flac/data/id11999/recitation-09-001.flac      2795
dataset/CN-Celeb2_flac/data/id11999/recitation-05-010.flac      2795
```
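If you need to build a list for your own dataset, a minimal sketch along these lines is enough; the my_dataset layout and the output path here are assumptions for illustration:

```python
import os

# Hypothetical layout: my_dataset/<speaker_name>/<utterance>.wav
data_root = 'my_dataset'
speakers = sorted(d for d in os.listdir(data_root)
                  if os.path.isdir(os.path.join(data_root, d)))
with open('dataset/train_list.txt', 'w', encoding='utf-8') as f:
    # Assign each speaker directory a unique integer label, as in the list above
    for label, speaker in enumerate(speakers):
        speaker_dir = os.path.join(data_root, speaker)
        for name in sorted(os.listdir(speaker_dir)):
            if name.endswith(('.wav', '.flac')):
                f.write(f'{os.path.join(speaker_dir, name)}\t{label}\n')
```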

## Changing the Preprocessing Method

The configuration file uses the Fbank preprocessing method by default. To use a different method, modify the configuration file as shown below, adjusting the values to your own needs. If you are unsure how to set the parameters, you can delete this section entirely and the defaults will be used.

```yaml
# Data preprocessing parameters
preprocess_conf:
  # Audio preprocessing method; supports: LogMelSpectrogram, MelSpectrogram, Spectrogram, MFCC, Fbank
  feature_method: 'Fbank'
  # Arguments for the corresponding API; see that API for more options. If unsure, delete this part to use the defaults.
  method_args:
    sr: 16000
    n_mels: 80
```
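For example, switching to MFCC might look like the following; n_mfcc is an assumed argument name, so check the corresponding API for the exact parameters:

```yaml
preprocess_conf:
  feature_method: 'MFCC'
  method_args:
    sr: 16000
    n_mfcc: 40   # assumed argument name; consult the MFCC API
```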

## Training

Train the model with train.py. The project supports several audio preprocessing methods, selected via the preprocess_conf.feature_method parameter in the configs/ecapa_tdnn.yml configuration file: MelSpectrogram is the mel spectrogram, Spectrogram the linear spectrogram, and MFCC the mel-frequency cepstral coefficients. The augment_conf_path parameter specifies the data augmentation configuration. During training, logs are saved with VisualDL; start VisualDL at any time to inspect progress with visualdl --logdir=log --host 0.0.0.0.

```shell
# Single-GPU training
CUDA_VISIBLE_DEVICES=0 python train.py
# Multi-GPU training
python -m paddle.distributed.launch --gpus '0,1' train.py
```
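The pretrained_model and resume_model arguments printed in the log below suggest how to warm-start or resume a run; the checkpoint path here is a hypothetical example:

```shell
# Resume an interrupted run from a saved checkpoint (path is illustrative)
CUDA_VISIBLE_DEVICES=0 python train.py --resume_model=models/EcapaTdnn_Fbank/last_model/
```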

Training log output:

```
[2023-08-05 09:52:06.497988 INFO   ] utils:print_arguments:13 - ----------- Additional configuration arguments -----------
[2023-08-05 09:52:06.498094 INFO   ] utils:print_arguments:15 - configs: configs/ecapa_tdnn.yml
[2023-08-05 09:52:06.498149 INFO   ] utils:print_arguments:15 - do_eval: True
[2023-08-05 09:52:06.498191 INFO   ] utils:print_arguments:15 - local_rank: 0
[2023-08-05 09:52:06.498230 INFO   ] utils:print_arguments:15 - pretrained_model: None
[2023-08-05 09:52:06.498269 INFO   ] utils:print_arguments:15 - resume_model: None
[2023-08-05 09:52:06.498306 INFO   ] utils:print_arguments:15 - save_model_path: models/
[2023-08-05 09:52:06.498342 INFO   ] utils:print_arguments:15 - use_gpu: True
[2023-08-05 09:52:06.498378 INFO   ] utils:print_arguments:16 - ------------------------------------------------
[2023-08-05 09:52:06.513761 INFO   ] utils:print_arguments:18 - ----------- Configuration file parameters -----------
[2023-08-05 09:52:06.513906 INFO   ] utils:print_arguments:21 - dataset_conf:
[2023-08-05 09:52:06.513957 INFO   ] utils:print_arguments:24 -         dataLoader:
[2023-08-05 09:52:06.513995 INFO   ] utils:print_arguments:26 -                 batch_size: 64
[2023-08-05 09:52:06.514031 INFO   ] utils:print_arguments:26 -                 num_workers: 4
[2023-08-05 09:52:06.514066 INFO   ] utils:print_arguments:28 -         do_vad: False
[2023-08-05 09:52:06.514101 INFO   ] utils:print_arguments:28 -         enroll_list: dataset/enroll_list.txt
[2023-08-05 09:52:06.514135 INFO   ] utils:print_arguments:24 -         eval_conf:
[2023-08-05 09:52:06.514169 INFO   ] utils:print_arguments:26 -                 batch_size: 1
[2023-08-05 09:52:06.514203 INFO   ] utils:print_arguments:26 -                 max_duration: 20
[2023-08-05 09:52:06.514237 INFO   ] utils:print_arguments:28 -         max_duration: 3
[2023-08-05 09:52:06.514274 INFO   ] utils:print_arguments:28 -         min_duration: 0.5
[2023-08-05 09:52:06.514308 INFO   ] utils:print_arguments:28 -         noise_aug_prob: 0.2
[2023-08-05 09:52:06.514342 INFO   ] utils:print_arguments:28 -         noise_dir: dataset/noise
[2023-08-05 09:52:06.514374 INFO   ] utils:print_arguments:28 -         num_speakers: 3242
[2023-08-05 09:52:06.514408 INFO   ] utils:print_arguments:28 -         sample_rate: 16000
[2023-08-05 09:52:06.514441 INFO   ] utils:print_arguments:28 -         speed_perturb: True
[2023-08-05 09:52:06.514475 INFO   ] utils:print_arguments:28 -         target_dB: -20
[2023-08-05 09:52:06.514508 INFO   ] utils:print_arguments:28 -         train_list: dataset/train_list.txt
[2023-08-05 09:52:06.514542 INFO   ] utils:print_arguments:28 -         trials_list: dataset/trials_list.txt
[2023-08-05 09:52:06.514575 INFO   ] utils:print_arguments:28 -         use_dB_normalization: True
[2023-08-05 09:52:06.514609 INFO   ] utils:print_arguments:21 - loss_conf:
[2023-08-05 09:52:06.514643 INFO   ] utils:print_arguments:24 -         args:
[2023-08-05 09:52:06.514678 INFO   ] utils:print_arguments:26 -                 easy_margin: False
[2023-08-05 09:52:06.514713 INFO   ] utils:print_arguments:26 -                 margin: 0.2
[2023-08-05 09:52:06.514746 INFO   ] utils:print_arguments:26 -                 scale: 32
[2023-08-05 09:52:06.514779 INFO   ] utils:print_arguments:24 -         margin_scheduler_args:
[2023-08-05 09:52:06.514814 INFO   ] utils:print_arguments:26 -                 final_margin: 0.3
[2023-08-05 09:52:06.514848 INFO   ] utils:print_arguments:28 -         use_loss: AAMLoss
[2023-08-05 09:52:06.514882 INFO   ] utils:print_arguments:28 -         use_margin_scheduler: True
[2023-08-05 09:52:06.514915 INFO   ] utils:print_arguments:21 - model_conf:
[2023-08-05 09:52:06.514950 INFO   ] utils:print_arguments:24 -         backbone:
[2023-08-05 09:52:06.514984 INFO   ] utils:print_arguments:26 -                 embd_dim: 192
[2023-08-05 09:52:06.515017 INFO   ] utils:print_arguments:26 -                 pooling_type: ASP
[2023-08-05 09:52:06.515050 INFO   ] utils:print_arguments:24 -         classifier:
[2023-08-05 09:52:06.515084 INFO   ] utils:print_arguments:26 -                 num_blocks: 0
[2023-08-05 09:52:06.515118 INFO   ] utils:print_arguments:21 - optimizer_conf:
[2023-08-05 09:52:06.515154 INFO   ] utils:print_arguments:28 -         learning_rate: 0.001
[2023-08-05 09:52:06.515188 INFO   ] utils:print_arguments:28 -         optimizer: Adam
[2023-08-05 09:52:06.515221 INFO   ] utils:print_arguments:28 -         scheduler: CosineAnnealingLR
[2023-08-05 09:52:06.515254 INFO   ] utils:print_arguments:28 -         scheduler_args: None
[2023-08-05 09:52:06.515289 INFO   ] utils:print_arguments:28 -         weight_decay: 1e-06
[2023-08-05 09:52:06.515323 INFO   ] utils:print_arguments:21 - preprocess_conf:
[2023-08-05 09:52:06.515357 INFO   ] utils:print_arguments:28 -         feature_method: MelSpectrogram
[2023-08-05 09:52:06.515390 INFO   ] utils:print_arguments:24 -         method_args:
[2023-08-05 09:52:06.515426 INFO   ] utils:print_arguments:26 -                 f_max: 14000.0
[2023-08-05 09:52:06.515460 INFO   ] utils:print_arguments:26 -                 f_min: 50.0
[2023-08-05 09:52:06.515493 INFO   ] utils:print_arguments:26 -                 hop_length: 320
[2023-08-05 09:52:06.515527 INFO   ] utils:print_arguments:26 -                 n_fft: 1024
[2023-08-05 09:52:06.515560 INFO   ] utils:print_arguments:26 -                 n_mels: 64
[2023-08-05 09:52:06.515593 INFO   ] utils:print_arguments:26 -                 sample_rate: 16000
[2023-08-05 09:52:06.515626 INFO   ] utils:print_arguments:26 -                 win_length: 1024
[2023-08-05 09:52:06.515660 INFO   ] utils:print_arguments:21 - train_conf:
[2023-08-05 09:52:06.515694 INFO   ] utils:print_arguments:28 -         log_interval: 100
[2023-08-05 09:52:06.515728 INFO   ] utils:print_arguments:28 -         max_epoch: 30
[2023-08-05 09:52:06.515761 INFO   ] utils:print_arguments:30 - use_model: EcapaTdnn
[2023-08-05 09:52:06.515794 INFO   ] utils:print_arguments:31 - ------------------------------------------------
----------------------------------------------------------------------------------------
        Layer (type)             Input Shape          Output Shape         Param #    
========================================================================================
          Conv1D-2              [[1, 64, 102]]        [1, 512, 98]         164,352    
          Conv1d-1              [[1, 64, 98]]         [1, 512, 98]            0       
           ReLU-1               [[1, 512, 98]]        [1, 512, 98]            0       
       BatchNorm1D-2            [[1, 512, 98]]        [1, 512, 98]          2,048     
       BatchNorm1d-1            [[1, 512, 98]]        [1, 512, 98]            0       
        TDNNBlock-1             [[1, 64, 98]]         [1, 512, 98]            0       
          Conv1D-4              [[1, 512, 98]]        [1, 512, 98]         262,656    
          Conv1d-3              [[1, 512, 98]]        [1, 512, 98]            0       
           ReLU-2               [[1, 512, 98]]        [1, 512, 98]            0       
       BatchNorm1D-4            [[1, 512, 98]]        [1, 512, 98]          2,048     
       BatchNorm1d-3            [[1, 512, 98]]        [1, 512, 98]            0       
        TDNNBlock-2             [[1, 512, 98]]        [1, 512, 98]            0       
··········································
         SEBlock-3           [[1, 512, 98], None]     [1, 512, 98]            0       
      SERes2NetBlock-3          [[1, 512, 98]]        [1, 512, 98]            0       
         Conv1D-70             [[1, 1536, 98]]       [1, 1536, 98]        2,360,832   
         Conv1d-69             [[1, 1536, 98]]       [1, 1536, 98]            0       
          ReLU-32              [[1, 1536, 98]]       [1, 1536, 98]            0       
       BatchNorm1D-58          [[1, 1536, 98]]       [1, 1536, 98]          6,144     
       BatchNorm1d-57          [[1, 1536, 98]]       [1, 1536, 98]            0       
        TDNNBlock-29           [[1, 1536, 98]]       [1, 1536, 98]            0       
         Conv1D-72             [[1, 4608, 98]]        [1, 128, 98]         589,952    
         Conv1d-71             [[1, 4608, 98]]        [1, 128, 98]            0       
          ReLU-33               [[1, 128, 98]]        [1, 128, 98]            0       
       BatchNorm1D-60           [[1, 128, 98]]        [1, 128, 98]           512      
       BatchNorm1d-59           [[1, 128, 98]]        [1, 128, 98]            0       
        TDNNBlock-30           [[1, 4608, 98]]        [1, 128, 98]            0       
           Tanh-1               [[1, 128, 98]]        [1, 128, 98]            0       
         Conv1D-74              [[1, 128, 98]]       [1, 1536, 98]         198,144    
         Conv1d-73              [[1, 128, 98]]       [1, 1536, 98]            0       
AttentiveStatisticsPooling-1   [[1, 1536, 98]]        [1, 3072, 1]            0       
       BatchNorm1D-62           [[1, 3072, 1]]        [1, 3072, 1]         12,288     
       BatchNorm1d-61           [[1, 3072, 1]]        [1, 3072, 1]            0       
         Conv1D-76              [[1, 3072, 1]]        [1, 192, 1]          590,016    
         Conv1d-75              [[1, 3072, 1]]        [1, 192, 1]             0       
        EcapaTdnn-1             [[1, 98, 64]]           [1, 192]              0       
  SpeakerIdentification-1         [[1, 192]]           [1, 9726]          1,867,392   
========================================================================================
Total params: 8,039,808
Trainable params: 8,020,480
Non-trainable params: 19,328
----------------------------------------------------------------------------------------
Input size (MB): 0.02
Forward/backward pass size (MB): 35.60
Params size (MB): 30.67
Estimated Total Size (MB): 66.30
----------------------------------------------------------------------------------------
[2023-08-05 09:52:08.084231 INFO   ] trainer:train:388 - Training samples: 874175
[2023-08-05 09:52:09.186542 INFO   ] trainer:__train_epoch:334 - Train epoch: [1/30], batch: [0/13659], loss: 11.95824, accuracy: 0.00000, learning rate: 0.00100000, speed: 58.09 data/sec, eta: 5 days, 5:24:08
[2023-08-05 09:52:22.477905 INFO   ] trainer:__train_epoch:334 - Train epoch: [1/30], batch: [100/13659], loss: 10.35675, accuracy: 0.00278, learning rate: 0.00100000, speed: 481.65 data/sec, eta: 15:07:15
[2023-08-05 09:52:35.948581 INFO   ] trainer:__train_epoch:334 - Train epoch: [1/30], batch: [200/13659], loss: 10.22089, accuracy: 0.00505, learning rate: 0.00100000, speed: 475.27 data/sec, eta: 15:19:12
[2023-08-05 09:52:49.249098 INFO   ] trainer:__train_epoch:334 - Train epoch: [1/30], batch: [300/13659], loss: 10.00268, accuracy: 0.00706, learning rate: 0.00100000, speed: 481.45 data/sec, eta: 15:07:11
[2023-08-05 09:53:03.716015 INFO   ] trainer:__train_epoch:334 - Train epoch: [1/30], batch: [400/13659], loss: 9.76052, accuracy: 0.00830, learning rate: 0.00100000, speed: 442.74 data/sec, eta: 16:26:16
[2023-08-05 09:53:18.258807 INFO   ] trainer:__train_epoch:334 - Train epoch: [1/30], batch: [500/13659], loss: 9.50189, accuracy: 0.01060, learning rate: 0.00100000, speed: 440.46 data/sec, eta: 16:31:08
[2023-08-05 09:53:31.618354 INFO   ] trainer:__train_epoch:334 - Train epoch: [1/30], batch: [600/13659], loss: 9.26083, accuracy: 0.01256, learning rate: 0.00100000, speed: 479.50 data/sec, eta: 15:10:12
[2023-08-05 09:53:45.439642 INFO   ] trainer:__train_epoch:334 - Train epoch: [1/30], batch: [700/13659], loss: 9.03548, accuracy: 0.01449, learning rate: 0.00099999, speed: 463.63 data/sec, eta: 15:41:08
```

Start VisualDL with visualdl --logdir=log --host 0.0.0.0; the VisualDL page looks like this:

(VisualDL page screenshot)

## Model Evaluation

After training finishes, the saved prediction model is used to extract embeddings for the audio in the test set; the embeddings are then compared pairwise to compute the EER and MinDCF.

```shell
python eval.py
```

The output looks similar to this:

```
······
------------------------------------------------
W0425 08:27:32.057426 17654 device_context.cc:447] Please NOTE: device: 0, GPU Compute Capability: 7.5, Driver API Version: 11.6, Runtime API Version: 10.2
W0425 08:27:32.065165 17654 device_context.cc:465] device: 0, cuDNN Version: 7.6.
[2023-03-16 20:20:47.195908 INFO   ] trainer:evaluate:341 - Successfully loaded model: models/EcapaTdnn_Fbank/best_model/model.pth
100%|███████████████████████████| 84/84 [00:28<00:00,  2.95it/s]
Starting pairwise comparison of audio features...
100%|███████████████████████████| 5332/5332 [00:05<00:00, 1027.83it/s]
Evaluation time: 65s, threshold: 0.26, EER: 0.14739, MinDCF: 0.41999
```
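For intuition, the EER reported above can be computed from the pairwise scores roughly as follows; this is a minimal numpy sketch, not necessarily how eval.py implements it:

```python
import numpy as np

def compute_eer(scores, labels):
    """Equal error rate: the operating point where the false-accept rate (FAR)
    and the false-reject rate (FRR) are equal.
    scores: similarity per trial pair; labels: 1 = same speaker, 0 = different."""
    scores, labels = np.asarray(scores, dtype=float), np.asarray(labels)
    order = np.argsort(scores)[::-1]              # accept the highest-scoring trials first
    labels = labels[order]
    far = np.cumsum(1 - labels) / max((1 - labels).sum(), 1)  # accepted impostors / impostors
    frr = 1 - np.cumsum(labels) / max(labels.sum(), 1)        # rejected targets / targets
    idx = int(np.argmin(np.abs(far - frr)))
    return (far[idx] + frr[idx]) / 2, scores[order][idx]      # EER and its threshold

# usage: eer, threshold = compute_eer(scores, labels)
```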

## Inference API

Several commonly used interfaces are shown below; see ppvector/predict.py for more, or read the voiceprint comparison and voiceprint recognition examples further down.

```python
from ppvector.predict import PPVectorPredictor

predictor = PPVectorPredictor(configs='configs/cam++.yml',
                              model_path='models/CAMPPlus_Fbank/best_model/')
# Extract the embedding of an audio file
embedding = predictor.predict(audio_data='dataset/a_1.wav')
# Compute the similarity of two audio files
similarity = predictor.contrast(audio_data1='dataset/a_1.wav', audio_data2='dataset/a_2.wav')

# Register a user's audio
predictor.register(user_name='夜雨飘零', audio_data='dataset/test.wav')
# Recognize the user from an audio file
name, score = predictor.recognition(audio_data='dataset/test1.wav')
# Get all registered users
users_name = predictor.get_users()
# Remove a user's audio
predictor.remove_user(user_name='夜雨飘零')
```

## Voiceprint Comparison

Now we implement voiceprint comparison with the infer_contrast.py program. A few important functions first: predict() extracts the embedding of one utterance, predict_batch() extracts embeddings for a batch of utterances, contrast() compares the similarity of two audio files, register() registers an utterance into the voiceprint database, recognition() takes an utterance and identifies its speaker against the database, and remove_user() removes a registered speaker from the database. We feed in two utterances, obtain their embeddings through the prediction function, and compute the cosine similarity of the two embeddings; the result serves as their similarity score. You can adjust the similarity threshold according to the accuracy requirements of your own project.
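A minimal sketch of that comparison, using the predict() interface from the previous section and numpy; the embeddings are assumed to be 1-D vectors, and the built-in contrast() may differ in detail:

```python
import numpy as np
from ppvector.predict import PPVectorPredictor

predictor = PPVectorPredictor(configs='configs/ecapa_tdnn.yml',
                              model_path='models/EcapaTdnn_Fbank/best_model/')
# One embedding per utterance (assumed to be a 1-D vector)
e1 = predictor.predict(audio_data='audio/a_1.wav')
e2 = predictor.predict(audio_data='audio/b_2.wav')
# Cosine similarity of the two embeddings, used as the similarity score
score = float(np.dot(e1, e2) / (np.linalg.norm(e1) * np.linalg.norm(e2)))
print('same speaker' if score > 0.6 else 'different speakers', score)
```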

```shell
python infer_contrast.py --audio_path1=audio/a_1.wav --audio_path2=audio/b_2.wav
```

The output looks similar to this:

```
[2023-04-02 18:30:48.009149 INFO   ] utils:print_arguments:13 - ----------- Additional configuration arguments -----------
[2023-04-02 18:30:48.009149 INFO   ] utils:print_arguments:15 - audio_path1: dataset/a_1.wav
[2023-04-02 18:30:48.009149 INFO   ] utils:print_arguments:15 - audio_path2: dataset/b_2.wav
[2023-04-02 18:30:48.009149 INFO   ] utils:print_arguments:15 - configs: configs/ecapa_tdnn.yml
[2023-04-02 18:30:48.009149 INFO   ] utils:print_arguments:15 - model_path: models/EcapaTdnn_Fbank/best_model/
[2023-04-02 18:30:48.009149 INFO   ] utils:print_arguments:15 - threshold: 0.6
[2023-04-02 18:30:48.009149 INFO   ] utils:print_arguments:15 - use_gpu: True
[2023-04-02 18:30:48.009149 INFO   ] utils:print_arguments:16 - ------------------------------------------------
······································································
W0425 08:29:10.006249 21121 device_context.cc:447] Please NOTE: device: 0, GPU Compute Capability: 7.5, Driver API Version: 11.6, Runtime API Version: 10.2
W0425 08:29:10.008555 21121 device_context.cc:465] device: 0, cuDNN Version: 7.6.
Successfully loaded model parameters and optimizer parameters: models/ecapa_tdnn/model.pdparams
audio/a_1.wav and audio/b_2.wav are not the same person; similarity: -0.09565544128417969
```

A GUI version of the voiceprint comparison program is also provided. Run infer_contrast_gui.py to start it; in the window shown below, select two audio files and click the comparison button to judge whether they come from the same person.

(Voiceprint comparison GUI screenshot)

## Voiceprint Recognition

Voiceprint recognition mainly uses the register() and recognition() functions. First use register() to register audio into the voiceprint database; you can also simply place files in the audio_db folder. Then call recognition() with an utterance to identify the required speaker from the database.

With the functions above, you can implement voiceprint recognition in whatever way your project requires; the example below, and the sketch after this paragraph, do it through microphone recording. The program must first load the utterances in the voiceprint database folder audio_db; then, after the user presses Enter, it records for 3 seconds, runs voiceprint recognition on the recording, and matches it against the database to retrieve the user's information. You could likewise turn this into a service-based flow, for example an API for an app: when a user logs in by voice, the recorded audio is sent to the backend, recognition is performed there, and the result is returned to the app, provided the user has already registered and the registration audio is stored in the audio_db folder.
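A minimal sketch of the record-then-recognize loop; sounddevice and soundfile are assumed dependencies for recording and saving, and whether the predictor constructor accepts audio_db_path directly is also an assumption of this sketch (infer_recognition.py may use a different audio library internally):

```python
import sounddevice as sd
import soundfile as sf
from ppvector.predict import PPVectorPredictor

# audio_db_path mirrors the argument printed by infer_recognition.py below
predictor = PPVectorPredictor(configs='configs/ecapa_tdnn.yml',
                              model_path='models/EcapaTdnn_Fbank/best_model/',
                              audio_db_path='audio_db/')

input('Press Enter to start a 3-second recording...')
sample_rate = 16000
audio = sd.rec(int(3 * sample_rate), samplerate=sample_rate, channels=1, dtype='float32')
sd.wait()                                   # block until the recording finishes
sf.write('record.wav', audio, sample_rate)  # save, then recognize by file path
name, score = predictor.recognition(audio_data='record.wav')
print(f'Recognized speaker: {name}, similarity: {score}')
```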

```shell
python infer_recognition.py
```

The output looks similar to this:

```
[2023-04-02 18:31:20.521040 INFO   ] utils:print_arguments:13 - ----------- Additional configuration arguments -----------
[2023-04-02 18:31:20.521040 INFO   ] utils:print_arguments:15 - audio_db_path: audio_db/
[2023-04-02 18:31:20.521040 INFO   ] utils:print_arguments:15 - configs: configs/ecapa_tdnn.yml
[2023-04-02 18:31:20.521040 INFO   ] utils:print_arguments:15 - model_path: models/EcapaTdnn_Fbank/best_model/
[2023-04-02 18:31:20.521040 INFO   ] utils:print_arguments:15 - record_seconds: 3
[2023-04-02 18:31:20.521040 INFO   ] utils:print_arguments:15 - threshold: 0.6
[2023-04-02 18:31:20.521040 INFO   ] utils:print_arguments:15 - use_gpu: True
[2023-04-02 18:31:20.521040 INFO   ] utils:print_arguments:16 - ------------------------------------------------
······································································
W0425 08:30:13.257884 23889 device_context.cc:447] Please NOTE: device: 0, GPU Compute Capability: 7.5, Driver API Version: 11.6, Runtime API Version: 10.2
W0425 08:30:13.260191 23889 device_context.cc:465] device: 0, cuDNN Version: 7.6.
Successfully loaded model parameters and optimizer parameters: models/ecapa_tdnn/model.pdparams
Loaded 沙瑞金 audio.
Loaded 李达康 audio.
Select a function, 0 to register audio into the voiceprint database, 1 to run voiceprint recognition: 0
Press Enter to start recording; recording for 3 seconds:
Recording started......
Recording finished!
Enter the name of this audio's user: 夜雨飘零
Select a function, 0 to register audio into the voiceprint database, 1 to run voiceprint recognition: 1
Press Enter to start recording; recording for 3 seconds:
Recording started......
Recording finished!
Recognized speaker: 夜雨飘零, similarity: 0.920434
```

A GUI version of the voiceprint recognition program is also provided. Run infer_recognition_gui.py to start it. Click the registration button and start speaking immediately; after 3 seconds of recording, enter the name of the person being registered. Then you can click the recognition button, speak immediately, and wait for the result after 3 seconds of recording. The delete-user button removes a user, and the real-time recognition button records and recognizes continuously.

(Voiceprint recognition GUI screenshot)

## Support the Author

Donate one yuan to support the author.

(Donation QR code)

## References

  1. https://github.com/PaddlePaddle/PaddleSpeech
  2. https://github.com/yeyupiaoling/PaddlePaddle-MobileFaceNets
  3. https://github.com/yeyupiaoling/PPASR
  4. https://github.com/alibaba-damo-academy/3D-Speaker
  5. https://github.com/wenet-e2e/wespeaker
