What is the difference between MFCC and Fbank in speech processing?
Date: 2024-05-19 19:17:54
MFCC (Mel-frequency cepstral coefficients) and Fbank (filter bank) features are both common feature-extraction methods in audio signal processing, and they share most of their pipeline. Both pass the short-time power spectrum of the signal through a bank of mel-scaled triangular filters, which mimics how the human cochlea resolves frequency, and take the logarithm of each filter's energy; these log filterbank energies are the Fbank features. MFCC goes one step further and applies a discrete cosine transform (DCT) to the log energies, projecting them into a low-dimensional, largely decorrelated space and keeping only the first few coefficients. The main difference is therefore the final DCT step: Fbank features retain more spectral detail but are correlated across dimensions, while MFCCs are compact and decorrelated.
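As a quick illustration of the mel scale mentioned above, the Hz-to-mel conversion (using the common 2595·log10(1 + f/700) formula) can be sketched as follows; note how filters spaced evenly on the mel scale end up increasingly far apart in Hz:

```python
import numpy as np

def hz_to_mel(f_hz):
    # Common HTK-style formula: mel = 2595 * log10(1 + f / 700)
    return 2595.0 * np.log10(1.0 + f_hz / 700.0)

def mel_to_hz(m):
    # Inverse of the formula above
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

# Center points of 10 filters spaced evenly on the mel scale up to 8 kHz
mel_points = np.linspace(hz_to_mel(0.0), hz_to_mel(8000.0), 10)
hz_points = mel_to_hz(mel_points)
print(np.round(hz_points))  # spacing in Hz widens with frequency
```

The widening spacing is what gives the filterbank finer resolution at low frequencies, matching human hearing.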
Related questions
Python implementation of MFCC
MFCC (Mel Frequency Cepstral Coefficients) is a widely used audio feature-extraction method for tasks such as speech recognition and music information retrieval. Below is a simple Python implementation.
First, import the required libraries:
```python
import numpy as np
from scipy.fftpack import dct
```
Next, define a function that performs the MFCC feature extraction:
```python
def mfcc(signal, samplerate=16000, winlen=0.025, winstep=0.01, numcep=13,
         nfilt=26, nfft=512, lowfreq=0, highfreq=None, preemph=0.97, ceplifter=22):
    """
    Compute MFCC features from an audio signal.
    Returns a 2-D array with one row of `numcep` coefficients per frame.
    """
    # Pre-emphasis: remove DC offset, then boost high frequencies
    signal = signal - np.mean(signal)
    signal = np.append(signal[0], signal[1:] - preemph * signal[:-1])
    # Framing: split the signal into overlapping windows
    winlen = int(winlen * samplerate)
    winstep = int(winstep * samplerate)
    num_frames = 1 + (len(signal) - winlen) // winstep
    indices = (np.arange(winlen)[None, :]
               + winstep * np.arange(num_frames)[:, None])
    frames = signal[indices] * np.hamming(winlen)
    # Power spectrum of each frame
    pspec = np.abs(np.fft.rfft(frames, nfft)) ** 2
    if highfreq is None:
        highfreq = samplerate / 2
    else:
        highfreq = min(highfreq, samplerate / 2)
    # Filter edge frequencies, equally spaced on the mel scale
    lowmel = 2595 * np.log10(1 + lowfreq / 700)
    highmel = 2595 * np.log10(1 + highfreq / 700)
    melpoints = np.linspace(lowmel, highmel, nfilt + 2)
    bins = 700 * (10 ** (melpoints / 2595) - 1)
    # Triangular mel filterbank
    fbank = np.zeros((nfilt, nfft // 2 + 1))
    for i in range(nfilt):
        lbin = int(np.floor(bins[i] * nfft / samplerate))
        cbin = int(np.floor(bins[i + 1] * nfft / samplerate))
        hbin = int(np.floor(bins[i + 2] * nfft / samplerate))
        fbank[i, lbin:cbin] = (np.arange(lbin, cbin) - lbin) / (cbin - lbin)
        fbank[i, cbin:hbin] = (hbin - np.arange(cbin, hbin)) / (hbin - cbin)
    # Apply filterbank and take the log (floor at eps for numerical stability)
    feat = np.dot(pspec, fbank.T)
    feat = np.where(feat == 0, np.finfo(float).eps, feat)
    feat = np.log(feat)
    # DCT to decorrelate, keeping the first numcep coefficients
    feat = dct(feat, type=2, axis=1, norm='ortho')[:, :numcep]
    # Cepstral liftering to re-weight the coefficients
    lifter = 1 + (ceplifter / 2) * np.sin(np.pi * np.arange(numcep) / ceplifter)
    feat = feat * lifter
    return feat
```
Here `signal` is the input audio signal, `samplerate` the sampling rate, `winlen` the window length in seconds, `winstep` the window step in seconds, `numcep` the number of MFCC coefficients, `nfilt` the number of mel filters, `nfft` the FFT length, `lowfreq` and `highfreq` the frequency range covered by the filterbank, `preemph` the pre-emphasis coefficient, and `ceplifter` the cepstral lifter parameter.
The function returns a 2-D array in which each row holds the MFCC features of one audio frame. These features can be used as input to downstream tasks such as speech recognition.
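The framing step inside the function is the part most easily gotten wrong, so here is a standalone sketch of the overlapping-window indexing, assuming 16 kHz audio with 25 ms windows and a 10 ms hop (all parameter values here are illustrative):

```python
import numpy as np

# Assumed parameters: 16 kHz sampling, 25 ms window (400 samples), 10 ms hop (160 samples)
sr, winlen, winstep = 16000, 400, 160
sig = np.random.randn(sr)  # 1 second of dummy audio
num_frames = 1 + (len(sig) - winlen) // winstep
# Each row of `indices` selects one overlapping window of the signal
indices = np.arange(winlen)[None, :] + winstep * np.arange(num_frames)[:, None]
frames = sig[indices] * np.hamming(winlen)
print(frames.shape)  # one row per 25 ms frame
```

Building an index matrix and fancy-indexing in one shot avoids a Python loop over frames, which matters when processing long recordings.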
Python implementation of MFCC computation
MFCC (Mel Frequency Cepstral Coefficients) is a feature-extraction method widely used in speech recognition and audio processing; it converts an audio signal into a set of coefficients that describe its spectral envelope. The following example code computes MFCCs in Python:
```python
import numpy as np
import librosa
from scipy.fftpack import dct

# Load audio file
y, sr = librosa.load('audio.wav', sr=16000)

# Pre-emphasis
pre_emphasis = 0.97
emphasized_signal = np.append(y[0], y[1:] - pre_emphasis * y[:-1])

# Framing
frame_size = 0.025   # 25 ms
frame_stride = 0.01  # 10 ms
frame_length, frame_step = frame_size * sr, frame_stride * sr
signal_length = len(emphasized_signal)
frame_length = int(round(frame_length))
frame_step = int(round(frame_step))
num_frames = int(np.ceil(float(np.abs(signal_length - frame_length)) / frame_step))
pad_signal_length = num_frames * frame_step + frame_length
z = np.zeros((pad_signal_length - signal_length))
pad_signal = np.append(emphasized_signal, z)  # pad so every frame has equal length
indices = np.tile(np.arange(0, frame_length), (num_frames, 1)) + np.tile(
    np.arange(0, num_frames * frame_step, frame_step), (frame_length, 1)).T
frames = pad_signal[indices.astype(np.int32, copy=False)]

# Windowing
frames *= np.hamming(frame_length)

# Fourier transform and power spectrum
NFFT = 512
mag_frames = np.absolute(np.fft.rfft(frames, NFFT))  # magnitude of the FFT
pow_frames = (1.0 / NFFT) * (mag_frames ** 2)        # power spectrum

# Filter banks
nfilt = 40
low_freq_mel = 0
high_freq_mel = 2595 * np.log10(1 + (sr / 2) / 700)          # convert Hz to mel
mel_points = np.linspace(low_freq_mel, high_freq_mel, nfilt + 2)  # equally spaced on the mel scale
hz_points = 700 * (10 ** (mel_points / 2595) - 1)            # convert mel back to Hz
bins = np.floor((NFFT + 1) * hz_points / sr)
fbank = np.zeros((nfilt, int(np.floor(NFFT / 2 + 1))))
for m in range(1, nfilt + 1):
    f_m_minus = int(bins[m - 1])  # left edge
    f_m = int(bins[m])            # center
    f_m_plus = int(bins[m + 1])   # right edge
    for k in range(f_m_minus, f_m):
        fbank[m - 1, k] = (k - bins[m - 1]) / (bins[m] - bins[m - 1])
    for k in range(f_m, f_m_plus):
        fbank[m - 1, k] = (bins[m + 1] - k) / (bins[m + 1] - bins[m])

# Apply filter banks
filter_banks = np.dot(pow_frames, fbank.T)
filter_banks = np.where(filter_banks == 0, np.finfo(float).eps, filter_banks)  # numerical stability
filter_banks = 20 * np.log10(filter_banks)  # dB

# MFCCs: DCT of the log filterbank energies, keeping coefficients 2-13
num_ceps = 12
mfcc = dct(filter_banks, type=2, axis=1, norm='ortho')[:, 1:(num_ceps + 1)]

# Mean normalization
mfcc -= (np.mean(mfcc, axis=0) + 1e-8)
print(mfcc)
```
Here librosa is used only to load the audio file; pre-emphasis, framing, windowing, the Fourier transform, the mel filterbank, and the discrete cosine transform are all implemented directly with NumPy and SciPy. Finally, the MFCC coefficients are mean-normalized across frames.