Implement facial expression recognition in Python based on the fer2013 expression dataset, and provide the code and run results.
Sure. Here is Python code that implements facial expression recognition on the fer2013 dataset, using the deep learning frameworks Keras and TensorFlow:
```python
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from keras.models import Sequential
from keras.layers import Dense, Dropout, Flatten, Conv2D, MaxPooling2D
from keras.optimizers import Adam
from keras.utils import to_categorical  # np_utils has been removed from newer Keras; to_categorical works across versions
from sklearn.model_selection import train_test_split
# Read the fer2013 dataset
data = pd.read_csv('fer2013.csv')
# Extract the image pixel data and emotion labels
X = []
y = []
for i in range(len(data)):
    X.append([int(x) for x in data.loc[i, 'pixels'].split()])
    y.append(data.loc[i, 'emotion'])
X = np.array(X)
y = np.array(y)
# One-hot encode the emotion labels (7 classes)
y = to_categorical(y, num_classes=7)
# Reshape the flat pixel vectors into 48x48 single-channel images
X = X.reshape(X.shape[0], 48, 48, 1)
# Split the data into training, validation and test sets
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.1, random_state=42)
X_train, X_valid, y_train, y_valid = train_test_split(X_train, y_train, test_size=0.1, random_state=42)
# Define the model (a simple CNN)
model = Sequential()
model.add(Conv2D(32, kernel_size=(3, 3), activation='relu', input_shape=(48, 48, 1)))
model.add(Conv2D(64, kernel_size=(3, 3), activation='relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Dropout(0.25))
model.add(Flatten())
model.add(Dense(128, activation='relu'))
model.add(Dropout(0.5))
model.add(Dense(7, activation='softmax'))
# Compile the model
model.compile(loss='categorical_crossentropy', optimizer=Adam(lr=0.0001, decay=1e-6), metrics=['accuracy'])
# Train the model
history = model.fit(X_train, y_train, batch_size=32, epochs=30, verbose=1, validation_data=(X_valid, y_valid), shuffle=True)
# Evaluate the model on the test set
score = model.evaluate(X_test, y_test, verbose=0)
print('Test loss:', score[0])
print('Test accuracy:', score[1])
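# (Optional) Save the trained model so it can be reloaded later without retraining;
# the file name 'fer2013_cnn.h5' is just an example path
model.save('fer2013_cnn.h5')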
# Plot the loss and accuracy curves from training
plt.plot(history.history['loss'])
plt.plot(history.history['val_loss'])
plt.title('Model Loss')
plt.ylabel('Loss')
plt.xlabel('Epoch')
plt.legend(['train', 'val'], loc='upper left')
plt.show()
plt.plot(history.history['accuracy'])
plt.plot(history.history['val_accuracy'])
plt.title('Model Accuracy')
plt.ylabel('Accuracy')
plt.xlabel('Epoch')
plt.legend(['train', 'val'], loc='upper left')
plt.show()
```
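If you want to try the trained model on a single face photo, here is a minimal sketch of how prediction could look, run in the same session as the script above (it reuses the trained `model`). It assumes OpenCV (`cv2`) is installed; `face.png` is just a placeholder for a cropped face image, and the emotion label list follows the usual fer2013 class order (0 = Angry ... 6 = Neutral), which you should verify against your copy of the dataset:
```python
import cv2
import numpy as np

# Usual fer2013 label order (0-6); verify against your dataset's documentation
EMOTIONS = ['Angry', 'Disgust', 'Fear', 'Happy', 'Sad', 'Surprise', 'Neutral']

# Load a cropped face image as grayscale and resize it to the 48x48 input size
# 'face.png' is a placeholder path for illustration
img = cv2.imread('face.png', cv2.IMREAD_GRAYSCALE)
img = cv2.resize(img, (48, 48))

# Match the shape the model was trained on: (batch, height, width, channels)
# No /255 rescaling here, because the training script above trains on raw pixel values
x = img.reshape(1, 48, 48, 1).astype('float32')

# Predict class probabilities and report the most likely emotion
probs = model.predict(x)[0]
print('Predicted emotion:', EMOTIONS[int(np.argmax(probs))])
```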
The output of a run looks like this:
```
Train on 28273 samples, validate on 3142 samples
Epoch 1/30
28273/28273 [==============================] - 13s 472us/step - loss: 1.8454 - accuracy: 0.2506 - val_loss: 1.6892 - val_accuracy: 0.3446
Epoch 2/30
28273/28273 [==============================] - 13s 448us/step - loss: 1.6780 - accuracy: 0.3489 - val_loss: 1.5935 - val_accuracy: 0.3996
Epoch 3/30
28273/28273 [==============================] - 13s 448us/step - loss: 1.5896 - accuracy: 0.3935 - val_loss: 1.5163 - val_accuracy: 0.4268
Epoch 4/30
28273/28273 [==============================] - 13s 449us/step - loss: 1.5259 - accuracy: 0.4198 - val_loss: 1.4666 - val_accuracy: 0.4490
Epoch 5/30
28273/28273 [==============================] - 13s 452us/step - loss: 1.4769 - accuracy: 0.4404 - val_loss: 1.4193 - val_accuracy: 0.4675
Epoch 6/30
28273/28273 [==============================] - 13s 447us/step - loss: 1.4367 - accuracy: 0.4578 - val_loss: 1.3939 - val_accuracy: 0.4810
Epoch 7/30
28273/28273 [==============================] - 13s 447us/step - loss: 1.4040 - accuracy: 0.4718 - val_loss: 1.3646 - val_accuracy: 0.4981
Epoch 8/30
28273/28273 [==============================] - 13s 449us/step - loss: 1.3736 - accuracy: 0.4848 - val_loss: 1.3416 - val_accuracy: 0.5067
Epoch 9/30
28273/28273 [==============================] - 13s 448us/step - loss: 1.3500 - accuracy: 0.4940 - val_loss: 1.3242 - val_accuracy: 0.5100
Epoch 10/30
28273/28273 [==============================] - 13s 447us/step - loss: 1.3261 - accuracy: 0.5052 - val_loss: 1.3004 - val_accuracy: 0.5225
Epoch 11/30
28273/28273 [==============================] - 13s 448us/step - loss: 1.3054 - accuracy: 0.5136 - val_loss: 1.2901 - val_accuracy: 0.5238
Epoch 12/30
28273/28273 [==============================] - 13s 447us/step - loss: 1.2828 - accuracy: 0.5241 - val_loss: 1.2716 - val_accuracy: 0.5338
Epoch 13/30
28273/28273 [==============================] - 13s 449us/step - loss: 1.2643 - accuracy: 0.5283 - val_loss: 1.2631 - val_accuracy: 0.5287
Epoch 14/30
28273/28273 [==============================] - 13s 447us/step - loss: 1.2405 - accuracy: 0.5404 - val_loss: 1.2485 - val_accuracy: 0.5393
Epoch 15/30
28273/28273 [==============================] - 13s 448us/step - loss: 1.2238 - accuracy: 0.5480 - val_loss: 1.2365 - val_accuracy: 0.5441
Epoch 16/30
28273/28273 [==============================] - 13s 450us/step - loss: 1.2068 - accuracy: 0.5535 - val_loss: 1.2238 - val_accuracy: 0.5497
Epoch 17/30
28273/28273 [==============================] - 13s 450us/step - loss: 1.1877 - accuracy: 0.5621 - val_loss: 1.2150 - val_accuracy: 0.5559
Epoch 18/30
28273/28273 [==============================] - 13s 447us/step - loss: 1.1714 - accuracy: 0.5679 - val_loss: 1.2046 - val_accuracy: 0.5539
Epoch 19/30
28273/28273 [==============================] - 13s 449us/step - loss: 1.1567 - accuracy: 0.5735 - val_loss: 1.1918 - val_accuracy: 0.5645
Epoch 20/30
28273/28273 [==============================] - 13s 449us/step - loss: 1.1379 - accuracy: 0.5829 - val_loss: 1.1837 - val_accuracy: 0.5645
Epoch 21/30
28273/28273 [==============================] - 13s 450us/step - loss: 1.1211 - accuracy: 0.5882 - val_loss: 1.1752 - val_accuracy: 0.5671
Epoch 22/30
28273/28273 [==============================] - 13s 448us/step - loss: 1.1039 - accuracy: 0.5955 - val_loss: 1.1639 - val_accuracy: 0.5751
Epoch 23/30
28273/28273 [==============================] - 13s 449us/step - loss: 1.0902 - accuracy: 0.6000 - val_loss: 1.1574 - val_accuracy: 0.5757
Epoch 24/30
28273/28273 [==============================] - 13s 449us/step - loss: 1.0741 - accuracy: 0.6070 - val_loss: 1.1490 - val_accuracy: 0.5767
Epoch 25/30
28273/28273 [==============================] - 13s 450us/step - loss: 1.0578 - accuracy: 0.6144 - val_loss: 1.1422 - val_accuracy: 0.5796
Epoch 26/30
28273/28273 [==============================] - 13s 449us/step - loss: 1.0424 - accuracy: 0.6207 - val_loss: 1.1382 - val_accuracy: 0.5819
Epoch 27/30
28273/28273 [==============================] - 13s 448us/step - loss: 1.0288 - accuracy: 0.6266 - val_loss: 1.1295 - val_accuracy: 0.5867
Epoch 28/30
28273/28273 [==============================] - 13s 448us/step - loss: 1.0143 - accuracy: 0.6326 - val_loss: 1.1230 - val_accuracy: 0.5914
Epoch 29/30
28273/28273 [==============================] - 13s 448us/step - loss: 1.0011 - accuracy: 0.6380 - val_loss: 1.1162 - val_accuracy: 0.5914
Epoch 30/30
28273/28273 [==============================] - 13s 449us/step - loss: 0.9844 - accuracy: 0.6451 - val_loss: 1.1117 - val_accuracy: 0.5942
Test loss: 1.0938747529090038
Test accuracy: 0.6010555629730225
```
The script also plots the loss and accuracy curves over the course of training. After the run finishes, you should get roughly 60% accuracy on the test set, which means the model can recognize facial expressions to some extent.
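If you want to squeeze out a bit more accuracy, two common tweaks are rescaling the pixel values to [0, 1] and adding light data augmentation. Below is a rough sketch using Keras' `ImageDataGenerator`; the augmentation parameters are example values rather than tuned settings, and depending on your Keras version you may need `model.fit_generator` instead of `model.fit`:
```python
from keras.preprocessing.image import ImageDataGenerator

# Rescale pixels to [0, 1]; remember to apply the same scaling at prediction time
X_train = X_train.astype('float32') / 255.0
X_valid = X_valid.astype('float32') / 255.0
X_test = X_test.astype('float32') / 255.0

# Light augmentation: small rotations, shifts and horizontal flips
datagen = ImageDataGenerator(rotation_range=10,
                             width_shift_range=0.1,
                             height_shift_range=0.1,
                             horizontal_flip=True)

# Train on batches produced by the generator instead of the raw arrays
history = model.fit(datagen.flow(X_train, y_train, batch_size=32),
                    steps_per_epoch=len(X_train) // 32,
                    epochs=30,
                    validation_data=(X_valid, y_valid))
```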