Using TensorFlow 2.0, split the MNIST handwritten digit dataset into training, validation, and test sets (6:2:2), and carry out data loading in TensorFlow 2, construction of a multi-layer neural network model, cross-validation of the classification model, model saving, and prediction on new data (write a digit yourself and feed it to the saved model to verify it)
Sure, I can provide a code example. Here is a complete implementation:
```python
import tensorflow as tf
from tensorflow.keras.datasets import mnist
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Dropout, Flatten
from tensorflow.keras.layers import Conv2D, MaxPooling2D
from tensorflow.keras.callbacks import ModelCheckpoint
from sklearn.model_selection import KFold
import numpy as np
# Load the MNIST dataset and merge the original train/test splits
(x_train, y_train), (x_test, y_test) = mnist.load_data()
x_all = np.concatenate((x_train, x_test), axis=0)
y_all = np.concatenate((y_train, y_test), axis=0)
# Re-split into training, validation and test sets (6:2:2)
train_size = int(len(x_all) * 0.6)
val_size = int(len(x_all) * 0.2)
x_train, y_train = x_all[:train_size], y_all[:train_size]
x_val, y_val = x_all[train_size:train_size + val_size], y_all[train_size:train_size + val_size]
x_test, y_test = x_all[train_size + val_size:], y_all[train_size + val_size:]
# Reshape to the (samples, height, width, channels) format expected by Conv2D
x_train = x_train.reshape(x_train.shape[0], 28, 28, 1)
x_val = x_val.reshape(x_val.shape[0], 28, 28, 1)
x_test = x_test.reshape(x_test.shape[0], 28, 28, 1)
input_shape = (28, 28, 1)
# Normalize pixel values to the range 0~1
x_train = x_train.astype('float32')
x_val = x_val.astype('float32')
x_test = x_test.astype('float32')
x_train /= 255
x_val /= 255
x_test /= 255
# Convert labels to one-hot encoding
num_classes = 10
y_train = tf.keras.utils.to_categorical(y_train, num_classes)
y_val = tf.keras.utils.to_categorical(y_val, num_classes)
y_test = tf.keras.utils.to_categorical(y_test, num_classes)
# Define the model as a function so that each cross-validation fold
# starts from freshly initialized weights
def build_model():
    model = Sequential()
    model.add(Conv2D(32, kernel_size=(3, 3),
                     activation='relu',
                     input_shape=input_shape))
    model.add(Conv2D(64, (3, 3), activation='relu'))
    model.add(MaxPooling2D(pool_size=(2, 2)))
    model.add(Dropout(0.25))
    model.add(Flatten())
    model.add(Dense(128, activation='relu'))
    model.add(Dropout(0.5))
    model.add(Dense(num_classes, activation='softmax'))
    # Compile the model; Adam is used instead of tf.keras.optimizers.Adadelta(),
    # whose TF2 default learning rate (0.001) makes training on MNIST very slow
    model.compile(loss=tf.keras.losses.categorical_crossentropy,
                  optimizer=tf.keras.optimizers.Adam(),
                  metrics=['accuracy'])
    return model
# Define a callback that saves the model with the best validation accuracy
filepath = "best_model.h5"
checkpoint = ModelCheckpoint(filepath, monitor='val_accuracy', verbose=1, save_best_only=True, mode='max')
# Define the cross-validator (5-fold)
kfold = KFold(n_splits=5, shuffle=True, random_state=42)
# Train the model with cross-validation
fold_scores = []
for train_index, val_index in kfold.split(x_train):
    X_train, X_val = x_train[train_index], x_train[val_index]
    Y_train, Y_val = y_train[train_index], y_train[val_index]
    # Rebuild the model so that weights do not leak between folds
    model = build_model()
    history = model.fit(X_train, Y_train,
                        batch_size=128,
                        epochs=10,
                        verbose=1,
                        validation_data=(X_val, Y_val),
                        callbacks=[checkpoint])
    fold_scores.append(max(history.history['val_accuracy']))
print('Cross-validation accuracy: %.4f (+/- %.4f)' % (np.mean(fold_scores), np.std(fold_scores)))
# Load the best model found across all folds
model.load_weights(filepath)
# Evaluate the best model on the held-out validation and test sets
val_score = model.evaluate(x_val, y_val, verbose=0)
print('Validation loss:', val_score[0])
print('Validation accuracy:', val_score[1])
score = model.evaluate(x_test, y_test, verbose=0)
print('Test loss:', score[0])
print('Test accuracy:', score[1])
# Predict on new data (a hand-written digit saved beforehand as a 28x28 array in new_data.npy)
img = np.load('new_data.npy')
img = img.reshape(1, 28, 28, 1)
img = img.astype('float32')
img /= 255
prediction = model.predict(img)
print("Prediction:", np.argmax(prediction))
```
In the code above, we first load the MNIST dataset with `mnist.load_data()`, merge the original train and test splits, and re-split the data into training, validation, and test sets (6:2:2). We then normalize the pixel values to the range 0~1 and convert the labels to one-hot encoding. Next, we define a convolutional neural network, compile it, and set up a `ModelCheckpoint` callback so that the model with the best validation accuracy seen during training is saved to `best_model.h5`. We then run 5-fold cross-validation with `KFold`, rebuilding the model from scratch in every fold so that no weights carry over between folds; because the same `ModelCheckpoint` instance is reused, it keeps the best model found across all folds. Finally, we load that best model, evaluate it on the validation and test sets, and use it to predict a new sample.
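Since `ModelCheckpoint` saves a full model to `best_model.h5` (architecture, weights, and optimizer state), you can also reload it later in a separate script without redefining the network. A minimal sketch, assuming `best_model.h5` and `new_data.npy` from the run above are in the working directory:
```python
import numpy as np
import tensorflow as tf

# Restore the full model saved by ModelCheckpoint during training
model = tf.keras.models.load_model('best_model.h5')

# Prepare one 28x28 sample exactly as during training and predict
img = np.load('new_data.npy').reshape(1, 28, 28, 1).astype('float32') / 255
probs = model.predict(img)
print('Predicted digit:', np.argmax(probs), 'confidence:', float(probs.max()))
```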
Note that to run this code you need TensorFlow 2.x and scikit-learn installed, and the new sample must be saved as `new_data.npy`: a 28x28 grayscale array following the same conventions as MNIST (white digit on a black background, pixel values 0~255).
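If you draw the digit in an image editor, one way to produce `new_data.npy` is to load the picture with Pillow, convert it to the MNIST convention, and save it as a NumPy array. A rough sketch, assuming a hypothetical input file `my_digit.png` drawn with dark ink on a white background (adjust the inversion step if your drawing already has a black background):
```python
import numpy as np
from PIL import Image  # pip install pillow

# Load the drawing, convert to grayscale, and resize to 28x28
img = Image.open('my_digit.png').convert('L').resize((28, 28))
arr = np.array(img, dtype=np.float32)

# MNIST digits are white strokes on a black background, so invert
# a drawing made with dark ink on white paper
arr = 255.0 - arr

np.save('new_data.npy', arr)  # shape (28, 28), values in [0, 255]
```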