Complete in Python: digit image recognition with a feedforward neural network, reporting the loss and accuracy during training
In Python, you can use a deep learning library such as TensorFlow or PyTorch to build a feedforward neural network (FNN) for a digit image recognition task such as the MNIST dataset. Here are the basic steps using TensorFlow/Keras:
1. Import the necessary libraries:
```python
import tensorflow as tf
from tensorflow.keras import datasets, layers, models
```
2. Load and preprocess the data (add a channel dimension and scale pixel values to [0, 1]):
```python
(train_images, train_labels), (test_images, test_labels) = datasets.mnist.load_data()
train_images = train_images.reshape((60000, 28, 28, 1)) / 255.0
test_images = test_images.reshape((10000, 28, 28, 1)) / 255.0
```
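As an optional sanity check (not part of the original steps), you can confirm the shapes and value range before building the model:
```python
# Optional sanity check: shapes, pixel range, and label format after preprocessing
print(train_images.shape, test_images.shape)   # (60000, 28, 28, 1) (10000, 28, 28, 1)
print(train_images.min(), train_images.max())  # 0.0 1.0
print(train_labels[:5])                        # integer class labels, e.g. [5 0 4 1 9]
```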
3. Build the model (a small network with one convolutional layer, a hidden dense layer, and a softmax output layer; convolutional networks are still feedforward architectures):
```python
model = models.Sequential([
    layers.Conv2D(32, (3, 3), activation='relu', input_shape=(28, 28, 1)),
    layers.MaxPooling2D((2, 2)),
    layers.Flatten(),
    layers.Dense(64, activation='relu'),
    layers.Dense(10, activation='softmax')  # softmax output for 10-way classification
])
```
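To inspect the resulting layer output shapes and parameter counts, you can print a summary (a standard Keras call, added here for convenience):
```python
model.summary()  # lists each layer's output shape and number of trainable parameters
```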
4. Compile the model, specifying the optimizer, loss function, and evaluation metric (accuracy). Because the output layer already applies softmax, the loss must not be configured with `from_logits=True` (an equivalent logits-based setup is sketched after the snippet):
```python
model.compile(optimizer='adam',
              loss=tf.keras.losses.SparseCategoricalCrossentropy(),  # labels are integer class indices
              metrics=['accuracy'])
```
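If you prefer the logits-based form instead, drop the softmax on the last Dense layer and let the loss apply it internally; a minimal equivalent sketch (the `logits_model` name is just for illustration):
```python
# Equivalent alternative: output raw logits and let the loss handle the softmax
logits_model = models.Sequential([
    layers.Conv2D(32, (3, 3), activation='relu', input_shape=(28, 28, 1)),
    layers.MaxPooling2D((2, 2)),
    layers.Flatten(),
    layers.Dense(64, activation='relu'),
    layers.Dense(10)  # no activation: raw logits
])
logits_model.compile(optimizer='adam',
                     loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
                     metrics=['accuracy'])
```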
5. Train the model (a single epoch here for demonstration; increase `epochs` to watch the loss and accuracy evolve over time):
```python
history = model.fit(train_images, train_labels, epochs=1, validation_split=0.1)
```
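The returned `History` object keeps the per-epoch values printed during training, which is exactly what the question asks to track; a minimal sketch of reading them back:
```python
# Per-epoch metrics recorded by fit(); the val_* keys exist because validation_split was used
print(history.history['loss'])          # training loss per epoch
print(history.history['accuracy'])      # training accuracy per epoch
print(history.history['val_loss'])      # validation loss per epoch
print(history.history['val_accuracy'])  # validation accuracy per epoch
```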
6. Evaluate the trained model on the held-out test set:
```python
loss, accuracy = model.evaluate(test_images, test_labels)
print("Loss:", loss)
print("Accuracy:", accuracy)
```
During training, `history.history` records the training and validation loss, as well as the training and validation accuracy, for every epoch.
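If you want to visualize those curves, a minimal sketch using matplotlib (an extra dependency, not part of the original steps):
```python
import matplotlib.pyplot as plt

# Plot the per-epoch training/validation loss and accuracy recorded by fit()
plt.plot(history.history['loss'], label='train loss')
plt.plot(history.history['val_loss'], label='val loss')
plt.plot(history.history['accuracy'], label='train accuracy')
plt.plot(history.history['val_accuracy'], label='val accuracy')
plt.xlabel('epoch')
plt.legend()
plt.show()
```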