```
def train_loop():
    step = 0
    ax = []
    ay = []
    plt.ion()  # turn on pyplot's interactive mode
    PASS_NUM = n_epochs
    for pass_id in range(PASS_NUM):
        total_loss_pass = 0
        for data in train_reader():
            avg_loss_value, = exe.run(
                fluid.default_main_program(),
                feed=feeder.feed(data),
                fetch_list=[avg_loss])
            total_loss_pass += avg_loss_value
        ax.append(pass_id)
        ay.append(total_loss_pass)
        plt.clf()
        plt.plot(ax, ay)
        plt.pause(0.01)
        plt.ioff()
        step += 1
        display.clear_output(wait=True)
    fluid.io.save_inference_model(SAVE_DIRNAME, ['x'], [prediction], exe)

train_loop()
```
This code is a training loop built with the PaddlePaddle framework. Concretely, it performs the following steps:
1. Defines a function named `train_loop` that implements the training loop.
2. Initializes some variables: `step` tracks the current training step, `ax` and `ay` collect the data points for the loss curve, and `plt.ion()` turns on pyplot's interactive mode.
3. Runs `PASS_NUM` training passes, where `PASS_NUM` is the number of epochs. In each pass it iterates over every batch from `train_reader()`, computes the average loss `avg_loss_value`, and accumulates it into `total_loss_pass`.
4. Appends the current pass index `pass_id` and the accumulated loss `total_loss_pass` to `ax` and `ay`, which feed the loss curve.
5. Calls `plt.clf()` to clear the previous figure, `plt.plot(ax, ay)` to draw the current loss curve, and `plt.pause(0.01)` to give the figure time to render.
6. Calls `plt.ioff()` to turn interactive mode back off, increments `step`, and calls `display.clear_output(wait=True)` to clear the notebook output before the next pass.
7. After training finishes, saves the trained model to the directory `SAVE_DIRNAME` with `fluid.io.save_inference_model()`.
Related question
```
import numpy as np
import tensorflow as tf
from tensorflow import keras
import matplotlib.pyplot as plt

## Let us define a plt function for simplicity
def plt_loss(x, training_metric, testing_metric, ax, colors=['b']):
    ax.plot(x, training_metric, 'b', label='Train')
    ax.plot(x, testing_metric, 'k', label='Test')
    ax.set_xlabel('Epochs')
    ax.set_ylabel('Accuracy')  # ax.set_ylabel('Categorical Crossentropy Loss')
    plt.legend()
    plt.grid()
    plt.show()

tf.keras.utils.set_random_seed(1)

## We import the MNIST dataset using keras.datasets
(train_data, train_labels), (test_data, test_labels) = keras.datasets.mnist.load_data()

## We first vectorize each image (28*28) into a vector (784)
train_data = train_data.reshape(train_data.shape[0], train_data.shape[1]*train_data.shape[2])  # 60000*784
test_data = test_data.reshape(test_data.shape[0], test_data.shape[1]*test_data.shape[2])       # 10000*784

## We next one-hot encode the labels, e.g., 1 -> [0,1,0,0,0,0,0,0,0,0]
train_labels = keras.utils.to_categorical(train_labels, 10)
test_labels = keras.utils.to_categorical(test_labels, 10)

## Start to build an MLP model
N_batch_size = 5000
N_epochs = 100
lr = 0.01

## We build a model: 784 -> 128 -> 64 -> 10
MLP_4 = keras.models.Sequential([
    keras.layers.Dense(128, input_shape=(784,), activation='relu'),
    keras.layers.Dense(64, activation='relu'),
    keras.layers.Dense(10, activation='softmax')
])
MLP_4.compile(
    optimizer=keras.optimizers.Adam(lr),
    loss='categorical_crossentropy',
    metrics=['accuracy']
)
History = MLP_4.fit(train_data[:10000], train_labels[:10000],
                    batch_size=N_batch_size, epochs=N_epochs,
                    validation_data=(test_data, test_labels), shuffle=False)
train_acc = History.history['accuracy']
test_acc = History.history['val_accuracy']
```
Add early stopping to this model, configured with monitor='loss' and patience=2.
Sure, here is the updated code with early stopping:
```
import numpy as np
import tensorflow as tf
from tensorflow import keras
import matplotlib.pyplot as plt

## Let us define a plt function for simplicity
def plt_loss(x, training_metric, testing_metric, ax, colors=['b']):
    ax.plot(x, training_metric, 'b', label='Train')
    ax.plot(x, testing_metric, 'k', label='Test')
    ax.set_xlabel('Epochs')
    ax.set_ylabel('Accuracy')  # ax.set_ylabel('Categorical Crossentropy Loss')
    plt.legend()
    plt.grid()
    plt.show()

tf.keras.utils.set_random_seed(1)

## We import the MNIST dataset using keras.datasets
(train_data, train_labels), (test_data, test_labels) = keras.datasets.mnist.load_data()

## We first vectorize each image (28*28) into a vector (784)
train_data = train_data.reshape(train_data.shape[0], train_data.shape[1]*train_data.shape[2])  # 60000*784
test_data = test_data.reshape(test_data.shape[0], test_data.shape[1]*test_data.shape[2])       # 10000*784

## We next one-hot encode the labels, e.g., 1 -> [0,1,0,0,0,0,0,0,0,0]
train_labels = keras.utils.to_categorical(train_labels, 10)
test_labels = keras.utils.to_categorical(test_labels, 10)

## Start to build an MLP model
N_batch_size = 5000
N_epochs = 100
lr = 0.01

## We build a model: 784 -> 128 -> 64 -> 10
MLP_4 = keras.models.Sequential([
    keras.layers.Dense(128, input_shape=(784,), activation='relu'),
    keras.layers.Dense(64, activation='relu'),
    keras.layers.Dense(10, activation='softmax')
])
MLP_4.compile(
    optimizer=keras.optimizers.Adam(lr),
    loss='categorical_crossentropy',
    metrics=['accuracy']
)

## Add early stopping: stop when the training loss fails to improve for 2 epochs
early_stop = tf.keras.callbacks.EarlyStopping(monitor='loss', patience=2)

History = MLP_4.fit(train_data[:10000], train_labels[:10000],
                    batch_size=N_batch_size,
                    epochs=N_epochs,
                    validation_data=(test_data, test_labels),
                    shuffle=False,
                    callbacks=[early_stop])
train_acc = History.history['accuracy']
test_acc = History.history['val_accuracy']
```
In this code, we added early stopping by creating an instance of `tf.keras.callbacks.EarlyStopping` and passing it to the `callbacks` argument of `fit`. With `monitor='loss'` and `patience=2`, training stops once the training loss fails to improve for 2 consecutive epochs.
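As an aside beyond what the question asked for, early stopping is often driven by held-out performance instead; the same callback supports that with `monitor='val_loss'` and, optionally, `restore_best_weights`:
```
## Optional variant: monitor validation loss and keep the best weights
early_stop = tf.keras.callbacks.EarlyStopping(
    monitor='val_loss',         # watch held-out loss instead of training loss
    patience=2,                 # stop after 2 epochs without improvement
    restore_best_weights=True   # roll back to the best epoch's weights
)
```
Either way, `len(History.history['loss'])` tells you how many epochs actually ran.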
```
import numpy as np
import tensorflow as tf
from tensorflow import keras
import matplotlib.pyplot as plt

## Let us define a plt function for simplicity
def plt_loss(x, training_metric, testing_metric, ax, colors=['b']):
    ax.plot(x, training_metric, 'b', label='Train')
    ax.plot(x, testing_metric, 'k', label='Test')
    ax.set_xlabel('Epochs')
    ax.set_ylabel('Accuracy')
    plt.legend()
    plt.grid()
    plt.show()

tf.keras.utils.set_random_seed(1)

## We import the MNIST dataset using keras.datasets
(train_data, train_labels), (test_data, test_labels) = keras.datasets.mnist.load_data()

## We first vectorize each image (28*28) into a vector (784)
train_data = train_data.reshape(train_data.shape[0], train_data.shape[1]*train_data.shape[2])  # 60000*784
test_data = test_data.reshape(test_data.shape[0], test_data.shape[1]*test_data.shape[2])       # 10000*784

## We next one-hot encode the labels, e.g., 1 -> [0,1,0,0,0,0,0,0,0,0]
train_labels = keras.utils.to_categorical(train_labels, 10)
test_labels = keras.utils.to_categorical(test_labels, 10)

## Start to build an MLP model
N_batch_size = 5000
N_epochs = 100
lr = 0.01

## We build a model: 784 -> 128 -> 64 -> 10
MLP_3 = keras.models.Sequential([
    keras.layers.Dense(128, input_shape=(784,), activation='relu'),
    keras.layers.Dense(64, activation='relu'),
    keras.layers.Dense(10, activation='softmax')
])
MLP_3.compile(
    optimizer=keras.optimizers.Adam(lr),
    loss='categorical_crossentropy',
    metrics=['accuracy']
)
History = MLP_3.fit(train_data, train_labels,
                    batch_size=N_batch_size, epochs=N_epochs,
                    validation_data=(test_data, test_labels), shuffle=False)
train_acc = History.history['accuracy']
test_acc = History.history['val_accuracy']
```
For this model, train with different amounts of training data (5000, 10000, 15000, ..., 60000, an arithmetic sequence with common difference 5000) and plot the training and test accuracy (y-axis) against training-set size (x-axis).
Sure, here's the code:
```
import numpy as np
import tensorflow as tf
from tensorflow import keras
import matplotlib.pyplot as plt
# Define a plt function for simplicity
def plt_loss(x, training_metric, testing_metric, ax, colors=['b']):
    ax.plot(x, training_metric, 'b', label='Train')
    ax.plot(x, testing_metric, 'k', label='Test')
    ax.set_xlabel('Epochs')
    ax.set_ylabel('Accuracy')
    plt.legend()
    plt.grid()
    plt.show()
tf.keras.utils.set_random_seed(1)
# Import the MNIST Dataset using Keras.datasets
(train_data, train_labels), (test_data, test_labels) = keras.datasets.mnist.load_data()
# Vectorize the image (28*28) into a vector (784)
train_data = train_data.reshape(train_data.shape[0],train_data.shape[1]*train_data.shape[2]) # 60000*784
test_data = test_data.reshape(test_data.shape[0],test_data.shape[1]*test_data.shape[2]) # 10000*784
# Change label number to a 10 dimensional vector, e.g., 1-> [0,1,0,0,0,0,0,0,0,0]
train_labels = keras.utils.to_categorical(train_labels,10)
test_labels = keras.utils.to_categorical(test_labels,10)
# Start to build a MLP model
N_batch_size = 5000
N_epochs = 100
lr = 0.01
# Build a three layer MLP: 784 -> 128 -> 64 -> 10
# (rebuilt inside the loop so every run starts from freshly initialized weights)
def build_mlp():
    model = keras.models.Sequential([
        keras.layers.Dense(128, input_shape=(784,), activation='relu'),
        keras.layers.Dense(64, activation='relu'),
        keras.layers.Dense(10, activation='softmax')
    ])
    model.compile(
        optimizer=keras.optimizers.Adam(lr),
        loss='categorical_crossentropy',
        metrics=['accuracy']
    )
    return model

# Record the final training/testing accuracy for each training-set size
train_acc = []
test_acc = []
for i in range(5000, 65000, 5000):
    print('Training on', i, 'samples')
    MLP_3 = build_mlp()  # fresh model, so earlier runs don't leak into later ones
    History = MLP_3.fit(train_data[:i], train_labels[:i],
                        batch_size=N_batch_size, epochs=N_epochs,
                        validation_data=(test_data, test_labels), shuffle=False)
    train_acc.append(History.history['accuracy'][-1])
    test_acc.append(History.history['val_accuracy'][-1])
# Plot the training and testing accuracy as a function of training data size
plt.figure(figsize=(8,5))
plt.plot(range(5000,65000,5000),train_acc,'b',label = 'Train')
plt.plot(range(5000,65000,5000),test_acc,'k',label = 'Test')
plt.xlabel('Training Data Size')
plt.ylabel('Accuracy')
plt.legend()
plt.grid()
plt.show()
```
This code trains a three-layer MLP on the MNIST dataset using Keras. It loops through training-set sizes (5000, 10000, 15000, ..., 60000), rebuilding the model from freshly initialized weights for each subset so that successive runs do not inherit earlier training, and trains for up to 100 epochs per run. After each run it records the final training and test accuracy, and finally plots accuracy as a function of training-set size.
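One optional practical note, not part of the original answer: building many Keras models in a single process accumulates backend graph state, so clearing the session at the top of each loop iteration (TF 2.x API) keeps memory usage flat:
```
# Optional: release the previous model's graph state before building the next one
tf.keras.backend.clear_session()
MLP_3 = build_mlp()
```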