What does the line `acc_history.append(train_acc.item())` do?
Posted: 2024-04-05 17:30:40
This line records the model's accuracy on the training set by appending it to the list `acc_history`. Here `train_acc` is a tensor holding the training-set accuracy, and `train_acc.item()` converts it to a plain Python scalar, which is easier to work with downstream. The list's `append()` method adds `train_acc.item()` to the end of `acc_history`, so the training accuracy is recorded after every training pass. The accumulated list can then be plotted as a line chart of accuracy versus training iterations, which makes it easy to analyze how the training is progressing.
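As a minimal, self-contained sketch of this pattern (using NumPy here in place of the original framework's tensor; `preds` and `labels` are hypothetical stand-ins for real model output):

```python
import numpy as np

acc_history = []  # one float per epoch

for epoch in range(3):
    # Hypothetical per-epoch predictions and labels
    preds = np.array([0, 1, 1, 0])
    labels = np.array([0, 1, 0, 0])
    train_acc = np.mean(preds == labels)   # 0-dim NumPy value, analogous to a 0-dim tensor
    acc_history.append(train_acc.item())   # .item() -> plain Python float

print(acc_history)  # -> [0.75, 0.75, 0.75]
```

Without `.item()`, the list would hold framework-specific scalar objects rather than plain floats, which complicates plotting and serialization.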
Related question
Given the following code:

```python
import numpy as np
import tensorflow as tf
from tensorflow import keras
import matplotlib.pyplot as plt

# Let us define a plotting function for simplicity
def plt_loss(x, training_metric, testing_metric, ax, colors=['b']):
    ax.plot(x, training_metric, 'b', label='Train')
    ax.plot(x, testing_metric, 'k', label='Test')
    ax.set_xlabel('Epochs')
    ax.set_ylabel('Accuracy')
    plt.legend()
    plt.grid()
    plt.show()

tf.keras.utils.set_random_seed(1)

# We import the MNIST dataset using keras.datasets
(train_data, train_labels), (test_data, test_labels) = keras.datasets.mnist.load_data()

# We first vectorize each image (28*28) into a vector (784)
train_data = train_data.reshape(train_data.shape[0], train_data.shape[1]*train_data.shape[2])  # 60000*784
test_data = test_data.reshape(test_data.shape[0], test_data.shape[1]*test_data.shape[2])  # 10000*784

# We next change each label to a 10-dimensional one-hot vector, e.g., 1 -> [0,1,0,0,0,0,0,0,0,0]
train_labels = keras.utils.to_categorical(train_labels, 10)
test_labels = keras.utils.to_categorical(test_labels, 10)

# Start to build an MLP model
N_batch_size = 5000
N_epochs = 100
lr = 0.01

# We build a three-layer model, 784 -> 128 -> 64 -> 10
MLP_3 = keras.models.Sequential([
    keras.layers.Dense(128, input_shape=(784,), activation='relu'),
    keras.layers.Dense(64, activation='relu'),
    keras.layers.Dense(10, activation='softmax')
])
MLP_3.compile(
    optimizer=keras.optimizers.Adam(lr),
    loss='categorical_crossentropy',
    metrics=['accuracy']
)
History = MLP_3.fit(train_data, train_labels, batch_size=N_batch_size, epochs=N_epochs,
                    validation_data=(test_data, test_labels), shuffle=False)
train_acc = History.history['accuracy']
test_acc = History.history['val_accuracy']
```

For this model, train it with different amounts of training data (5000, 10000, 15000, …, 60000 — an arithmetic sequence with common difference 5000) and plot the training-set and test-set accuracy (y-axis) against the training-data size (x-axis).
Sure, here's the code:
```python
import numpy as np
import tensorflow as tf
from tensorflow import keras
import matplotlib.pyplot as plt
# Define a plt function for simplicity
def plt_loss(x, training_metric, testing_metric, ax, colors=['b']):
    ax.plot(x, training_metric, 'b', label='Train')
    ax.plot(x, testing_metric, 'k', label='Test')
    ax.set_xlabel('Epochs')
    ax.set_ylabel('Accuracy')
    plt.legend()
    plt.grid()
    plt.show()
tf.keras.utils.set_random_seed(1)
# Import the MNIST Dataset using Keras.datasets
(train_data, train_labels), (test_data, test_labels) = keras.datasets.mnist.load_data()
# Vectorize the image (28*28) into a vector (784)
train_data = train_data.reshape(train_data.shape[0],train_data.shape[1]*train_data.shape[2]) # 60000*784
test_data = test_data.reshape(test_data.shape[0],test_data.shape[1]*test_data.shape[2]) # 10000*784
# Change label number to a 10 dimensional vector, e.g., 1-> [0,1,0,0,0,0,0,0,0,0]
train_labels = keras.utils.to_categorical(train_labels,10)
test_labels = keras.utils.to_categorical(test_labels,10)
# Start to build a MLP model
N_batch_size = 5000
N_epochs = 100
lr = 0.01
# Build a three-layer model, 784 -> 128 -> 64 -> 10, for each training-set size.
# The model is rebuilt and recompiled on every iteration so each run starts from
# fresh weights; reusing one model would carry trained weights over between runs
# and distort the comparison across data sizes.
train_acc = []
test_acc = []
for i in range(5000, 65000, 5000):
    print('Training on', i, 'samples')
    MLP_3 = keras.models.Sequential([
        keras.layers.Dense(128, input_shape=(784,), activation='relu'),
        keras.layers.Dense(64, activation='relu'),
        keras.layers.Dense(10, activation='softmax')
    ])
    MLP_3.compile(
        optimizer=keras.optimizers.Adam(learning_rate=lr),
        loss='categorical_crossentropy',
        metrics=['accuracy']
    )
    History = MLP_3.fit(train_data[:i], train_labels[:i], batch_size=N_batch_size,
                        epochs=N_epochs, validation_data=(test_data, test_labels),
                        shuffle=False)
    train_acc.append(History.history['accuracy'][-1])
    test_acc.append(History.history['val_accuracy'][-1])
# Plot the training and testing accuracy as a function of training data size
plt.figure(figsize=(8,5))
plt.plot(range(5000,65000,5000),train_acc,'b',label = 'Train')
plt.plot(range(5000,65000,5000),test_acc,'k',label = 'Test')
plt.xlabel('Training Data Size')
plt.ylabel('Accuracy')
plt.legend()
plt.grid()
plt.show()
```
This code trains a three-layer MLP model on the MNIST dataset using Keras. It then loops through different numbers of training samples (5000, 10000, 15000, ..., 60000) and trains the model on each subset of the data for 100 epochs. After each training run, it records the final training and testing accuracy. Finally, it plots a graph of accuracy as a function of training data size.
For the SVM assignment, what is the corresponding code in linear_svm.py, linear_classifier.py, and svm.ipynb?
linear_svm.py:
```python
import numpy as np

class LinearSVM:
    def __init__(self, lr=0.01, reg=0.01, num_iters=1000, batch_size=32):
        self.lr = lr                  # learning rate
        self.reg = reg                # L2 regularization strength
        self.num_iters = num_iters
        self.batch_size = batch_size
        self.W = None
        self.b = None

    def train(self, X, y):
        num_train, dim = X.shape
        num_classes = np.max(y) + 1
        if self.W is None:
            self.W = 0.001 * np.random.randn(dim, num_classes)
            self.b = np.zeros((1, num_classes))
        loss_history = []
        for i in range(self.num_iters):
            # Sample a random minibatch for stochastic gradient descent
            batch_idx = np.random.choice(num_train, self.batch_size)
            X_batch = X[batch_idx]
            y_batch = y[batch_idx]
            loss, grad_W, grad_b = self.loss(X_batch, y_batch)
            loss_history.append(loss)
            self.W -= self.lr * grad_W
            self.b -= self.lr * grad_b
        return loss_history

    def predict(self, X):
        scores = X.dot(self.W) + self.b
        y_pred = np.argmax(scores, axis=1)
        return y_pred

    def loss(self, X_batch, y_batch):
        # Multiclass hinge (SVM) loss with L2 regularization
        num_train = X_batch.shape[0]
        scores = X_batch.dot(self.W) + self.b
        correct_scores = scores[range(num_train), y_batch]
        margins = np.maximum(0, scores - correct_scores[:, np.newaxis] + 1)
        margins[range(num_train), y_batch] = 0
        loss = np.sum(margins) / num_train + 0.5 * self.reg * np.sum(self.W * self.W)
        # Gradient: +1 for each class with a positive margin,
        # minus the count of positive margins for the correct class
        num_pos = np.sum(margins > 0, axis=1)
        dscores = np.zeros_like(scores)
        dscores[margins > 0] = 1
        dscores[range(num_train), y_batch] -= num_pos
        dscores /= num_train
        grad_W = np.dot(X_batch.T, dscores) + self.reg * self.W
        grad_b = np.sum(dscores, axis=0, keepdims=True)
        return loss, grad_W, grad_b
```
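The analytic gradient returned by `loss()` can be checked against a numerical finite-difference estimate. A minimal sketch of such a check (re-implementing the same hinge loss as a standalone function so it runs without the files above; all data here is randomly generated for illustration):

```python
import numpy as np

def svm_loss(W, b, X, y, reg):
    # Same multiclass hinge loss and gradient as LinearSVM.loss
    n = X.shape[0]
    scores = X.dot(W) + b
    correct = scores[range(n), y]
    margins = np.maximum(0, scores - correct[:, np.newaxis] + 1)
    margins[range(n), y] = 0
    loss = np.sum(margins) / n + 0.5 * reg * np.sum(W * W)
    dscores = (margins > 0).astype(float)
    dscores[range(n), y] -= dscores.sum(axis=1)
    dscores /= n
    grad_W = X.T.dot(dscores) + reg * W
    grad_b = dscores.sum(axis=0, keepdims=True)
    return loss, grad_W, grad_b

rng = np.random.default_rng(0)
X = rng.normal(size=(20, 5))
y = rng.integers(0, 3, size=20)
W = 0.01 * rng.normal(size=(5, 3))
b = np.zeros((1, 3))

loss, grad_W, grad_b = svm_loss(W, b, X, y, reg=0.1)

# Central finite differences on a few entries of W
h, errs = 1e-5, []
for idx in [(0, 0), (2, 1), (4, 2)]:
    W[idx] += h
    lp, _, _ = svm_loss(W, b, X, y, reg=0.1)
    W[idx] -= 2 * h
    lm, _, _ = svm_loss(W, b, X, y, reg=0.1)
    W[idx] += h
    errs.append(abs((lp - lm) / (2 * h) - grad_W[idx]))

max_err = max(errs)
print('max |numeric - analytic| gradient error:', max_err)
```

With small random weights the margins stay well away from the hinge kink, so the numeric and analytic gradients should agree to many decimal places.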
linear_classifier.py:
```python
import numpy as np

class LinearClassifier:
    def __init__(self, lr=0.01, reg=0.01, num_iters=1000, batch_size=32):
        self.lr = lr
        self.reg = reg
        self.num_iters = num_iters
        self.batch_size = batch_size
        self.W = None
        self.b = None

    def train(self, X, y):
        num_train, dim = X.shape
        num_classes = np.max(y) + 1
        if self.W is None:
            self.W = 0.001 * np.random.randn(dim, num_classes)
            self.b = np.zeros((1, num_classes))
        loss_history = []
        for i in range(self.num_iters):
            batch_idx = np.random.choice(num_train, self.batch_size)
            X_batch = X[batch_idx]
            y_batch = y[batch_idx]
            loss, grad_W, grad_b = self.loss(X_batch, y_batch)
            loss_history.append(loss)
            self.W -= self.lr * grad_W
            self.b -= self.lr * grad_b
        return loss_history

    def predict(self, X):
        scores = X.dot(self.W) + self.b
        y_pred = np.argmax(scores, axis=1)
        return y_pred

    def loss(self, X_batch, y_batch):
        num_train = X_batch.shape[0]
        scores = X_batch.dot(self.W) + self.b
        correct_scores = scores[range(num_train), y_batch]
        margins = np.maximum(0, scores - correct_scores[:, np.newaxis] + 1)
        margins[range(num_train), y_batch] = 0
        loss = np.sum(margins) / num_train + 0.5 * self.reg * np.sum(self.W * self.W)
        num_pos = np.sum(margins > 0, axis=1)
        dscores = np.zeros_like(scores)
        dscores[margins > 0] = 1
        dscores[range(num_train), y_batch] -= num_pos
        dscores /= num_train
        grad_W = np.dot(X_batch.T, dscores) + self.reg * self.W
        grad_b = np.sum(dscores, axis=0, keepdims=True)
        return loss, grad_W, grad_b
```
svm.ipynb:
```python
import numpy as np
import matplotlib.pyplot as plt
from sklearn.datasets import make_blobs
from sklearn.model_selection import train_test_split
from linear_classifier import LinearClassifier

def plot_data(X, y, title):
    plt.scatter(X[:, 0], X[:, 1], c=y, cmap=plt.cm.RdBu)
    plt.title(title)
    plt.xlabel('Feature 1')
    plt.ylabel('Feature 2')
    plt.show()

def plot_decision_boundary(clf, X, y, title):
    plt.scatter(X[:, 0], X[:, 1], c=y, cmap=plt.cm.RdBu)
    ax = plt.gca()
    xlim = ax.get_xlim()
    ylim = ax.get_ylim()
    # Evaluate the classifier on a grid covering the plot area
    xx = np.linspace(xlim[0], xlim[1], 100)
    yy = np.linspace(ylim[0], ylim[1], 100)
    XX, YY = np.meshgrid(xx, yy)
    xy = np.vstack([XX.ravel(), YY.ravel()]).T
    Z = clf.predict(xy).reshape(XX.shape)
    # predict() returns class labels 0/1, so the boundary sits at 0.5
    plt.contour(XX, YY, Z, levels=[0.5], colors='k', linestyles='-')
    plt.title(title)
    plt.xlabel('Feature 1')
    plt.ylabel('Feature 2')
    plt.show()

def main():
    X, y = make_blobs(n_samples=200, centers=2, random_state=42)
    plot_data(X, y, 'Linearly Separable Data')
    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)
    clf = LinearClassifier()
    loss_history = clf.train(X_train, y_train)
    train_acc = np.mean(clf.predict(X_train) == y_train)
    test_acc = np.mean(clf.predict(X_test) == y_test)
    print('Train accuracy: {:.3f}, Test accuracy: {:.3f}'.format(train_acc, test_acc))
    plot_decision_boundary(clf, X, y, 'Linear SVM')

if __name__ == '__main__':
    main()
```
The code above implements a simple linear SVM for binary classification. In `svm.ipynb`, we generate a linearly separable dataset with `make_blobs` and split it into training and test sets. We then train a `LinearClassifier` on the training set, evaluate its performance on the test set, and finally plot the model's decision boundary.
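The grid-evaluation pattern used by `plot_decision_boundary` (flatten a meshgrid into a list of points, predict every point, reshape back to grid shape for contouring) can be sketched on its own; the toy `predict` below is a hypothetical stand-in for the trained classifier:

```python
import numpy as np

def predict(points):
    # Hypothetical stand-in for clf.predict: label 1 above the line x2 = x1
    return (points[:, 1] > points[:, 0]).astype(int)

xx = np.linspace(-3, 3, 100)
yy = np.linspace(-3, 3, 100)
XX, YY = np.meshgrid(xx, yy)                 # two (100, 100) coordinate grids
xy = np.vstack([XX.ravel(), YY.ravel()]).T   # (10000, 2): one row per grid point
Z = predict(xy).reshape(XX.shape)            # labels back in grid shape for contouring

print(xy.shape, Z.shape)  # -> (10000, 2) (100, 100)
```

The reshaped `Z` lines up element-for-element with `XX` and `YY`, which is exactly what `plt.contour(XX, YY, Z, ...)` expects.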