Task: implement softmax regression from scratch (using only Tensor- and NumPy-related libraries), train and test it on the Fashion-MNIST dataset, and analyze the results from several angles, including the loss and the accuracy on the training and test sets (the cross-entropy loss function must also be implemented from scratch).
Date: 2024-04-14 10:31:04 · Views: 199
Sure. Here are the steps to implement softmax regression from scratch with Tensor and NumPy, and to train and test it on the Fashion-MNIST dataset:
First, import the required libraries:
```python
import torch
import numpy as np
from torchvision import datasets, transforms
```
Next, load the Fashion-MNIST dataset and preprocess it:
```python
# Define the data transforms: convert to tensor, then normalize to [-1, 1]
transform = transforms.Compose([
    transforms.ToTensor(),
    transforms.Normalize((0.5,), (0.5,))
])

# Load the training and test sets
train_dataset = datasets.FashionMNIST(root='./data', train=True, transform=transform, download=True)
test_dataset = datasets.FashionMNIST(root='./data', train=False, transform=transform, download=True)

# Create the data loaders
train_loader = torch.utils.data.DataLoader(dataset=train_dataset, batch_size=64, shuffle=True)
test_loader = torch.utils.data.DataLoader(dataset=test_dataset, batch_size=64, shuffle=False)
```
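Before training, it helps to see what the preprocessing actually does to a batch. The sketch below uses a fake batch with the same shape a loader would yield (no dataset download needed), applies the same normalization `Normalize((0.5,), (0.5,))` performs, and then flattens it the way the training loop does:

```python
import torch

# A fake batch shaped like one from train_loader: (batch, channels, height, width)
images = torch.rand(64, 1, 28, 28)          # pixel values in [0, 1), as after ToTensor()
normalized = (images - 0.5) / 0.5           # what Normalize((0.5,), (0.5,)) computes
flat = normalized.view(-1, 28 * 28)         # the flattening used in the training loop

print(flat.shape)                           # torch.Size([64, 784])
print(normalized.min().item() >= -1.0 and normalized.max().item() <= 1.0)  # True
```

So each image reaches the model as a 784-dimensional vector with entries in [-1, 1].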
Next, define the softmax function:
```python
def softmax(x):
    # Subtract the row-wise max for numerical stability before exponentiating;
    # note the max must be taken per row (axis=1), not over the whole batch
    e_x = np.exp(x - np.max(x, axis=1, keepdims=True))
    return e_x / np.sum(e_x, axis=1, keepdims=True)
```
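A quick standalone sanity check (it re-defines `softmax` with the row-wise max so it is self-contained) confirms that each output row sums to 1 and that the max-subtraction trick keeps huge logits from overflowing:

```python
import numpy as np

def softmax(x):
    # subtract the row-wise max so np.exp never sees huge arguments
    e_x = np.exp(x - np.max(x, axis=1, keepdims=True))
    return e_x / np.sum(e_x, axis=1, keepdims=True)

logits = np.array([[1.0, 2.0, 3.0],
                   [1000.0, 1000.0, 1000.0]])  # would overflow without the shift
probs = softmax(logits)

print(np.allclose(probs.sum(axis=1), 1.0))     # True: each row is a distribution
print(np.allclose(probs[1], np.ones(3) / 3))   # True: equal logits give a uniform row
```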
Then initialize the model parameters and set the training hyperparameters:
```python
# Initialize the model parameters: scaled Gaussian weights, zero biases
num_features = 28 * 28
num_classes = 10
W = np.random.randn(num_features, num_classes) / np.sqrt(num_features)
b = np.zeros(num_classes)

# Learning rate and number of epochs
learning_rate = 0.1
num_epochs = 10
```
Now train the model with mini-batch gradient descent:
```python
for epoch in range(num_epochs):
    total_loss = 0
    for images, labels in train_loader:
        # Flatten each image into a 784-dimensional row vector
        images = images.view(-1, num_features).numpy()

        # Forward pass
        scores = np.dot(images, W) + b
        probs = softmax(scores)

        # Cross-entropy loss implemented from scratch with one-hot targets
        batch_size = images.shape[0]
        y = np.zeros_like(probs)
        y[np.arange(batch_size), labels.numpy()] = 1
        loss = -np.sum(y * np.log(probs)) / batch_size
        total_loss += loss

        # Backward pass: the gradient of the loss w.r.t. scores is (probs - y) / batch_size
        grad_scores = (probs - y) / batch_size
        grad_W = np.dot(images.T, grad_scores)
        grad_b = np.sum(grad_scores, axis=0)

        # Gradient-descent update
        W -= learning_rate * grad_W
        b -= learning_rate * grad_b

    # Report the average loss per batch for this epoch
    print('Epoch [{}/{}], Loss: {:.4f}'.format(epoch + 1, num_epochs, total_loss / len(train_loader)))
```
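Because the backward pass is hand-derived, it is worth verifying `(probs - y) / batch_size` against numerical differentiation on a tiny example. The sketch below is standalone: `cross_entropy` is a hypothetical helper mirroring the inline loss, and the shapes are toy-sized so the double loop stays cheap:

```python
import numpy as np

def softmax(x):
    e_x = np.exp(x - np.max(x, axis=1, keepdims=True))
    return e_x / np.sum(e_x, axis=1, keepdims=True)

def cross_entropy(scores, y):
    # mean cross-entropy over the batch, same formula as in the training loop
    probs = softmax(scores)
    return -np.sum(y * np.log(probs)) / scores.shape[0]

rng = np.random.default_rng(0)
scores = rng.normal(size=(4, 3))
y = np.eye(3)[[0, 2, 1, 0]]                  # one-hot labels for 4 samples

# Analytic gradient from the derivation above
analytic = (softmax(scores) - y) / scores.shape[0]

# Central finite differences on each entry of scores
numeric = np.zeros_like(scores)
eps = 1e-6
for i in range(scores.shape[0]):
    for j in range(scores.shape[1]):
        plus, minus = scores.copy(), scores.copy()
        plus[i, j] += eps
        minus[i, j] -= eps
        numeric[i, j] = (cross_entropy(plus, y) - cross_entropy(minus, y)) / (2 * eps)

print(np.allclose(analytic, numeric, atol=1e-6))  # True: the gradients agree
```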
Finally, evaluate the model and analyze the results:
```python
def accuracy(loader):
    """Fraction of correctly classified samples in a data loader."""
    correct, total = 0, 0
    for images, labels in loader:
        images = images.view(-1, num_features).numpy()
        scores = np.dot(images, W) + b
        # softmax is monotonic, so argmax over raw scores gives the same prediction
        predicted_labels = np.argmax(scores, axis=1)
        total += labels.size(0)
        correct += (predicted_labels == labels.numpy()).sum()
    return correct / total

# Accuracy on the training and test sets
train_accuracy = accuracy(train_loader)
test_accuracy = accuracy(test_loader)
print('Train Accuracy: {:.2f}%'.format(train_accuracy * 100))
print('Test Accuracy: {:.2f}%'.format(test_accuracy * 100))
```
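For a richer analysis than a single accuracy number, a per-class breakdown shows which Fashion-MNIST categories the linear model confuses. The sketch below builds a confusion matrix from plain label arrays; `y_true` and `y_pred` are hypothetical stand-ins for the true and predicted labels you could collect during the evaluation loop:

```python
import numpy as np

def confusion_matrix(y_true, y_pred, num_classes):
    # cm[i, j] counts samples whose true class is i and predicted class is j
    cm = np.zeros((num_classes, num_classes), dtype=int)
    np.add.at(cm, (y_true, y_pred), 1)  # unbuffered add handles repeated indices
    return cm

# Toy labels for illustration (3 classes, 6 samples)
y_true = np.array([0, 1, 2, 2, 1, 0])
y_pred = np.array([0, 1, 2, 1, 1, 2])

cm = confusion_matrix(y_true, y_pred, 3)
per_class_acc = cm.diagonal() / cm.sum(axis=1)
print(cm)
print(per_class_acc)  # [0.5 1.  0.5]: classes 0 and 2 each misclassified once
```

On Fashion-MNIST, rows with low diagonal values typically involve visually similar classes such as shirt, T-shirt, coat, and pullover.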
With these steps, softmax regression is implemented from scratch and trained and tested on Fashion-MNIST, and the results can be analyzed from several angles. The per-epoch loss should trend downward (with some noise, since this is plain mini-batch SGD), and comparing the training and test accuracy assesses generalization: for a linear model of this capacity the two are usually close (typically in the low-to-mid 80s percent after 10 epochs), so overfitting is mild; a large gap between them would indicate overfitting.