Hands-on code: classifying multivariate Gaussian data with logistic regression in PyTorch
Here is a hands-on example that uses PyTorch to implement logistic regression for classifying samples drawn from two multivariate Gaussian distributions:
```
import torch
import numpy as np
import matplotlib.pyplot as plt

# Generate the dataset: two 2-D Gaussian clusters with correlated features
np.random.seed(1)
num_observations = 5000
x1 = np.random.multivariate_normal([0, 0], [[1, .75], [.75, 1]], num_observations)
x2 = np.random.multivariate_normal([1, 4], [[1, .75], [.75, 1]], num_observations)
simulated_separableish_features = np.vstack((x1, x2)).astype(np.float32)
simulated_labels = np.hstack((np.zeros(num_observations),
                              np.ones(num_observations))).astype(np.float32)

# Plot the raw dataset, colored by true class
fig, ax = plt.subplots()
ax.scatter(simulated_separableish_features[:, 0], simulated_separableish_features[:, 1],
           c=simulated_labels, alpha=.4)

# Convert the dataset to tensors; reshape labels to (N, 1) to match the model output
features = torch.from_numpy(simulated_separableish_features)
labels = torch.from_numpy(simulated_labels).unsqueeze(1)

# Define the model: a single linear layer followed by a sigmoid, i.e. logistic regression
model = torch.nn.Sequential(
    torch.nn.Linear(2, 1),
    torch.nn.Sigmoid()
)

# Define the loss function (binary cross-entropy) and the optimizer
loss_fn = torch.nn.BCELoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

# Train the model
for epoch in range(500):
    # Forward pass
    y_pred = model(features)
    # Compute the loss
    loss = loss_fn(y_pred, labels)
    # Clear accumulated gradients
    optimizer.zero_grad()
    # Backward pass
    loss.backward()
    # Update the parameters
    optimizer.step()

# Predict class labels and plot the points colored by predicted class
with torch.no_grad():
    predicted = model(features).numpy().flatten() > 0.5
plt.plot(simulated_separableish_features[~predicted, 0],
         simulated_separableish_features[~predicted, 1], 'go', alpha=.3)
plt.plot(simulated_separableish_features[predicted, 0],
         simulated_separableish_features[predicted, 1], 'ro', alpha=.3)
plt.legend(['False', 'True'])
plt.show()
```
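For a quick sanity check, here is a minimal sketch of how you might estimate the training accuracy of the fitted model. It reuses the `model`, `features`, and `simulated_labels` variables defined above; the 0.5 decision threshold is an assumption added for illustration, not part of the original snippet:
```
# Minimal sketch: estimate training accuracy (assumes model, features, and
# simulated_labels from the snippet above; the 0.5 threshold is an assumption)
with torch.no_grad():
    probs = model(features).numpy().flatten()
pred_classes = (probs > 0.5).astype(np.float32)
accuracy = (pred_classes == simulated_labels).mean()
print(f"Training accuracy: {accuracy:.3f}")
```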
I hope this code helps you solve the problem.