Complete PyTorch code for hyperspectral anomaly detection
Below is complete code for hyperspectral anomaly detection with PyTorch. We frame the task as binary classification (normal vs. anomalous) and solve it with a convolutional neural network.
First, import the required libraries and modules:
```python
import torch
import torch.nn as nn
import torch.optim as optim
from torch.utils.data import Dataset, DataLoader
from sklearn.model_selection import train_test_split
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
```
Next, we define a custom dataset class, `HyperspectralDataset`, to serve the training and test data:
```python
class HyperspectralDataset(Dataset):
    def __init__(self, data, labels):
        self.data = data
        self.labels = labels

    def __len__(self):
        return len(self.labels)

    def __getitem__(self, idx):
        # Convert to tensors with the dtypes PyTorch expects:
        # float32 inputs for Conv2d, int64 targets for CrossEntropyLoss.
        x = torch.tensor(self.data[idx], dtype=torch.float32)
        y = torch.tensor(self.labels[idx], dtype=torch.long)
        return x, y
```
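As a quick smoke test (with made-up random data; the 48×48 patch size is an assumption that matches the CNN defined in the next step), you can check what the dataset returns:
```python
# Hypothetical smoke test with random data; the 48x48 patch size is an
# assumption chosen to match the CNN defined below.
dummy_data = np.random.rand(8, 48, 48).astype(np.float32)
dummy_labels = np.random.randint(0, 2, size=(8, 1))
sample_x, sample_y = HyperspectralDataset(dummy_data, dummy_labels)[0]
print(sample_x.shape, sample_x.dtype, sample_y)  # torch.Size([48, 48]) torch.float32 tensor([...])
```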
Now we define the model: a convolutional neural network with three convolution layers, each followed by max pooling, plus a fully connected layer and a two-way output layer for the normal/anomaly decision.
```python
class HyperspectralCNN(nn.Module):
    def __init__(self):
        super(HyperspectralCNN, self).__init__()
        self.conv1 = nn.Conv2d(1, 32, kernel_size=3, padding=1)
        self.conv2 = nn.Conv2d(32, 64, kernel_size=3, padding=1)
        self.conv3 = nn.Conv2d(64, 128, kernel_size=3, padding=1)
        self.pool = nn.MaxPool2d(2, 2)
        # 128 * 6 * 6 assumes 48x48 input patches:
        # 48 -> 24 -> 12 -> 6 after three rounds of 2x2 pooling.
        self.fc1 = nn.Linear(128 * 6 * 6, 256)
        self.fc2 = nn.Linear(256, 2)

    def forward(self, x):
        x = self.pool(nn.functional.relu(self.conv1(x)))
        x = self.pool(nn.functional.relu(self.conv2(x)))
        x = self.pool(nn.functional.relu(self.conv3(x)))
        x = x.view(-1, 128 * 6 * 6)
        x = nn.functional.relu(self.fc1(x))
        x = self.fc2(x)
        return x
```
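The fully connected layers only line up if the input patches are really 48×48, as assumed above; a dummy forward pass verifies the shapes before any real data is involved:
```python
# Sanity check for the assumed 48x48 patch size: after three 2x2 poolings
# the spatial map is 6x6, matching fc1's 128 * 6 * 6 input features.
dummy = torch.randn(4, 1, 48, 48)  # (batch, channel, height, width)
print(HyperspectralCNN()(dummy).shape)  # torch.Size([4, 2])
```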
We also define a few hyperparameters: the batch size, the learning rate, and the number of training epochs.
```python
batch_size = 64
learning_rate = 0.001
num_epochs = 20
```
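Optionally, training can run on a GPU when one is available. This is a hedged sketch; if you adopt it, move the model with `model.to(device)` after it is created, and move each batch with `inputs.to(device)` and `labels.to(device)` inside the loops below:
```python
# Optional: pick a GPU if available. If used, the model and every batch
# in the training and evaluation loops must also be moved to this device.
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
```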
Next we load the data. This article uses the `Indian_pines_corrected` hyperspectral dataset, assumed here to have been pre-exported to CSV with one flattened patch per row.
```python
# Assumption: each CSV row stores one flattened 48x48 patch, and each
# label row stores the corresponding 0/1 anomaly label.
data = pd.read_csv('Indian_pines_corrected.csv', header=None).values.astype(np.float32)
data = data.reshape(-1, 48, 48)  # (num_samples, height, width)
labels = pd.read_csv('Indian_pines_labels.csv', header=None).values.astype(np.int64)
X_train, X_test, y_train, y_test = train_test_split(
    data, labels, test_size=0.2, random_state=42)
train_dataset = HyperspectralDataset(X_train, y_train)
test_dataset = HyperspectralDataset(X_test, y_test)
train_loader = DataLoader(train_dataset, batch_size=batch_size, shuffle=True)
test_loader = DataLoader(test_dataset, batch_size=batch_size, shuffle=False)
```
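The Indian Pines scene itself is usually distributed as MATLAB files rather than CSV. A minimal sketch of reading them with `scipy.io.loadmat` follows; the dictionary keys match the commonly distributed files, but verify them against your own copy:
```python
# Hedged sketch: load the standard .mat distribution of Indian Pines.
# Verify the dictionary keys against your copy of the files.
from scipy.io import loadmat

cube = loadmat('Indian_pines_corrected.mat')['indian_pines_corrected']  # (145, 145, 200)
gt = loadmat('Indian_pines_gt.mat')['indian_pines_gt']                  # (145, 145)
print(cube.shape, gt.shape)
```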
Now we create the model, the optimizer, and the loss function.
```python
model = HyperspectralCNN()
optimizer = optim.Adam(model.parameters(), lr=learning_rate)
criterion = nn.CrossEntropyLoss()
```
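Anomalies are usually rare, so the two classes are heavily imbalanced; `nn.CrossEntropyLoss` accepts a per-class `weight` tensor to compensate. The weights below are illustrative placeholders, not values from the original article:
```python
# Optional: up-weight the rare anomaly class. The 1:10 ratio is a
# placeholder; derive it from the actual class frequencies in your labels.
class_weights = torch.tensor([1.0, 10.0])
criterion = nn.CrossEntropyLoss(weight=class_weights)
```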
Finally, we can train the model.
```python
train_losses = []
test_losses = []
for epoch in range(num_epochs):
    model.train()
    train_loss = 0.0
    for inputs, labels in train_loader:
        optimizer.zero_grad()
        # Add a channel dimension: (batch, 48, 48) -> (batch, 1, 48, 48).
        outputs = model(inputs.unsqueeze(1))
        # squeeze(1) keeps the batch dimension even when a batch has one sample.
        loss = criterion(outputs, labels.squeeze(1))
        loss.backward()
        optimizer.step()
        train_loss += loss.item() * inputs.size(0)
    train_loss /= len(train_loader.dataset)
    train_losses.append(train_loss)

    model.eval()
    test_loss = 0.0
    with torch.no_grad():
        for inputs, labels in test_loader:
            outputs = model(inputs.unsqueeze(1))
            loss = criterion(outputs, labels.squeeze(1))
            test_loss += loss.item() * inputs.size(0)
    test_loss /= len(test_loader.dataset)
    test_losses.append(test_loss)
    print('Epoch: {}, Training Loss: {:.4f}, Validation Loss: {:.4f}'.format(
        epoch + 1, train_loss, test_loss))
```
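It is often worth persisting the trained weights once the loop finishes; a short sketch using PyTorch's standard `state_dict` serialization (the filename is an arbitrary choice):
```python
# Save the trained weights; the filename is an arbitrary choice.
torch.save(model.state_dict(), 'hyperspectral_cnn.pth')
# Later, restore them into a fresh model instance:
# model = HyperspectralCNN()
# model.load_state_dict(torch.load('hyperspectral_cnn.pth'))
```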
After training, we evaluate the model's performance on the test data.
```python
correct = 0
total = 0
model.eval()
with torch.no_grad():
    for inputs, labels in test_loader:
        outputs = model(inputs.unsqueeze(1))
        _, predicted = torch.max(outputs, 1)
        total += labels.size(0)
        correct += (predicted == labels.squeeze(1)).sum().item()
accuracy = 100 * correct / total
print('Test Accuracy: {:.2f}%'.format(accuracy))
```
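Because anomalies are rare, plain accuracy can look high even for a model that never flags anything, so ROC AUC over the anomaly-class scores is a more informative check. A minimal sketch using `sklearn.metrics.roc_auc_score`:
```python
# Collect anomaly-class probabilities and compute ROC AUC, which is more
# informative than accuracy on an imbalanced anomaly-detection task.
from sklearn.metrics import roc_auc_score

scores, truths = [], []
with torch.no_grad():
    for inputs, labels in test_loader:
        probs = torch.softmax(model(inputs.unsqueeze(1)), dim=1)
        scores.extend(probs[:, 1].tolist())  # probability of the anomaly class
        truths.extend(labels.squeeze(1).tolist())
print('Test ROC AUC: {:.4f}'.format(roc_auc_score(truths, scores)))
```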
Finally, we plot the training and validation loss over the epochs.
```python
plt.plot(train_losses, label='Training Loss')
plt.plot(test_losses, label='Validation Loss')
plt.xlabel('Epoch')
plt.ylabel('Loss')
plt.legend()
plt.show()
```
Hopefully this article helps you implement hyperspectral anomaly detection with PyTorch.