Dataset and Transformer Forecasting Model with Visualization (Python Code)
Date: 2023-10-15 19:29:03
Below is an example Python script that uses a Transformer model to forecast time-series data and visualize the result:
```python
import torch
import torch.nn as nn
import torch.optim as optim
import numpy as np
import pandas as pd
from sklearn.preprocessing import MinMaxScaler
import matplotlib.pyplot as plt

# Data preprocessing: scale the series to [-1, 1]
data = pd.read_csv('data.csv')
scaler = MinMaxScaler(feature_range=(-1, 1))
data['scaled'] = scaler.fit_transform(
    data['value'].values.reshape(-1, 1)).flatten()
training_data = data['scaled'].values

# Hyperparameters
input_size = 24    # length of the input window
output_size = 12   # number of steps predicted per forward pass
num_epochs = 100
learning_rate = 0.0001

# Transformer model: each 24-step window is fed to the transformer as a
# single "token" of dimension d_model=24. (This is a simplification; a
# practical model would embed each time step separately and add
# positional encodings.)
class TransformerModel(nn.Module):
    def __init__(self, input_size, output_size):
        super().__init__()
        self.transformer = nn.Transformer(
            d_model=input_size, nhead=2,
            num_encoder_layers=2, num_decoder_layers=2)
        self.fc = nn.Linear(input_size, output_size)

    def forward(self, input):
        # input: (seq_len=1, batch=1, d_model=input_size)
        output = self.transformer(input, input)
        return self.fc(output)

# Training
model = TransformerModel(input_size, output_size)
criterion = nn.MSELoss()
optimizer = optim.Adam(model.parameters(), lr=learning_rate)
for epoch in range(num_epochs):
    for i in range(len(training_data) - input_size - output_size):
        input_seq = training_data[i:i + input_size]
        output_seq = training_data[i + input_size:i + input_size + output_size]
        input_tensor = torch.FloatTensor(input_seq).view(1, 1, input_size)
        output_tensor = torch.FloatTensor(output_seq).view(1, 1, output_size)
        optimizer.zero_grad()
        predicted_output = model(input_tensor)
        loss = criterion(predicted_output, output_tensor)
        loss.backward()
        optimizer.step()
    print('Epoch: {}, Loss: {:.5f}'.format(epoch + 1, loss.item()))

# Prediction: roll the window forward one step at a time, feeding the
# first predicted value back in as the newest observation
model.eval()
with torch.no_grad():
    future = 12
    input_seq = training_data[-input_size:]
    predicted_outputs = []
    for i in range(future):
        input_tensor = torch.FloatTensor(input_seq).view(1, 1, input_size)
        predicted_output = model(input_tensor)
        next_value = predicted_output.squeeze()[0].item()
        predicted_outputs.append(next_value)
        input_seq = np.append(input_seq[1:], next_value)

# Visualization: undo the scaling and plot history followed by forecast
predicted_outputs = scaler.inverse_transform(
    np.array(predicted_outputs).reshape(-1, 1)).flatten()
plt.figure(figsize=(10, 6))
plt.plot(range(len(data)), data['value'], label='Original data')
plt.plot(range(len(data), len(data) + future), predicted_outputs,
         label='Predicted data')
plt.xlabel('Time step')
plt.ylabel('Value')
plt.legend()
plt.show()
```
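As a quick sanity check of the tensor shapes this design relies on: `nn.Transformer` (with the default `batch_first=False`) expects inputs of shape `(seq_len, batch, d_model)`, and here each 24-step window is treated as a single token of dimension 24. The following minimal sketch, independent of the training script, confirms the expected output shape:

```python
import torch
import torch.nn as nn

# Same architecture as the model above: one 24-dimensional "token"
# passed through the transformer, then projected to 12 outputs.
transformer = nn.Transformer(d_model=24, nhead=2,
                             num_encoder_layers=2, num_decoder_layers=2)
fc = nn.Linear(24, 12)

src = torch.randn(1, 1, 24)        # (seq_len=1, batch=1, d_model=24)
out = fc(transformer(src, src))    # -> (1, 1, 12)
print(out.shape)                   # torch.Size([1, 1, 12])
```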
This example uses the Transformer model from the PyTorch deep learning library to forecast time-series data and visualize the result. The code first normalizes the raw data to the range [-1, 1], then defines a Transformer model class consisting of a Transformer layer and a fully connected layer. It trains the model with MSELoss as the loss function and the Adam optimizer. After training, it uses the model to forecast the next 12 time steps autoregressively, and finally plots the forecast alongside the original series.
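The script assumes a `data.csv` file with `timestamp` and `value` columns, which the original post does not provide. A minimal sketch for generating a synthetic stand-in (a noisy sine wave with a daily cycle, purely illustrative) so the example can be run end to end:

```python
import numpy as np
import pandas as pd

# Synthetic hourly series: a 24-step sine cycle plus Gaussian noise,
# saved in the timestamp/value layout the script above expects.
n = 200
rng = np.random.default_rng(0)
timestamps = pd.date_range("2023-01-01", periods=n, freq="h")
values = np.sin(np.arange(n) * 2 * np.pi / 24) + rng.normal(0, 0.1, n)
pd.DataFrame({"timestamp": timestamps, "value": values}).to_csv(
    "data.csv", index=False)
```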