Data: two columns — one with a timestamp every four hours, the other with the recorded temperature (that is all the data there is). Based on this historical data, predict the next day's temperature with a neural network. Please give detailed, accurate Python code.
Posted: 2024-04-01 19:38:44
Below is detailed Python code for predicting future temperature with a neural network.

First, we import the required libraries and load the data. We use pandas to load and process the data, Keras to build and train the model, and scikit-learn for the evaluation metrics.
```python
import pandas as pd
import numpy as np
from keras.models import Sequential
from keras.layers import Dense, LSTM

# Load the dataset (assumed columns: 'Datetime' and one temperature column)
data = pd.read_csv('temperature.csv')

# Parse the datetime column and use it as the index
data['Datetime'] = pd.to_datetime(data['Datetime'])
data.set_index('Datetime', inplace=True)

# Split chronologically into training and test sets (no shuffling for time series)
train_size = int(len(data) * 0.8)
train_data, test_data = data.iloc[:train_size, :], data.iloc[train_size:, :]

# After set_index the frame has a single temperature column; both the
# inputs (past values) and the targets (future values) come from it
train_X = train_y = train_data.iloc[:, 0]
test_X = test_y = test_data.iloc[:, 0]
```
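The column names `Datetime` and `Temperature` are assumptions — adjust them to match your file. If you want to try the pipeline without real measurements, a sketch like the following can generate a synthetic `temperature.csv` with the same 4-hour cadence (made-up seasonal and daily cycles, not real data):

```python
import numpy as np
import pandas as pd

# Hypothetical stand-in data: one year of 4-hourly timestamps with a
# seasonal cycle, a daily cycle, and some noise
rng = np.random.default_rng(0)
idx = pd.date_range("2023-01-01", periods=365 * 6, freq="4h")
hours = idx.hour.to_numpy()
day_of_year = idx.dayofyear.to_numpy()
temp = (15
        + 10 * np.sin(2 * np.pi * (day_of_year - 100) / 365)  # seasonal cycle
        + 4 * np.sin(2 * np.pi * (hours - 14) / 24)           # daily cycle
        + rng.normal(0, 1, len(idx)))                         # noise
pd.DataFrame({"Datetime": idx, "Temperature": temp}).to_csv(
    "temperature.csv", index=False)
```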
Next, we prepare the data for the network. The sliding-window inputs must be reshaped into a 3-D tensor of shape (samples, time_steps, features) before they can be fed to the LSTM layer, and we standardize both inputs and targets — using training-set statistics only — so that optimization behaves well during training.
```python
# Build sliding windows: each sample is `time_steps` past values,
# the target is the value immediately after the window
def create_dataset(X, y, time_steps=1):
    Xs, ys = [], []
    for i in range(len(X) - time_steps):
        Xs.append(X.iloc[i:(i + time_steps)].values)
        ys.append(y.iloc[i + time_steps])
    return np.array(Xs), np.array(ys)

time_steps = 24  # 24 samples at 4-hour spacing = 4 days of history
X_train, y_train = create_dataset(train_X, train_y, time_steps)
X_test, y_test = create_dataset(test_X, test_y, time_steps)

# LSTM expects (samples, time_steps, features)
X_train = X_train.reshape(-1, time_steps, 1)
X_test = X_test.reshape(-1, time_steps, 1)

# Standardize with training-set statistics only (no test-set leakage)
mean = y_train.mean()
std = y_train.std()
X_train = (X_train - mean) / std
X_test = (X_test - mean) / std
y_train = (y_train - mean) / std
y_test = (y_test - mean) / std
```
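To see exactly what the windowing produces, here is a minimal check of `create_dataset` on a toy five-value series (made-up numbers, shapes are the point). Note that `np.array(Xs)` is 2-D, which is why a reshape to `(samples, time_steps, 1)` is needed before the series can be fed to an LSTM:

```python
import numpy as np
import pandas as pd

def create_dataset(X, y, time_steps=1):
    # Each sample = `time_steps` consecutive values; target = the next value
    Xs, ys = [], []
    for i in range(len(X) - time_steps):
        Xs.append(X.iloc[i:(i + time_steps)].values)
        ys.append(y.iloc[i + time_steps])
    return np.array(Xs), np.array(ys)

s = pd.Series([1.0, 2.0, 3.0, 4.0, 5.0])
X, y = create_dataset(s, s, time_steps=2)
print(X.shape)  # (3, 2): windows [1,2], [2,3], [3,4]
print(y)        # [3. 4. 5.]
print(X.reshape(-1, 2, 1).shape)  # (3, 2, 1): ready for an LSTM
```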
Next, we build and train the model: an LSTM layer to process the sequence, a Dense layer to output the prediction, the Adam optimizer, and mean-squared-error loss.
```python
# Build the network: one LSTM layer followed by a single-output Dense layer
model = Sequential()
model.add(LSTM(64, input_shape=(X_train.shape[1], X_train.shape[2])))
model.add(Dense(1))
model.compile(loss='mse', optimizer='adam')

# Train the model
model.fit(X_train, y_train, epochs=50, batch_size=16, validation_split=0.1, verbose=1)
```
Finally, we use the trained model to predict on the test set and report the root-mean-square error (RMSE) and mean absolute error (MAE). Both the predictions and the windowed targets must be mapped back to the original temperature scale before the errors are computed.
```python
from sklearn.metrics import mean_squared_error, mean_absolute_error

# Predict and undo the standardization
y_pred = model.predict(X_test)
y_pred = y_pred.flatten() * std + mean
y_true = y_test * std + mean  # windowed targets back on the original scale

# RMSE and MAE of the predictions
rmse = np.sqrt(mean_squared_error(y_true, y_pred))
mae = mean_absolute_error(y_true, y_pred)
print("RMSE:", rmse)
print("MAE:", mae)
```
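One prediction step covers a single 4-hour interval, so forecasting the whole next day (six 4-hour steps) requires feeding each prediction back into the input window. Here is a hedged sketch of that loop; it works with any predict function that follows the Keras `(samples, time_steps, 1)` convention, demonstrated below with a stand-in persistence "model" rather than the trained LSTM:

```python
import numpy as np

def forecast_next_day(predict_fn, last_window, steps=6):
    """Roll the model forward `steps` times, appending each prediction
    to the window. `last_window` is a 1-D array of the most recent
    standardized temperatures."""
    window = list(last_window)
    preds = []
    for _ in range(steps):
        x = np.array(window[-len(last_window):]).reshape(1, -1, 1)
        next_val = float(predict_fn(x)[0, 0])
        preds.append(next_val)
        window.append(next_val)  # feed the prediction back in
    return np.array(preds)

# Stand-in model for illustration: always predicts the last value seen
persistence = lambda x: x[:, -1, :]
out = forecast_next_day(persistence, np.array([0.1, 0.2, 0.3]), steps=6)
print(out)  # six copies of 0.3
```

With the trained network you would pass something like `lambda x: model.predict(x, verbose=0)` together with the last `time_steps` standardized values, then map the results back with `* std + mean`.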
The complete code:
```python
import pandas as pd
import numpy as np
from keras.models import Sequential
from keras.layers import Dense, LSTM
from sklearn.metrics import mean_squared_error, mean_absolute_error

# Load the dataset (assumed columns: 'Datetime' and one temperature column)
data = pd.read_csv('temperature.csv')

# Parse the datetime column and use it as the index
data['Datetime'] = pd.to_datetime(data['Datetime'])
data.set_index('Datetime', inplace=True)

# Split chronologically into training and test sets (no shuffling for time series)
train_size = int(len(data) * 0.8)
train_data, test_data = data.iloc[:train_size, :], data.iloc[train_size:, :]

# After set_index the frame has a single temperature column; both the
# inputs (past values) and the targets (future values) come from it
train_X = train_y = train_data.iloc[:, 0]
test_X = test_y = test_data.iloc[:, 0]

# Build sliding windows: each sample is `time_steps` past values,
# the target is the value immediately after the window
def create_dataset(X, y, time_steps=1):
    Xs, ys = [], []
    for i in range(len(X) - time_steps):
        Xs.append(X.iloc[i:(i + time_steps)].values)
        ys.append(y.iloc[i + time_steps])
    return np.array(Xs), np.array(ys)

time_steps = 24  # 24 samples at 4-hour spacing = 4 days of history
X_train, y_train = create_dataset(train_X, train_y, time_steps)
X_test, y_test = create_dataset(test_X, test_y, time_steps)

# LSTM expects (samples, time_steps, features)
X_train = X_train.reshape(-1, time_steps, 1)
X_test = X_test.reshape(-1, time_steps, 1)

# Standardize with training-set statistics only (no test-set leakage)
mean = y_train.mean()
std = y_train.std()
X_train = (X_train - mean) / std
X_test = (X_test - mean) / std
y_train = (y_train - mean) / std
y_test = (y_test - mean) / std

# Build the network: one LSTM layer followed by a single-output Dense layer
model = Sequential()
model.add(LSTM(64, input_shape=(time_steps, 1)))
model.add(Dense(1))
model.compile(loss='mse', optimizer='adam')

# Train the model
model.fit(X_train, y_train, epochs=50, batch_size=16, validation_split=0.1, verbose=1)

# Predict and undo the standardization
y_pred = model.predict(X_test)
y_pred = y_pred.flatten() * std + mean
y_true = y_test * std + mean  # windowed targets back on the original scale

# RMSE and MAE of the predictions
rmse = np.sqrt(mean_squared_error(y_true, y_pred))
mae = mean_absolute_error(y_true, y_pred)
print("RMSE:", rmse)
print("MAE:", mae)
```