Please write multi-input, single-output time-series training and prediction code combining EEMD + STL + XGBoost + LightGBM + ConvLSTM that reads data from a CSV file, using the first 80% of the data as the training set and the last 20% as the test set.
EEMD: the following code can be used for multi-input, single-output time-series training and prediction, with the first 80% of the data as the training set and the last 20% as the test set (the split is chronological, not shuffled):

import numpy as np
import pandas as pd
import xgboost as xgb
import lightgbm as lgb
from PyEMD import EEMD
from keras.models import Sequential
from keras.layers import ConvLSTM2D, Flatten, Dense

# Read the data: every column except the last is a (numeric) input feature, the last column is the target
data = pd.read_csv('filename.csv')
X, y = data.iloc[:, :-1].values, data.iloc[:, -1].values

# Chronological 80/20 split: first 80% of rows for training, last 20% for testing
split = int(len(X) * 0.8)
X_train, X_test = X[:split], X[split:]
y_train, y_test = y[:split], y[split:]

# EEMD preprocessing: decompose each input column into IMFs and recombine them
# without the first (highest-frequency) IMF, i.e. use EEMD as a denoising step
def eemd_denoise(matrix):
    eemd = EEMD()
    cols = []
    for j in range(matrix.shape[1]):
        imfs = eemd.eemd(matrix[:, j].astype(float))
        cols.append(imfs[1:].sum(axis=0) if len(imfs) > 1 else imfs[0])
    return np.column_stack(cols)
X_train_eemd = eemd_denoise(X_train)
X_test_eemd = eemd_denoise(X_test)

# XGBoost model
xgboost = xgb.XGBRegressor()
xgboost.fit(X_train_eemd, y_train)
# LightGBM model
lightgbm = lgb.LGBMRegressor()
lightgbm.fit(X_train_eemd, y_train)

# ConvLSTM model: ConvLSTM2D expects 5-D input (samples, time, rows, cols, channels),
# so each feature is treated here as one time step of a 1x1 single-channel frame
X_train_seq = X_train_eemd.reshape(-1, X_train_eemd.shape[1], 1, 1, 1)
X_test_seq = X_test_eemd.reshape(-1, X_test_eemd.shape[1], 1, 1, 1)
seq_model = Sequential()
seq_model.add(ConvLSTM2D(filters=64, kernel_size=(1, 1),
                         input_shape=(X_train_eemd.shape[1], 1, 1, 1)))
seq_model.add(Flatten())
seq_model.add(Dense(1))
seq_model.compile(loss='mse', optimizer='adam')
seq_model.fit(X_train_seq, y_train, epochs=10, batch_size=32)

# Predictions on the test set
y_pred_xgboost = xgboost.predict(X_test_eemd)
y_pred_lightgbm = lightgbm.predict(X_test_eemd)
y_pred_convlstm = seq_model.predict(X_test_seq)
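To compare the three models on the held-out 20%, an error metric can be computed on the test predictions. A minimal sketch of such an evaluation, added here as a complement to the answer above and using scikit-learn's mean_squared_error:

import numpy as np
from sklearn.metrics import mean_squared_error

# Report the test RMSE of each of the three models
for name, pred in [('XGBoost', y_pred_xgboost),
                   ('LightGBM', y_pred_lightgbm),
                   ('ConvLSTM', y_pred_convlstm.ravel())]:
    rmse = np.sqrt(mean_squared_error(y_test, pred))
    print(f'{name} test RMSE: {rmse:.4f}')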
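The question also lists STL, which the code above does not cover. One way to add it is to decompose the target series into trend, seasonal, and residual components; a minimal sketch, assuming statsmodels is installed and a seasonal period of 24 (the period is an assumption and should match the data's sampling frequency). Note that decomposing the full series before splitting would leak test information into training, so the components should be used with care:

from statsmodels.tsa.seasonal import STL

# STL decomposition of the target series; period=24 is an assumed seasonal length
stl_result = STL(pd.Series(y), period=24).fit()
trend, seasonal, resid = stl_result.trend, stl_result.seasonal, stl_result.resid
# The components can then be modeled separately or appended as extra input features,
# e.g. np.column_stack([X, trend, seasonal, resid]), depending on the chosen setup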