Python code for transfer learning with an LSTM
Below is an example of Python code that uses an LSTM for transfer learning:
```python
import numpy as np
import tensorflow as tf
from tensorflow.keras.preprocessing.text import Tokenizer
from tensorflow.keras.preprocessing.sequence import pad_sequences
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Embedding, LSTM, Dropout
from tensorflow.keras.utils import to_categorical
# Load the pretrained model (architecture and weights)
pretrained_model = tf.keras.models.load_model('pretrained_model.h5')
# Load the training and test data
train_data = np.load('train_data.npy')
test_data = np.load('test_data.npy')
train_labels = np.load('train_labels.npy')
test_labels = np.load('test_labels.npy')
# Tokenize the training and test texts into integer sequences
tokenizer = Tokenizer(num_words=5000)
tokenizer.fit_on_texts(train_data)
train_sequences = tokenizer.texts_to_sequences(train_data)
test_sequences = tokenizer.texts_to_sequences(test_data)
# Pad the sequences so they all have the same length
max_len = 100
train_data = pad_sequences(train_sequences, maxlen=max_len)
test_data = pad_sequences(test_sequences, maxlen=max_len)
# Convert the labels to one-hot encoding
train_labels = to_categorical(train_labels)
test_labels = to_categorical(test_labels)
# Reuse the first five layers of the pretrained model in a new model.
# Note: this assumes the fifth layer outputs a sequence (e.g. an LSTM with
# return_sequences=True) so that another LSTM can be stacked on top, and that
# the pretrained Embedding layer matches the tokenizer/vocabulary used here.
model = Sequential()
for layer in pretrained_model.layers[:5]:
    model.add(layer)
# Add a new LSTM layer and an output layer
model.add(LSTM(128))
model.add(Dropout(0.5))
model.add(Dense(2, activation='softmax'))
# Freeze the reused pretrained layers so only the new layers are trained
for layer in model.layers[:5]:
    layer.trainable = False
# Compile the model
model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
# Train the model
model.fit(train_data, train_labels, epochs=10, batch_size=128, validation_data=(test_data, test_labels))
# Evaluate the model on the test set
loss, accuracy = model.evaluate(test_data, test_labels)
print('Test accuracy:', accuracy)
```
In this example, we first load a pretrained model and reuse its layers in a new model. We then add a new LSTM layer and an output layer, and freeze the weights of the pretrained layers so that only the new layers are updated during training. Finally, we compile and train the new model and evaluate its performance on the test set.
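Once the new layers have converged with the pretrained layers frozen, a common follow-up (not shown in the original example) is a short fine-tuning pass: unfreeze the pretrained layers and continue training at a much lower learning rate so their weights shift only slightly. A minimal sketch, reusing `model`, `train_data`, and `train_labels` from above; the learning rate and epoch count are illustrative values:
```python
# Fine-tuning sketch: unfreeze all layers and recompile with a small learning
# rate (recompiling is required for changes to `trainable` to take effect).
for layer in model.layers:
    layer.trainable = True

model.compile(loss='categorical_crossentropy',
              optimizer=tf.keras.optimizers.Adam(learning_rate=1e-5),  # illustrative value
              metrics=['accuracy'])

model.fit(train_data, train_labels, epochs=3, batch_size=128,
          validation_data=(test_data, test_labels))
```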
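An alternative way to reuse the pretrained layers is to slice the pretrained model with the Keras functional API and treat the slice as a single frozen feature extractor, rather than copying layers one by one. A minimal sketch, assuming the fifth pretrained layer (`layers[4]`) returns a full sequence so the new LSTM can consume its output, and that the pretrained model accepts inputs of length `max_len`; the names `feature_extractor` and `transfer_model` are illustrative:
```python
# Wrap the first five pretrained layers as a frozen sub-model
feature_extractor = tf.keras.Model(
    inputs=pretrained_model.input,
    outputs=pretrained_model.layers[4].output)
feature_extractor.trainable = False  # freezes every layer in the sub-model

# Stack the new head on top of the frozen feature extractor
inputs = tf.keras.Input(shape=(max_len,))
x = feature_extractor(inputs)
x = LSTM(128)(x)
x = Dropout(0.5)(x)
outputs = Dense(2, activation='softmax')(x)

transfer_model = tf.keras.Model(inputs, outputs)
transfer_model.compile(loss='categorical_crossentropy', optimizer='adam',
                       metrics=['accuracy'])
```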