Code for a textCNN binary text-classification model in the Keras framework, using pre-trained word2vec word vectors
Posted: 2024-04-08 18:35:33
Sure! Here's example code for building a textCNN model for binary text classification with pre-trained word2vec embeddings in the Keras framework. Because textCNN uses parallel convolutional branches with different filter sizes, the model is built with the Keras Functional API rather than `Sequential`:
```python
from keras.layers import Input, Conv1D, MaxPooling1D, Embedding, Flatten, Dense, concatenate
from keras.models import Model
from gensim.models import KeyedVectors

# Load pre-trained word2vec embeddings
word2vec_path = 'path/to/word2vec.bin'
word_vectors = KeyedVectors.load_word2vec_format(word2vec_path, binary=True)

# Set the embedding dimension from the pre-trained word2vec embeddings
embedding_dim = word_vectors.vector_size

# Input: sequences of word indices, padded/truncated to a fixed length
inputs = Input(shape=(max_sequence_length,))

# Embedding layer initialized with the pre-trained word2vec weights (frozen)
embedding = Embedding(input_dim=vocab_size,
                      output_dim=embedding_dim,
                      weights=[word_vectors.vectors],
                      trainable=False)(inputs)

# Parallel convolutional branches with different filter sizes
filter_sizes = [3, 4, 5]
conv_blocks = []
for filter_size in filter_sizes:
    conv = Conv1D(filters=128, kernel_size=filter_size, activation='relu')(embedding)
    pool = MaxPooling1D(pool_size=2)(conv)
    flatten = Flatten()(pool)
    conv_blocks.append(flatten)

# Concatenate the outputs of the conv branches
concat = concatenate(conv_blocks, axis=-1)

# Fully connected layers
dense = Dense(128, activation='relu')(concat)
outputs = Dense(1, activation='sigmoid')(dense)

# Build and compile the model
model = Model(inputs=inputs, outputs=outputs)
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])

# Print the model summary
model.summary()
```
Make sure to replace `vocab_size` and `max_sequence_length` with values appropriate for your dataset, and point `word2vec_path` at your pre-trained word2vec embeddings file. Note that passing `weights=[word_vectors.vectors]` only works if your token indices match the word order in the word2vec vocabulary, so `vocab_size` must equal the number of words in that vocabulary.
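If your tokenizer's word indices do not line up with the word2vec vocabulary order (the usual case with a Keras `Tokenizer`), you instead build an embedding matrix aligned to your own `word_index` and pass that as `weights`. Here is a minimal sketch of that construction; the `fake_word_vectors` dict is a hypothetical stand-in for the lookup the gensim `KeyedVectors` object provides, and the words and vectors are invented for illustration:

```python
import numpy as np

# Hypothetical stand-in for the loaded word2vec model: a word -> vector lookup.
# In the real pipeline you would query the gensim KeyedVectors object instead.
fake_word_vectors = {
    'good': np.array([0.1, 0.2, 0.3]),
    'bad':  np.array([0.4, 0.5, 0.6]),
}
embedding_dim = 3

# Tokenizer-style word index; index 0 is reserved for padding in Keras
word_index = {'good': 1, 'bad': 2, 'movie': 3}
vocab_size = len(word_index) + 1

# Build the embedding matrix row by row; out-of-vocabulary words keep zero vectors
embedding_matrix = np.zeros((vocab_size, embedding_dim))
for word, idx in word_index.items():
    if word in fake_word_vectors:
        embedding_matrix[idx] = fake_word_vectors[word]
```

You would then pass `weights=[embedding_matrix]` and `input_dim=vocab_size` to the `Embedding` layer, so each row of the matrix corresponds to one of your tokenizer's indices.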