Write code for text classification based on word2vec, using a BP neural network
Sure, here is a Python code example for text classification based on Word2Vec and a BP neural network:
```python
import numpy as np
from gensim.models import Word2Vec
from sklearn.preprocessing import LabelEncoder
from keras.models import Sequential
from keras.layers import Dense, Dropout, Activation
# Text data and labels
texts = ['This is an example', 'Another example']
labels = ['positive', 'negative']

# Train the Word2Vec model (gensim 4.x uses vector_size instead of size)
sentences = [text.lower().split() for text in texts]
w2v_model = Word2Vec(sentences, vector_size=100, min_count=1)

# Represent each text as the average of its word vectors
X = []
for sentence in sentences:
    sentence_vec = []
    for word in sentence:
        if word in w2v_model.wv:
            sentence_vec.append(w2v_model.wv[word])
    X.append(np.mean(sentence_vec, axis=0))
X = np.array(X)

# Encode the string labels as integers
le = LabelEncoder()
y = le.fit_transform(labels)

# Build the BP (feed-forward) neural network
nn_model = Sequential()
nn_model.add(Dense(64, input_dim=100))
nn_model.add(Activation('relu'))
nn_model.add(Dropout(0.5))
nn_model.add(Dense(1))
nn_model.add(Activation('sigmoid'))
nn_model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])

# Train the model
nn_model.fit(X, y, epochs=100, batch_size=32)

# Predict on a new text
test_text = 'This is a test'
test_vec = []
for word in test_text.lower().split():
    if word in w2v_model.wv:
        test_vec.append(w2v_model.wv[word])
test_vec = np.mean(test_vec, axis=0)
# predict_classes was removed in recent Keras versions; threshold the sigmoid output instead
prediction = (nn_model.predict(test_vec.reshape(1, -1)) > 0.5).astype(int).ravel()
print(le.inverse_transform(prediction))
```
Note that this is only a simple example; for real use you will need to adjust and optimize it for your specific data and task.
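One typical adjustment is evaluating the classifier on data it was not trained on. Below is a minimal sketch of that step, assuming the averaged vectors `X`, encoded labels `y`, and `nn_model` from the code above; the 20% split and `random_state=42` are illustrative choices, not part of the original answer, and the split is only meaningful once the corpus is much larger than the two toy sentences used here.
```python
from sklearn.model_selection import train_test_split

# Hold out 20% of the data for evaluation (ratio chosen for illustration)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42)

# Train only on the training split
nn_model.fit(X_train, y_train, epochs=100, batch_size=32)

# Evaluate on the held-out split
loss, accuracy = nn_model.evaluate(X_test, y_test)
print(f'Test accuracy: {accuracy:.3f}')
```
Other common refinements include using more training texts, tuning the Word2Vec vector size, and switching the output layer to softmax with `categorical_crossentropy` when there are more than two classes.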