TensorFlow Code for the DeepFM Model
Posted: 2023-07-26 15:04:00
Below is a TensorFlow 2.0 implementation of the DeepFM model:
```python
import tensorflow as tf
from tensorflow.keras.layers import (Input, Dense, Dropout, Flatten, Embedding,
                                     Concatenate, GlobalAveragePooling1D)
from tensorflow.keras.regularizers import l2
from tensorflow.keras.models import Model
from tensorflow.keras.optimizers import Adam


class DeepFM:
    def __init__(self, feature_dim, embedding_dim=8, hidden_units=(32, 32),
                 dropout_rate=0.5, l2_reg=0.01, learning_rate=0.001):
        self.feature_dim = feature_dim
        self.embedding_dim = embedding_dim
        self.hidden_units = hidden_units
        self.dropout_rate = dropout_rate
        self.l2_reg = l2_reg
        self.learning_rate = learning_rate

    def build(self):
        input_layer = Input((self.feature_dim,))
        # Shared embedding table: (batch, feature_dim, embedding_dim)
        embeddings = Embedding(self.feature_dim, self.embedding_dim)(input_layer)
        # Linear (first-order) part, computed on the raw inputs
        linear_part = Dense(units=1, activation=None)(input_layer)
        # FM part: pooled over the field axis of the 3-D embedding tensor
        # (pooling must happen before flattening, or the shapes do not match)
        fm_part = GlobalAveragePooling1D()(embeddings)
        # Deep part: an MLP over the flattened embeddings
        deep_part = Flatten()(embeddings)
        for units in self.hidden_units:
            deep_part = Dense(units=units, activation='relu')(deep_part)
            deep_part = Dropout(self.dropout_rate)(deep_part)
        # Combine the linear, FM, and deep parts
        combined = Concatenate()([linear_part, fm_part, deep_part])
        output_layer = Dense(units=1, activation='sigmoid',
                             kernel_regularizer=l2(self.l2_reg))(combined)
        model = Model(inputs=input_layer, outputs=output_layer)
        optimizer = Adam(learning_rate=self.learning_rate)
        model.compile(optimizer=optimizer, loss='binary_crossentropy',
                      metrics=['accuracy'])
        return model
```
Here, `feature_dim` is the number of input features (also used as the embedding vocabulary size in this sketch), `embedding_dim` is the embedding size, `hidden_units` gives the widths of the deep part's hidden layers, `dropout_rate` is the dropout rate, `l2_reg` is the L2 regularization coefficient, and `learning_rate` is the learning rate. In `build()`, we first define the input and embedding layers, then compute the linear, FM, and deep parts, concatenate them, and pass the result through a final dense layer to produce the output. The model is compiled with the Adam optimizer and binary cross-entropy loss.
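Note that `GlobalAveragePooling1D` only gives a pooled summary of the embeddings, not the pairwise feature interactions of a true FM. The classic second-order FM term can be computed with the sum-square trick; the helper below is a sketch (the function name `fm_second_order` is mine, not part of the snippet above):

```python
import tensorflow as tf

def fm_second_order(embeddings):
    """Second-order FM interaction: 0.5 * ((sum_i v_i)^2 - sum_i v_i^2).

    embeddings: tensor of shape (batch, num_fields, embedding_dim).
    Returns a (batch, 1) tensor of summed pairwise interactions.
    """
    summed = tf.reduce_sum(embeddings, axis=1)                    # sum_i v_i
    square_of_sum = tf.square(summed)                             # (sum_i v_i)^2
    sum_of_square = tf.reduce_sum(tf.square(embeddings), axis=1)  # sum_i v_i^2
    return 0.5 * tf.reduce_sum(square_of_sum - sum_of_square,
                               axis=1, keepdims=True)
```

With two field embeddings v1 = (1, 2) and v2 = (3, 4), this returns their dot product, 1*3 + 2*4 = 11, which is exactly the single pairwise interaction in that case.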