Can you give a source-code example that takes an AI model from training to deployment?
Sure. Here is an example, based on TensorFlow, of taking a machine learning model from training to deployment.
First, the code to train the model:
```
import tensorflow as tf

# Define the model inputs and outputs
inputs = tf.keras.Input(shape=(28, 28, 1), name='mnist_input')
x = tf.keras.layers.Conv2D(32, 3, activation='relu')(inputs)
x = tf.keras.layers.Flatten()(x)
x = tf.keras.layers.Dense(64, activation='relu')(x)
outputs = tf.keras.layers.Dense(10, activation='softmax')(x)

# Wrap the inputs and outputs into a Model
model = tf.keras.Model(inputs=inputs, outputs=outputs)

# Compile the model, specifying the loss function and optimizer
model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])

# Load the MNIST dataset, add a channel axis, and scale pixels to [0, 1]
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
x_train = x_train[..., tf.newaxis] / 255.0
x_test = x_test[..., tf.newaxis] / 255.0

# Train the model
model.fit(x_train, y_train, epochs=5)

# Evaluate the model on the test set
model.evaluate(x_test, y_test)
```
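Before exporting, it can be worth a quick local sanity check that the trained model produces sensible predictions. This step is not part of the original answer; it is just a minimal sketch reusing the `x_test` and `y_test` arrays defined above:
```
import numpy as np

# Run the trained model on a few test images and compare the
# predicted digit against the ground-truth label
probs = model.predict(x_test[:5])      # shape (5, 10): softmax scores per image
predicted = np.argmax(probs, axis=1)   # most likely class per image
print('predicted:', predicted)
print('actual:   ', y_test[:5])
```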
Next, the code to export the model and query it through TensorFlow Serving. The client part is a minimal sketch over TensorFlow Serving's gRPC API: it assumes a serving instance is already running on localhost:8500 (the default gRPC port) and loading the exported directory under the model name `mnist`, for example via the official `tensorflow/serving` Docker image.
```
import grpc
import numpy as np
import tensorflow as tf
from tensorflow_serving.apis import predict_pb2, prediction_service_pb2_grpc

# Export the model as a SavedModel; TensorFlow Serving expects a
# numeric version subdirectory under the model base path
tf.saved_model.save(model, '/path/to/export/model/1')

# Build a Predict request for one test image
request = predict_pb2.PredictRequest()
request.model_spec.name = 'mnist'                       # model name configured on the server
request.model_spec.signature_name = 'serving_default'   # default signature exported for Keras models
request.inputs['mnist_input'].CopyFrom(
    tf.make_tensor_proto(x_test[:1].astype(np.float32), shape=[1, 28, 28, 1]))

# Send the request to TensorFlow Serving over gRPC
channel = grpc.insecure_channel('localhost:8500')
stub = prediction_service_pb2_grpc.PredictionServiceStub(channel)
result = stub.Predict(request, timeout=5.0)
print(result.outputs)
```
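If gRPC is inconvenient, TensorFlow Serving also exposes a REST endpoint, by default on port 8501. The snippet below is a sketch using the `requests` library against the standard `v1/models/<name>:predict` route; the model name `mnist` and the ports mirror the assumptions of the gRPC example above:
```
import json
import requests

# The REST API accepts a JSON body with an "instances" field
# holding a batch of inputs
payload = {"instances": x_test[:1].tolist()}
resp = requests.post('http://localhost:8501/v1/models/mnist:predict',
                     data=json.dumps(payload))
resp.raise_for_status()

# The response contains a "predictions" list with one softmax
# vector per input instance
print(resp.json()['predictions'])
```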