C++ code for calling TNN
Posted: 2023-08-24 20:17:51
Below is an example of running model inference with TNN from C++, using TNN's core `TNN`/`Instance` API:
```c++
#include <fstream>
#include <iostream>
#include <memory>
#include <string>
#include <vector>

#include "tnn/core/tnn.h"
#include "tnn/utils/dims_vector_utils.h"

// Read an entire file into a string (used for the proto and model files).
static std::string ReadFile(const std::string& path) {
    std::ifstream stream(path, std::ios::binary);
    return std::string(std::istreambuf_iterator<char>(stream),
                       std::istreambuf_iterator<char>());
}

int main(int argc, char* argv[]) {
    // Load the model: params[0] is the .tnnproto text, params[1] the weights.
    TNN_NS::ModelConfig model_config;
    model_config.model_type = TNN_NS::MODEL_TYPE_TNN;
    model_config.params.push_back(ReadFile("model.tnnproto"));
    model_config.params.push_back(ReadFile("model.tnnmodel"));

    TNN_NS::TNN tnn;
    TNN_NS::Status status = tnn.Init(model_config);
    if (status != TNN_NS::TNN_OK) {
        std::cerr << "TNN init failed: " << status.description() << std::endl;
        return -1;
    }

    // Create an inference instance on the target device
    // (DEVICE_ARM here; use e.g. DEVICE_X86 on desktop).
    TNN_NS::NetworkConfig network_config;
    network_config.device_type = TNN_NS::DEVICE_ARM;
    auto instance = tnn.CreateInst(network_config, status);
    if (status != TNN_NS::TNN_OK) {
        std::cerr << "CreateInst failed: " << status.description() << std::endl;
        return -1;
    }

    // Prepare input data: NCHW float, filled with 1.0f as a placeholder.
    TNN_NS::DimsVector input_dims = {1, 3, 224, 224};
    std::vector<float> input_data(1 * 3 * 224 * 224, 1.0f);
    auto input_mat = std::make_shared<TNN_NS::Mat>(
        TNN_NS::DEVICE_NAIVE, TNN_NS::NCHW_FLOAT, input_dims, input_data.data());

    // Run inference.
    TNN_NS::MatConvertParam convert_param;
    instance->SetInputMat(input_mat, convert_param);
    status = instance->Forward();
    if (status != TNN_NS::TNN_OK) {
        std::cerr << "Forward failed: " << status.description() << std::endl;
        return -1;
    }

    // Fetch and print the output.
    std::shared_ptr<TNN_NS::Mat> output_mat;
    instance->GetOutputMat(output_mat);
    auto* output_data = static_cast<float*>(output_mat->GetData());
    int count = TNN_NS::DimsVectorUtils::Count(output_mat->GetDims());
    for (int i = 0; i < count; i++) {
        std::cout << output_data[i] << " ";
    }
    std::cout << std::endl;
    return 0;
}
```
This code loads the model files, prepares the input data, runs inference, and prints the raw output values. Replace `model.tnnproto` and `model.tnnmodel` with your own TNN model files, and adjust the input dimensions and the output handling to match your model's actual inputs and outputs.