```
[TRT] [W] onnx2trt_utils.cpp:375: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
[TRT] [W] onnx2trt_utils.cpp:403: One or more weights outside the range of INT32 was clamped
[TRT] [W] input "images" with shape: (1, 3, 640, 640) dtype: DataType.FLOAT
[TRT] [W] output "outputs" with shape: (1, 1) dtype: DataType.INT32
[TRT] [W] output "proto" with shape: (1, 100, 4) dtype: DataType.FLOAT
[TRT] [W] output "461" with shape: (1, 100) dtype: DataType.FLOAT
[TRT] [W] output "462" with shape: (1, 100) dtype: DataType.INT32
```
Posted: 2024-04-18 11:26:27 · Views: 316
These warnings are emitted by TensorRT while building the engine. The first one means your ONNX model contains INT64 weights, which TensorRT does not natively support, so TensorRT attempts to cast them down to INT32.
The second warning means some weight values fall outside the INT32 range, so TensorRT clamps them during the cast.
The remaining lines describe the names, shapes, and data types of the network's inputs and outputs. For example, the input "images" has shape (1, 3, 640, 640) and dtype FLOAT, while the output "outputs" has shape (1, 1) and dtype INT32; the other outputs are listed in the same way.
These messages flag things that could go wrong during the ONNX-to-TensorRT conversion, but they do not necessarily indicate an error. If the model runs and produces the expected results, you can safely ignore them. If you do run into problems, check the model's input and output definitions and the numeric ranges of its weights.
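If you want to see which weights triggered the clamping warning before TensorRT touches them, you can check the value ranges yourself. Below is a minimal sketch of the range check using only NumPy; scanning a real model would apply it to `onnx.numpy_helper.to_array(t)` for each `t` in `model.graph.initializer`. The function name is my own, not from any library:

```python
import numpy as np

INT32_MIN, INT32_MAX = -2**31, 2**31 - 1

def out_of_int32_range(values):
    """Return the values that TensorRT would clamp when casting INT64 -> INT32."""
    arr = np.asarray(values, dtype=np.int64)
    return arr[(arr < INT32_MIN) | (arr > INT32_MAX)]

# example: 640 fits in INT32, 2**40 does not and would be clamped
print(out_of_int32_range([640, 2**40]))
```

If this returns an empty array for every initializer, the clamping warning is harmless for your model.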
Related question
An ONNX model exported from YOLOv8 with Opset 12 fails when converted with TensorRT-8.2.1.8, reporting the error below. How can I fix it?
```
[06/01/2023-17:17:23] [I] TensorRT version: 8.2.1
[06/01/2023-17:17:23] [I] [TRT] [MemUsageChange] Init CUDA: CPU +323, GPU +0, now: CPU 335, GPU 1027 (MiB)
[06/01/2023-17:17:24] [I] [TRT] [MemUsageSnapshot] Begin constructing builder kernel library: CPU 335 MiB, GPU 1027 MiB
[06/01/2023-17:17:24] [I] [TRT] [MemUsageSnapshot] End constructing builder kernel library: CPU 470 MiB, GPU 1058 MiB
[06/01/2023-17:17:24] [I] Start parsing network model
[06/01/2023-17:17:24] [I] [TRT] ----------------------------------------------------------------
[06/01/2023-17:17:24] [I] [TRT] Input filename: /opt/projects/ultralytics/runs/detect/train/weights/best.onnx
[06/01/2023-17:17:24] [I] [TRT] ONNX IR version: 0.0.8
[06/01/2023-17:17:24] [I] [TRT] Opset version: 17
[06/01/2023-17:17:24] [I] [TRT] Producer name: pytorch
[06/01/2023-17:17:24] [I] [TRT] Producer version: 2.0.0
[06/01/2023-17:17:24] [I] [TRT] Domain:
[06/01/2023-17:17:24] [I] [TRT] Model version: 0
[06/01/2023-17:17:24] [I] [TRT] Doc string:
[06/01/2023-17:17:24] [I] [TRT] ----------------------------------------------------------------
[06/01/2023-17:17:24] [W] [TRT] onnx2trt_utils.cpp:366: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
[06/01/2023-17:17:24] [E] [TRT] ModelImporter.cpp:773: While parsing node number 267 [Range -> "/model.28/Range_output_0"]:
[06/01/2023-17:17:24] [E] [TRT] ModelImporter.cpp:774: --- Begin node ---
[06/01/2023-17:17:24] [E] [TRT] ModelImporter.cpp:775: input: "/model.28/Constant_9_output_0" input: "/model.28/Cast_output_0" input: "/model.28/Constant_10_output_0" output: "/model.28/Range_output_0" name: "/model.28/Range" op_type: "Range"
[06/01/2023-17:17:24] [E] [TRT] ModelImporter.cpp:776: --- End node ---
[06/01/2023-17:17:24] [E] [TRT] ModelImporter.cpp:779: ERROR: builtin_op_importers.cpp:3352 In function importRange: [8] Assertion failed: inputs.at(0).isInt32() && "For range operator with dynamic inputs, this version of TensorRT only supports INT32!"
[06/01/2023-17:17:24] [E] Failed to parse onnx file
[06/01/2023-17:17:24] [I] Finish parsing network model
[06/01/2023-17:17:24] [E] Parsing model failed
[06/01/2023-17:17:24] [E] Failed to create engine from model.
[06/01/2023-17:17:24] [E] Engine set up failed
```
This error occurs because the TensorRT ONNX parser hit an INT64 tensor it could not handle: the assertion fires on the `Range` node (`/model.28/Range`), and in this version of TensorRT the `Range` operator with dynamic inputs only supports INT32. The INT64 values feeding that node must be cast down to INT32 before TensorRT can parse the model. One option is the `onnx-tensorrt` project, whose parser attempts this downcast while building the engine. The steps are roughly:
1. Install `onnx-tensorrt`. It typically has to be built from source against your TensorRT version; follow the build instructions in its README (a plain `pip install onnx-tensorrt` is often not sufficient).
2. Use `onnx-tensorrt` to convert the ONNX model into a TensorRT engine:
```
import onnx
import onnx_tensorrt.backend as backend

# largest batch size the engine should support
max_batch_size = 1

# load the exported ONNX model and build a TensorRT engine from it
model = onnx.load('/path/to/onnx/model')
engine = backend.prepare(model, device='CUDA:0', max_batch_size=max_batch_size)
```
Here `/path/to/onnx/model` is the path to your ONNX model file, and `max_batch_size` is the maximum batch size you want the engine to support. This call parses the ONNX model, builds a TensorRT engine, and returns an engine object you can run inference with.
Note that the parser can only downcast INT64 weights whose values fit in INT32. If the automatic cast does not resolve the error, you will need to convert the offending INT64 tensors to INT32 yourself, for example by editing the ONNX graph before handing the model to TensorRT.
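The manual conversion mentioned above can be sketched as follows. It clamps each INT64 array into the INT32 range before casting, mirroring what TensorRT's parser does for weights; applying it to a real model would mean rebuilding each `graph.initializer` tensor via `onnx.numpy_helper.from_array`. The function below is a hypothetical helper, not part of any library:

```python
import numpy as np

def downcast_int64(arr):
    """Clamp an INT64 array into the INT32 range, then cast.

    The cast is lossy for any value that had to be clamped, so check the
    output against the input if exact values matter for your model.
    """
    arr = np.asarray(arr, dtype=np.int64)
    clamped = np.clip(arr, np.iinfo(np.int32).min, np.iinfo(np.int32).max)
    return clamped.astype(np.int32)

# in-range values pass through; out-of-range values saturate at the INT32 bounds
print(downcast_int64([1, 2**40, -2**40]))
```

After rewriting the initializers this way, re-run the TensorRT conversion; the INT64 warnings should disappear for the tensors you converted.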
```
2023-05-24 12:26:21.353318: I tensorflow/core/platform/cpu_feature_guard.cc:182] This TensorFlow binary is optimized to use available CPU instructions in performance-critical operations. To enable the following instructions: AVX2 FMA, in other operations, rebuild TensorFlow with the appropriate compiler flags.
2023-05-24 12:26:23.180135: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Could not find TensorRT
Number of devices: 3
```
Thanks for the details. From your description, you are running deep learning inference with TensorFlow and TensorRT. TensorFlow is a popular deep learning framework, and TensorRT is NVIDIA's inference acceleration library for its GPUs. The warning you quoted comes from TF-TRT (TensorFlow-TensorRT), the TensorRT integration for TensorFlow that optimizes and executes models inside TensorFlow. It means your program found 3 devices but could not locate a TensorRT installation on the system. If you want to run inference through TensorRT from TensorFlow, install TensorRT and configure it so TensorFlow can find and use it as a backend.