```
Building engine, please wait for a while...
[06/02/2023-21:46:54] [E] [TRT] 3: (Unnamed Layer* 0) [Convolution]:kernel weights has count 0 but 3456 was expected
[06/02/2023-21:46:54] [E] [TRT] 4: (Unnamed Layer* 0) [Convolution]: count of 0 weights in kernel, but kernel dimensions (6,6) with 3 input channels, 32 output channels and 1 groups were specified. Expected Weights count is 3 * 6*6 * 32 / 1 = 3456
[06/02/2023-21:46:54] [E] [TRT] 4: [convolutionNode.cpp::computeOutputExtents::58] Error Code 4: Internal Error ((Unnamed Layer* 0) [Convolution]: number of kernel weights does not match tensor dimensions)
[06/02/2023-21:46:54] [E] [TRT] 4: [network.cpp::validate::2956] Error Code 4: Internal Error (Could not compute dimensions for (Unnamed Layer* 0) [Convolution]_output, because the network is not valid.)
Build engine successfully!
yolov5-cls: /home/jm/桌面/tensorrtx-yolov5-v6.2/yolov5/yolov5_cls.cpp:151: void APIToModel(unsigned int, nvinfer1::IHostMemory**, float&, float&, std::__cxx11::string&): Assertion `engine != nullptr' failed.
Aborted (core dumped)
```
Posted: 2024-02-06 19:02:32 · Views: 48
This error is caused by a weight-count mismatch while building the TensorRT engine. According to the error message, the first convolution layer ((Unnamed Layer* 0)) is defined with kernel size (6,6), 3 input channels, 32 output channels and 1 group, so it expects 3 * 6*6 * 32 / 1 = 3456 kernel weights, but 0 were supplied.
Check the following:
1. Confirm that the model loads correctly and that the weight file exists and is complete.
2. Verify that the network definition matches the weights, especially in the convolution layers: the weight count must agree with the kernel size, input channels, output channels, group count, and so on.
3. If you are using TensorRT for acceleration, check that the TensorRT version is compatible with the code.
If all of the above look fine, try recompiling the code and make sure the build options are set correctly. If the problem persists, inspect the full logs for more detail.
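A convolution with 0 weights usually means the corresponding entry is missing or empty in the `.wts` file that `gen_wts.py` produced. As a quick sanity check, a small parser can flag suspect entries before you build the engine. This is a sketch that assumes the usual tensorrtx `.wts` text layout (first line: number of entries; each following line: `<name> <count> <hex values...>`); the sample content and entry names below are hypothetical.

```python
# Sanity-check weight counts in a tensorrtx-style .wts file before building
# the TensorRT engine. Flags entries whose declared count is zero or does
# not match the number of hex values actually present on the line.

def check_wts(lines):
    """Return {name: (declared_count, actual_count)} for suspect entries."""
    it = iter(lines)
    num_entries = int(next(it))
    bad = {}
    for _ in range(num_entries):
        parts = next(it).split()
        name, declared = parts[0], int(parts[1])
        actual = len(parts) - 2  # hex values actually present on the line
        if declared != actual or declared == 0:
            bad[name] = (declared, actual)
    return bad

# Hypothetical sample content; in practice read the lines from your .wts file.
sample = [
    "2",
    "model.0.conv.weight 4 3f800000 3f800000 3f800000 3f800000",
    "model.0.conv.bias 0",  # zero weights -> triggers the TensorRT error above
]
print(check_wts(sample))  # -> {'model.0.conv.bias': (0, 0)}
```

Any entry this reports with a declared count of 0 would produce exactly the "kernel weights has count 0" error when the corresponding layer is built.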
Related questions
```
[06/01/2023-16:46:00] [I] TensorRT version: 8.2.1
[06/01/2023-16:46:01] [I] [TRT] [MemUsageChange] Init CUDA: CPU +323, GPU +0, now: CPU 335, GPU 1071 (MiB)
[06/01/2023-16:46:01] [I] [TRT] [MemUsageSnapshot] Begin constructing builder kernel library: CPU 335 MiB, GPU 1067 MiB
[06/01/2023-16:46:01] [I] [TRT] [MemUsageSnapshot] End constructing builder kernel library: CPU 470 MiB, GPU 1101 MiB
[06/01/2023-16:46:01] [I] Start parsing network model
[06/01/2023-16:46:01] [I] [TRT] ----------------------------------------------------------------
[06/01/2023-16:46:01] [I] [TRT] Input filename: /opt/projects/ultralytics/runs/detect/train/weights/best.onnx
[06/01/2023-16:46:01] [I] [TRT] ONNX IR version: 0.0.8
[06/01/2023-16:46:01] [I] [TRT] Opset version: 17
[06/01/2023-16:46:01] [I] [TRT] Producer name: pytorch
[06/01/2023-16:46:01] [I] [TRT] Producer version: 2.0.0
[06/01/2023-16:46:01] [I] [TRT] Domain:
[06/01/2023-16:46:01] [I] [TRT] Model version: 0
[06/01/2023-16:46:01] [I] [TRT] Doc string:
[06/01/2023-16:46:01] [I] [TRT] ----------------------------------------------------------------
[06/01/2023-16:46:01] [W] [TRT] onnx2trt_utils.cpp:366: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
[06/01/2023-16:46:01] [E] [TRT] ModelImporter.cpp:773: While parsing node number 263 [Conv -> "/model.28/cv2.3/cv2.3.2/Conv_output_0"]:
[06/01/2023-16:46:01] [E] [TRT] ModelImporter.cpp:774: --- Begin node ---
[06/01/2023-16:46:01] [E] [TRT] ModelImporter.cpp:775: input: "/model.28/cv2.3/cv2.3.1/act/Mul_output_0" input: "model.28.cv2.3.2.weight" input: "model.28.cv2.3.2.bias" output: "/model.28/cv2.3/cv2.3.2/Conv_output_0" name: "/model.28/cv2.3/cv2.3.2/Conv" op_type: "Conv" attribute { name: "dilations" ints: 1 ints: 1 type: INTS } attribute { name: "group" i: 1 type: INT } attribute { name: "kernel_shape" ints: 1 ints: 1 type: INTS } attribute { name: "pads" ints: 0 ints: 0 ints: 0 ints: 0 type: INTS } attribute { name: "strides" ints: 1 ints: 1 type: INTS }
[06/01/2023-16:46:01] [E] [TRT] ModelImporter.cpp:776: --- End node ---
[06/01/2023-16:46:01] [E] [TRT] ModelImporter.cpp:779: ERROR: builtin_op_importers.cpp:647 In function importConv: [8] Assertion failed: inputs.at(2).is_weights() && "The bias tensor is required to be an initializer for the Conv operator."
[06/01/2023-16:46:01] [E] Failed to parse onnx file
```
Based on your log, the error occurred while converting an ONNX model with TensorRT. Specifically, the error is:
```
ERROR: builtin_op_importers.cpp:647 In function importConv: [8] Assertion failed: inputs.at(2).is_weights() && "The bias tensor is required to be an initializer for the Conv operator."
```
The parser failed on the Conv node /model.28/cv2.3/cv2.3.2/Conv: TensorRT's ONNX importer requires the Conv bias input to be an initializer (a compile-time constant), but in this model the bias tensor is produced by another node at runtime. Inspect the inputs of the Conv operators in your ONNX model; constant-folding the graph (for example with onnx-simplifier) is a common way to turn such inputs into initializers.
When converting a YOLOv8 ONNX model exported with opset 12 using TensorRT-8.2.1.8, the following error appears. How can it be fixed?
```
[06/01/2023-17:17:23] [I] TensorRT version: 8.2.1
[06/01/2023-17:17:23] [I] [TRT] [MemUsageChange] Init CUDA: CPU +323, GPU +0, now: CPU 335, GPU 1027 (MiB)
[06/01/2023-17:17:24] [I] [TRT] [MemUsageSnapshot] Begin constructing builder kernel library: CPU 335 MiB, GPU 1027 MiB
[06/01/2023-17:17:24] [I] [TRT] [MemUsageSnapshot] End constructing builder kernel library: CPU 470 MiB, GPU 1058 MiB
[06/01/2023-17:17:24] [I] Start parsing network model
[06/01/2023-17:17:24] [I] [TRT] ----------------------------------------------------------------
[06/01/2023-17:17:24] [I] [TRT] Input filename: /opt/projects/ultralytics/runs/detect/train/weights/best.onnx
[06/01/2023-17:17:24] [I] [TRT] ONNX IR version: 0.0.8
[06/01/2023-17:17:24] [I] [TRT] Opset version: 17
[06/01/2023-17:17:24] [I] [TRT] Producer name: pytorch
[06/01/2023-17:17:24] [I] [TRT] Producer version: 2.0.0
[06/01/2023-17:17:24] [I] [TRT] Domain:
[06/01/2023-17:17:24] [I] [TRT] Model version: 0
[06/01/2023-17:17:24] [I] [TRT] Doc string:
[06/01/2023-17:17:24] [I] [TRT] ----------------------------------------------------------------
[06/01/2023-17:17:24] [W] [TRT] onnx2trt_utils.cpp:366: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
[06/01/2023-17:17:24] [E] [TRT] ModelImporter.cpp:773: While parsing node number 267 [Range -> "/model.28/Range_output_0"]:
[06/01/2023-17:17:24] [E] [TRT] ModelImporter.cpp:774: --- Begin node ---
[06/01/2023-17:17:24] [E] [TRT] ModelImporter.cpp:775: input: "/model.28/Constant_9_output_0" input: "/model.28/Cast_output_0" input: "/model.28/Constant_10_output_0" output: "/model.28/Range_output_0" name: "/model.28/Range" op_type: "Range"
[06/01/2023-17:17:24] [E] [TRT] ModelImporter.cpp:776: --- End node ---
[06/01/2023-17:17:24] [E] [TRT] ModelImporter.cpp:779: ERROR: builtin_op_importers.cpp:3352 In function importRange: [8] Assertion failed: inputs.at(0).isInt32() && "For range operator with dynamic inputs, this version of TensorRT only supports INT32!"
[06/01/2023-17:17:24] [E] Failed to parse onnx file
[06/01/2023-17:17:24] [I] Finish parsing network model
[06/01/2023-17:17:24] [E] Parsing model failed
[06/01/2023-17:17:24] [E] Failed to create engine from model.
[06/01/2023-17:17:24] [E] Engine set up failed
```
This error occurs because TensorRT 8.2 only supports INT32 for the Range operator when its inputs are dynamic, while the model's Range node receives INT64 values (PyTorch exports these as INT64 by default, hence the earlier warning about casting INT64 down to INT32). You need to make the tensors feeding the Range node INT32 so TensorRT can parse the model. One option is the `onnx-tensorrt` backend, whose parser attempts the INT64-to-INT32 cast for you:
1. Install `onnx-tensorrt`, for example with `pip install onnx-tensorrt`.
2. Use it to build a TensorRT engine from the ONNX model:
```python
import onnx
import onnx_tensorrt.backend as backend

max_batch_size = 1  # largest batch size the engine should support

# Parse the ONNX model and build a TensorRT engine from it.
model = onnx.load('/path/to/onnx/model')
engine = backend.prepare(model, device='CUDA:0', max_batch_size=max_batch_size)
```
Here, `/path/to/onnx/model` is the path to your ONNX model file and `max_batch_size` is the maximum batch size you want to support. This call parses the ONNX model into a TensorRT engine and returns an engine object you can run inference with.
Note that when using `onnx-tensorrt`, your ONNX model must not contain INT64 weights whose values cannot be represented in INT32. If it does, convert them to INT32 manually or with another tool first.