```
[06/01/2023-16:46:00] [I] TensorRT version: 8.2.1
[06/01/2023-16:46:01] [I] [TRT] [MemUsageChange] Init CUDA: CPU +323, GPU +0, now: CPU 335, GPU 1071 (MiB)
[06/01/2023-16:46:01] [I] [TRT] [MemUsageSnapshot] Begin constructing builder kernel library: CPU 335 MiB, GPU 1067 MiB
[06/01/2023-16:46:01] [I] [TRT] [MemUsageSnapshot] End constructing builder kernel library: CPU 470 MiB, GPU 1101 MiB
[06/01/2023-16:46:01] [I] Start parsing network model
[06/01/2023-16:46:01] [I] [TRT] ----------------------------------------------------------------
[06/01/2023-16:46:01] [I] [TRT] Input filename: /opt/projects/ultralytics/runs/detect/train/weights/best.onnx
[06/01/2023-16:46:01] [I] [TRT] ONNX IR version: 0.0.8
[06/01/2023-16:46:01] [I] [TRT] Opset version: 17
[06/01/2023-16:46:01] [I] [TRT] Producer name: pytorch
[06/01/2023-16:46:01] [I] [TRT] Producer version: 2.0.0
[06/01/2023-16:46:01] [I] [TRT] Domain:
[06/01/2023-16:46:01] [I] [TRT] Model version: 0
[06/01/2023-16:46:01] [I] [TRT] Doc string:
[06/01/2023-16:46:01] [I] [TRT] ----------------------------------------------------------------
[06/01/2023-16:46:01] [W] [TRT] onnx2trt_utils.cpp:366: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
[06/01/2023-16:46:01] [E] [TRT] ModelImporter.cpp:773: While parsing node number 263 [Conv -> "/model.28/cv2.3/cv2.3.2/Conv_output_0"]:
[06/01/2023-16:46:01] [E] [TRT] ModelImporter.cpp:774: --- Begin node ---
[06/01/2023-16:46:01] [E] [TRT] ModelImporter.cpp:775: input: "/model.28/cv2.3/cv2.3.1/act/Mul_output_0" input: "model.28.cv2.3.2.weight" input: "model.28.cv2.3.2.bias" output: "/model.28/cv2.3/cv2.3.2/Conv_output_0" name: "/model.28/cv2.3/cv2.3.2/Conv" op_type: "Conv" attribute { name: "dilations" ints: 1 ints: 1 type: INTS } attribute { name: "group" i: 1 type: INT } attribute { name: "kernel_shape" ints: 1 ints: 1 type: INTS } attribute { name: "pads" ints: 0 ints: 0 ints: 0 ints: 0 type: INTS } attribute { name: "strides" ints: 1 ints: 1 type: INTS }
[06/01/2023-16:46:01] [E] [TRT] ModelImporter.cpp:776: --- End node ---
[06/01/2023-16:46:01] [E] [TRT] ModelImporter.cpp:779: ERROR: builtin_op_importers.cpp:647 In function importConv: [8] Assertion failed: inputs.at(2).is_weights() && "The bias tensor is required to be an initializer for the Conv operator."
[06/01/2023-16:46:01] [E] Failed to parse onnx file
```
Posted: 2024-01-31 08:04:55
Judging from your log, the error occurred while converting an ONNX model with TensorRT. Specifically, the error is:
```
ERROR: builtin_op_importers.cpp:647 In function importConv: [8] Assertion failed: inputs.at(2).is_weights() && "The bias tensor is required to be an initializer for the Conv operator."
```
The failure happens while parsing the ONNX model, on a `Conv` node. The assertion says the Conv operator's third input (the bias) must be an initializer, i.e. stored as constant weights in the graph; in your exported model it is instead produced dynamically by another node. Check how the Conv bias ends up in the exported graph; running constant folding over the model (for example with `onnx-simplifier`) usually turns such a bias back into an initializer.
Related questions
A YOLOV8 ONNX model exported against Opset-12 fails with the following error when converted with TensorRT-8.2.1.8. How can this be fixed?

```
[06/01/2023-17:17:23] [I] TensorRT version: 8.2.1
[06/01/2023-17:17:23] [I] [TRT] [MemUsageChange] Init CUDA: CPU +323, GPU +0, now: CPU 335, GPU 1027 (MiB)
[06/01/2023-17:17:24] [I] [TRT] [MemUsageSnapshot] Begin constructing builder kernel library: CPU 335 MiB, GPU 1027 MiB
[06/01/2023-17:17:24] [I] [TRT] [MemUsageSnapshot] End constructing builder kernel library: CPU 470 MiB, GPU 1058 MiB
[06/01/2023-17:17:24] [I] Start parsing network model
[06/01/2023-17:17:24] [I] [TRT] ----------------------------------------------------------------
[06/01/2023-17:17:24] [I] [TRT] Input filename: /opt/projects/ultralytics/runs/detect/train/weights/best.onnx
[06/01/2023-17:17:24] [I] [TRT] ONNX IR version: 0.0.8
[06/01/2023-17:17:24] [I] [TRT] Opset version: 17
[06/01/2023-17:17:24] [I] [TRT] Producer name: pytorch
[06/01/2023-17:17:24] [I] [TRT] Producer version: 2.0.0
[06/01/2023-17:17:24] [I] [TRT] Domain:
[06/01/2023-17:17:24] [I] [TRT] Model version: 0
[06/01/2023-17:17:24] [I] [TRT] Doc string:
[06/01/2023-17:17:24] [I] [TRT] ----------------------------------------------------------------
[06/01/2023-17:17:24] [W] [TRT] onnx2trt_utils.cpp:366: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
[06/01/2023-17:17:24] [E] [TRT] ModelImporter.cpp:773: While parsing node number 267 [Range -> "/model.28/Range_output_0"]:
[06/01/2023-17:17:24] [E] [TRT] ModelImporter.cpp:774: --- Begin node ---
[06/01/2023-17:17:24] [E] [TRT] ModelImporter.cpp:775: input: "/model.28/Constant_9_output_0" input: "/model.28/Cast_output_0" input: "/model.28/Constant_10_output_0" output: "/model.28/Range_output_0" name: "/model.28/Range" op_type: "Range"
[06/01/2023-17:17:24] [E] [TRT] ModelImporter.cpp:776: --- End node ---
[06/01/2023-17:17:24] [E] [TRT] ModelImporter.cpp:779: ERROR: builtin_op_importers.cpp:3352 In function importRange: [8] Assertion failed: inputs.at(0).isInt32() && "For range operator with dynamic inputs, this version of TensorRT only supports INT32!"
[06/01/2023-17:17:24] [E] Failed to parse onnx file
[06/01/2023-17:17:24] [I] Finish parsing network model
[06/01/2023-17:17:24] [E] Parsing model failed
[06/01/2023-17:17:24] [E] Failed to create engine from model.
[06/01/2023-17:17:24] [E] Engine set up failed
```
The INT64 message near the top is only a warning: TensorRT casts such weights down to INT32 on its own. The conversion actually fails on the `Range` node: `importRange` asserts that a Range with dynamic inputs must take INT32 tensors in this TensorRT version, and the exported YOLOv8 head feeds it a different type. Common fixes:
1. Re-export the model with static shapes (for example `dynamic=False` in the Ultralytics exporter), which often lets the exporter constant-fold the Range node away; simplifying the graph with `onnxsim` before conversion can have the same effect.
2. Patch the ONNX graph to insert `Cast(to=INT32)` nodes in front of the Range inputs, using the `onnx` Python API or `onnx-graphsurgeon`.
Alternatively, you can try building the engine through the `onnx-tensorrt` Python backend (build it from the NVIDIA `onnx-tensorrt` repository against your TensorRT version). For example:
```
import onnx
import onnx_tensorrt.backend as backend

# Replace the path and batch size with your own values.
model = onnx.load('/path/to/onnx/model')
max_batch_size = 1
engine = backend.prepare(model, device='CUDA:0', max_batch_size=max_batch_size)
```
Here `/path/to/onnx/model` is the path to your ONNX model file, and `max_batch_size` is the maximum batch size you want. `backend.prepare` parses the ONNX model into a TensorRT engine and returns an engine-backed object you can run inference with.
Note that whichever path you take, the underlying graph issue must still be fixed: INT64 weights are merely down-cast with a warning, but a dynamic Range with non-INT32 inputs has to be cast to INT32 (or folded away) before TensorRT can parse the model.
jdk1.8 -- what problems are there in the following configuration?

```
VM Flags: Non-default VM flags: -XX:CICompilerCount=12 -XX:CMSInitiatingOccupancyFraction=80 -XX:InitialHeapSize=17179869184 -XX:MaxHeapSize=68719476736 -XX:MaxNewSize=22906142720 -XX:MinHeapDeltaBytes=524288 -XX:NewSize=5726273536 -XX:OldSize=11453595648 -XX:+PrintGC -XX:+PrintGCDateStamps -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+PrintHeapAtGC -XX:+UseCMSInitiatingOccupancyOnly -XX:-UseCompressedOops -XX:+UseFastUnorderedTimeStamps -XX:+UseParallelGC
Command line: -D[Standalone] -XX:+UseCompressedOops -XX:+UseCompressedOops -Xms16384m -Xmx65536m -XX:MaxPermSize=8192m -XX:CMSInitiatingOccupancyFraction=80 -XX:+UseCMSInitiatingOccupancyOnly -XX:+PrintGCDetails -XX:+PrintGCDateStamps -XX:+PrintHeapAtGC -Xloggc:gc.log -Djava.net.preferIPv4Stack=true -Djboss.modules.system.pkgs=org.jboss.byteman -Djava.awt.headless=true -Dorg.jboss.boot.log.file=/usr/Wildfly/wildfly-8.2.1.Final/standalone/log/server.log -Dlogging.configuration=file:/usr/Wildfly/wildfly-8.2.1.Final/standalone/configuration/logging.properties
```
Looking at the configuration you posted, a few points stand out:
1. `-XX:+UseCompressedOops` appears twice on the command line; remove one copy. Note also that the effective VM flags show `-XX:-UseCompressedOops`: with a 64 GB maximum heap the JVM cannot use compressed oops (they only work for heaps up to roughly 32 GB), so the flag is ignored either way.
2. `-Xms16384m -Xmx65536m` sets the initial heap to 16 GB and the maximum heap to 64 GB. Depending on your workload this may be reasonable, but make sure the server has enough physical memory to back a 64 GB heap.
3. `-XX:MaxPermSize=8192m` caps the permanent generation at 8 GB. JDK 1.8 replaced the permanent generation with Metaspace, so this flag is ignored (the JVM prints a warning at startup) and can be removed; use `-XX:MaxMetaspaceSize` if you need a cap.
4. `-Xloggc:gc.log` writes the GC log to a path relative to the working directory. Prefer an absolute path, make sure it is writable, and leave enough disk space for the log.
5. The CMS flags (`-XX:CMSInitiatingOccupancyFraction=80`, `-XX:+UseCMSInitiatingOccupancyOnly`) currently have no effect: the effective VM flags show `-XX:+UseParallelGC`, so the JVM is running the Parallel collector, not CMS. Either enable CMS explicitly with `-XX:+UseConcMarkSweepGC` or drop the CMS-specific flags.
Keep in mind this is only a quick review of the flags themselves; whether the settings are appropriate ultimately depends on your application and the server it runs on.
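Putting these recommendations together, a cleaned-up option set for JDK 1.8 might look like the following. This is only a sketch under two assumptions: that CMS is actually intended (otherwise drop the three CMS lines and keep the default Parallel collector), and that the WildFly log directory from your `-Dorg.jboss.boot.log.file` setting is a writable place for the GC log.

```
-Xms16384m -Xmx65536m
-XX:MaxMetaspaceSize=8192m
-XX:+UseConcMarkSweepGC
-XX:CMSInitiatingOccupancyFraction=80
-XX:+UseCMSInitiatingOccupancyOnly
-XX:+PrintGCDetails -XX:+PrintGCDateStamps -XX:+PrintGCTimeStamps
-XX:+PrintHeapAtGC
-Xloggc:/usr/Wildfly/wildfly-8.2.1.Final/standalone/log/gc.log
```

The duplicate `UseCompressedOops` entries and `MaxPermSize` are simply dropped, since neither has any effect at this heap size on JDK 1.8.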