Quick Installation Guide for a Python Library: fast_cli_rany-0.0.8 Tutorial

Resource summary: "Python library | fast_cli_rany-0.0.8-py3-none-any.whl" is a software package written in Python. Its filename follows the standard wheel naming convention, the version identifier in it conforms to PEP 440, and it is typically installed with a package manager such as pip. The library is at version 0.0.8, targets Python 3, and is not tied to any particular operating system or ABI, which marks it as a cross-platform, pure-Python package.

Detailed knowledge points:

1. Python libraries: A Python library is a collection of pre-written code that implements specific functionality, such as scientific computing, data analysis, or plotting. Libraries greatly improve development efficiency, since developers can reuse existing code to build complex features instead of writing everything from scratch. A library usually contains multiple modules, which developers import as needed.

2. The fast_cli_rany-0.0.8-py3-none-any.whl file format: This is a Python wheel, one of Python's built (binary) distribution formats. A wheel plays a role loosely comparable to a Java .jar or a .NET .dll: its main purpose is to simplify installation. Wheel files end with the .whl extension and contain the package in a ready-to-install form, so installation is fast because nothing has to be built on the target machine; a pure-Python wheel such as this one requires no compilation at all.

3. Installation: A wheel is not meant to be unzipped and used manually. Instead, the downloaded .whl file is installed into the project's (virtual) environment with pip. Once installed, the functions, classes, and modules provided by the library can be imported and used in a Python project.

4. PEP 440: PEP stands for Python Enhancement Proposal. PEP 440 specifies the rules for Python version identifiers, that is, how package versions are written and compared, which makes packages easier to maintain and distribute. The structure of the wheel filename itself is defined in PEP 427.

5. pip: pip is Python's package installer, a command-line tool for installing and managing Python packages. pip automatically handles downloading, unpacking, building (when necessary), and installing. Although it is not part of the standard library, it is bundled with most Python installations and can install packages from the Python Package Index (PyPI) as well as from local wheel files.

6. Cross-platform compatibility: The "none-any" part of the filename means the package does not depend on a specific ABI ("none") or platform ("any"), so it can be used on Windows, Linux, macOS, and other mainstream operating systems.

7. Python version compatibility: The "py3" tag in the filename indicates that the library runs only on Python 3 and does not support Python 2.x. Python 2 reached its official end of life in January 2020, and Python 3 is the recommended version today.

In summary, "fast_cli_rany-0.0.8-py3-none-any.whl" is a Python 3 library that runs on multiple operating systems. Its version identifier follows PEP 440, and it is distributed in the wheel format for fast, convenient installation. Developers can add it to a project with pip and then use the functionality and modules it provides.
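To make points 3 and 5 concrete, below is a minimal sketch of installing a locally downloaded wheel and verifying the result from within Python. It assumes the .whl file sits in the current working directory; the distribution name fast_cli_rany is inferred from the filename, but whether the package also exposes an importable module of the same name cannot be told from the filename alone, so no import is attempted here.

```python
import importlib
import subprocess
import sys
from importlib import metadata

# Assumed location: the wheel file is in the current working directory.
WHEEL = "fast_cli_rany-0.0.8-py3-none-any.whl"

# Equivalent to running:  python -m pip install fast_cli_rany-0.0.8-py3-none-any.whl
# Using sys.executable ensures the wheel is installed into the interpreter /
# virtual environment that is running this script.
subprocess.run([sys.executable, "-m", "pip", "install", WHEEL], check=True)

# Refresh the import system's caches so the freshly installed package is visible,
# then query the installed distribution's metadata to confirm the version.
importlib.invalidate_caches()
print(metadata.version("fast_cli_rany"))  # expected output: 0.0.8
```

The same installation can of course be done directly from the shell with `pip install fast_cli_rany-0.0.8-py3-none-any.whl`; the script form is only useful when the installation needs to happen from inside another Python program.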
