```
Traceback (most recent call last):
  File "DT_001_X01_P01.py", line 150, in DT_001_X01_P01.Module.load_model
  File "/home/kejia/Server/tf/Bin_x64/DeepLearning/DL_Lib_02/mmdet/apis/inference.py", line 42, in init_detector
    checkpoint = load_checkpoint(model, checkpoint, map_location=map_loc)
  File "/home/kejia/Server/tf/Bin_x64/DeepLearning/DL_Lib_02/mmcv/runner/checkpoint.py", line 529, in load_checkpoint
    checkpoint = _load_checkpoint(filename, map_location, logger)
  File "/home/kejia/Server/tf/Bin_x64/DeepLearning/DL_Lib_02/mmcv/runner/checkpoint.py", line 467, in _load_checkpoint
    return CheckpointLoader.load_checkpoint(filename, map_location, logger)
  File "/home/kejia/Server/tf/Bin_x64/DeepLearning/DL_Lib_02/mmcv/runner/checkpoint.py", line 244, in load_checkpoint
    return checkpoint_loader(filename, map_location)
  File "/home/kejia/Server/tf/Bin_x64/DeepLearning/DL_Lib_02/mmcv/runner/checkpoint.py", line 261, in load_from_local
    checkpoint = torch.load(filename, map_location=map_location)
  File "torch/serialization.py", line 594, in load
    return _load(opened_zipfile, map_location, pickle_module, **pickle_load_args)
  File "torch/serialization.py", line 853, in _load
    result = unpickler.load()
  File "torch/serialization.py", line 845, in persistent_load
    load_tensor(data_type, size, key, _maybe_decode_ascii(location))
  File "torch/serialization.py", line 834, in load_tensor
    loaded_storages[key] = restore_location(storage, location)
  File "torch/serialization.py", line 175, in default_restore_location
    result = fn(storage, location)
  File "torch/serialization.py", line 157, in _cuda_deserialize
    return obj.cuda(device)
  File "torch/_utils.py", line 71, in _cuda
    with torch.cuda.device(device):
  File "torch/cuda/__init__.py", line 225, in __enter__
    self.prev_idx = torch._C._cuda_getDevice()
  File "torch/cuda/__init__.py", line 164, in _lazy_init
    "Cannot re-initialize CUDA in forked subprocess. " + msg)
RuntimeError: Cannot re-initialize CUDA in forked subprocess. To use CUDA with multiprocessing, you must use the 'spawn' start method
('异常抛出', None) DT_001_X01_P01 load_model ret=1, version=V1.0.0.0
```
Based on the error message you provided, you are hitting a CUDA initialization error while using multiprocessing. The traceback states that CUDA cannot be re-initialized in a forked subprocess: to use CUDA with multiprocessing, child processes must be created with the 'spawn' start method.
You can try adding the following code to your main module to select the 'spawn' start method:
```python
import multiprocessing as mp

if __name__ == '__main__':
    # Set the start method once, before any CUDA work or child processes are created
    mp.set_start_method('spawn')
    # ... your main program logic goes here ...
```
Put your main program logic inside the `if __name__ == '__main__':` block and call `mp.set_start_method('spawn')` there, before any worker processes are created.
In the code above, `mp.set_start_method('spawn')` makes 'spawn' the default start method for the current process. Child processes are then started fresh instead of being forked, which avoids the CUDA re-initialization error.
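Note that `mp.set_start_method()` can only be called once per interpreter (a second call raises `RuntimeError` unless you pass `force=True`). If you would rather not change the process-wide default, a spawn context can be scoped to just the workers that touch CUDA. Below is a minimal sketch under that assumption; `load_and_run`, the config/checkpoint paths, and the single-worker layout are placeholders for illustration, with `init_detector` from the traceback being called inside the spawned child:
```python
import multiprocessing as mp

def load_and_run(config_path, checkpoint_path):
    # Import and touch CUDA only inside the freshly spawned child process
    from mmdet.apis import init_detector
    model = init_detector(config_path, checkpoint_path, device='cuda:0')
    # ... run inference with `model` here ...

if __name__ == '__main__':
    ctx = mp.get_context('spawn')  # spawn context scoped to this worker only
    p = ctx.Process(target=load_and_run,
                    args=('faster_rcnn.py', 'latest.pth'))  # placeholder paths
    p.start()
    p.join()
```
Because the child is spawned rather than forked, it starts with a clean interpreter and initializes CUDA for the first time there, which is exactly what the error message asks for.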
Hope this solution helps. If you have any other questions, feel free to ask.