Setting the max_split_size_mb parameter
Time: 2023-08-19 17:09:27
The max_split_size_mb parameter controls how PyTorch's caching allocator manages GPU memory. It sets a size threshold: to serve an allocation request smaller than the threshold, the allocator may split a cached block into smaller pieces, whereas blocks larger than the threshold are kept whole so they remain available for subsequent large requests. By default, max_split_size_mb is INT_MAX, meaning every cached block is eligible for splitting. Lowering this value can reduce memory fragmentation and improve utilization of GPU memory. [1][3]
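As a minimal sketch, the parameter is set through the `PYTORCH_CUDA_ALLOC_CONF` environment variable before PyTorch initializes its CUDA context (the value 128 below is only an illustrative choice, not a recommendation):

```python
import os

# Must be set before the first CUDA allocation, ideally before `import torch`.
# With this setting, cached blocks larger than 128 MB will never be split,
# keeping them intact for future large allocation requests.
os.environ["PYTORCH_CUDA_ALLOC_CONF"] = "max_split_size_mb:128"
```

Alternatively, the same variable can be exported in the shell before launching the training script, which avoids any ordering concerns inside the Python code.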
#### References
- *1* [How to set max_split_size_mb to avoid fragmentation in PyTorch?](https://blog.csdn.net/qq_52010446/article/details/131362334)
- *2* [Deep learning environment configuration](https://blog.csdn.net/qq_40265393/article/details/121611927)
- *3* [Understanding the PyTorch GPU memory management mechanism](https://blog.csdn.net/zandaoguang/article/details/124054203)