max_split_size_mb
Date: 2023-10-07 10:07:07
max_split_size_mb is a threshold that specifies the maximum size of a Block that may be split. According to references [1] and [2], only Blocks smaller than this threshold are eligible for splitting. The rationale is that PyTorch assumes most memory requests are smaller than the threshold, so these smaller Blocks are split and managed for fragmentation in the usual way. Blocks larger than the threshold, on the other hand, are considered expensive to allocate and are not split; instead they are kept intact so they can serve the next large request. By default, max_split_size_mb is INT_MAX, meaning every Block may be split. [1][2][3]
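As a minimal sketch, the threshold is set through the `PYTORCH_CUDA_ALLOC_CONF` environment variable, which the CUDA caching allocator reads on first use, so it must be set before any CUDA allocation (in practice, before `import torch` in most scripts). The value 128 below is illustrative, not a recommendation:

```python
import os

# The CUDA caching allocator reads PYTORCH_CUDA_ALLOC_CONF on first use,
# so set it before any CUDA allocation (e.g. before `import torch`).
# Blocks larger than max_split_size_mb (in MB) will not be split.
# 128 is an illustrative value, not a recommendation.
os.environ["PYTORCH_CUDA_ALLOC_CONF"] = "max_split_size_mb:128"

print(os.environ["PYTORCH_CUDA_ALLOC_CONF"])
```

Alternatively, the same setting can be exported from the shell (`export PYTORCH_CUDA_ALLOC_CONF=max_split_size_mb:128`) before launching the training script.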
#### References
- [1][2] [一文读懂 PyTorch 显存管理机制](https://blog.csdn.net/zandaoguang/article/details/124054203)
- [3] [通过设置PYTORCH_CUDA_ALLOC_CONF中的max_split_size_mb解决Pytorch的显存碎片化导致的CUDA:Out Of Memory...](https://blog.csdn.net/MirageTanker/article/details/127998036)