torch fp16
In PyTorch, FP16 (half-precision floating point, `torch.half` / `torch.float16`) is a precision mode in which computation uses 16-bit floating-point numbers. Using FP16 reduces memory usage and compute cost to a degree, which speeds up model training. [1][2][3]
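In practice the common approach is automatic mixed precision (AMP) rather than converting everything to FP16 by hand. Below is a minimal sketch of an AMP training step using `torch.cuda.amp`; the model, optimizer, and data here are placeholders for illustration:

```python
import torch
from torch import nn

# Placeholder model and data, for illustration only.
model = nn.Linear(128, 10).cuda()
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()
inputs = torch.randn(32, 128, device="cuda")
targets = torch.randint(0, 10, (32,), device="cuda")

# GradScaler scales the loss up so small FP16 gradients do not underflow.
scaler = torch.cuda.amp.GradScaler()

for step in range(10):
    optimizer.zero_grad()
    # autocast runs eligible ops (e.g. matmuls) in FP16, keeps the rest in FP32.
    with torch.cuda.amp.autocast():
        loss = criterion(model(inputs), targets)
    scaler.scale(loss).backward()   # backward pass on the scaled loss
    scaler.step(optimizer)          # unscales gradients, then calls optimizer.step()
    scaler.update()                 # adjusts the scale factor for the next step
```

Pure FP16 is also possible by calling `model.half()` and casting inputs with `.half()`, but AMP is generally the safer choice: it keeps FP32 master weights and numerically sensitive ops in full precision.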
#### References
- *1* *3* [pytorch模型训练之fp16、apm、多GPU模型、梯度检查点(gradient checkpointing)显存优化等](https://blog.csdn.net/u013250861/article/details/130405613)
- *2* [Pytorch混合精度(FP16&FP32)(AMP自动混合精度)/半精度 训练(一) —— 原理(torch.half)](https://blog.csdn.net/hxxjxw/article/details/119798535)