optimizer.zero_grad()
Usage of model.zero_grad() and optimizer.zero_grad() in PyTorch
In PyTorch, optimizer.zero_grad() clears the gradients held by the optimizer. Concretely, it resets the gradients of all learnable parameters managed by the optimizer, so that gradients left over from a previous iteration do not accumulate into the next backward pass (by default, backward() adds to the existing .grad buffers rather than overwriting them). This call is therefore typically placed at the start of each training batch, ensuring every gradient computation starts from zero. When the optimizer was constructed with all of the model's parameters, model.zero_grad() has the same effect. [1][2][3]
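A minimal sketch of a typical training loop illustrating where the call belongs; the model, data, and hyperparameters here are illustrative assumptions, not from the cited articles:

```python
import torch
import torch.nn as nn

# Illustrative setup: a toy linear model trained on random data.
model = nn.Linear(10, 1)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.MSELoss()

inputs = torch.randn(32, 10)   # one batch of 32 samples
targets = torch.randn(32, 1)

for step in range(5):
    optimizer.zero_grad()            # clear gradients left over from the previous step
    outputs = model(inputs)          # forward pass
    loss = loss_fn(outputs, targets)
    loss.backward()                  # backward pass: gradients accumulate into .grad
    optimizer.step()                 # update parameters using the fresh gradients
```

If zero_grad() were omitted here, each loss.backward() would add onto the gradients from earlier steps, and optimizer.step() would update the parameters with those stale, accumulated values.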
#### References
- [1][3] [Pytorch反向传播——optimizer.zero_grad(), loss.backward(), optimizer.step() 作用](https://blog.csdn.net/ding_programmer/article/details/131413639)
- [2] [PyTorch中model.zero_grad()和optimizer.zero_grad()用法](https://download.csdn.net/download/weixin_38544075/12850338)