torch.optim.LBFGS
torch.optim.LBFGS is a PyTorch optimizer that tunes model parameters with the L-BFGS algorithm. Its arguments include the learning rate lr, the maximum number of iterations per optimization step max_iter, the maximum number of function evaluations per optimization step max_eval, the termination tolerance on first-order optimality tolerance_grad, the termination tolerance on function value/parameter changes tolerance_change, the update history size history_size, and the line-search algorithm line_search_fn. Note that max_iter bounds the internal L-BFGS iterations performed within a single call to step(), while max_eval bounds how many times the objective (closure) may be evaluated within that call; once either limit or a tolerance is reached, the step returns. By adjusting these parameters, L-BFGS can be tuned to the problem at hand (see the sketch below). [1][2][3]
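As a minimal sketch of typical usage (the model, data, and loss below are illustrative placeholders, not from the referenced posts): unlike most PyTorch optimizers, LBFGS requires a closure that clears gradients, re-evaluates the loss, and returns it, because the optimizer may evaluate the objective several times per step.

```python
import torch

# Illustrative model and data (placeholders)
model = torch.nn.Linear(10, 1)
x = torch.randn(64, 10)
y = torch.randn(64, 1)

optimizer = torch.optim.LBFGS(
    model.parameters(),
    lr=1.0,
    max_iter=20,            # internal L-BFGS iterations per .step() call
    max_eval=25,            # function evaluations per .step() call (default: max_iter * 1.25)
    tolerance_grad=1e-7,    # termination tolerance on first-order optimality
    tolerance_change=1e-9,  # termination tolerance on function value / parameter changes
    history_size=100,       # size of the update history
    line_search_fn="strong_wolfe",  # or None
)

loss_fn = torch.nn.MSELoss()

def closure():
    # LBFGS may call this multiple times per step, so gradients
    # must be cleared and the loss recomputed on every call.
    optimizer.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    return loss

for _ in range(10):  # a few outer optimization steps
    loss = optimizer.step(closure)
```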
#### References
- [1] [pytorch 使用 torch.optim.LBFGS() 优化神经网络](https://blog.csdn.net/awslyyds/article/details/127503167)
- [2] [Pytorch优化器全总结(三)牛顿法、BFGS、L-BFGS 含代码](https://blog.csdn.net/xian0710830114/article/details/128419401)
- [3] [记录 torch.optim.LBFGS](https://blog.csdn.net/mifangdebaise/article/details/126380817)