Adaptive Normalized Risk-Averting Training for Deep Neural Networks
Adaptive Normalized Risk-Averting Training (ANRAT) is a method for training deep neural networks that aims to improve model robustness and generalization. The main idea is to build risk aversion into the training objective, encouraging the model to make conservative predictions that are less likely to incur large losses.
ANRAT achieves this by adding a penalty term to the loss function that penalizes risky predictions. The penalty is based on the normalized risk, a measure of the expected loss associated with a given prediction.
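One common way to formalize such a normalized risk term is a log-mean-exp over the per-example losses, which interpolates between the average loss (as the risk-sensitivity parameter λ approaches 0) and the worst-case loss (as λ grows). The sketch below is illustrative rather than the exact formulation from the ANRAT paper: the function name `normalized_risk_averting_loss` and the choice of cross-entropy as the base loss are assumptions.

```python
import math

import torch
import torch.nn.functional as F

def normalized_risk_averting_loss(logits, targets, lam=1.0):
    """Illustrative normalized risk-averting objective (log-mean-exp).

    As lam -> 0 this approaches the ordinary mean loss; as lam grows,
    high-loss ("risky") examples dominate, pushing the model toward
    conservative predictions. `lam` plays the role of the penalty
    weight described above (hypothetical formulation).
    """
    per_example = F.cross_entropy(logits, targets, reduction="none")
    n = per_example.numel()
    # (1/lam) * log( (1/n) * sum_i exp(lam * loss_i) )
    return (torch.logsumexp(lam * per_example, dim=0) - math.log(n)) / lam
```

Note that the expression is undefined at λ = 0, so in practice λ is kept strictly positive and tuned, or adapted during training as described next.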
During training, ANRAT adapts the penalty term to the current state of the model: the penalty is increased when the model makes risky predictions and decreased when its predictions become more conservative. This adaptive scheme lets the model trade off prediction accuracy against risk avoidance, as illustrated in the sketch below.
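One simple way to realize this adaptation is sketched below. The threshold rule and multiplicative update are illustrative assumptions, not taken from the paper: λ is raised when the gap between the worst-case and average per-example loss signals risky behavior, and decayed otherwise. The toy model and random data exist only to make the loop runnable.

```python
import math

import torch
import torch.nn.functional as F

def adapt_lambda(lam, per_example_losses, gap_threshold=1.0,
                 grow=1.05, shrink=0.95, lam_min=1e-2, lam_max=10.0):
    """Hypothetical adaptive schedule for the risk-sensitivity lam."""
    gap = (per_example_losses.max() - per_example_losses.mean()).item()
    if gap > gap_threshold:
        return min(lam * grow, lam_max)   # risky batch: strengthen penalty
    return max(lam * shrink, lam_min)     # conservative batch: relax penalty

# Toy usage: a linear classifier on random data.
model = torch.nn.Linear(20, 5)
opt = torch.optim.SGD(model.parameters(), lr=0.1)
lam = 1.0
for _ in range(100):
    x = torch.randn(64, 20)
    y = torch.randint(0, 5, (64,))
    per_example = F.cross_entropy(model(x), y, reduction="none")
    # Same log-mean-exp objective as above, with the current lam.
    loss = (torch.logsumexp(lam * per_example, dim=0)
            - math.log(per_example.numel())) / lam
    opt.zero_grad()
    loss.backward()
    opt.step()
    lam = adapt_lambda(lam, per_example.detach())
```

The multiplicative update keeps λ within a bounded range so the objective never collapses to either the plain mean loss or a pure worst-case loss; other schedules (for example, treating λ as a learnable parameter) are equally plausible.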
ANRAT has been shown to improve the robustness and generalization of deep neural networks on a range of tasks, including image classification and natural language processing, and to help mitigate the effects of adversarial attacks.
Overall, ANRAT is a promising training approach: by accounting for the risk attached to individual predictions, it steers the model toward more conservative, robust behavior.