Some weights of the model checkpoint at bert-base-uncased were not used when initializing BertForQuestionAnswering: ['cls.predictions.transform.LayerNorm.bias', 'cls.predictions.bias', 'cls.predictions.transform.LayerNorm.weight', 'cls.predictions.transform.dense.weight', 'cls.seq_relationship.weight', 'cls.predictions.transform.dense.bias', 'cls.seq_relationship.bias']
- This IS expected if you are initializing BertForQuestionAnswering from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model).
- This IS NOT expected if you are initializing BertForQuestionAnswering from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).
Some weights of BertForQuestionAnswering were not initialized from the model checkpoint at bert-base-uncased and are newly initialized: ['qa_outputs.weight', 'qa_outputs.bias']
You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.
Posted: 2024-04-13 22:25:37
This warning means that, when initializing BertForQuestionAnswering, some weights in the checkpoint were not used. That is expected if you are initializing BertForQuestionAnswering from a checkpoint trained on a different task or with a different architecture (for example, initializing a BertForSequenceClassification model from a BertForPreTraining checkpoint): the unused weights here are the pre-training heads ('cls.predictions.*' and 'cls.seq_relationship.*'), which a question-answering model does not need.
It would only be unexpected if you were initializing BertForQuestionAnswering from a checkpoint of the exact same architecture (for example, initializing a BertForSequenceClassification model from a BertForSequenceClassification checkpoint).
Conversely, some weights of BertForQuestionAnswering were not found in the bert-base-uncased checkpoint and were newly (randomly) initialized: 'qa_outputs.weight' and 'qa_outputs.bias'. These belong to the question-answering head added on top of the pre-trained encoder.
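The situation can be reproduced with a minimal sketch (assuming the `transformers` library is installed):

```python
from transformers import BertForQuestionAnswering

# Loading a pre-training checkpoint into a QA architecture: the encoder
# weights are reused, the unused `cls.*` pre-training heads are dropped,
# and `qa_outputs` is created with random weights -- exactly what the
# warning describes.
model = BertForQuestionAnswering.from_pretrained("bert-base-uncased")

# The new head is a single Linear layer that maps each token's 768-dim
# hidden state to two logits (answer-span start and end).
print(model.qa_outputs)  # Linear(in_features=768, out_features=2, bias=True)
```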
Before you can use the model for prediction and inference, you should fine-tune it on a downstream task (for extractive question answering, typically a dataset such as SQuAD), so that the newly initialized head learns meaningful weights.
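A single fine-tuning step looks like the following sketch; the (question, context) pair and the gold answer-span token positions are made up purely for illustration:

```python
import torch
from transformers import AutoTokenizer, BertForQuestionAnswering

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = BertForQuestionAnswering.from_pretrained("bert-base-uncased")

# Encode a (question, context) pair; start/end positions below are
# hypothetical gold token indices, used only to show the loss computation.
inputs = tokenizer("Who created BERT?", "BERT was created by Google.",
                   return_tensors="pt")
outputs = model(**inputs,
                start_positions=torch.tensor([6]),
                end_positions=torch.tensor([7]))

# The loss is cross-entropy over the predicted start/end distributions;
# in a real training loop, optimizer.step() would follow.
outputs.loss.backward()
```

In practice you would wrap this in a full training loop (or use `transformers.Trainer`) over a labeled QA dataset rather than a single hand-built example.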