BERT pretrained model code
BERT is a pretrained language model that can be fine-tuned for natural language processing tasks. Below are examples of loading a pretrained BERT model with the Hugging Face transformers library, in PyTorch and in TensorFlow; two supplementary sketches (masked-token prediction and batched input) follow the examples.
1. PyTorch example:
```python
import torch
from transformers import BertModel, BertTokenizer

# Load the pretrained BERT model and its tokenizer
model = BertModel.from_pretrained('bert-base-uncased')
tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
model.eval()

# Input text
text = "Hello, how are you?"

# Tokenize and encode, adding the [CLS] and [SEP] special tokens
input_ids = tokenizer.encode(text, add_special_tokens=True)
input_ids = torch.tensor([input_ids])  # add a batch dimension

# Run the model; no gradients are needed for inference
with torch.no_grad():
    outputs = model(input_ids)

# last_hidden_state holds per-token embeddings; pooler_output is a sentence-level vector
hidden_states = outputs.last_hidden_state
pooled_output = outputs.pooler_output

# Print the shapes
print("Hidden states shape:", hidden_states.shape)  # (1, seq_len, 768)
print("Pooled output shape:", pooled_output.shape)  # (1, 768)
```
2. TensorFlow example:
```python
import tensorflow as tf
from transformers import TFBertModel, BertTokenizer

# Load the pretrained BERT model and its tokenizer
model = TFBertModel.from_pretrained('bert-base-uncased')
tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')

# Input text
text = "Hello, how are you?"

# Tokenize and encode, adding the [CLS] and [SEP] special tokens
input_ids = tokenizer.encode(text, add_special_tokens=True)
input_ids = tf.constant([input_ids])  # add a batch dimension

# Run the model
outputs = model(input_ids)

# last_hidden_state holds per-token embeddings; pooler_output is a sentence-level vector
hidden_states = outputs.last_hidden_state
pooled_output = outputs.pooler_output

# Print the shapes
print("Hidden states shape:", hidden_states.shape)  # (1, seq_len, 768)
print("Pooled output shape:", pooled_output.shape)  # (1, 768)
```
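Both examples above expose only the encoder outputs. BERT's actual pretraining objective is masked language modeling, which the `BertForMaskedLM` head implements. Below is a minimal PyTorch sketch of predicting a masked token; the `bert-base-uncased` checkpoint and the example sentence are illustrative choices, not part of the examples above:

```python
import torch
from transformers import BertForMaskedLM, BertTokenizer

# Load the pretrained encoder together with the masked-language-modeling head
model = BertForMaskedLM.from_pretrained('bert-base-uncased')
tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
model.eval()

# A sentence with one masked position (illustrative input)
inputs = tokenizer("Hello, how [MASK] you?", return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # (batch, seq_len, vocab_size)

# Locate the [MASK] token and take the highest-scoring vocabulary entry
mask_pos = (inputs["input_ids"] == tokenizer.mask_token_id).nonzero(as_tuple=True)[1]
predicted_id = logits[0, mask_pos].argmax(dim=-1)
print(tokenizer.decode(predicted_id))  # expected to print something like "are"
```

The same head is what the pretraining loss is computed against; for downstream tasks, task-specific heads such as `BertForSequenceClassification` are loaded in the same way.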
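Finally, real workloads usually pass batches of variable-length sentences rather than a single string. A short TensorFlow sketch, assuming the same `bert-base-uncased` checkpoint, of how the tokenizer's `padding` option and the returned attention mask handle this:

```python
from transformers import TFBertModel, BertTokenizer

model = TFBertModel.from_pretrained('bert-base-uncased')
tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')

# Sentences of different lengths (illustrative inputs)
texts = ["Hello, how are you?", "BERT handles batched inputs as well."]

# The tokenizer pads to the longest sequence and returns an attention mask
batch = tokenizer(texts, padding=True, return_tensors="tf")

# Pass the mask so padded positions are ignored by self-attention
outputs = model(batch["input_ids"], attention_mask=batch["attention_mask"])
print(outputs.last_hidden_state.shape)  # (2, max_seq_len, 768)
```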