PyTorch nn.Embedding
PyTorch's `nn.Embedding` is a module that creates an embedding layer in your neural network. An embedding layer takes a tensor of integer indices as input and returns a tensor of learnable embedding vectors, one for each index.
For example, if you have a vocabulary of size `V`, you can create an embedding layer that maps each word in your vocabulary to a `d`-dimensional vector. To do this, you would construct an `nn.Embedding` module with `num_embeddings=V` and `embedding_dim=d`.
Here is an example of how you can use `nn.Embedding` to create an embedding layer:
```python
import torch
import torch.nn as nn

vocab_size = 10000     # number of distinct words (num_embeddings)
embedding_dim = 300    # size of each embedding vector

# A batch of 2 sequences, each containing 3 word indices
input_indices = torch.LongTensor([[1, 4, 6], [2, 3, 0]])

embedding_layer = nn.Embedding(vocab_size, embedding_dim)
embeddings = embedding_layer(input_indices)  # shape: (2, 3, 300)
```
In this example, we create an embedding layer with `num_embeddings=vocab_size` (10000 here) and `embedding_dim=300`. We then create a tensor `input_indices` of shape `(2, 3)` containing integer indices into our vocabulary. Finally, we pass `input_indices` through the embedding layer to obtain a tensor `embeddings` of shape `(2, 3, 300)` holding the embedding vector for each index. Note that these vectors are randomly initialized at first and are learned during training along with the rest of the model's parameters.
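A common refinement is to reserve one index for padding variable-length sequences. Below is a minimal sketch (the 5-word vocabulary and index choices are made up for illustration) showing `nn.Embedding`'s `padding_idx` argument, which pins that row to zeros and excludes it from gradient updates:

```python
import torch
import torch.nn as nn

# Hypothetical 5-word vocabulary; index 0 is reserved for padding.
# padding_idx=0 keeps row 0 fixed at zero and it receives no gradient.
embedding = nn.Embedding(num_embeddings=5, embedding_dim=4, padding_idx=0)

# One sequence of length 4, padded at the end with index 0
batch = torch.LongTensor([[2, 3, 0, 0]])
out = embedding(batch)

print(out.shape)  # torch.Size([1, 4, 4])
# The padded positions map to all-zero vectors:
print(out[0, 2])  # tensor([0., 0., 0., 0.], grad_fn=...)
```

If you already have pretrained word vectors, `nn.Embedding.from_pretrained(weight_tensor)` builds the layer from them directly (frozen by default; pass `freeze=False` to fine-tune).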