An Example of model.parameters()
Date: 2023-10-28 13:04:32
Suppose we have a simple neural network model containing two fully connected layers, the second of which serves as the output layer:
```
import torch.nn as nn

class SimpleNet(nn.Module):
    def __init__(self, input_dim, hidden_dim, output_dim):
        super(SimpleNet, self).__init__()
        # Each nn.Linear automatically registers a weight and a bias parameter.
        self.fc1 = nn.Linear(input_dim, hidden_dim)
        self.fc2 = nn.Linear(hidden_dim, output_dim)

    def forward(self, x):
        x = self.fc1(x)
        x = self.fc2(x)
        return x
```
This model contains two fully connected layers, `fc1` and `fc2`, each of which holds two parameters: a weight and a bias. To access the values of these parameters, we can call the `model.parameters()` method.
```
input_dim = 10
hidden_dim = 20
output_dim = 2
model = SimpleNet(input_dim, hidden_dim, output_dim)

# parameters() returns an iterator over all registered parameters.
params = model.parameters()
for p in params:
    print(p)
```
Running the code above produces output like the following (the exact numbers will differ from run to run, since `nn.Linear` initializes its parameters randomly):
```
Parameter containing:
tensor([[ 0.0507, -0.0884, -0.2751, ..., -0.2614, -0.0449, -0.1757],
[ 0.1365, -0.2425, -0.1338, ..., 0.1423, 0.0905, 0.1928],
[ 0.1931, 0.2649, -0.0458, ..., 0.2813, -0.2455, -0.1564],
...,
[-0.0859, 0.1638, -0.0925, ..., 0.1150, -0.1893, 0.2850],
[ 0.2227, -0.1422, 0.1984, ..., -0.0067, 0.1591, -0.0748],
[ 0.1015, -0.1682, -0.1962, ..., -0.0190, -0.1611, 0.2010]],
requires_grad=True)
Parameter containing:
tensor([ 0.2801, -0.0468, -0.0278, 0.2086, -0.0574, -0.0982, -0.0377, 0.1693,
0.1028, 0.2137, 0.2364, 0.0749, -0.1513, -0.1597, 0.1284, 0.0893,
0.1071, -0.1616, 0.0387, -0.2858], requires_grad=True)
Parameter containing:
tensor([[-0.0854, -0.2316, 0.1520, 0.1363, -0.1445, -0.0520, -0.0918, 0.0085,
-0.1135, 0.0359, -0.1102, 0.0605, 0.0371, 0.0317, -0.0474, -0.0234,
-0.0968, 0.0138, -0.1421, 0.1959],
[ 0.1149, 0.2692, -0.1461, -0.2037, -0.0412, -0.2561, -0.2025, -0.2939,
-0.2512, 0.1610, -0.2573, -0.2980, -0.1596, -0.2169, -0.1446, 0.0114,
0.2298, -0.2892, -0.1347, 0.2887]], requires_grad=True)
Parameter containing:
tensor([0.0502, 0.0562], requires_grad=True)
```
Here we iterated over the `params` iterator and printed each parameter's value. As the output shows, `model.parameters()` returns an iterator over all of the model's parameters, in registration order: `fc1.weight` (shape 20×10), `fc1.bias` (20), `fc2.weight` (2×20), and `fc2.bias` (2). Each element is an `nn.Parameter`, a subclass of `torch.Tensor` that holds the parameter's values and has `requires_grad=True` by default. Note that the iterator is a generator, so it is exhausted after a single pass.
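When the bare values are not enough, `named_parameters()` yields each parameter together with its name, and summing `numel()` over `parameters()` gives the total trainable-parameter count. The sketch below uses an `nn.Sequential` stand-in with the same dimensions as `SimpleNet` above so that it is self-contained:

```
import torch
import torch.nn as nn

# A stand-in for SimpleNet above, with the same dimensions (10 -> 20 -> 2).
model = nn.Sequential(nn.Linear(10, 20), nn.Linear(20, 2))

# named_parameters() yields (name, parameter) pairs, which makes it
# easy to tell each layer's weight and bias apart.
for name, p in model.named_parameters():
    print(name, tuple(p.shape))

# Summing numel() over parameters() counts the trainable parameters:
# 10*20 + 20 + 20*2 + 2 = 262.
total = sum(p.numel() for p in model.parameters())
print(total)

# The most common consumer of parameters() is an optimizer, which
# updates exactly the tensors this iterator yields during training.
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
```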