Python code to compute the parameters and output sizes of a convolutional neural network: print each layer's output size and its trainable parameters (hint: only convolutional and fully connected layers have trainable parameters)
Below is example Python code for computing the parameters and output sizes of a convolutional neural network. Assume a simple network consisting of one convolutional layer, one pooling layer, and one fully connected layer.
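The spatial output size of a convolution follows the standard formula, which the `forward` methods below implement directly (pooling uses the same formula with `padding = 0`):

```python
# Output height of a conv/pool layer; the width is computed the same way.
out_h = (h + 2 * padding - kernel_size) // stride + 1
```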
```python
import numpy as np

# Convolutional layer (has trainable parameters: weights and bias)
class ConvLayer:
    def __init__(self, in_channels, out_channels, kernel_size, stride=1, padding=0):
        self.in_channels = in_channels
        self.out_channels = out_channels
        self.kernel_size = kernel_size
        self.stride = stride
        self.padding = padding
        self.weights = np.random.randn(out_channels, in_channels, kernel_size, kernel_size)
        self.bias = np.zeros((out_channels, 1))

    def forward(self, x):
        n, c, h, w = x.shape
        out_h = (h + 2 * self.padding - self.kernel_size) // self.stride + 1
        out_w = (w + 2 * self.padding - self.kernel_size) // self.stride + 1
        # Zero-pad the input along the spatial dimensions
        x_pad = np.pad(x, ((0, 0), (0, 0), (self.padding, self.padding), (self.padding, self.padding)), mode='constant')
        # Initialize the output
        out = np.zeros((n, self.out_channels, out_h, out_w))
        # Slide the kernel over every output position
        for i in range(out_h):
            for j in range(out_w):
                x_slice = x_pad[:, :, i * self.stride:i * self.stride + self.kernel_size, j * self.stride:j * self.stride + self.kernel_size]
                for k in range(self.out_channels):
                    out[:, k, i, j] = np.sum(x_slice * self.weights[k, :, :, :], axis=(1, 2, 3)) + self.bias[k, :]
        # Also return the input shape, output shape, and parameter shapes
        return out, (n, c, h, w), (self.out_channels, out_h, out_w), (self.weights.shape, self.bias.shape)

# Max-pooling layer (no trainable parameters)
class PoolLayer:
    def __init__(self, kernel_size, stride=None):
        self.kernel_size = kernel_size
        self.stride = stride if stride is not None else kernel_size

    def forward(self, x):
        n, c, h, w = x.shape
        out_h = (h - self.kernel_size) // self.stride + 1
        out_w = (w - self.kernel_size) // self.stride + 1
        out = np.zeros((n, c, out_h, out_w))
        # Take the maximum over each pooling window
        for i in range(out_h):
            for j in range(out_w):
                x_slice = x[:, :, i * self.stride:i * self.stride + self.kernel_size, j * self.stride:j * self.stride + self.kernel_size]
                out[:, :, i, j] = np.max(x_slice, axis=(2, 3))
        return out, (n, c, h, w), (c, out_h, out_w), None

# Fully connected layer (has trainable parameters: weights and bias)
class FC:
    def __init__(self, in_features, out_features):
        self.in_features = in_features
        self.out_features = out_features
        self.weights = np.random.randn(out_features, in_features)
        self.bias = np.zeros((out_features, 1))

    def forward(self, x):
        n, c, h, w = x.shape
        # Flatten the feature maps, then apply the affine transform
        x_reshape = x.reshape(n, c * h * w)
        out = np.dot(self.weights, x_reshape.T).T + self.bias.T  # shape (n, out_features)
        return out, (n, c, h, w), (self.out_features,), (self.weights.shape, self.bias.shape)

# Network: one conv layer, one pooling layer, one fully connected layer
class Net:
    def __init__(self):
        self.conv1 = ConvLayer(3, 16, 5, padding=2)  # 3x28x28 -> 16x28x28
        self.pool1 = PoolLayer(2)                    # 16x28x28 -> 16x14x14
        self.fc1 = FC(16 * 14 * 14, 10)              # 3136 -> 10

    def forward(self, x):
        out, _, _, _ = self.conv1.forward(x)
        out, _, _, _ = self.pool1.forward(out)
        out, _, _, _ = self.fc1.forward(out)
        return out

# Quick test
net = Net()
x = np.random.randn(1, 3, 28, 28)
out = net.forward(x)
print(out.shape)  # (1, 10)
```
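Since each layer's `forward` method also returns its input shape, output shape, and parameter shapes, a short driver loop can print the per-layer summary the question asks for. This is a minimal sketch that assumes the code above has already been run; the layer labels are chosen here for readability:

```python
# Print each layer's input/output shape and its trainable parameter count.
net = Net()
out = np.random.randn(1, 3, 28, 28)
for name, layer in [("conv1", net.conv1), ("pool1", net.pool1), ("fc1", net.fc1)]:
    out, in_shape, out_shape, params = layer.forward(out)
    n_params = 0
    if params is not None:  # only conv and FC layers carry trainable parameters
        w_shape, b_shape = params
        n_params = int(np.prod(w_shape)) + int(np.prod(b_shape))
    print(f"{name}: input {in_shape} -> output {out_shape}, params: {n_params}")
```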
In the code above, we define a simple convolutional neural network with one convolutional layer, one pooling layer, and one fully connected layer. Each layer's `forward` method computes the layer's output and also returns its input shape, output shape, and parameter shapes. The convolutional and fully connected layers use randomly initialized weights and zero-initialized biases; note that only these two layer types carry trainable parameters, which is why the pooling layer returns `None` in place of parameter shapes. Finally, we run a forward pass on the test input `x` and print the output shape.
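As a sanity check, the trainable parameter counts can be computed by hand from the shapes above; the pooling layer contributes nothing:

```python
# Hand-computed trainable parameter counts for the network above.
conv1_params = 16 * 3 * 5 * 5 + 16     # weights + bias = 1216
fc1_params = 10 * (16 * 14 * 14) + 10  # weights + bias = 31370
print(conv1_params + fc1_params)       # total: 32586
```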