Splitting channels with torch.split
torch.split divides a tensor into chunks along a chosen dimension, either into chunks of a fixed size or into chunks with explicitly listed sizes. To split by channel, use the channel dimension as the split dimension. For example, for a tensor of shape [batch_size, channel, height, width], calling torch.split(tensor, 1, dim=1) splits along the channel dimension and yields chunks of shape [batch_size, 1, height, width].
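A brief, self-contained sketch of both ways to split along the channel dimension (the tensor shape here is just an example):
```python
import torch

x = torch.randn(8, 6, 32, 32)            # [batch_size, channel, height, width]

# Fixed chunk size: 6 tensors of shape [8, 1, 32, 32]
per_channel = torch.split(x, 1, dim=1)
print(len(per_channel), per_channel[0].shape)

# Explicit section sizes (must sum to the channel count): [8, 2, 32, 32] and [8, 4, 32, 32]
head, tail = torch.split(x, (2, 4), dim=1)
print(head.shape, tail.shape)
```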
Related questions
Explain this code:
```python
class gnconv(nn.Module):
    def __init__(self, dim, order=5, gflayer=None, h=14, w=8, s=1.0):
        super().__init__()
        self.order = order
        self.dims = [dim // 2 ** i for i in range(order)]
        self.dims.reverse()
        self.proj_in = nn.Conv2d(dim, 2 * dim, 1)
        if gflayer is None:
            self.dwconv = get_dwconv(sum(self.dims), 7, True)
        else:
            self.dwconv = gflayer(sum(self.dims), h=h, w=w)
        self.proj_out = nn.Conv2d(dim, dim, 1)
        self.pws = nn.ModuleList(
            [nn.Conv2d(self.dims[i], self.dims[i + 1], 1) for i in range(order - 1)]
        )
        self.scale = s
        print('[gnconv]', order, 'order with dims=', self.dims, 'scale=%.4f' % self.scale)

    def forward(self, x, mask=None, dummy=False):
        B, C, H, W = x.shape
        fused_x = self.proj_in(x)
        pwa, abc = torch.split(fused_x, (self.dims[0], sum(self.dims)), dim=1)
        dw_abc = self.dwconv(abc) * self.scale
        dw_list = torch.split(dw_abc, self.dims, dim=1)
        x = pwa * dw_list[0]
        for i in range(self.order - 1):
            x = self.pws[i](x) * dw_list[i + 1]
        x = self.proj_out(x)
        return x
```
This code defines a class named gnconv, a PyTorch model that inherits from nn.Module. The class contains a constructor and a forward-propagation function, forward.
The constructor accepts the following parameters:
- dim: number of channels of the input feature map.
- order: order of the gnconv operation; defaults to 5.
- gflayer: an optional convolution/filter layer used inside gnconv; if None, the depthwise convolution returned by get_dwconv is used instead.
- h: height parameter passed to gflayer; defaults to 14.
- w: width parameter passed to gflayer; defaults to 8.
- s: scaling factor applied to the output of the depthwise convolution; defaults to 1.0.
The constructor first calls the constructor of the parent class nn.Module, then computes the channel dimension of each stage from the input dimension and the gnconv order; for example, with dim=64 and order=5, self.dims becomes [4, 8, 16, 32, 64]. It then defines a 1x1 convolution proj_in that doubles the channel count of the input feature map. If gflayer is None, get_dwconv is called to obtain a depthwise convolution dwconv; otherwise gflayer is used.
Next, a 1x1 convolution proj_out is defined, which projects the final feature map back to dim output channels. An nn.ModuleList is also defined, containing 1x1 convolutions (pws) that step the feature map from one stage's channel count up to the next (self.dims[i] to self.dims[i+1]). Finally, the scaling factor is stored in scale.
In the forward function, the shape of the input feature map x is read first. The input is passed through the proj_in convolution, which doubles its channel count, and the output tensor is split along the channel dimension into two parts, pwa and abc, where pwa has self.dims[0] channels and abc has sum(self.dims) channels. abc is then fed through the depthwise convolution dwconv and scaled by scale to obtain dw_abc, which is split along the channel dimension into a list dw_list of tensors whose channel counts are the entries of self.dims. Multiplying pwa element-wise with dw_list[0] yields the feature map x.
Next, x is passed through the 1x1 convolutions in pws one by one, each of which raises its channel count to the next stage's dimension; after each convolution the result is multiplied element-wise by the corresponding tensor in dw_list to obtain the new x. Finally, x is passed through the proj_out convolution to produce the output feature map with dim channels.
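A minimal sketch that exercises the block end to end. The helper get_dwconv is not shown in the question, so the version below is a hypothetical stand-in (a 7x7 depthwise Conv2d, as in the HorNet code base); the gnconv class above must already be defined:
```python
import torch
import torch.nn as nn

# Hypothetical stand-in for get_dwconv: a depthwise (grouped) convolution.
# The real helper ships alongside gnconv in the HorNet repository.
def get_dwconv(dim, kernel_size, bias):
    return nn.Conv2d(dim, dim, kernel_size,
                     padding=kernel_size // 2, bias=bias, groups=dim)

block = gnconv(dim=64, order=3)   # prints: [gnconv] 3 order with dims= [16, 32, 64] scale=1.0000
x = torch.randn(2, 64, 14, 14)    # [B, C, H, W]
y = block(x)
print(y.shape)                    # torch.Size([2, 64, 14, 14])
```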
Using PyTorch, preprocess Excel data and build a multi-channel 1D convolutional neural network model, covering data import, dataset splitting, data standardization, model training, loading of the test and validation sets, and plotting of accuracy, loss, and the confusion matrix
First, you need to install the following libraries:
```bash
pip install pandas
pip install numpy
pip install torch
pip install torchvision
pip install scikit-learn
pip install matplotlib
```
Then, we can start processing the Excel data:
```python
import pandas as pd
import numpy as np
# Read the Excel file
data = pd.read_excel('data.xlsx', header=None)
# Split features and labels
X = data.iloc[:, :-1].values
y = data.iloc[:, -1].values
# Split the dataset
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)
# Standardize the data
from sklearn.preprocessing import StandardScaler
scaler = StandardScaler()
X_train = scaler.fit_transform(X_train)
X_test = scaler.transform(X_test)
# Convert to tensors
import torch
X_train = torch.from_numpy(X_train).float()
X_test = torch.from_numpy(X_test).float()
y_train = torch.from_numpy(y_train).long()
y_test = torch.from_numpy(y_test).long()
```
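The training loop further below iterates over train_loader and val_loader, so wrap the tensors above in a TensorDataset and DataLoader first (this mirrors the complete code at the end; the batch size of 64 and reusing the test split for validation are the same choices made there):
```python
from torch.utils.data import TensorDataset, DataLoader

train_data = TensorDataset(X_train, y_train)
train_loader = DataLoader(train_data, batch_size=64, shuffle=True)
# The "validation" loader here simply reuses the held-out test split
val_data = TensorDataset(X_test, y_test)
val_loader = DataLoader(val_data, batch_size=64, shuffle=False)
```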
Next, we can build the multi-channel 1D convolutional neural network model:
```python
import torch.nn as nn

class Net(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv1 = nn.Conv1d(in_channels=1, out_channels=32, kernel_size=3)
        self.pool1 = nn.MaxPool1d(kernel_size=2)
        self.conv2 = nn.Conv1d(in_channels=32, out_channels=64, kernel_size=3)
        self.pool2 = nn.MaxPool1d(kernel_size=2)
        # 64 * 23 assumes 100 input features (100 -> 98 -> 49 -> 47 -> 23);
        # adjust this to match the length of your feature vectors
        self.fc1 = nn.Linear(in_features=64 * 23, out_features=128)
        self.fc2 = nn.Linear(in_features=128, out_features=1)
        self.drop = nn.Dropout(p=0.5)
        self.relu = nn.ReLU()

    def forward(self, x):
        x = self.conv1(x)
        x = self.relu(x)
        x = self.pool1(x)
        x = self.conv2(x)
        x = self.relu(x)
        x = self.pool2(x)
        x = x.view(-1, 64 * 23)
        x = self.fc1(x)
        x = self.relu(x)
        x = self.drop(x)
        x = self.fc2(x)
        return x

net = Net()
```
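A quick shape check before training; the length of 100 features per sample is only an assumption here so that 64 * 23 works out (98 -> 49 -> 47 -> 23 after the conv/pool layers):
```python
import torch

dummy = torch.randn(4, 1, 100)   # [batch, channels, features]
print(net(dummy).shape)          # torch.Size([4, 1]): one logit per sample
```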
Then, we can train the model:
```python
import torch.optim as optim

criterion = nn.BCEWithLogitsLoss()
optimizer = optim.Adam(net.parameters(), lr=0.001)
epochs = 50
for epoch in range(epochs):
    running_loss = 0.0
    net.train()
    for i, data in enumerate(train_loader, 0):
        inputs, labels = data
        optimizer.zero_grad()
        outputs = net(inputs.unsqueeze(1))   # add the channel dimension
        loss = criterion(outputs.squeeze(), labels.float())
        loss.backward()
        optimizer.step()
        running_loss += loss.item()
    net.eval()
    correct = 0
    total = 0
    with torch.no_grad():
        for data in val_loader:
            inputs, labels = data
            outputs = net(inputs.unsqueeze(1))
            predicted = torch.round(torch.sigmoid(outputs.squeeze()))
            total += labels.size(0)
            correct += (predicted == labels.float()).sum().item()
    print('[%d, %5d] loss: %.3f val_acc: %.3f' %
          (epoch + 1, i + 1, running_loss / len(train_loader), 100 * correct / total))
```
Finally, we can test the model and plot the confusion matrix:
```python
from sklearn.metrics import confusion_matrix
import matplotlib.pyplot as plt

net.eval()
with torch.no_grad():
    test_outputs = net(X_test.unsqueeze(1))
    test_predicted = torch.round(torch.sigmoid(test_outputs.squeeze()))
test_total = y_test.size(0)
test_correct = (test_predicted == y_test.float()).sum().item()
print('test_acc: %.3f' % (100 * test_correct / test_total))
cm = confusion_matrix(y_test.numpy(), test_predicted.numpy())
plt.imshow(cm, cmap='binary')
plt.show()
```
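If you want labeled axes and annotated cells instead of a bare imshow, scikit-learn's ConfusionMatrixDisplay can render the same matrix; a small optional sketch:
```python
from sklearn.metrics import ConfusionMatrixDisplay
import matplotlib.pyplot as plt

# Plot the confusion matrix computed above with annotated cells
ConfusionMatrixDisplay(confusion_matrix=cm).plot(cmap='Blues')
plt.show()
```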
The complete code is as follows:
```python
import pandas as pd
import numpy as np
import torch
import torch.nn as nn
import torch.optim as optim
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.metrics import confusion_matrix
import matplotlib.pyplot as plt

# Read the Excel file
data = pd.read_excel('data.xlsx', header=None)
# Split features and labels
X = data.iloc[:, :-1].values
y = data.iloc[:, -1].values
# Split the dataset
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)
# Standardize the data
scaler = StandardScaler()
X_train = scaler.fit_transform(X_train)
X_test = scaler.transform(X_test)
# Convert to tensors
X_train = torch.from_numpy(X_train).float()
X_test = torch.from_numpy(X_test).float()
y_train = torch.from_numpy(y_train).long()
y_test = torch.from_numpy(y_test).long()

# Build the datasets and data loaders
from torch.utils.data import TensorDataset, DataLoader
train_data = TensorDataset(X_train, y_train)
train_loader = DataLoader(train_data, batch_size=64, shuffle=True)
val_data = TensorDataset(X_test, y_test)
val_loader = DataLoader(val_data, batch_size=64, shuffle=False)

# Build the multi-channel 1D convolutional neural network model
class Net(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv1 = nn.Conv1d(in_channels=1, out_channels=32, kernel_size=3)
        self.pool1 = nn.MaxPool1d(kernel_size=2)
        self.conv2 = nn.Conv1d(in_channels=32, out_channels=64, kernel_size=3)
        self.pool2 = nn.MaxPool1d(kernel_size=2)
        # 64 * 23 assumes 100 input features; adjust to match your data
        self.fc1 = nn.Linear(in_features=64 * 23, out_features=128)
        self.fc2 = nn.Linear(in_features=128, out_features=1)
        self.drop = nn.Dropout(p=0.5)
        self.relu = nn.ReLU()

    def forward(self, x):
        x = self.conv1(x)
        x = self.relu(x)
        x = self.pool1(x)
        x = self.conv2(x)
        x = self.relu(x)
        x = self.pool2(x)
        x = x.view(-1, 64 * 23)
        x = self.fc1(x)
        x = self.relu(x)
        x = self.drop(x)
        x = self.fc2(x)
        return x

net = Net()

# Train the model
criterion = nn.BCEWithLogitsLoss()
optimizer = optim.Adam(net.parameters(), lr=0.001)
epochs = 50
for epoch in range(epochs):
    running_loss = 0.0
    net.train()
    for i, data in enumerate(train_loader, 0):
        inputs, labels = data
        optimizer.zero_grad()
        outputs = net(inputs.unsqueeze(1))
        loss = criterion(outputs.squeeze(), labels.float())
        loss.backward()
        optimizer.step()
        running_loss += loss.item()
    net.eval()
    correct = 0
    total = 0
    with torch.no_grad():
        for data in val_loader:
            inputs, labels = data
            outputs = net(inputs.unsqueeze(1))
            predicted = torch.round(torch.sigmoid(outputs.squeeze()))
            total += labels.size(0)
            correct += (predicted == labels.float()).sum().item()
    print('[%d, %5d] loss: %.3f val_acc: %.3f' %
          (epoch + 1, i + 1, running_loss / len(train_loader), 100 * correct / total))

# Test the model and plot the confusion matrix
net.eval()
with torch.no_grad():
    test_outputs = net(X_test.unsqueeze(1))
    test_predicted = torch.round(torch.sigmoid(test_outputs.squeeze()))
test_total = y_test.size(0)
test_correct = (test_predicted == y_test.float()).sum().item()
print('test_acc: %.3f' % (100 * test_correct / test_total))
cm = confusion_matrix(y_test.numpy(), test_predicted.numpy())
plt.imshow(cm, cmap='binary')
plt.show()
```