Best model on epoch: 0, val_mse
Posted: 2024-05-30 22:07:51 · Views: 10
I'm sorry, but I need more information about the context of your question. Can you please provide more details about the model and the problem you are working on?
Related questions
Here is the complete code, but running it produces the error above. How can it be fixed?

```python
Batch_size = 9
DataSet = DataSet(np.array(x_train), list(y_train))
train_size = int(len(x_train) * 0.8)
test_size = len(y_train) - train_size
train_dataset, test_dataset = torch.utils.data.random_split(DataSet, [train_size, test_size])
TrainDataloader = Data.DataLoader(train_dataset, batch_size=Batch_size, shuffle=False, drop_last=True)
TestDataloader = Data.DataLoader(test_dataset, batch_size=Batch_size, shuffle=False, drop_last=True)

model = Transformer(n_encoder_inputs=3, n_decoder_inputs=3, Sequence_length=1).to(device)
epochs = 10
optimizer = torch.optim.Adam(model.parameters(), lr=0.0001)
criterion = torch.nn.MSELoss().to(device)
val_loss = []
train_loss = []
best_best_loss = 10000000
for epoch in tqdm(range(epochs)):
    train_epoch_loss = []
    for index, (inputs, targets) in enumerate(TrainDataloader):
        inputs = torch.tensor(inputs).to(device)
        targets = torch.tensor(targets).to(device)
        inputs = inputs.float()
        targets = targets.float()
        tgt_in = torch.rand((Batch_size, 1, 3))
        outputs = model(inputs, tgt_in)
        loss = criterion(outputs.float(), targets.float())
        print("loss", loss)
        loss.backward()
        optimizer.step()
        train_epoch_loss.append(loss.item())
    train_loss.append(np.mean(train_epoch_loss))
    val_epoch_loss = _test()
    val_loss.append(val_epoch_loss)
    print("epoch:", epoch, "train_epoch_loss:", train_epoch_loss, "val_epoch_loss:", val_epoch_loss)
    if val_epoch_loss < best_best_loss:
        best_best_loss = val_epoch_loss
        best_model = model
        print("best_best_loss ---------------------------", best_best_loss)
        torch.save(best_model.state_dict(), 'best_Transformer_trainModel.pth')
```
Judging from the complete code you provided, the error most likely originates in this part:
```python
for index, (inputs, targets) in enumerate(TrainDataloader):
    # ...
    tgt_in = torch.rand((Batch_size, 1, 3))
    outputs = model(inputs, tgt_in)
    # ...
```
On every iteration you use `torch.rand` to create a random tensor `tgt_in` of shape `(Batch_size, 1, 3)` and feed it to the model as one of its inputs. If that layout does not match the decoder-input layout the model expects, you get a shape-mismatch error.
To fix it, adjust the dimensions of `tgt_in` to match what the model expects. In your code `n_decoder_inputs=3` and `Sequence_length=1`, so if the model expects its decoder input laid out as `(Batch_size, n_decoder_inputs, Sequence_length)`, you can change the line:
```python
tgt_in = torch.rand((Batch_size, 1, 3))
```
更改为:
```python
tgt_in = torch.rand((Batch_size, 3, 1))
```
Now the dimensions of `tgt_in` match the model's expected input. Also note that the model was moved to `device` but `tgt_in` was not, so you should call `tgt_in = tgt_in.to(device)` as well. Try these changes and rerun the code to see whether the error goes away.
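Which of the two layouts is correct depends on how your particular `Transformer` implementation orders its decoder-input dimensions, so check its `forward` method. A quick NumPy sketch of the two candidate shapes, using the values from the code above:

```python
import numpy as np

Batch_size, Sequence_length, n_decoder_inputs = 9, 1, 3  # values from the code above

# layout A: (batch, sequence, features) -- what the original line produces
tgt_in_a = np.random.rand(Batch_size, Sequence_length, n_decoder_inputs)
assert tgt_in_a.shape == (9, 1, 3)

# layout B: (batch, features, sequence) -- the suggested alternative;
# it is just a transpose of the last two axes
tgt_in_b = tgt_in_a.transpose(0, 2, 1)
assert tgt_in_b.shape == (9, 3, 1)
```

Printing `outputs.shape` and `targets.shape` inside the loop is usually the fastest way to see which layout the model actually consumes.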
I now have three DGL heterographs. For example:

```python
train_g_data = {
    ('drug', 'interacts', 'drug'): (train_edge_idx[0], train_edge_idx[1]),
    ('drug', 'interacts_reverse', 'drug'): (train_edge_idx[1], train_edge_idx[0]),
    ('drug', 'acts_on', 'protein'): (drug_protein[0], drug_protein[1]),
}
train_g = dgl.heterograph(train_g_data)
```

`val_g` and `test_g` have the same structure, and the graph features are already attached. Please write Python code that trains a GraphSAGE model in DGL to predict the `DDI_feat` attribute of the `interacts` edges, i.e. the drug–drug interaction score (a regression task), and explain how it works.
First, import the required libraries, including DGL and PyTorch:
```python
import dgl
import torch
import torch.nn as nn
import dgl.function as fn
from dgl.nn.pytorch import SAGEConv
```
Next, define the GraphSAGE model:
```python
class GraphSAGE(nn.Module):
    def __init__(self, in_feats, out_feats, n_layers, activation, aggregator_type):
        super(GraphSAGE, self).__init__()
        self.n_layers = n_layers
        self.activation = activation
        self.layers = nn.ModuleList()
        self.aggregator_type = aggregator_type
        # input layer
        self.layers.append(SAGEConv(in_feats, out_feats, self.aggregator_type))
        # hidden layers
        for i in range(1, n_layers):
            self.layers.append(SAGEConv(out_feats, out_feats, self.aggregator_type))

    def forward(self, blocks, x):
        h = x
        for l, (layer, block) in enumerate(zip(self.layers, blocks)):
            # the destination nodes come first among a block's source nodes
            h_dst = h[:block.number_of_dst_nodes()]
            h = layer(block, (h, h_dst))
            if l != self.n_layers - 1:
                h = self.activation(h)
        return h
```
Here we define a GraphSAGE class parameterized by the input feature dimension, the output feature dimension, the number of layers, the activation function, and the aggregator type. In the constructor we build a ModuleList holding all the sample-and-aggregate convolution layers: the first layer maps in_feats to out_feats, and every subsequent layer maps out_feats to out_feats, all using the chosen aggregator type.
In the forward function we assign the input features x to h and run the layers one by one. For each layer we first slice h to obtain h_dst, the features of the current block's destination nodes (which always come first among its source nodes), then apply the convolution to the (source, destination) pair and assign the result back to h. If the layer is not the last one, we also pass h through the activation function. Finally we return the output of the last layer.
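To make the aggregation step concrete, here is a minimal NumPy sketch of a single mean-aggregator GraphSAGE update for one node. The tiny graph and the (identity) weight matrices are made up purely for illustration; in DGL the weights are learned:

```python
import numpy as np

# tiny graph: node 0 has neighbors {1, 2}; each node carries a 2-d feature
h = np.array([[1.0, 0.0],
              [0.0, 2.0],
              [4.0, 2.0]])
neighbors = {0: [1, 2]}

def sage_layer_mean(h, node, nbrs, W_self, W_neigh):
    """One GraphSAGE update for `node`: combine its own features with the
    mean of its neighbors' features, then apply a ReLU."""
    h_neigh = h[nbrs].mean(axis=0)              # mean aggregation: [2., 2.]
    out = h[node] @ W_self + h_neigh @ W_neigh  # learned combination
    return np.maximum(out, 0.0)                 # ReLU activation

W_self = np.eye(2)   # illustrative weights only
W_neigh = np.eye(2)
h0_new = sage_layer_mean(h, 0, neighbors[0], W_self, W_neigh)
print(h0_new)  # → [3. 2.]
```

This is exactly what each `SAGEConv(..., 'mean')` layer does for every destination node in a block, just vectorized over the whole graph.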
Next we can define the training routine, covering data loading, model training, and model evaluation:
```python
import copy

def train(model, optimizer, loss_fn, train_loader, val_loader, device, epochs):
    best_val_loss = float('inf')
    best_model = None
    for epoch in range(epochs):
        model.train()
        train_loss = 0
        for batch, (input_nodes, pair_graph, blocks) in enumerate(train_loader):
            blocks = [b.to(device) for b in blocks]
            pair_graph = pair_graph.to(device)
            # input features of the sampled drug nodes (here assumed to be
            # stored under the name 'feat'); the regression targets are the
            # DDI_feat attribute on the sampled 'interacts' edges
            features = blocks[0].srcnodes['drug'].data['feat']
            labels = pair_graph.edges['interacts'].data['DDI_feat']
            h = model(blocks, features)
            # score each sampled drug pair with the dot product of its
            # endpoint embeddings
            with pair_graph.local_scope():
                pair_graph.nodes['drug'].data['h'] = h
                pair_graph.apply_edges(fn.u_dot_v('h', 'h', 'score'), etype='interacts')
                output = pair_graph.edges['interacts'].data['score']
            loss = loss_fn(output.squeeze(), labels.float())
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
            train_loss += loss.item()
        train_loss /= (batch + 1)
        val_loss = evaluate(model, loss_fn, val_loader, device)
        print('Epoch {:d} | Train Loss {:.4f} | Val Loss {:.4f}'.format(epoch + 1, train_loss, val_loss))
        if val_loss < best_val_loss:
            best_val_loss = val_loss
            # keep an independent snapshot; `best_model = model` would only
            # store a reference to the still-training model
            best_model = copy.deepcopy(model)
    return best_model

def evaluate(model, loss_fn, loader, device):
    model.eval()
    loss = 0
    with torch.no_grad():
        for batch, (input_nodes, pair_graph, blocks) in enumerate(loader):
            blocks = [b.to(device) for b in blocks]
            pair_graph = pair_graph.to(device)
            features = blocks[0].srcnodes['drug'].data['feat']
            labels = pair_graph.edges['interacts'].data['DDI_feat']
            h = model(blocks, features)
            with pair_graph.local_scope():
                pair_graph.nodes['drug'].data['h'] = h
                pair_graph.apply_edges(fn.u_dot_v('h', 'h', 'score'), etype='interacts')
                output = pair_graph.edges['interacts'].data['score']
            loss += loss_fn(output.squeeze(), labels.float()).item()
    return loss / (batch + 1)
```
In the training function we first set up a best_val_loss variable to track the best validation loss and the corresponding model. For each epoch we put the model in training mode and iterate over the training batches: each batch is moved to the device, the model is run on the message-flow blocks and input features, and the loss is computed against the DDI_feat targets of the batch's 'interacts' edges. We then zero the gradients, backpropagate, and update the parameters, accumulating the batch loss into train_loss and averaging it over the batches. Whenever the validation loss improves we store a deep copy of the model, since a plain assignment would only keep a reference to the still-training model.
At the end of every epoch we call the evaluate function on the validation set. In evaluate we put the model in evaluation mode, disable gradient tracking, process the batches in the same way as during training, accumulate the loss, and return its average.
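One subtlety worth stressing: writing `best_model = model` only stores a reference, so "best_model" silently keeps changing as training continues; snapshotting with `copy.deepcopy` freezes the state. A minimal illustration with a plain object standing in for the model:

```python
import copy

class Model:
    def __init__(self):
        self.weight = 1.0

model = Model()

alias = model                    # plain assignment: both names point at the same object
snapshot = copy.deepcopy(model)  # deep copy: an independent frozen state

model.weight = 2.0               # "training" keeps mutating the live model
assert alias.weight == 2.0       # the alias changed along with it
assert snapshot.weight == 1.0    # the deep copy kept the earlier state
```

Saving `model.state_dict()` to disk at each improvement, as the question's own Transformer loop does, achieves the same effect.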
Finally, we can load the datasets and start training:
```python
from dgl.data.utils import load_graphs

# load_graphs returns a (graph_list, label_dict) pair
(train_g, val_g, test_g), _ = load_graphs('data.dgl')

# batch over the 'interacts' edge IDs; sample all neighbors for each of the 2 layers
sampler = dgl.dataloading.MultiLayerFullNeighborSampler(2)
train_loader = dgl.dataloading.EdgeDataLoader(
    train_g, {'interacts': train_g.edges(etype='interacts', form='eid')}, sampler,
    batch_size=1024, shuffle=True, drop_last=False)
val_loader = dgl.dataloading.EdgeDataLoader(
    val_g, {'interacts': val_g.edges(etype='interacts', form='eid')}, sampler,
    batch_size=1024, shuffle=False, drop_last=False)
test_loader = dgl.dataloading.EdgeDataLoader(
    test_g, {'interacts': test_g.edges(etype='interacts', form='eid')}, sampler,
    batch_size=1024, shuffle=False, drop_last=False)

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
model = GraphSAGE(256, 16, 2, nn.ReLU(), 'mean').to(device)  # 16-d drug embeddings
optimizer = torch.optim.Adam(model.parameters(), lr=0.01)
loss_fn = nn.MSELoss()
best_model = train(model, optimizer, loss_fn, train_loader, val_loader, device, epochs=100)
test_loss = evaluate(best_model, loss_fn, test_loader, device)
print('Test Loss {:.4f}'.format(test_loss))
```
When loading the data we use load_graphs to read the three graphs (training, validation, and test) from the given path; note that load_graphs returns a (graph_list, label_dict) pair. We then build an edge data loader for each split, batching over the 'interacts' edge IDs with a full-neighbor sampler. Training runs on CUDA if available, otherwise on the CPU. Next we instantiate the GraphSAGE model, the optimizer, and the loss function, call train to fit the model, evaluate the best model on the test set, and print the test loss.
The idea behind GraphSAGE is to build each node's representation by aggregating the features of its (sampled) neighbors. At every layer, a node's own representation is combined with an aggregate of its neighbors' representations, passed through a learned linear transformation and a nonlinearity, and handed to the next layer; stacking k layers lets information flow in from the k-hop neighborhood. Different aggregators can be used, such as mean, max-pooling, or LSTM aggregation. Because the target here lives on edges rather than nodes, the learned embeddings of the two endpoint drugs are combined (for example with a dot product or a small MLP) to produce the predicted interaction score.
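To make the edge-scoring step concrete, here is a NumPy sketch of the dot-product predictor (the embedding values are made up for illustration; `fn.u_dot_v` in the training code computes exactly this for every sampled edge):

```python
import numpy as np

# illustrative 4-dimensional GraphSAGE embeddings for two drug nodes u and v
h_u = np.array([0.5, 1.0, 0.0, 2.0])
h_v = np.array([1.0, 0.5, 1.0, 0.5])

# predicted interaction score for the edge (u, v): dot product of the
# endpoint embeddings
score = float(h_u @ h_v)
print(score)  # → 2.0
```

Training with MSELoss against the DDI_feat labels then pushes these dot products toward the observed interaction scores.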