What does Y_train = X_train.unsqueeze(1), Y_train.unsqueeze(1) do?
These two calls expand each sample in the dataset from one dimension to two. Concretely, X_train.unsqueeze(1) turns the training-data tensor X_train of shape (N,) into a tensor of shape (N, 1), where N is the number of training samples; likewise, Y_train.unsqueeze(1) turns the label tensor Y_train of shape (N,) into shape (N, 1). This prepares the data for subsequent model training and prediction, since deep-learning models usually expect two-dimensional input whose first dimension indexes the samples and whose second dimension holds each sample's features or labels. Note that, as written, the line assigns a two-element tuple to Y_train alone; the intended form is almost certainly the tuple assignment X_train, Y_train = X_train.unsqueeze(1), Y_train.unsqueeze(1).
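A minimal sketch of the corrected tuple assignment (the tensor contents and sizes here are illustrative):

import torch

# Hypothetical 1-D data: N = 5 samples and their labels.
X_train = torch.arange(5, dtype=torch.float32)  # shape (5,)
Y_train = torch.arange(5, dtype=torch.float32)  # shape (5,)

# One statement reshapes both tensors from (N,) to (N, 1).
X_train, Y_train = X_train.unsqueeze(1), Y_train.unsqueeze(1)

print(X_train.shape, Y_train.shape)  # torch.Size([5, 1]) torch.Size([5, 1])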
Related question
Y_train = X_train.unsqueeze(1), Y_train.unsqueeze(1)
This code converts the 1-dimensional tensors X_train and Y_train into 2-dimensional tensors. The .unsqueeze(1) call inserts a new dimension of size 1 at index 1 of each tensor (inserting it at the beginning would instead be .unsqueeze(0)).
For example, if X_train was originally a tensor of shape (100,), this code would convert it into a tensor of shape (100,1). Similarly, if Y_train was originally a tensor of shape (100,), this code would convert it into a tensor of shape (100,1).
This conversion is typically needed when a model or loss function expects input of shape (batch_size, num_features) — for example, nn.Linear(1, ...) or nn.MSELoss with (N, 1) targets — or when two tensors must have matching shapes for elementwise operations.
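A short sketch showing where the new axis lands for index 0 versus index 1 (the tensor here is illustrative):

import torch

t = torch.zeros(100)         # shape (100,)
print(t.unsqueeze(0).shape)  # torch.Size([1, 100]) -- new axis at the front
print(t.unsqueeze(1).shape)  # torch.Size([100, 1]) -- new axis at index 1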
import torch
import torch.nn as nn
import numpy as np
import matplotlib.pyplot as plt
from torch import autograd

"""
Approximate the differential equation f'(x) = f(x) with a neural network,
subject to the initial condition f(0) = 1.
"""

class Net(nn.Module):
    def __init__(self, NL, NN):
        # NL: number of (linear, fully connected) hidden layers -- note that
        #     this implementation ignores NL and always builds one hidden layer.
        # NN: number of neurons per layer.
        super(Net, self).__init__()
        self.input_layer = nn.Linear(1, NN)
        # The original used width NN here; this version downsamples to NN/2.
        # In the author's experiments, keeping the width constant worked
        # better; more configurations remain to be tested.
        self.hidden_layer = nn.Linear(NN, int(NN / 2))
        self.output_layer = nn.Linear(int(NN / 2), 1)

    def forward(self, x):
        out = torch.tanh(self.input_layer(x))
        out = torch.tanh(self.hidden_layer(out))
        return self.output_layer(out)

net = Net(4, 20)  # nominally 4 hidden layers of 20 neurons (NL is unused, see above)
mse_cost_function = torch.nn.MSELoss(reduction='mean')  # mean squared error
optimizer = torch.optim.Adam(net.parameters(), lr=1e-4)

def ode_01(x, net):
    # Residual of the ODE: y - y' should equal 0.
    y = net(x)
    y_x = autograd.grad(y, x, grad_outputs=torch.ones_like(y), create_graph=True)[0]
    return y - y_x

plt.ion()  # interactive plotting
iterations = 200000
for epoch in range(iterations):
    optimizer.zero_grad()  # reset gradients

    # Loss for the initial condition: f(0) - 1 = 0.
    x_0 = torch.zeros(2000, 1)
    y_0 = net(x_0)
    mse_i = mse_cost_function(y_0, torch.ones(2000, 1))

    # Loss for the equation residual at random collocation points in [0, 2].
    x_in = np.random.uniform(low=0.0, high=2.0, size=(2000, 1))
    pt_x_in = torch.from_numpy(x_in).float().requires_grad_(True)
    pt_y_collection = ode_01(pt_x_in, net)
    pt_all_zeros = torch.zeros(2000, 1)
    mse_f = mse_cost_function(pt_y_collection, pt_all_zeros)  # y - y' = 0

    loss = mse_i + mse_f
    loss.backward()   # backpropagation
    optimizer.step()  # equivalent to: theta_new = theta_old - alpha * dJ/dtheta

    if epoch % 1000 == 0:
        y = torch.exp(pt_x_in)   # ground truth: the exact solution e^x
        y_train0 = net(pt_x_in)  # network prediction
        print(f'epoch {epoch} - training loss: {loss.item()} - y(0): {y_0.mean().item()}')
        plt.cla()
        plt.scatter(pt_x_in.detach().numpy(), y.detach().numpy())
        plt.scatter(pt_x_in.detach().numpy(), y_train0.detach().numpy(), c='red')
        plt.pause(0.1)
This Python code defines a small fully connected network (the Net class, built on torch, torch.nn, numpy, and matplotlib.pyplot) and trains it as a physics-informed neural network to solve the ODE f'(x) = f(x) with initial condition f(0) = 1, whose exact solution is f(x) = e^x. The total loss is the sum of an initial-condition term (the MSE between net(0) and 1) and a residual term (the MSE of y - y' at 2000 points sampled uniformly from [0, 2]); every 1000 epochs the prediction is plotted against the true exponential.
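After training, a quick sanity check is to compare the network against the exact solution e^x on a fixed grid. A minimal sketch, assuming the trained net from the code above is still in scope:

import torch

# Evaluate on an evenly spaced grid over the training interval [0, 2].
x_test = torch.linspace(0.0, 2.0, 100).unsqueeze(1)  # shape (100, 1)
with torch.no_grad():
    y_pred = net(x_test)
y_true = torch.exp(x_test)

# Maximum absolute error against the exact solution f(x) = e^x.
print('max abs error:', (y_pred - y_true).abs().max().item())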