The following code:
```
def train(self, X, y, learning_rate, num_epochs):
    loss_history = []
    for i in range(num_epochs):
        y_hat = self.forward(X)
        loss = np.mean(np.square(y_hat - y))
        loss_history.append(loss)
        self.backward(X, y, learning_rate, num_epochs)
        if i % 100 == 0:
            print('Epoch', i, '- Loss:', loss)
    return loss_history

# Load the data
X = data.iloc[:, :-1].values
y = data.iloc[:, -1:].values
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

# Train the model
input_dim = 13
hidden_dim = 25
output_dim = 1
nn = NeuralNetwork(input_dim, hidden_dim, output_dim)
learning_rate = 0.05
num_epochs = 2000
loss_history = nn.train(X, y, learning_rate, num_epochs)
plt.plot(loss_history)
```
reports the error `TypeError: NeuralNetwork.backward() takes 4 positional arguments but 5 were given`.
When you call `self.backward(X, y, learning_rate, num_epochs)`, Python passes five positional arguments: the instance `self` (supplied implicitly on every method call) plus `X`, `y`, `learning_rate` and `num_epochs`. However, `NeuralNetwork.backward()` is defined with only four parameters, namely `self`, `X`, `y` and `learning_rate`, so the extra `num_epochs` argument triggers the `TypeError`.
Removing `num_epochs` from the call to `self.backward()` inside `train()` resolves the error; the epoch loop already lives in `train()`, so `backward()` does not need to know the number of epochs.
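To see where the count of five comes from, here is a small self-contained sketch that reproduces the same `TypeError`. The class body is a placeholder (only the signature matters); the point is that `self` is counted as one of the positional arguments:
```
class NeuralNetwork:
    # placeholder class for illustration; only the backward() signature matters here
    def backward(self, X, y, learning_rate):   # 4 positional parameters, counting self
        pass

nn = NeuralNetwork()
nn.backward(None, None, 0.05)          # 4 arguments (self is implicit) -> OK
nn.backward(None, None, 0.05, 2000)    # 5 arguments -> TypeError: takes 4 positional arguments but 5 were given
```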
The corrected code looks like this:
```
import numpy as np
import matplotlib.pyplot as plt
from sklearn.model_selection import train_test_split

class NeuralNetwork:
    # ... __init__, forward and backward stay exactly as you already have them ...

    def train(self, X, y, learning_rate, num_epochs):
        loss_history = []
        for i in range(num_epochs):
            y_hat = self.forward(X)
            loss = np.mean(np.square(y_hat - y))   # MSE loss
            loss_history.append(loss)
            self.backward(X, y, learning_rate)     # num_epochs removed from the call
            if i % 100 == 0:
                print('Epoch', i, '- Loss:', loss)
        return loss_history

# Load the data (data is the DataFrame you read in earlier)
X = data.iloc[:, :-1].values
y = data.iloc[:, -1:].values
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

# Train the model
input_dim = 13
hidden_dim = 25
output_dim = 1
nn = NeuralNetwork(input_dim, hidden_dim, output_dim)
learning_rate = 0.05
num_epochs = 2000
loss_history = nn.train(X_train, y_train, learning_rate, num_epochs)  # train on the training split

plt.plot(loss_history)
plt.xlabel('Epoch')
plt.ylabel('MSE loss')
plt.show()
```
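In case it helps to see `backward()` in context, below is a minimal sketch of a one-hidden-layer regression network whose `forward()`/`backward()` are consistent with the calls above. This is an assumption about your class (sigmoid hidden layer, linear output, MSE gradient, plain gradient descent); your actual implementation may differ:
```
import numpy as np

class NeuralNetwork:
    """Sketch: one-hidden-layer MLP for regression (sigmoid hidden layer, linear output)."""
    def __init__(self, input_dim, hidden_dim, output_dim):
        # small random weights, zero biases
        self.W1 = np.random.randn(input_dim, hidden_dim) * 0.01
        self.b1 = np.zeros((1, hidden_dim))
        self.W2 = np.random.randn(hidden_dim, output_dim) * 0.01
        self.b2 = np.zeros((1, output_dim))

    def forward(self, X):
        self.z1 = X @ self.W1 + self.b1
        self.a1 = 1.0 / (1.0 + np.exp(-self.z1))    # sigmoid activation
        self.y_hat = self.a1 @ self.W2 + self.b2    # linear output for regression
        return self.y_hat

    def backward(self, X, y, learning_rate):
        m = X.shape[0]
        # gradient of the MSE loss w.r.t. the linear output
        d_out = 2.0 * (self.y_hat - y) / m
        dW2 = self.a1.T @ d_out
        db2 = d_out.sum(axis=0, keepdims=True)
        # backpropagate through the sigmoid hidden layer
        d_hidden = (d_out @ self.W2.T) * self.a1 * (1.0 - self.a1)
        dW1 = X.T @ d_hidden
        db1 = d_hidden.sum(axis=0, keepdims=True)
        # gradient-descent update
        self.W2 -= learning_rate * dW2
        self.b2 -= learning_rate * db2
        self.W1 -= learning_rate * dW1
        self.b1 -= learning_rate * db1
```
With this layout, `backward()` only needs `X`, `y` and the learning rate to take a single gradient step; looping over epochs is the job of `train()`.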
Hope this helps!