To clarify, let us rewrite the above objective function in the form
$$
\min_{w,b} \frac{1}{2} ||w||^2 + C \sum_{i=1}^n \max\left(0, 1 - y_i(w^T x_i + b)\right)
$$
where $w$ and $b$ are the weight vector and bias term, respectively, $C$ is a hyperparameter controlling the trade-off between maximizing the margin and minimizing the classification error, and $x_i$ and $y_i$ are the features and label of the $i$-th training instance. The first term $\frac{1}{2} ||w||^2$ is a regularization term that penalizes large values of $w$ and helps to avoid overfitting. The second term is the hinge loss, which penalizes the model whenever the predicted score $w^T x_i + b$ fails to lie on the correct side of the decision boundary with a margin of at least 1 (i.e., whenever $y_i(w^T x_i + b) < 1$), with a penalty that grows linearly in the size of the violation. The objective of the SVM is to find the values of $w$ and $b$ that minimize this objective function.
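The objective above can be evaluated directly. A minimal NumPy sketch (the function name and array shapes are illustrative, not part of the original text):

```python
import numpy as np

def svm_objective(w, b, X, y, C):
    """Soft-margin SVM objective: 0.5 * ||w||^2 + C * sum of hinge losses.

    X: (n, d) feature matrix; y: (n,) labels in {-1, +1}.
    """
    margins = y * (X @ w + b)              # y_i (w^T x_i + b) for each instance
    hinge = np.maximum(0.0, 1.0 - margins) # hinge loss per instance
    return 0.5 * np.dot(w, w) + C * np.sum(hinge)
```

For instance, with $w = (1, 0)$, $b = 0$, $C = 1$, a point at $(2, 0)$ with label $+1$ satisfies the margin ($y_i(w^T x_i + b) = 2 \geq 1$) and contributes no hinge loss, while a point at $(0.5, 0)$ with label $+1$ has margin $0.5$ and contributes $1 - 0.5 = 0.5$ to the loss.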