2. Maximum margin hyperplane for linearly separable classes
Suppose we have two linearly separable classes of training vectors. A support vector machine defined on such a training set is a classifier with decision function

f(x) = sgn{⟨w, x⟩ + b}, (2.1)

where ⟨w, x⟩ + b = 0 is the equation of the hyperplane that separates the two classes (see (1.3)), has maximum margin, and is equidistant from both classes (Figure 5). In this section we consider the optimization problem that is solved to obtain the parameters w, b of this hyperplane, and explain where this problem comes from. We also consider important properties of the maximum margin hyperplane. Some concepts from calculus and optimization theory that are used in this section are briefly reviewed in the Appendix.
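As a minimal sketch of the decision function (2.1), the following Python snippet classifies a point by the sign of ⟨w, x⟩ + b. The particular hyperplane parameters w and b here are hypothetical, chosen only for illustration:

```python
import numpy as np

def svm_decision(x, w, b):
    """Decision function f(x) = sgn(<w, x> + b) of a linear SVM.

    Returns +1 for points on the C1 side of the hyperplane and -1
    for points on the C2 side (sign convention assumed)."""
    return np.sign(np.dot(w, x) + b)

# Hypothetical separating hyperplane: w = (1, 1), b = -1
w, b = np.array([1.0, 1.0]), -1.0
print(svm_decision(np.array([2.0, 2.0]), w, b))  # point on the C1 side -> 1.0
print(svm_decision(np.array([0.0, 0.0]), w, b))  # point on the C2 side -> -1.0
```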
Figure 5: Maximum margin hyperplane for two linearly separable classes: d = ρ(π, C₁) = ρ(π, C₂) is maximized.
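The distances ρ(π, C₁) and ρ(π, C₂) in the caption are distances from each class to the hyperplane π. A short sketch of how such distances can be computed, assuming the usual convention that the distance from a point x to the hyperplane ⟨w, x⟩ + b = 0 is |⟨w, x⟩ + b| / ‖w‖, and that the distance from a class to π is the minimum over its points:

```python
import numpy as np

def point_to_hyperplane(x, w, b):
    """Distance from point x to the hyperplane pi: <w, x> + b = 0."""
    return abs(np.dot(w, x) + b) / np.linalg.norm(w)

def class_to_hyperplane(X, w, b):
    """rho(pi, C): distance from class C (rows of X) to the hyperplane,
    taken as the minimum over the class's points (assumed convention)."""
    return min(point_to_hyperplane(x, w, b) for x in X)

# Hypothetical hyperplane x2 = 1 in the plane: w = (0, 1), b = -1
w, b = np.array([0.0, 1.0]), -1.0
print(point_to_hyperplane(np.array([2.0, 3.0]), w, b))  # -> 2.0
```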
2.1. Primary optimization problem
Parameters w, b of the SVM hyperplane can be found as a solution to the following optimization problem:

(1/2) ‖w‖² → min over w, b (2.2)

subject to

⟨w, x⟩ + b ≥ 1, ∀x ∈ C₁, (2.3)
⟨w, x⟩ + b ≤ −1, ∀x ∈ C₂, (2.4)

where C₁ and C₂ are the two classes of training examples.
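The problem (2.2)–(2.4) is a quadratic program with linear constraints, so it can be handed to a general-purpose solver. A minimal sketch using SciPy's SLSQP method on a hypothetical two-dimensional training set (the data points and the choice of solver are assumptions, not part of the original text):

```python
import numpy as np
from scipy.optimize import minimize

# Toy linearly separable training set in the plane (assumed data)
C1 = np.array([[2.0, 2.0], [3.0, 3.0]])   # constraints (2.3): <w,x> + b >= +1
C2 = np.array([[0.0, 0.0], [1.0, 0.0]])   # constraints (2.4): <w,x> + b <= -1

def objective(p):                          # p = (w1, w2, b)
    w = p[:2]
    return 0.5 * np.dot(w, w)              # (1/2)||w||^2 from (2.2)

# SLSQP expects inequality constraints in the form g(p) >= 0
cons = ([{'type': 'ineq', 'fun': lambda p, x=x: np.dot(p[:2], x) + p[2] - 1.0}
         for x in C1]
        + [{'type': 'ineq', 'fun': lambda p, x=x: -(np.dot(p[:2], x) + p[2]) - 1.0}
           for x in C2])

res = minimize(objective, x0=np.zeros(3), constraints=cons, method='SLSQP')
w, b = res.x[:2], res.x[2]
print(w, b)
```

For this toy data the support vectors are (2, 2) and (1, 0), and the solver recovers the maximum margin hyperplane w ≈ (0.4, 0.8), b ≈ −1.4, for which each constraint in (2.3)–(2.4) holds and the support vectors satisfy it with equality.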