Experiment 1: Linear Regression
August 27, 2018
1 Description
This first exercise will give you practice with linear regression. These exercises
have been extensively tested with Matlab, but they should also work in Octave,
which has been called a “free version of Matlab”. If you are using Octave, be
sure to install the Image package as well (available for Windows as an option in
the installer, and available for Linux from Octave-Forge).
2 Linear Regression
Recall that the linear regression model is
$$h_\theta(x) = \theta^T x = \sum_{j=0}^{n} \theta_j x_j, \qquad (1)$$
where $\theta$ is the parameter vector we need to optimize and $x$ is the $(n+1)$-dimensional feature vector$^1$. Given a training set $\{x^{(i)}\}_{i=1,\cdots,m}$, our goal is to find the optimal value of $\theta$ such that the objective function $J(\theta)$, shown in Equation (2), is minimized.
$$J(\theta) = \frac{1}{2m} \sum_{i=1}^{m} \left( h_\theta(x^{(i)}) - y^{(i)} \right)^2 \qquad (2)$$
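Although the exercise itself is meant to be done in Matlab or Octave, the cost function of Equation (2) can be sketched in Python with NumPy for illustration. The function name `compute_cost` and the tiny data set below are my own choices, not part of the exercise; the first column of the design matrix is the intercept term $x_0 = 1$ mentioned in the footnote.

```python
import numpy as np

def compute_cost(X, y, theta):
    """Cost J(theta) from Equation (2): squared error scaled by 1/(2m).

    X is the m x (n+1) design matrix (first column all ones for the
    intercept term x_0 = 1), y is the m-vector of targets, and theta
    is the (n+1)-vector of parameters.
    """
    m = len(y)
    residuals = X @ theta - y  # h_theta(x^(i)) - y^(i) for every i
    return (residuals @ residuals) / (2 * m)

# Tiny worked example; the intercept column of ones is already included.
X = np.array([[1.0, 1.0],
              [1.0, 2.0],
              [1.0, 3.0]])
y = np.array([1.0, 2.0, 3.0])

print(compute_cost(X, y, np.array([0.0, 1.0])))  # perfect fit -> 0.0
print(compute_cost(X, y, np.zeros(2)))
```

For $\theta = [0, 1]^T$ the predictions match $y$ exactly, so the cost is zero; for $\theta = 0$ the residuals are $-1, -2, -3$ and the cost is $14/6$.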
One optimization approach is the gradient descent algorithm. The algorithm proceeds iteratively; in each iteration, we update the parameter $\theta$ according to the following rule
$$\theta_j := \theta_j - \alpha \frac{1}{m} \sum_{i=1}^{m} \left( h_\theta(x^{(i)}) - y^{(i)} \right) x_j^{(i)} \qquad (3)$$
where $\alpha$ is the so-called “learning rate”, which we can tune to control the convergence of gradient descent.
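The update rule of Equation (3), applied simultaneously to every component $\theta_j$, can be sketched in Python with NumPy (the exercise itself uses Matlab/Octave; the function name `gradient_descent` and the synthetic data are illustrative assumptions, not part of the exercise):

```python
import numpy as np

def gradient_descent(X, y, theta, alpha, num_iters):
    """Batch gradient descent: apply the update rule of Equation (3)
    to all components of theta at once, for num_iters iterations."""
    m = len(y)
    for _ in range(num_iters):
        gradient = X.T @ (X @ theta - y) / m  # vectorized form of the sum in (3)
        theta = theta - alpha * gradient
    return theta

# Fit y = 1 + 2*x on a tiny synthetic set; first column of X is the intercept.
X = np.array([[1.0, 0.0],
              [1.0, 1.0],
              [1.0, 2.0],
              [1.0, 3.0]])
y = 1.0 + 2.0 * X[:, 1]
theta = gradient_descent(X, y, np.zeros(2), alpha=0.1, num_iters=5000)
print(theta)  # approaches [1.0, 2.0]
```

With a learning rate that is too large the iterates diverge instead of converging, which is why $\alpha$ must be tuned; here $\alpha = 0.1$ is small enough for this data.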
$^1$A training example is actually $n$-dimensional, i.e., $x = [x_1, x_2, \cdots, x_n]$. For each training example, we add an extra intercept term $x_0 = 1$; the resulting feature vector is therefore $(n+1)$-dimensional.