uments [22]. However, these auxiliary-resource-based learning methods are typically designed for specific applications and may be difficult to apply to other tasks.
3 Semi-Supervised Multi-Class Heterogeneous Domain
Adaptation
In this paper, we focus on multi-class heterogeneous domain adaptation prob-
lems. We assume in the source domain we have plenty of labeled instances while
in the target domain we only have a small number of labeled instances. The
two domains have disjoint input feature spaces, $\mathcal{X}_s = \mathbb{R}^{d_s}$ and $\mathcal{X}_t = \mathbb{R}^{d_t}$, where $d_s$ is the dimensionality of the source domain feature space and $d_t$ is the dimensionality of the target domain feature space, but share the same multi-class output label space $\mathcal{Y} = \{-1, 1\}^L$, where $L$ is the number of classes. In particular, let $X_s = [X_s^\ell; X_s^u] \in \mathbb{R}^{n_s \times d_s}$ denote the data matrix in the source domain, where each instance is represented as a row vector. $X_s^\ell \in \mathbb{R}^{\ell_s \times d_s}$ is the labeled source data matrix with a corresponding label matrix $Y_s \in \{-1, 1\}^{\ell_s \times L}$, and $X_s^u \in \mathbb{R}^{u_s \times d_s}$ is the unlabeled source data matrix. Each row of the label matrix contains only one positive 1, which indicates the class membership of the corresponding instance. Similarly, let $X_t = [X_t^\ell; X_t^u] \in \mathbb{R}^{n_t \times d_t}$ denote the data matrix in the target domain, where $X_t^\ell \in \mathbb{R}^{\ell_t \times d_t}$ is the labeled target data matrix with a corresponding label matrix $Y_t \in \{-1, 1\}^{\ell_t \times L}$ and $X_t^u \in \mathbb{R}^{u_t \times d_t}$ is the unlabeled target data matrix. The number of labeled target domain instances $\ell_t$ is small and the number of labeled source domain instances $\ell_s$ is much larger than $\ell_t$.
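For concreteness, the following is a minimal sketch of this data layout in Python/NumPy using synthetic matrices; all dimensionalities, instance counts, and the helper to_pm1_labels are illustrative placeholders rather than part of the method.

```python
import numpy as np

# Illustrative sizes only: source/target feature dims, number of classes,
# and labeled/unlabeled instance counts (with l_t much smaller than l_s).
d_s, d_t, L = 200, 120, 5
l_s, u_s = 400, 600
l_t, u_t = 20, 500

rng = np.random.default_rng(0)

# Data matrices: one instance per row, disjoint feature spaces.
X_s = rng.standard_normal((l_s + u_s, d_s))   # X_s = [X_s^l; X_s^u]
X_t = rng.standard_normal((l_t + u_t, d_t))   # X_t = [X_t^l; X_t^u]

def to_pm1_labels(y, L):
    """Encode integer class labels as a {-1,+1} matrix with one +1 per row."""
    Y = -np.ones((len(y), L))
    Y[np.arange(len(y)), y] = 1.0
    return Y

# Label matrices for the labeled portions of each domain.
Y_s = to_pm1_labels(rng.integers(0, L, size=l_s), L)  # in {-1,+1}^{l_s x L}
Y_t = to_pm1_labels(rng.integers(0, L, size=l_t), L)  # in {-1,+1}^{l_t x L}
```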
In this section, we present a semi-supervised subspace co-projection method to address heterogeneous multi-class domain adaptation under the setting described above. We formulate a co-projection-based discriminative subspace learning method that simultaneously projects the instances from both domains into a co-located subspace and trains a multi-class classification model in the projected subspace, while exploiting the available unlabeled data to enforce a maximum mean discrepancy (MMD) criterion across domains in that subspace. We further exploit ECOC schemes to enhance the discriminative informativeness of the projected subspace while directly addressing multi-class classification problems.
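To make the MMD criterion concrete before the formal objective is given, the sketch below computes a simple first-order (linear-kernel) empirical MMD between the two domains after projecting each with its own matrix. The projection matrices Us and Ut, the function empirical_mmd, and all sizes are hypothetical illustrations, not the learned co-projections defined in the following subsections.

```python
import numpy as np

def empirical_mmd(Xs, Xt, Us, Ut):
    """Squared distance between the mean projected source and target instances:
    a first-order empirical MMD estimate in the shared subspace."""
    mu_s = (Xs @ Us).mean(axis=0)   # source mean in the projected subspace
    mu_t = (Xt @ Ut).mean(axis=0)   # target mean in the projected subspace
    return float(np.sum((mu_s - mu_t) ** 2))

# Toy usage with random data and random projections into an m-dimensional subspace;
# minimizing this quantity over the projections would pull the two domains together.
rng = np.random.default_rng(0)
d_s, d_t, m = 200, 120, 30
Xs = rng.standard_normal((1000, d_s))   # all source instances (labeled + unlabeled)
Xt = rng.standard_normal((520, d_t))    # all target instances (labeled + unlabeled)
Us = rng.standard_normal((d_s, m))      # hypothetical source projection
Ut = rng.standard_normal((d_t, m))      # hypothetical target projection
print(empirical_mmd(Xs, Xt, Us, Ut))
```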
3.1 Semi-Supervised Learning Framework
With the disjoint feature spaces across domains, traditional machine learning
methods and homogeneous domain adaptation methods cannot be directly ap-
plied in the heterogeneous domain adaptation setting. However, if we can trans-
form the two disjoint feature spaces $\mathcal{X}_s$ and $\mathcal{X}_t$ into a common subspace $\mathcal{Z} = \mathbb{R}^m$ with two transformation functions $\psi_s : \mathcal{X}_s \rightarrow \mathcal{Z}$ and $\psi_t : \mathcal{X}_t \rightarrow \mathcal{Z}$, we can then
build a unified prediction model in the common subspace to adapt information
across domains. Since the same multi-class prediction task is shared across the
source domain and the target domain, i.e., the two domains have the same output
label space, we can identify a useful common subspace representation of the data