The computed gradient is normalized so that the largest gra-
dient in the scene has a value of 1.0 to ensure that MIRGS
scales properly for scenes with different dynamic ranges. Once
the watershed (Vincent and Soille, 1991) is generated from the
normalized image gradient, the image is represented by a re-
gion adjacency graph (RAG) (Li, 2001) data structure where
each node represents a watershed region and where each graph
edge connects spatially adjacent regions.
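As an illustration of these preprocessing steps, the following sketch normalizes a gradient image, computes its watershed, and builds the RAG with common Python tools; the marker selection and the threshold are placeholder choices for illustration, not the settings used in MIRGS.

```python
import numpy as np
from scipy import ndimage as ndi
from skimage.segmentation import watershed
import networkx as nx

def gradient_to_rag(grad):
    """Normalize a gradient image, run the watershed, and build a RAG."""
    # Normalize so the largest gradient in the scene equals 1.0.
    grad_norm = grad / grad.max()

    # Watershed of the normalized gradient; flat low-gradient zones are used
    # as markers here (an illustrative choice, not the MIRGS one).
    markers, _ = ndi.label(grad_norm < 0.05)
    labels = watershed(grad_norm, markers)

    # Region adjacency graph: one node per watershed region, one edge for
    # every pair of 4-connected neighbouring regions.
    rag = nx.Graph()
    rag.add_nodes_from(np.unique(labels))
    for axis in (0, 1):
        a = labels.take(range(labels.shape[axis] - 1), axis=axis)
        b = labels.take(range(1, labels.shape[axis]), axis=axis)
        mask = a != b
        rag.add_edges_from(zip(a[mask], b[mask]))
    return grad_norm, labels, rag
```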
Each watershed region is assigned an initial label via a K-
means algorithm (Duda et al., 2001) to initialize MIRGS. The
region-based K-means algorithm used in MIRGS is described
in (Qin and Clausi, 2010). MIRGS then enters an iterative
phase to find a configuration of labels for the regions that
globally minimizes a cost function. At each iteration, a label-
ing process is performed with Gibbs sampling (Geman and
Geman, 1984) to move the segmentation towards the optimal
configuration. After each iteration, regions with the same la-
bels are merged to reduce the number of nodes in the RAG
by combining adjacent regions, which makes subsequent iter-
ations more efficient as fewer nodes have to be considered.
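The sketch below outlines a single labeling sweep of this iterative phase. The energy terms are simple stand-ins (squared distance to class means for the feature term and an unweighted neighbour-disagreement count for the spatial term), so it illustrates only the Gibbs sampling mechanics, not the actual MIRGS cost function; the merge step that follows each sweep is omitted.

```python
import numpy as np

def gibbs_sweep(rag, feats, labels, class_means, beta=1.0, rng=None):
    """One Gibbs sweep over the RAG nodes (illustrative energies only)."""
    rng = rng or np.random.default_rng()
    n_classes = class_means.shape[0]
    for node in rag.nodes:
        # Feature term: squared distance from the region's mean feature
        # vector to each class mean (stand-in for the MIRGS feature model).
        feat_e = np.sum((feats[node] - class_means) ** 2, axis=1)
        # Spatial term: count neighbours that would disagree with each
        # candidate label (MIRGS additionally weights this by edge strength).
        nbrs = np.array([labels[v] for v in rag.neighbors(node)])
        spat_e = beta * np.array([(nbrs != k).sum() for k in range(n_classes)])
        # Resample the region's label from the conditional distribution.
        p = np.exp(-(feat_e + spat_e))
        labels[node] = rng.choice(n_classes, p=p / p.sum())
    return labels
```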
The cost function that MIRGS minimizes to produce the
optimal segmentation consists of a feature space model and a
spatial context model (Qin and Clausi, 2010). The cost function considers a segmentation more likely to be "true" when the regions assigned to each class are similar to each other in feature space and when spatially adjacent regions separated by only weak edges share the same class label. This
is similar to the Markov random field (MRF) based multi-
level logistic (MLL) segmentation model (Derin and Elliott,
1987) but MLL does not consider the edge strength in its spa-
tial context model. The MIRGS model agrees more closely
with intuition: if there is a strong edge between two regions,
they are more likely to be from different classes than when
there is no edge. MLL, in contrast, makes no such distinction
and favours results where adjacent regions are assigned to the
same class regardless of the edge strength between them.
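Schematically, with notation assumed here rather than quoted from Qin and Clausi (2010), the energy being minimized can be viewed as a feature term plus an edge-weighted pairwise term:

```latex
E(x) = \sum_{i} V_{\mathrm{feat}}(x_i \mid \mathbf{y}_i)
     + \beta \sum_{(i,j) \in \mathcal{N}} w(e_{ij}) \, \delta(x_i \neq x_j)
```

where x_i is the label of region i, y_i its feature vector, N the set of adjacent region pairs in the RAG, and w(e_ij) a weight that decreases with the edge strength e_ij between regions i and j; the MLL spatial term corresponds to the special case w ≡ 1.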
3. Research objectives
To generate an accurate and consistent segmentation, the
MIRGS algorithm requires data with sufficient feature space
separability for different ice classes, i.e. it should be possible
to discern a difference in feature space values between differ-
ent classes (for example, one class might appear darker than
another class in the image). Additionally, MIRGS requires
the proper generation of the initial watershed and an image
gradient that presents strong boundaries between regions of
different ice classes. As will be seen in Section 4, information
from both dual-polarization channels (HH and HV) is necessary. Many strategies exist to use dual-polarization RS-2 data
to satisfy these requirements. The objective of this study is
to determine which of these strategies is the most effective.
The following three strategies will be tested:
1. Direct MIRGS implementation: The most basic strat-
egy is to use the backscatter values from the HH and
HV channels directly in the multivariate formulation of
MIRGS, using the VFG gradient method that is already
implemented to create the watershed and image gradient.
2. Gradient combination: While the feature space separabil-
ity provided by the dual-polarization data is fully utilized
by Strategy 1, the VFG image gradient was not designed
with domain knowledge of dual-polarization data. VFG
tends to assign the highest strength only to edges that are
strongest in both the HH and HV channels, while strong edges that appear in only one of the two channels are assigned a lower value. However, strong edges that appear in at least one of the channels are equally meaningful, as they denote a boundary between ice classes. Thus, there is motivation for testing and comparing various gradient generation strategies that combine information from both channels (one possible combination rule is sketched after this list).
3. Feature extraction and image fusion: Another strategy
for making use of dual-polarization data is to fuse the
information from both channels into a single image first
with feature extraction or image fusion techniques be-
fore segmentation in MIRGS. If feature space separability
can be maintained between all ice classes after mapping
each two-dimensional feature vector to a one-dimensional
value, then both the separability and the image gradient
requirements can be satisfied: all ice classes will have a
different brightness in the fused image, which will nat-
urally cause edges between them to appear in the fused
image.
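To make Strategies 2 and 3 concrete, the sketch below shows one plausible realization of each: combining per-channel Sobel gradients by a per-pixel maximum, and fusing the two channels by projecting each pixel's feature vector onto the first principal component. These are illustrative choices only and are not necessarily the variants evaluated in the experiments.

```python
import numpy as np
from skimage.filters import sobel

def combine_gradients_max(hh, hv):
    """Strategy 2 illustration: per-pixel maximum of per-channel gradients,
    so an edge strong in either channel stays strong in the combined gradient
    (one plausible combination rule, shown for illustration)."""
    return np.maximum(sobel(hh), sobel(hv))

def fuse_pca(hh, hv):
    """Strategy 3 illustration: project each two-dimensional feature vector
    (sigma0_HH, sigma0_HV) onto its first principal component to obtain a
    single fused image (one plausible fusion rule, shown for illustration)."""
    feats = np.column_stack([hh.ravel(), hv.ravel()]).astype(float)
    feats -= feats.mean(axis=0)
    # Principal axis of the 2x2 covariance matrix.
    _, vecs = np.linalg.eigh(np.cov(feats, rowvar=False))
    return (feats @ vecs[:, -1]).reshape(hh.shape)
```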
The experiments will determine whether the direct multivariate implementation, a gradient combination approach, or feature extraction and image fusion gives the best results for the RS-2 data.
4. Data
ScanSAR Wide A has a pixel resolution of 100 m × 100 m,
with a pixel spacing of 50 m × 50 m. The full 500 km swath
width spans approximately 10000 × 10000 pixels. The CIS
expects to use data from the co-polarization (σ°HH) and the cross-polarization (σ°HV) channels for their operations and has provided real-valued RS-2 imagery for testing. Each pixel in the image is represented by a two-dimensional feature vector whose elements are σ°HH and σ°HV. The HH channel contains the same information as that available from the single-polarization RADARSAT-1 (RS-1) satellite. Complex-valued images are not considered in this paper because these are not used operationally by CIS.
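For reference, the per-pixel feature vectors described above can be assembled by stacking the two calibrated channels (a trivial sketch; the array names are assumed):

```python
import numpy as np

def stack_features(sigma0_hh, sigma0_hv):
    """feats[i, j] holds the two-dimensional vector (sigma0_HH, sigma0_HV) of pixel (i, j)."""
    return np.dstack([sigma0_hh, sigma0_hv])
```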
A Gulf of St. Lawrence scene recorded on February 25,
2008 was tested in this paper. CIS provided operational ice
charts for the area on this date, which were created from RS-
1 data since CIS had not yet integrated RS-2 imagery into
their operational pipeline at that time. A manually segmented
ground-truth image was produced based on the ice charts for
a small part of the RS-2 scene (depicting an area north of
Anticosti Island) to use for validation purposes (Fig. 2). This
image represents ice appearance for an incidence angle range
of less than 10°.
There are still ambiguities in the manual segmentation be-
cause certain ice types cannot always be reliably identified
from the backscatter images alone and because each polygon
in the CIS ice chart contains a mix of ice types but not the
exact pixel location of each type. However, there are small
patches of the original, full RS-2 scene where the ice type is
known, such as within polygons that have only one ice type
or a mix of distinctive ice types. Although the ice type in