IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, VOL. PAMI-8, NO. 4, JULY 1986

Extracting Straight Lines

J. BRIAN BURNS, ALLEN R. HANSON, MEMBER, IEEE, AND EDWARD M. RISEMAN, MEMBER, IEEE
Abstract: This paper presents a new approach to the extraction of straight lines in intensity images. Pixels are grouped into line-support regions of similar gradient orientation, and then the structure of the associated intensity surface is used to determine the location and properties of the edge. The resulting regions and extracted edge parameters form a low-level representation of the intensity variations in the image that can be used for a variety of purposes. The algorithm appears to be more effective than previous techniques for two key reasons: 1) the gradient orientation (rather than gradient magnitude) is used as the initial organizing criterion prior to the extraction of straight lines, and 2) the global context of the intensity variations associated with a straight line is determined prior to any local decisions about participating edge elements.

Index Terms: Boundary extraction, edge analysis, gradient-based segmentation, image processing, line parameters, line representation, plane fitting, straight lines.
I. INTRODUCTION

The organization of significant local intensity changes into the more global abstractions called "lines" or "boundaries" is an early, but important, step in the transformation of the visual signal into useful intermediate constructs for interpretation processes.
Despite the large amount of research appearing in the literature, effective extraction of straight lines has remained a difficult problem in many image domains. There are two goals of this paper: 1) the development of mechanisms for extracting straight lines from complex images, including lines of arbitrarily low contrast; and 2) the construction of an intermediate representation of edge/line information through which high-level interpretation mechanisms have efficient access to relevant lines.
To the degree that straight lines may be effectively extracted and efficiently represented, a variety of other intermediate processing goals are greatly facilitated. Curved lines can be approximated reasonably well as aggregates of piecewise-linear segments. In many cases, continuous representations of a boundary may be derived from adjacent linear segments by treating differences in their orientations as local curvature estimates. In addition, textured regions can be extracted as aggregates of line elements with specific common properties of length, contrast, orientation, etc.
Manuscript received February 1, 1985; revised August 14, 1985. Recommended for acceptance by W. E. L. Grimson. This work was supported in part by the Air Force Office of Scientific Research under Contract F49620-83-C-0099.

The authors are with the Department of Computer and Information Science, University of Massachusetts, Amherst, MA 01003.

IEEE Log Number 8406515.
A. Problems in Edge Extraction

Edges are usually defined as local discontinuities or rapid changes in some image feature, such as image luminance or texture. These changes are detected by a local operator, usually of small spatial extent with respect to the image, that measures the magnitude of the change and, in many cases, its orientation as well. Lines are commonly defined as collections of local edges that are contiguous in the image. Thus, many algorithms rely on a two-step process for line extraction: detection of local edges that are then aggregated into the more globally defined lines on the basis of various grouping criteria.
In the one-dimensional case, an ideal edge is a step change in the value of the underlying feature. In two dimensions, the ideal edge may be viewed as a step discontinuity in the values of the image feature in a direction perpendicular to the spatial orientation of the edge. We will refer to a straight line as a set of collinear and contiguous edges; i.e., a straight line has a length associated with a continuous discontinuity. Shortly, we will discuss the additional constraints that we impose on the intensity changes to organize them into straight lines.
Since ideal step changes are rarely found in real images, the magnitude of the feature change across a line is usually distributed over an area. Hence, the underlying image structure supporting a line has a width, measured perpendicular to the line orientation, in addition to its length. We refer to the collection of pixels so defined as a line-support region.
Note that our use of the term "line" differs from some researchers [26], [7], [13] who use the term "line" to refer to image events in which the intensity surface forms a ridge, possibly of narrow width, for which there is no distinct location for the boundaries on either side of the ridge. This view is related to the "roof" intensity profile of edges in the Binford-Horn line tracker [15]. In our view, these narrow linear image events will have a width formed by two locally parallel lines of opposite contrast. It is only the location of the lines that is ambiguous, not their existence. In the case where the ridge in the intensity surface is very narrow, even to a subpixel level, we are taking the position that if the difference in adjacent pixels is meaningful, it can be used to define a narrow region with parallel lines delimiting this image event.
The problems encountered with local edge operators are widely known and are related to 1) the possibly small spatial extent of the operator relative to the events they are designed to detect, 2) the deviation of actual image data from assumed models, and 3) aliasing due to the discrete nature of the digitization process.
The intensity variation representing a local edge is often spatially distributed over an extended area due to complex scene lighting conditions interacting with scene surfaces exhibiting varying surface orientation and reflectances. In real images, edges usually do not consist of step functions, but rather are formed by wider and more irregular changes in measured intensity. In most practical situations, the image data are noisy and, since edges are high spatial-frequency events, edge detectors enhance the noise. The edge maps resulting from application of a local edge detector are usually very dense and do not distinguish between edges resulting from object boundaries, shadows, and changes in surface reflectance and/or orientation. When the intensities on one side of a line change (e.g., a changing background behind an occluding surface), then there may be significant variation in edge contrast down the length of the line.
In order to overcome the problems caused by the mismatch between gradient widths and operator spatial extents, a family of approaches involving hierarchical edge masks has been proposed [28]. The most well known of these is the Marr-Hildreth zero-crossing operator [21], defined as the Laplacian of a Gaussian over increasingly larger spatial extents. However, since fine-detail (high-frequency) image events and coarse-structured (low-frequency) image events respond optimally to different size operators, the appropriate size of the operator must be determined in each different area of the image. Related algorithms involve the application of a set of hierarchical edge masks of varying resolution at selected orientations at all image locations.
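The zero-crossing idea can be illustrated in one dimension (a minimal sketch, not the paper's method; the sigmoid stand-in for a blurred step edge and the grid spacing are our own assumptions):

```python
import numpy as np

# 1-D sketch of the Marr-Hildreth idea: an edge shows up as a zero
# crossing of the second derivative of the (smoothed) intensity profile.
x = np.arange(-10, 11, dtype=float)
profile = 1.0 / (1.0 + np.exp(-x))       # a blurred step edge (sigmoid)

curvature = np.diff(profile, 2)          # discrete second derivative
signs = np.sign(curvature)
# Zero crossings: positions where the curvature changes sign.
crossings = np.where(np.diff(signs) != 0)[0]
# The sign change straddles the center of the edge (x = 0).
```

In the full 2-D operator the profile is replaced by the image convolved with a Laplacian-of-Gaussian kernel, and the scale of the Gaussian plays the role of operator size discussed above.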
Following the initial edge extraction process, various techniques have been proposed to aggregate the local information into more global line-like structures and to discard unimportant or redundant information, a difficult task in many domains. These methods include Hough transforms [5] that may be generalized to detect nonlinear boundaries and specific shapes [1]; edge tracking and contour following [23]; curve fitting [25]; graph-theoretic methods [22]; relaxation algorithms [30]; hierarchical-refinement techniques [16], [8], [14]; and high-level model-based processes [29].
The problems cited above in the discussion on edge operators pose difficulties for the aggregation processes as well. In many cases, the local operators misplace or entirely miss edges, a single real edge may result in several strong operator responses at different (often parallel) locations, and the underlying data may not conform to expectations built into the grouping process. Low-contrast lines, because of their low signal-to-noise ratio, are often troublesome.
B. Gradient Magnitude versus Gradient Orientation

The straight-line extraction technique developed in Section II is based on two observations about many line extraction algorithms: 1) they lack a global view of the underlying image structure prior to making local decisions about edge events, and 2) they relegate information about edge orientation to a secondary role in the processing.
In most edge and line extraction algorithms, the magnitude of the intensity change is used in some manner as a measure of the importance of the local edge. While edge-orientation information may be used to modulate the grouping process applied to the strong edges, the edge magnitude usually has the central and dominating influence. It is our view that edge orientation carries important information about the set of pixels that participate in the intensity variation that underlies the straight line, particularly its spatial extent.
Gradient orientation is defined as the direction of maximum gray-level change as measured over a small area around a pixel, or equivalently, as the local direction of steepest ascent (or descent) on the intensity surface.
Our model of the pixels comprising the intensity surfaces associated with straight lines in digitized images has two characteristics:

1) the local gradient magnitude (measured over a small local window) will vary significantly over the intensity surface, for reasons cited earlier, particularly in the direction orthogonal to the line; and

2) the local gradient orientation will vary relatively little throughout the entire intensity surface.
It is our observation that these characteristics are true of most of the straight lines that we wish to extract in digitized images. Based upon the consistency of the local gradient orientation, we have developed a simple algorithm for extracting the "line-support region," the entire set of pixels comprising each such intensity surface. In this way, the difficult step of extracting whole lines can, to a large extent, be reduced to a simple grouping and connected-components process. The additional benefit of isolating these support regions is that other aspects of the line, such as contrast and width (or fuzziness), can be more accurately measured.
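The grouping-and-connected-components step might be sketched as follows (our own minimal illustration using SciPy; the number of orientation buckets and the 4-connectivity are assumptions, not the paper's actual partitioning scheme, which is described in Section II):

```python
import numpy as np
from scipy import ndimage

def line_support_regions(orientation_deg, n_buckets=8):
    """Sketch of the grouping idea: quantize gradient orientation into
    fixed buckets, then take connected components within each bucket.
    Each resulting component is a candidate line-support region."""
    buckets = (orientation_deg % 360.0) // (360.0 / n_buckets)
    regions = np.zeros(buckets.shape, dtype=int)
    next_label = 0
    for b in range(n_buckets):
        # Label 4-connected components of pixels falling in bucket b.
        labels, n = ndimage.label(buckets == b)
        regions[labels > 0] = labels[labels > 0] + next_label
        next_label += n
    return regions

# Two patches of differing orientation yield two support regions.
orient = np.zeros((4, 6))
orient[:, 3:] = 90.0
regions = line_support_regions(orient)
```

The point of the sketch is that once orientation drives the grouping, whole-line extraction reduces to ordinary connected-components labeling, exactly as the paragraph above claims.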
Surprisingly, global approaches for straight-line extraction, such as Hough transform methods [5], [1], do not exploit orientation as much as one might think. Although the histogram buckets in (r, θ) coordinates encode edge orientation in terms of collinear sets of edges, once again the magnitudes of edges are likely to dominate. The global process for extracting lines is dependent upon finding strong peaks in the transform. All Hough techniques use edge magnitude in the voting process in some manner, either via a proportional weight or via thresholding so that only strong edges vote. Thus, it is very difficult to extract long, coherent, low-contrast lines in a general manner because their response in (r, θ)-space is reduced by the voting process, they may be hidden by high-contrast information, and there may be other types of noise present.
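The magnitude-weighted voting the paragraph describes can be made concrete with a toy accumulator (a sketch under our own assumptions about bin sizes and image geometry; this is not code from the paper):

```python
import numpy as np

def hough_votes(points, weights, img_diag, n_theta=180):
    """Magnitude-weighted Hough accumulator over (r, theta) space.
    Each edge point votes with its gradient magnitude as weight."""
    thetas = np.deg2rad(np.arange(n_theta))
    acc = np.zeros((2 * img_diag + 1, n_theta))
    for (x, y), w in zip(points, weights):
        # Normal form of a line: r = x cos(theta) + y sin(theta).
        r = np.round(x * np.cos(thetas) + y * np.sin(thetas)).astype(int)
        acc[r + img_diag, np.arange(n_theta)] += w
    return acc

# A long low-contrast line (weight 1) vs. a short high-contrast one (weight 10):
low = [((x, 5), 1.0) for x in range(20)]    # 20 collinear weak edges on y = 5
high = [((x, 9), 10.0) for x in range(5)]   # 5 collinear strong edges on y = 9
pts, ws = zip(*(low + high))
acc = hough_votes(pts, ws, img_diag=32)
# The short strong line out-votes the four-times-longer weak one: 50 vs. 20.
```

This illustrates the failure mode described above: with magnitude-weighted voting, the peak for the long low-contrast line is easily dominated by shorter high-contrast structure.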
C. A New Approach: Organizing Line-Support Contexts

The technique presented here was motivated by a need for a straight-line extraction method that would find straight lines in reasonably complex images, particularly those lines that are long but not necessarily of high contrast. A key characteristic of the approach that distinguishes it from most previous work is the global organization of the supporting line context prior to any decisions about the relevance of local intensity changes. An estimate of the local gradient orientation at each pixel is the basis of these first organizing processes.
Grouping pixels into line-support regions avoids the plethora of responses from masks of varying sizes and orientations, as well as unnecessary complexity in the subsequent organizing mechanisms. It allows the extraction of straight lines despite weaknesses in line clarity due to local variations in width, contrast, and orientation. It directly addresses the problems associated with the size of the edge operators and determines the extent of support to be given to edges and lines directly from the underlying data.
The approach has its roots in the "gradient-collection" processes of Hanson, Riseman, and Glazer [10]. In the terms discussed in this paper, the gradient-collection process utilized a data-directed mechanism to organize the full context of the edge in one direction at a time (the horizontal and vertical components) over the width of a monotonically increasing or decreasing section of the intensity profile contributing to the edge (i.e., where the gradient sign was constant). The total gradient contrast was then distributed around the location of the centroid of the local gradient magnitudes in the edge profile. This process organized contrast information across the width of an edge without committing to any fixed size or set of sizes for the edge operator. In a similar vein, Ehrich and Foith [6] organized one-dimensional intensity profiles into a hierarchical data structure before interpreting the information and making decisions about what constitutes a meaningful edge. Both of these techniques capture global gradient information that results in a more accurate assessment of total edge magnitude across its width.
Haralick [12] also processes the intensity surface in order to make decisions about lines, but the key difference is that his surface patches are local, and one faces the same sort of difficulties in organizing this information as one does in the output of local edge operators.
The approach in this paper has generalized the global, contextual organizing processes to two dimensions, grouping image pixels across the width of an edge as well as down the length of the edge, to form the basis for extracting a straight line. All pixels in these line-support regions contribute to both the final representation of the line and the generation of a set of descriptive attributes that are useful for further processing of the line data. The line-support regions might also be useful in separating the straight lines into intrinsic images [3] representing edges and lines of different types, such as illumination, texture, reflectance, orientation, etc.
II. A REPRESENTATION AND PROCESS FOR EXTRACTING STRAIGHT LINES

A. Overview

The general approach to extracting straight lines is to group the pixels into line-support regions on the basis of gradient orientation, and then to extract from each region a straight-line segment. Note that every intensity variation, including very low magnitude changes, will initially be extracted as a weak line segment (sometimes of great width). During the interpretation of these lines, adjacent low-contrast support regions can be grouped into homogeneous regions and filtered so that they are not viewed as weak straight lines.
There are four basic steps in extracting straight lines.

1) Group pixels into line-support regions based on similarity of gradient orientation. This allows data-directed organization of edge contexts without commitment to masks of a particular size.

2) Approximate the intensity surface by a planar surface. The planar fit is weighted by the gradient magnitude associated with the pixels so that intensities in the steepest part of the edge will dominate.

3) Extract attributes from the line-support region and the planar fit. The attributes extracted include the representative line and its length, contrast, width, location, orientation, and straightness.

4) Filter lines on the attributes to isolate various image events such as long straight lines of any contrast; high-contrast short lines (heavy texture); low-contrast short lines (light texture); homogeneous regions of adjacent very low contrast lines; and lines at particular orientations and positions.
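Step 2's weighted planar approximation can be sketched with ordinary weighted least squares (our own illustration; the weights would be the per-pixel gradient magnitudes, so the steepest part of the edge dominates the fit, and the paper's exact fitting procedure may differ):

```python
import numpy as np

def weighted_plane_fit(rows, cols, intensity, weight):
    """Fit z = a*row + b*col + c to an intensity patch by weighted
    least squares. Scaling both sides by sqrt(weight) turns the
    weighted problem into an ordinary lstsq problem."""
    A = np.column_stack([rows, cols, np.ones_like(rows, dtype=float)])
    w = np.sqrt(weight)
    coeffs, *_ = np.linalg.lstsq(A * w[:, None], intensity * w, rcond=None)
    return coeffs  # (a, b, c)

# A perfect intensity ramp in the column direction is recovered exactly.
r, c = np.meshgrid(np.arange(4), np.arange(4), indexing="ij")
z = 3.0 * c.ravel() + 1.0
a, b, c0 = weighted_plane_fit(r.ravel(), c.ravel(), z, np.ones(16))
```

In the algorithm proper, such a fit is computed per line-support region, and the intersection of the fitted plane with a reference level gives the representative line mentioned in step 3.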
B. Grouping Pixels into Line-Support Regions via Gradient Orientation

Fig. 1 shows four representative images used to illustrate the grouping and straight-line extraction process. Fig. 2(a) is a 32 × 32 intensity subimage used to illustrate the details of the algorithm; results are shown for the full images in subsequent sections. Fig. 2(b) shows the intensity surface of this subimage, while Fig. 2(d) depicts the corresponding gradient image, in which the length of the vector encodes gradient magnitude. Gradient magnitude and orientation have been estimated by convolving the image with the two masks shown in Fig. 2(c). Note that the sign of the gradient encodes dark-to-light or light-to-dark intensity changes that are 180 degrees apart. Thus, intensity surfaces that form a ridge will be detected as two different line-support regions.
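The remark about ridges splitting into two regions is easy to see on a one-pixel bright line (a trivial sketch using 1 × 2 differences; the values are made up for illustration):

```python
import numpy as np

# A bright one-pixel ridge: the gradient on its left flank points
# opposite to the gradient on its right flank (180 degrees apart),
# so the two flanks land in different line-support regions.
row = np.array([0.0, 0.0, 10.0, 0.0, 0.0])
diffs = np.diff(row)        # 1 x 2 horizontal differences
left_flank, right_flank = diffs[1], diffs[2]
```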
1) Choice of Mask for Computing the Gradient: There are a variety of masks that can be employed in the computation of the gradient, including those organized hierarchically according to mask resolution. Large masks tend to smooth the image and reduce the clarity of fine detail, or even remove it completely. Since one of our primary goals is the recovery of lines corresponding to fine detail, we wish to select the smallest possible masks that will produce estimates of gradient orientation. The mask selected must maintain lines associated with alternating one-pixel-wide regions [such as parts of the rain-gutter, siding, and window-trim in Fig. 1(a) and (b)] and also provide symmetric responses with respect to rotation of the line in the image.
Fig. 1. Four natural images used to demonstrate straight-line extraction.
The sensitivity to detail and rotational symmetry of four small edge masks, 1 × 2, 1 × 3, 2 × 2, and 3 × 3, shown in Fig. 3(a), will be compared by applying them to two test images. Note that all masks are no larger than a 3 × 3 window, and one of them is the smallest possible edge operator, a 1 × 2 mask. The first test image, shown in Fig. 3(b), is composed of a field of alternating horizontal black and white strips of 1 pixel width and is intended to test the ability of the mask to respond to fine detail. Fig. 4(a)-(d) shows the results of applying the four masks to the dense field of strips. Note that the 1 × 3 and 3 × 3 completely fail to detect any intensity variation at all! Thus, these masks will be rejected because high-contrast 1-pixel-wide regions can be missed.
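The failure of the odd-width masks on this pattern is easy to reproduce numerically (a minimal check with the masks written as row differences; the actual mask coefficients of Fig. 3(a) are not shown in this excerpt):

```python
import numpy as np

# Alternating one-pixel black/white horizontal strips, as in Fig. 3(b).
strips = np.tile(np.array([[0.0], [255.0]]), (4, 6))    # 8 rows x 6 cols

# Vertical 1 x 2 mask: difference of adjacent rows.
r12 = strips[1:, :] - strips[:-1, :]
# Vertical 1 x 3 mask (central difference): skips the middle row.
r13 = strips[2:, :] - strips[:-2, :]

# On a pattern of period two, rows i and i+2 are identical, so the
# 1 x 3 response cancels exactly, while the 1 x 2 mask fires at every
# transition. A 3 x 3 mask with symmetric rows cancels for the same reason.
```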
The test image of Fig. 3(c) is composed of a diagonal edge reflected about the vertical axis. This test image will give a sense of edge responses to rotated lines. Fig. 4(e)-(h) demonstrates the symmetric response of the 2 × 2 mask to the two diagonal lines versus the nonsymmetric response of the 1 × 2 mask that is the smallest possible mask. On the basis of the criterion described, the 2 × 2 mask appears to be the best choice.
In addition, Haralick has shown that this particular mask is optimal among 2 × 2 operators [12]. Thus, the 2 × 2 mask was chosen as our operator to estimate gradient magnitude and orientation. All results shown in the following sections were obtained using the 2 × 2 mask. The local gradient orientation was computed by

tan^-1(GV(i, j) / GH(i, j)),

where GV(i, j) and GH(i, j) are the vertical and horizontal components of the gradient obtained from the mask applied at pixel (i, j). Further studies will be required to determine the impact of the size and form of the edge operator on the overall process.
Fig. 2. The first step in forming gradient regions involves estimating the gradient direction (orientation) at all points in the image. (a) A 32 × 32 subarea of a house image that will be used to illustrate the process. (b) An intensity-profile representation of the intensity array. (c) The 2 × 2 operators used to estimate dI/dx and dI/dy, from which the local gradient orientation is obtained. (d) The resulting gradient vectors encoding magnitude (vector length) and orientation.

2) Segmentation of the Gradient-Orientation Image Using Fixed Partitions: Once local gradient orientations have been estimated, they are grouped into regions. The problem can be viewed as one of segmenting the gradient-orientation image, and the usual difficulties of region-segmentation algorithms are encountered. Although local groupings can assure local similarity, regions can be formed that include pixels with very dissimilar orientation attributes due to a slow drift in the orientation from pixel to pixel. Thus, region-growing techniques [5], [2] are not applicable because even occasional over-grouping errors can cause disastrous results. Changes in line orientation at corners and junctions of straight lines [as in the image in Fig. 5(b)] can produce intermediate gradient orientations instead of a clear discontinuity in gradient orientation; the result could be undesirable pixel groupings if local region-growing is employed. A grouping process was employed that avoids some of

(Remaining 30 pages not shown.)