
RANDOM FORESTS

Leo Breiman

Statistics Department

University of California

Berkeley, CA 94720

September 1999

Abstract

Random forests are a combination of tree predictors such that each tree depends on the values of a random vector sampled independently and with the same distribution for all trees in the forest. The generalization error for forests converges a.s. to a limit as the number of trees in the forest becomes large. The generalization error of a forest of tree classifiers depends on the strength of the individual trees in the forest and the correlation between them. Using a random selection of features to split each node yields error rates that compare favorably to Adaboost (Freund and Schapire [1996]), but are more robust with respect to noise. Internal estimates monitor error, strength, and correlation, and these are used to show the response to increasing the number of features used in the splitting. Internal estimates are also used to measure variable importance. These ideas are also applicable to regression.


1. Random Forests

1.1 Introduction

Significant improvements in classification accuracy have resulted from growing an ensemble of trees and letting them vote for the most popular class. To grow these ensembles, random vectors are often generated that govern the growth of each tree in the ensemble. An early example is bagging (Breiman [1996]), where to grow each tree a random selection (with replacement) is made from the examples in the training set. Another example is random split selection (Dietterich [1998]), where at each node the split is selected at random from among the K best splits. Breiman [1999] generates new training sets by randomizing the outputs in the original training set. Another approach is to select the training set from a random set of weights on the examples in the training set. Ho [1998] has written a number of papers on "the random subspace" method, which randomly selects a subset of features to use to grow each tree.
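As one concrete illustration of these schemes, the sketch below (my own, not code from the paper; the dataset and all names are assumptions chosen for the example) grows each tree of an ensemble on a random subset of the input features, in the spirit of the random subspace method, using scikit-learn's DecisionTreeClassifier.

```python
# Illustrative sketch of the random-subspace idea (names are mine, not the
# paper's): each tree in the ensemble is grown using only a random subset of
# the input features.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
X, y = load_breast_cancer(return_X_y=True)
n_features = X.shape[1]
subset_size = max(1, n_features // 3)   # assumed subset size, for illustration only

ensemble = []
for _ in range(50):
    features = rng.choice(n_features, size=subset_size, replace=False)
    tree = DecisionTreeClassifier().fit(X[:, features], y)
    ensemble.append((features, tree))   # remember which features each tree saw

print(len(ensemble), "trees grown, each on its own random feature subset")
```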

In an important paper on written character recognition, Amit and Geman [1997] define a large number of geometric features and search over a random selection of these for the best split at each node. This latter paper has been influential in my thinking.

The common element in all of these procedures is that for the kth tree, a random vector Θ_k is generated, independent of the past random vectors Θ_1, ..., Θ_{k−1} but with the same distribution; and a tree is grown using the training set and Θ_k, resulting in a classifier h(x, Θ_k) where x is an input vector. For instance, in bagging the random vector Θ is generated as the counts in N boxes resulting from N darts thrown at random at the boxes, where N is the number of examples in the training set. In random split selection Θ consists of a number of independent random integers between 1 and K. The nature and dimensionality of Θ depends on its use in tree construction. After a large number of trees is generated, they vote for the most popular class. We call these procedures random forests.
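To make the bagging instance concrete, here is a minimal sketch (my own illustration, not code from the paper; the dataset and helper name are assumptions): Θ_k is drawn as the counts from N darts thrown uniformly at N boxes, i.e. a multinomial draw, and the kth tree is grown by passing those counts to the tree learner as example weights.

```python
# Illustrative sketch (not from the paper): in bagging, Theta_k is the vector
# of counts from N darts thrown uniformly at N boxes (a multinomial draw), and
# the kth tree h(x, Theta_k) is grown on the training set weighted by those counts.
import numpy as np
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(1)
X, y = load_iris(return_X_y=True)
N = len(y)

def grow_tree(X, y, rng):
    theta_k = rng.multinomial(N, np.full(N, 1.0 / N))  # counts in the N boxes
    return DecisionTreeClassifier().fit(X, y, sample_weight=theta_k)

# i.i.d. Theta_1, Theta_2, ... give an i.i.d. collection of trees
forest = [grow_tree(X, y, rng) for _ in range(25)]
```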

Definition 1.1 A random forest is a classifier consisting of a collection of tree-structured classifiers {h(x, Θ_k), k = 1, ...} where the {Θ_k} are independent identically distributed random vectors and each tree casts a unit vote for the most popular class at input x.
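A minimal sketch of the unit-vote rule in Definition 1.1 (illustrative only; the helper name forest_predict and the stand-in forest are my own assumptions, not the paper's):

```python
# Sketch of the unit-vote rule: each tree casts one vote, the forest predicts
# the most popular class at each input (names are mine, for illustration).
import numpy as np
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)

# Stand-in forest: any collection of tree classifiers h(x, Theta_k) works here.
forest = [
    DecisionTreeClassifier(max_features="sqrt", random_state=k).fit(X, y)
    for k in range(25)
]

def forest_predict(forest, X):
    """Each tree casts one unit vote; return the most popular class per input."""
    votes = np.stack([tree.predict(X) for tree in forest])   # shape (K, n)
    return np.apply_along_axis(
        lambda col: np.bincount(col.astype(int)).argmax(), 0, votes)

print(forest_predict(forest, X[:5]))
```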


1.2 Outline of Paper

Section 2 gives some theoretical background for random forests. Use of the Strong Law of Large Numbers shows that they always converge, so that overfitting is not a problem. We give a simplified and extended version of the Amit and Geman [1997] analysis to show that the accuracy of a random forest depends on the strength of the individual tree classifiers and a measure of the dependence between them (see Section 2 for definitions).

Section 3 introduces forests using the random selection of features at each node to determine the split. An important question is how many features to select at each node. For guidance, internal estimates of the generalization error, classifier strength and dependence are computed. These are called out-of-bag estimates and are reviewed in Section 4. Sections 5 and 6 give empirical results for two different forms of random features. The first uses random selection from the original inputs; the second uses random linear combinations of inputs. The results compare favorably to Adaboost.

The results turn out to be insensitive to the number of features selected to split each node. Usually, selecting one or two features gives near optimum results. To explore this and relate it to strength and correlation, an empirical study is carried out in Section 7.

Adaboost has no random elements and grows an ensemble of trees by successive reweightings of the training set, where the current weights depend on the past history of the ensemble formation. But just as a deterministic random number generator can give a good imitation of randomness, my belief is that in its later stages Adaboost is emulating a random forest. Evidence for this conjecture is given in Section 8.

Important recent problems, e.g., medical diagnosis and document retrieval, often have the property that there are many input variables, often in the hundreds or thousands, with each one containing only a small amount of information. A single tree classifier will then have accuracy only slightly better than a random choice of class. But combining trees grown using random features can produce improved accuracy. In Section 9 we experiment on a simulated data set with 1,000 input variables, 1,000 examples in the training set and a 4,000 example test set. Accuracy comparable to the Bayes rate is achieved.

In many applications, understanding of the mechanism of the random forest "black box" is needed. Section 10 makes a start on this by computing internal estimates of variable importance and binding these together by reuse runs.

Section 11 looks at random forests for regression. A bound for the mean squared generalization error is derived that shows that the decrease in error from the individual trees in the forest depends on the correlation between residuals and the mean squared error of the individual trees. Empirical results for regression are in Section 12. Concluding remarks are given in Section 13.

2. Characterizing the Accuracy of Random Forests

2.1 Random Forests Converge

Given an ensemble of classifiers h_1(x), h_2(x), ..., h_K(x), and with the training set drawn at random from the distribution of the random vector Y,X, define the margin function as

mg(X,Y) = av_k I(h_k(X) = Y) − max_{j≠Y} av_k I(h_k(X) = j),

where I(·) is the indicator function. The margin measures the extent to which the average number of votes at X,Y for the right class exceeds the average vote for any other class. The larger the margin, the more confidence in the classification. The generalization error is given by

PE* = P_{X,Y}(mg(X,Y) < 0),

where the subscripts X,Y indicate that the probability is over the X,Y space.
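A tiny numerical illustration of these definitions (my own example, not from the paper): given the votes of five trees on four labeled points, mg(X,Y) and an empirical PE* follow directly.

```python
# Numeric illustration of mg(X,Y) and PE* (invented toy example):
# 5 trees voting on 4 labeled points with classes {0, 1, 2}.
import numpy as np

votes = np.array([            # rows = trees h_k, columns = examples
    [0, 1, 2, 1],
    [0, 1, 2, 2],
    [0, 2, 2, 1],
    [1, 1, 0, 1],
    [0, 1, 2, 0],
])
y = np.array([0, 1, 2, 0])    # true classes Y for the four examples
n_classes = 3

# av_k I(h_k(X) = j) for every class j and example
vote_frac = np.stack([(votes == j).mean(axis=0) for j in range(n_classes)], axis=1)

right = vote_frac[np.arange(len(y)), y]          # average vote for the right class
others = vote_frac.copy()
others[np.arange(len(y)), y] = -np.inf           # exclude the true class
mg = right - others.max(axis=1)                  # margin mg(X,Y)

print("margins mg(X,Y):", mg)                    # the fourth example is misclassified
print("empirical PE*  :", (mg < 0).mean())
```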

In random forests, h_k(X) = h(X, Θ_k). For a large number of trees, it follows from the Strong Law of Large Numbers and the tree structure that:

Theorem 1.2 As the number of trees increases, for almost surely all sequences Θ_1, ... PE* converges to

P_{X,Y}( P_Θ(h(X,Θ) = Y) − max_{j≠Y} P_Θ(h(X,Θ) = j) < 0 )    (1)

Proof: see Appendix I.

This result explains why random forests do not overfit as more trees are added, but produce a limiting value of the generalization error.
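The practical content of this is easy to check empirically. The sketch below (my own, with an assumed dataset and scikit-learn's RandomForestClassifier standing in for a random forest) tracks test error as trees are added; the error settles toward a limit rather than degrading as the forest grows.

```python
# Illustrative check of the no-overfitting behavior (not from the paper):
# test error as a function of the number of trees, via warm_start refitting.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

forest = RandomForestClassifier(n_estimators=1, warm_start=True, random_state=0)
for n_trees in (1, 5, 25, 125, 625):
    forest.set_params(n_estimators=n_trees)
    forest.fit(X_tr, y_tr)                      # adds trees, keeps the old ones
    err = 1.0 - forest.score(X_te, y_te)
    print(f"{n_trees:4d} trees  test error = {err:.3f}")
```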

2.2 Strength and Correlation

For random forests, an upper bound can be derived for the generalization error in terms of two parameters that are measures of how accurate the individual classifiers are and of the dependence between them. The interplay between these two gives the foundation for understanding the workings of random forests. We build on the analysis in Amit and Geman [1997].


Definition 2.1 The margin function for a random forest is

mr(X,Y) = P_Θ(h(X,Θ) = Y) − max_{j≠Y} P_Θ(h(X,Θ) = j)    (2)

and the strength of the set of classifiers {h(x,Θ)} is

s = E_{X,Y} mr(X,Y)    (3)

Assuming s ≥ 0, Chebychev's inequality gives

PE* ≤ var(mr)/s^2    (4)
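To connect these quantities to something computable, the following sketch (illustrative only; it uses plug-in estimates over a finite forest and a held-out test set, with scikit-learn names and an assumed dataset) estimates mr(X,Y), the strength s, and the bound var(mr)/s^2 of (4).

```python
# Sketch (illustrative, not from the paper): plug-in estimates of mr(X,Y),
# the strength s of (3), and the bound var(mr)/s^2 of (4), computed from the
# per-tree votes of a finite forest on a held-out sample.
import numpy as np
from sklearn.datasets import load_digits
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = load_digits(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.5, random_state=0)
forest = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)

# Digits labels are already 0..9, so the trees' predictions line up with y_te.
votes = np.stack([t.predict(X_te) for t in forest.estimators_])   # (K, n)
n_classes = len(forest.classes_)
vote_frac = np.stack([(votes == j).mean(axis=0) for j in range(n_classes)], axis=1)

p_true = vote_frac[np.arange(len(y_te)), y_te]    # P_Theta(h(X,Theta) = Y)
others = vote_frac.copy()
others[np.arange(len(y_te)), y_te] = -np.inf
mr = p_true - others.max(axis=1)                   # mr(X,Y), eq. (2)

s = mr.mean()                                      # strength, eq. (3)
print("strength s        :", round(s, 3))
print("empirical PE*     :", round((mr < 0).mean(), 3))
print("bound var(mr)/s^2 :", round(mr.var() / s**2, 3))
```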

A more revealing expression for the variance of mr is derived in the following. Let

ĵ(X,Y) = argmax_{j≠Y} P_Θ(h(X,Θ) = j),

so

mr(X,Y) = P_Θ(h(X,Θ) = Y) − P_Θ(h(X,Θ) = ĵ(X,Y))
        = E_Θ[ I(h(X,Θ) = Y) − I(h(X,Θ) = ĵ(X,Y)) ].

Definition 2.2 The raw margin function is

rmg(Θ,X,Y) = I(h(X,Θ) = Y) − I(h(X,Θ) = ĵ(X,Y)).

Thus, mr(X,Y) is the expectation of rmg(Θ,X,Y) with respect to Θ. For any function f the identity

[E_Θ f(Θ)]^2 = E_{Θ,Θ'} f(Θ) f(Θ')

holds where Θ, Θ' are independent with the same distribution (by independence, the expectation of the product factors into the product of the two expectations), implying that

mr(X,Y)^2 = E_{Θ,Θ'} rmg(Θ,X,Y) rmg(Θ',X,Y)    (5)

Using (5) gives
