Fernández-Delgado, Cernadas, Barro and Amorim
14. fda R, flexible discriminant analysis (Hastie et al., 1993), with function fda in the
mda package and the default linear regression method.
15. fda t is the same FDA, also with linear regression, but tuning the parameter nprune
with values 2:3:15 (from 2 to 15 in steps of 3, 5 values).
16. mda R, mixture discriminant analysis (Hastie and Tibshirani, 1996), with function
mda in the mda package.
17. mda t uses the caret package as interface to function mda, tuning the parameter
subclasses between 2 and 11.
18. pda t, penalized discriminant analysis, uses the function gen.ridge in the mda package,
which performs PDA tuning the shrinkage penalty coefficient lambda with values from
1 to 10.
19. rda R, regularized discriminant analysis (Friedman, 1989), uses the function rda in
the klaR package. This method uses regularized group covariance matrices to avoid
the problems that collinearity in the data causes in LDA. The parameters lambda
and gamma (used in the calculation of the robust covariance matrices) are tuned with
values 0:0.25:1.
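The regularization behind RDA can be sketched as follows: lambda blends each class covariance with the pooled covariance, and gamma shrinks the result toward a multiple of the identity. This is a minimal Python sketch of Friedman's idea, not klaR's rda implementation, and the function name and exact parameterization are illustrative.

```python
import numpy as np

def rda_covariances(X, y, lam, gamma):
    """Sketch of Friedman-style regularized class covariances.

    lam blends each class covariance with the pooled covariance;
    gamma shrinks the blend toward (trace/d) * identity.
    Illustrative only; klaR's rda has its own parameterization.
    """
    classes = np.unique(y)
    _, d = X.shape
    pooled = np.cov(X, rowvar=False)
    covs = {}
    for c in classes:
        Sk = np.cov(X[y == c], rowvar=False)
        Sk_lam = (1 - lam) * Sk + lam * pooled
        covs[c] = (1 - gamma) * Sk_lam + gamma * (np.trace(Sk_lam) / d) * np.eye(d)
    return covs
```

At gamma = 1 each class covariance collapses to a scaled identity, which is what makes the estimate usable when collinearity renders the raw covariance nearly singular.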
20. hdda R, high-dimensional discriminant analysis (Bergé et al., 2012), assumes that
each class lives in a different Gaussian subspace much smaller than the input space,
calculating the subspace parameters in order to classify the test patterns. It uses the
hdda function in the HDclassif package, selecting the best of the 14 available models.
Bayesian (BY) approaches: 6 classifiers.
21. naiveBayes R uses the function NaiveBayes in the R package klaR, with Gaussian
kernel, bandwidth 1 and Laplace correction 2.
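The kernel variant of naive Bayes replaces the usual per-feature normal densities with kernel density estimates. The following is an illustrative Python analogue of the idea (Gaussian kernel, bandwidth 1, Laplace count on the class priors), not klaR's NaiveBayes itself; the function name and signature are assumptions.

```python
import numpy as np

def kernel_nb_predict(Xtr, ytr, Xte, bandwidth=1.0, laplace=2.0):
    """Sketch of a kernel naive Bayes classifier.

    Per-class, per-feature densities are Gaussian kernel density
    estimates; class priors get a Laplace count. Illustrative only.
    """
    classes = np.unique(ytr)
    preds = []
    for x in Xte:
        scores = []
        for c in classes:
            Xc = Xtr[ytr == c]
            prior = (Xc.shape[0] + laplace) / (Xtr.shape[0] + laplace * len(classes))
            log_lik = 0.0
            for j in range(Xtr.shape[1]):
                # KDE density of feature j at x[j], averaged over class samples
                k = np.exp(-0.5 * ((x[j] - Xc[:, j]) / bandwidth) ** 2)
                dens = k.mean() / (bandwidth * np.sqrt(2 * np.pi))
                log_lik += np.log(dens + 1e-300)
            scores.append(np.log(prior) + log_lik)
        preds.append(classes[int(np.argmax(scores))])
    return np.array(preds)
```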
22. vbmpRadial t, variational Bayesian multinomial probit regression with Gaussian
process priors (Girolami and Rogers, 2006), uses the function vbmp from the vbmp
package, which fits a multinomial probit regression model with radial basis function
kernel and covariance parameters estimated from the training patterns.
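The Gaussian process prior in vbmp is built on a radial basis function covariance. A minimal sketch of such a kernel matrix is shown below; the parameter theta stands in for the covariance parameters that vbmp estimates from the training patterns and is an assumption, not vbmp's API.

```python
import numpy as np

def rbf_kernel(X, Z, theta=1.0):
    """Radial basis function (Gaussian) kernel matrix, the form of
    covariance used by a GP prior with RBF kernel. theta is an
    assumed inverse length-scale, illustrative only."""
    # squared Euclidean distances between every row of X and of Z
    d2 = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
    return np.exp(-theta * d2)
```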
23. NaiveBayes w (John and Langley, 1995) uses estimator precision values chosen from
the analysis of the training data.
24. NaiveBayesUpdateable w uses estimator precision values updated iteratively using
the training patterns and starting from scratch.
25. BayesNet w is an ensemble of Bayes classifiers. It uses the K2 search method, which
performs hill climbing restricted by the input order, using one parent and scores of
type Bayes. It also uses the SimpleEstimator method, which estimates the conditional
probability tables of the Bayesian network, once its structure has been learnt, from
the training patterns with α = 0.5 (initial count).
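The role of the initial count α can be sketched in a few lines: every cell of a conditional probability table starts at α, observed counts are added, and each row is normalized. This is an illustrative Python analogue of the estimator's idea, not WEKA's SimpleEstimator; the function name and signature are assumptions.

```python
def cpt_simple_estimator(child_vals, parent_vals, child_states, parent_states, alpha=0.5):
    """Sketch of SimpleEstimator-style CPT estimation: every cell
    starts at the initial count alpha, observed (child, parent)
    pairs are counted, and rows are normalized to probabilities.
    Illustrative only."""
    counts = {p: {c: alpha for c in child_states} for p in parent_states}
    for c, p in zip(child_vals, parent_vals):
        counts[p][c] += 1
    cpt = {}
    for p in parent_states:
        total = sum(counts[p].values())
        cpt[p] = {c: counts[p][c] / total for c in child_states}
    return cpt
```

The nonzero α keeps unseen (child, parent) combinations from receiving probability zero, the usual Laplace-style correction.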
26. NaiveBayesSimple w is a simple naive Bayes classifier (Duda et al., 2001) which
uses a normal distribution to model numeric features.
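A simple naive Bayes with normal feature models amounts to fitting a per-class mean and variance for each feature and scoring test patterns by log-posterior. The sketch below illustrates that scheme in Python; it is not WEKA's NaiveBayesSimple, and the function name is an assumption.

```python
import numpy as np

def gaussian_nb_fit_predict(Xtr, ytr, Xte):
    """Sketch of naive Bayes with per-class normal feature models
    (one mean and variance per feature per class). Illustrative
    only, not WEKA's NaiveBayesSimple."""
    classes = np.unique(ytr)
    params = {}
    for c in classes:
        Xc = Xtr[ytr == c]
        # mean, variance (with a small floor), and class prior
        params[c] = (Xc.mean(axis=0), Xc.var(axis=0) + 1e-9, Xc.shape[0] / Xtr.shape[0])
    preds = []
    for x in Xte:
        best, best_score = None, -np.inf
        for c, (mu, var, prior) in params.items():
            score = np.log(prior) - 0.5 * np.sum(np.log(2 * np.pi * var) + (x - mu) ** 2 / var)
            if score > best_score:
                best, best_score = c, score
        preds.append(best)
    return np.array(preds)
```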