The Elements of Statistical Learning: Data Mining, Inference, and Prediction, 2nd Edition (non-draft version)

The Elements of Statistical Learning: Data Mining, Inference, and Prediction. Slightly different from the earlier upload; this version is better.

Springer Series in Statistics

Trevor Hastie
Robert Tibshirani
Jerome Friedman

The Elements of Statistical Learning
Data Mining, Inference, and Prediction
The Elements of Statistical Learning
During the past decade there has been an explosion in computation and information technology. With it have come vast amounts of data in a variety of fields such as medicine, biology, finance, and marketing. The challenge of understanding these data has led to the development of new tools in the field of statistics, and spawned new areas such as data mining, machine learning, and bioinformatics. Many of these tools have common underpinnings but are often expressed with different terminology. This book describes the important ideas in these areas in a common conceptual framework. While the approach is statistical, the emphasis is on concepts rather than mathematics. Many examples are given, with a liberal use of color graphics. It should be a valuable resource for statisticians and anyone interested in data mining in science or industry. The book’s coverage is broad, from supervised learning (prediction) to unsupervised learning. The many topics include neural networks, support vector machines, classification trees and boosting—the first comprehensive treatment of this topic in any book.

This major new edition features many topics not covered in the original, including graphical models, random forests, ensemble methods, least angle regression & path algorithms for the lasso, non-negative matrix factorization, and spectral clustering. There is also a chapter on methods for “wide” data (p bigger than n), including multiple testing and false discovery rates.
Trevor Hastie, Robert Tibshirani, and Jerome Friedman are professors of statistics at Stanford University. They are prominent researchers in this area: Hastie and Tibshirani developed generalized additive models and wrote a popular book of that title. Hastie co-developed much of the statistical modeling software and environment in R/S-PLUS and invented principal curves and surfaces. Tibshirani proposed the lasso and is co-author of the very successful An Introduction to the Bootstrap. Friedman is the co-inventor of many data-mining tools including CART, MARS, projection pursuit and gradient boosting.
springer.com
STATISTICS
----
Trevor Hastie • Robert Tibshirani • Jerome Friedman
The Elements of Statistical Learning
Hastie • Tibshirani • Friedman
Second Edition

To our parents:
Valerie and Patrick Hastie
Vera and Sami Tibshirani
Florence and Harry Friedman
and to our families:
Samantha, Timothy, and Lynda
Charlie, Ryan, Julie, and Cheryl
Melanie, Dora, Monika, and Ildiko


Preface to the Second Edition
In God we trust, all others bring data.
–William Edwards Deming (1900-1993)¹

¹On the Web, this quote has been widely attributed to both Deming and Robert W. Hayden; however Professor Hayden told us that he can claim no credit for this quote, and ironically we could find no “data” confirming that Deming actually said this.

We have been gratified by the popularity of the first edition of The Elements of Statistical Learning. This, along with the fast pace of research in the statistical learning field, motivated us to update our book with a second edition.

We have added four new chapters and updated some of the existing chapters. Because many readers are familiar with the layout of the first edition, we have tried to change it as little as possible. Here is a summary of the main changes:

Chapter and what’s new:

1. Introduction
2. Overview of Supervised Learning
3. Linear Methods for Regression: LAR algorithm and generalizations of the lasso
4. Linear Methods for Classification: Lasso path for logistic regression
5. Basis Expansions and Regularization: Additional illustrations of RKHS
6. Kernel Smoothing Methods
7. Model Assessment and Selection: Strengths and pitfalls of cross-validation
8. Model Inference and Averaging
9. Additive Models, Trees, and Related Methods
10. Boosting and Additive Trees: New example from ecology; some material split off to Chapter 16
11. Neural Networks: Bayesian neural nets and the NIPS 2003 challenge
12. Support Vector Machines and Flexible Discriminants: Path algorithm for SVM classifier
13. Prototype Methods and Nearest-Neighbors
14. Unsupervised Learning: Spectral clustering, kernel PCA, sparse PCA, non-negative matrix factorization, archetypal analysis, nonlinear dimension reduction, Google page rank algorithm, a direct approach to ICA
15. Random Forests: New chapter
16. Ensemble Learning: New chapter
17. Undirected Graphical Models: New chapter
18. High-Dimensional Problems: New chapter
Some further notes:
• Our first edition was unfriendly to colorblind readers; in particular, we tended to favor red/green contrasts which are particularly troublesome. We have changed the color palette in this edition to a large extent, replacing the above with an orange/blue contrast.
• We have changed the name of Chapter 6 from “Kernel Methods” to “Kernel Smoothing Methods”, to avoid confusion with the machine-learning kernel method that is discussed in the context of support vector machines (Chapter 11) and more generally in Chapters 5 and 14.
• In the first edition, the discussion of error-rate estimation in Chapter 7 was sloppy, as we did not clearly differentiate the notions of conditional error rates (conditional on the training set) and unconditional rates. We have fixed this in the new edition; the distinction is sketched below.
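A minimal sketch of that distinction, in notation consistent with the book’s Chapter 7 (our paraphrase, not a quotation from the text): writing T for the training set and L for the loss function, the conditional error rate of the fitted rule \hat{f} is

    Err_T = E_{X,Y}\left[ L(Y, \hat{f}(X)) \mid T \right],

the error on fresh data given this particular training set, while the unconditional (expected) rate averages over training sets as well:

    Err = E_T\left[ Err_T \right].

Cross-validation is naturally read as estimating Err_T for the data at hand, but it in fact tracks the unconditional quantity Err more closely, which is part of what the revised Chapter 7 discussion makes precise.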