Tag | Ind1 | Ind2 | Content
---|---|---|---|
000 | | | 05153cam a2200745 i 4500
001 | | | 300478243
003 | | | OCoLC
005 | | | 20240902170716.0
008 | | | 090130s2009 nyuad b 001 0 eng
010 | | | _a2008941148
015 | | | _a08,N30,0597 _2dnb
020 | | | _a9780387848570
020 | | | _a0387848576
020 | | | _z9780387848587 _q(electronic)
020 | | | _z0387848584 _q(electronic)
020 | | | _a9780387848846
020 | | | _a0387848843
024 | 3 | | _a9780387848570
035 | | | _a(OCoLC)300478243 _z(OCoLC)717787914 _z(OCoLC)754964653 _z(OCoLC)889892141 _z(OCoLC)1121619059 _z(OCoLC)1166882354 _z(OCoLC)1170976193
040 | | | _aNUI _beng _erda _cNUI _dYDXCP _dCTB _dCDX _dBWX _dIXA _dOHX _dOCLCQ _dOCL _dUBA _dSNK _dAUW _dDLC _dHEBIS _dDEBBG _dOCL _dDEBSZ _dCHRRO _dGZM _dMYG _dALAUL _dUKMGB _dOCLCQ _dOHS _dFDA _dOCLCF _dOCLCQ _dOCLCO _dKLH _dOCLCQ _dUSU _dEYM _dTSC _dOCLCQ _dTFW _dOCLCO _dNJR _dDHA _dOCLCA _dOCLCQ _dGILDS _dOCLCO _dGZH _dZLM _dIPL _dOCLCO _dJVH _dOCLCO _dMNI _dOCLCO _dMST _dOCLCO _dGZN _dOCLCO _dCAI _dOCLCO _dSNN _dOCLCQ _dOCLCO _dIPS _dUKUOY _dOCLCA _dYOU _dOCLCQ _dHUELT _dOCLCO _dAZU _dNZHMA _dOCLCA _dOCLCQ _dOCLCO _dOCLCA _dAZDAC _dOCLCA _dTXHLS _dOCLCO _dIL4J6 _dOCLCO _dANO _dOCLCO _dVI# _dOCL _dFJD _dOCLCO _dOCLCL
050 | | 4 | _aQ325.75 _b.H37 2009
060 | | 4 | _aQ325.75 _b.H37 2009
072 | | 7 | _as1se _2rero
082 | 0 | 4 | _a006.3122 HAS 2009 _223/eng/20221220
100 | 1 | | _aHastie, Trevor, _eauthor.
245 | 1 | 4 | _aThe elements of statistical learning : _bdata mining, inference, and prediction / _cTrevor Hastie, Robert Tibshirani, Jerome Friedman
250 | | | _aSecond edition
264 | | 1 | _aNew York : _bSpringer, _c[2009]
264 | | 4 | _c©2009
300 | | | _axxii, 745 pages : _billustrations (some color), charts ; _c24 cm
336 | | | _atext _btxt _2rdacontent
337 | | | _aunmediated _bn _2rdamedia
338 | | | _avolume _bnc _2rdacarrier
340 | | | _gmonochrome _2rdacc
340 | | | _gpolychrome _2rdacc
340 | | | _2rdaill
490 | 1 | | _aSpringer series in statistics, _x0172-7397
504 | | | _aIncludes bibliographical references (pages 699-727) and indexes
505 | 0 | 0 | _g1. _tIntroduction -- _g2. _tOverview of supervised learning -- _g3. _tLinear methods for regression -- _g4. _tLinear methods for classification -- _g5. _tBasis expansions and regularization -- _g6. _tKernel smoothing methods -- _g7. _tModel assessment and selection -- _g8. _tModel inference and averaging -- _g9. _tAdditive models, trees, and related methods -- _g10. _tBoosting and additive trees -- _g11. _tNeural networks -- _g12. _tSupport vector machines and flexible discriminants -- _g13. _tPrototype methods and nearest-neighbors -- _g14. _tUnsupervised learning -- _g15. _tRandom forests -- _g16. _tEnsemble learning -- _g17. _tUndirected graphical models -- _g18. _tHigh-dimensional problems: p >> N
520 | 1 | | _a"During the past decade there has been an explosion in computation and information technology. With it have come vast amounts of data in a variety of fields such as medicine, biology, finance, and marketing. The challenge of understanding these data has led to the development of new tools in the field of statistics, and spawned new areas such as data mining, machine learning, and bioinformatics. Many of these tools have common underpinnings but are often expressed with different terminology. This book describes the important ideas in these areas in a common conceptual framework. While the approach is statistical, the emphasis is on concepts rather than mathematics. Many examples are given, with a liberal use of color graphics. It is a valuable resource for statisticians and anyone interested in data mining in science or industry. The book's coverage is broad, from supervised learning (prediction) to unsupervised learning. The many topics include neural networks, support vector machines, classification trees and boosting, the first comprehensive treatment of this topic in any book. This major new edition features many topics not covered in the original, including graphical models, random forests, ensemble methods, least angle regression and path algorithms for the lasso, non-negative matrix factorization, and spectral clustering. There is also a chapter on methods for "wide" data (p bigger than n), including multiple testing and false discovery rates."--Publisher's description
650 | | 0 | _aSupervised learning (Machine learning)
650 | | 0 | _aElectronic data processing
650 | | 0 | _aStatistics
650 | | 0 | _aBiology _xData processing
650 | | 0 | _aComputational biology
650 | | 0 | _aMathematics _xData processing
650 | | 0 | _aData mining
650 | | 0 | _aArtificial intelligence
650 | | 0 | _aLearning
650 | 1 | 2 | _aArtificial Intelligence
650 | 2 | 2 | _aAlgorithms
650 | 2 | 2 | _aComputing Methodologies
650 | 2 | 2 | _aLearning
650 | 2 | 2 | _aStatistics as Topic
650 | | 2 | _aComputational Biology
650 | | 2 | _aData Mining
655 | | 2 | _aStatistics
655 | | 7 | _aStatistik. _2gnd
655 | | 7 | _aStatistics. _2lcgft
700 | 1 | | _aTibshirani, Robert, _eauthor.
700 | 1 | | _aFriedman, J. H. _q(Jerome H.), _eauthor. _1https://id.oclc.org/worldcat/entity/E39PBJtxcwfDT9wrQ8yjBhhPwC
830 | | 0 | _aSpringer series in statistics
942 | | | _2ddc _cBK _n0
999 | | | _c415 _d415