Title | On Error-Rate Estimation in Nonparametric Classification
---|---
Authors | Ghosh, Anil K.; Hall, Peter
Journal | Statistica Sinica
Volume/Issue | 18:3, July 2008
Pages | 1081-1100
Classification | 319.5
Keywords | Bayes risk; Bootstrap; Cross-validation; Discrimination; Error rate; Kernel methods; Nonparametric density estimation; Risk
Language | English
Abstract | There is a substantial literature on the estimation of error rate, or risk, for nonparametric classifiers. Error-rate estimation has at least two purposes: accurately describing the error rate, and estimating the tuning parameters that permit the error rate to be minimised. In the light of work on related problems in nonparametric statistics, it is attractive to argue that both problems admit the same solution. Indeed, methods for optimising the point-estimation performance of nonparametric curve estimators often start from an accurate estimator of error. However, we argue in this paper that accurate estimators of error rate in classification tend to give poor results when used to choose tuning parameters, and vice versa. Concise theory is used to illustrate this point in the case of cross-validation (which gives very accurate estimators of error rate, but poor estimators of tuning parameters) and the smoothed bootstrap (where error-rate estimation is poor but tuning-parameter approximations are particularly good). The theory is readily extended to other methods, for example to the 0.632+ bootstrap approach, which gives good estimators of error rate but poor estimators of tuning parameters. Reasons for the apparent contradiction are given, and numerical results are used to point to the practical implications of the theory.
The abstracts in this system are taken from the published articles themselves.
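The procedure the abstract discusses, using cross-validation both to estimate a classifier's error rate and to choose a tuning parameter, can be made concrete with a minimal sketch. The code below is not from the paper: it is an illustrative leave-one-out cross-validation estimate of error rate for a two-class kernel-density classifier, with the CV-chosen bandwidth taken as the candidate minimising that estimate. The data, bandwidth grid, and function names are all assumptions for the example.

```python
import math
import random

def gaussian_kernel(u):
    """Standard Gaussian kernel."""
    return math.exp(-0.5 * u * u) / math.sqrt(2.0 * math.pi)

def kde(x, sample, h):
    """Kernel density estimate at x from `sample` with bandwidth h."""
    return sum(gaussian_kernel((x - s) / h) for s in sample) / (len(sample) * h)

def loo_cv_error(class0, class1, h):
    """Leave-one-out CV estimate of the classifier's error rate at bandwidth h.

    Each point is classified by comparing the two kernel density estimates,
    with the point itself held out of its own class's estimate.
    """
    errors = 0
    for i, x in enumerate(class0):
        rest = class0[:i] + class0[i + 1:]
        if kde(x, rest, h) < kde(x, class1, h):  # misclassified into class 1
            errors += 1
    for i, x in enumerate(class1):
        rest = class1[:i] + class1[i + 1:]
        if kde(x, rest, h) <= kde(x, class0, h):  # misclassified into class 0
            errors += 1
    return errors / (len(class0) + len(class1))

# Hypothetical data: two univariate Gaussian classes with separated means.
random.seed(0)
class0 = [random.gauss(0.0, 1.0) for _ in range(50)]
class1 = [random.gauss(2.0, 1.0) for _ in range(50)]

# CV error-rate curve over a small bandwidth grid; its argmin is the
# cross-validation choice of tuning parameter.
rates = {h: loo_cv_error(class0, class1, h) for h in (0.2, 0.5, 1.0)}
h_cv = min(rates, key=rates.get)
```

The paper's point is precisely that the same quantity `loo_cv_error` serves both roles with very different quality: as an error-rate estimate it is accurate, but its argmin over bandwidths is a poor choice of tuning parameter.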