## [Technical Report] Refutably Probably Approximately Correct Learning

Affiliation: Research Institute of Fundamental Information Science, Kyushu University. Date: 1994-04-22. Report No. 87. Accepted Manuscript, open access.

**Abstract.** We propose a notion of refutably PAC learnability, which formalizes the refutability of hypothesis spaces in the PAC learning model. Intuitively, the refutable PAC learnability of a concept class F requires that the learning algorithm refute F with high probability if the target concept cannot be approximated by any concept in F with respect to the underlying probability distribution. We give a general upper bound of $O\left((1/\varepsilon + 1/\varepsilon')\log(|F_n|/\delta)\right)$ on the number of examples required for refutably PAC learning of F. Here, $\varepsilon$ and $\delta$ are the standard accuracy and confidence parameters, and $\varepsilon'$ is the refutation accuracy. We also define strongly refutable PAC learnability by introducing a refutation threshold. We prove a general upper bound of $O\left((1/\varepsilon^2 + 1/\varepsilon'^2)\log(|F_n|/\delta)\right)$ for strongly refutably PAC learning of F. These upper bounds reveal that both refutably PAC learnability and strongly refutably PAC learnability are equivalent to standard PAC learnability within the polynomial size restriction.
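The two sample-size bounds in the abstract can be sketched numerically. The constant factor hidden by the O-notation is not given in the abstract, so the parameter `c = 1.0` below is purely illustrative, and the function names are hypothetical helpers, not part of the report:

```python
import math

def refutable_pac_bound(eps, eps_ref, class_size, delta, c=1.0):
    """Illustrative sample count from O((1/eps + 1/eps') * log(|F_n|/delta)).

    The constant c is hypothetical; the report's abstract hides it in O(.).
    """
    return math.ceil(c * (1.0 / eps + 1.0 / eps_ref)
                     * math.log(class_size / delta))

def strongly_refutable_pac_bound(eps, eps_ref, class_size, delta, c=1.0):
    """Illustrative sample count from O((1/eps^2 + 1/eps'^2) * log(|F_n|/delta))."""
    return math.ceil(c * (1.0 / eps**2 + 1.0 / eps_ref**2)
                     * math.log(class_size / delta))

# Example: eps = eps' = 0.1, |F_n| = 1000, delta = 0.05
print(refutable_pac_bound(0.1, 0.1, 1000, 0.05))           # -> 199
print(strongly_refutable_pac_bound(0.1, 0.1, 1000, 0.05))  # -> 1981
```

As the example shows, tightening either accuracy parameter in the strongly refutable setting costs quadratically more examples, while in the plain refutable setting the cost grows only linearly.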

### Full Text

rifis-tr-87 (PDF, 580 KB)

### Details

Record ID: 3184. Not peer-reviewed. RIFIS Technical Report, No. 87, pp. 1-7. http://www.i.kyushu-u.ac.jp/research/report.html. Also appeared in Proc. 5th Intern. Workshop on Algorithmic Learning Theory. Type: Technical Report. Registered: 2009.04.22; last updated: 2020.02.10.