<Technical Report>
Co-Learning of Recursive Languages from Positive Data

Author
Language
Publisher
Date Issued
Source Title
Publication Type
Access Rights
Related DOI
Related URI
Related Information
Abstract The present paper deals with the co-learnability of enumerable families ℒ of uniformly recursive languages from positive data. This refers to the following scenario. A family ℒ of target languages, as well as a hypothesis space for it, is specified. The co-learner is eventually fed all positive examples of an unknown target language L chosen from ℒ. The target language L is successfully co-learned if and only if the co-learner can definitely delete all but one of the possible hypotheses, and the remaining one correctly describes L. We investigate the capabilities of co-learning depending on the choice of the hypothesis space, and compare it to language learning in the limit from positive data. We distinguish between class-preserving co-learning (ℒ has to be co-learned with respect to some suitably chosen enumeration of all and only the languages from ℒ), class-comprising co-learning (ℒ has to be co-learned with respect to some hypothesis space containing at least all the languages from ℒ), and absolute co-learning (ℒ has to be co-learned with respect to every class-preserving hypothesis space for ℒ). Our results are manifold. First, it is shown that co-learning is exactly as powerful as learning in the limit, provided the hypothesis space is appropriately chosen. However, while learning in the limit is insensitive to the particular choice of the hypothesis space, the power of co-learning crucially depends on it. Therefore we study the properties a hypothesis space should have in order to be suitable for co-learning. Finally, we derive sufficient conditions for absolute co-learnability, and separate it from finite learning.
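The co-learning protocol sketched in the abstract can be illustrated with a toy elimination loop. This is only a minimal sketch under strong simplifying assumptions: the hypothesis space here is a small hard-coded list of finite sets, and deletion is driven purely by membership of observed positive examples, whereas the paper treats enumerable families of recursive languages and allows arbitrary computable deletion strategies. All names below (`co_learn`, `space`) are hypothetical.

```python
# Toy sketch of co-learning by hypothesis elimination (assumption: a finite,
# hard-coded hypothesis space of finite languages; the paper's setting is
# far more general).

def co_learn(hypothesis_space, positive_data):
    """Definitely delete hypotheses contradicted by the positive data;
    succeed when exactly one hypothesis index remains."""
    alive = set(range(len(hypothesis_space)))
    for x in positive_data:
        # A hypothesis missing an observed positive example cannot
        # describe the target, so it may be definitely deleted.
        alive = {i for i in alive if x in hypothesis_space[i]}
        if len(alive) == 1:
            return alive.pop()  # index of the co-learned language
    return None  # data exhausted before convergence (toy setting only)

# Hypothetical class-preserving hypothesis space for three finite languages.
space = [{0}, {0, 1}, {0, 1, 2}]
print(co_learn(space, [0, 1, 2]))  # the target {0, 1, 2} has index 2
```

Note how sensitive the outcome is to the hypothesis space: if `space` contained two distinct hypotheses describing the same language, no positive data could ever delete one of them, mirroring the paper's point that the power of co-learning crucially depends on the chosen hypothesis space.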

Full-Text Files

pdf: 110.ps.tar, 247 KB, 297
tgz: 110.ps, 92.2 KB, 14

Details

Record ID
Peer Review
Type
Date Registered 2009.04.22
Date Updated 2018.08.31
