Abstract
The present paper deals with the co-learnability of enumerable families $\mathcal{L}$ of uniformly recursive languages from positive data. This refers to the following scenario. A family $\mathcal{L}$ of target languages as well as a hypothesis space for it are specified. The co-learner is eventually fed all positive examples of an unknown target language L chosen from $\mathcal{L}$. The target language L is successfully co-learned if and only if the co-learner can definitely delete all but one possible hypothesis, and the remaining one has to correctly describe L. We investigate how the capabilities of co-learning depend on the choice of the hypothesis space, and compare co-learning to language learning in the limit from positive data. We distinguish between class-preserving learning ($\mathcal{L}$ has to be co-learned with respect to some suitably chosen enumeration of all and only the languages from $\mathcal{L}$), class-comprising learning ($\mathcal{L}$ has to be co-learned with respect to some hypothesis space containing at least all the languages from $\mathcal{L}$), and absolute co-learning ($\mathcal{L}$ has to be co-learned with respect to all class-preserving hypothesis spaces for $\mathcal{L}$). Our results are manifold. First, it is shown that co-learning is exactly as powerful as learning in the limit, provided the hypothesis space is appropriately chosen. However, while learning in the limit is insensitive to the particular choice of the hypothesis space, the power of co-learning crucially depends on it. Therefore we study the properties a hypothesis space should have in order to be suitable for co-learning. Finally, we derive sufficient conditions for absolute co-learnability and separate it from finite learning.
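The co-learning scenario described above can be simulated in a small toy setting. The sketch below makes simplifying assumptions the paper does not: languages are finite sets of naturals, the hypothesis space is a finite list, and the data stream is finite. The function name `co_learn` and the erasing rule (delete every hypothesis that misses a seen positive example) are illustrative choices, not the paper's construction.

```python
from typing import Iterable, List, Optional, Set


def co_learn(hypothesis_space: List[Set[int]],
             positive_data: Iterable[int]) -> Optional[int]:
    """Erase hypotheses contradicted by positive data; return the index of
    the sole survivor once every other hypothesis has been deleted."""
    alive = set(range(len(hypothesis_space)))
    seen: Set[int] = set()
    for example in positive_data:
        seen.add(example)
        # Delete every hypothesis that fails to cover all examples seen so far.
        alive = {i for i in alive if seen <= hypothesis_space[i]}
        if len(alive) == 1:
            return next(iter(alive))  # co-learning succeeded
    return None  # stream ended before a single hypothesis was isolated


# A toy class-preserving hypothesis space: an enumeration of exactly the
# three target languages, none of which contains another.
family = [{0, 1}, {0, 2}, {1, 2}]
target = family[1]                       # unknown to the co-learner
print(co_learn(family, sorted(target)))  # -> 1
```

Note that if one language in the family were a proper subset of another, this naive consistency-based erasing could never delete the superset from positive examples of the subset alone, so the strategy would fail; this is one concrete illustration of how the power of co-learning can hinge on the structure of the hypothesis space, echoing the abstract's point.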