| Abstract |
In recent years, libraries have generated large amounts of multimedia data consisting of images and text through the digitization of physical materials for preservation. When these materials are archived, librarians assign appropriate cataloging metadata to them, and automatic annotation is helpful for reducing the cost of this manual work. To this end, we propose a mapping system that links images and their associated text to Wikipedia entries as a replacement for manual annotation, targeting images and associated text from photo-sharing sites. Uploaded images are accompanied by descriptive labels of their contents that can be indexed for the catalogue. However, because users tag images freely, these user-assigned labels are often ambiguous: the label "albatross", for example, may refer to a type of bird or an aircraft. If such ambiguities are resolved, Wikipedia entries can be used for cataloging as an alternative to ontologies. To formalize this, we propose a task called image label disambiguation: given an image and target labels to be disambiguated, an appropriate Wikipedia page is selected for each label. We propose a hybrid approach to this task that makes use of both user tags as textual information and image features generated through image recognition. To evaluate the proposed task, we developed a freely available test collection containing 450 images and 2,280 ambiguous labels. The proposed method outperformed prevalent text-based approaches in terms of mean reciprocal rank, attaining a value of over 0.6 on both our collection and the ImageCLEF collection.
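The evaluation metric mentioned above, mean reciprocal rank (MRR), can be sketched as follows. This is a minimal illustration, not the authors' code: the candidate page titles and the helper name are hypothetical, and we assume each system returns a ranked list of candidate Wikipedia pages per ambiguous label.

```python
def mean_reciprocal_rank(rankings, gold):
    """rankings: one ranked list of candidate Wikipedia pages per label.
    gold: the correct page for each label, aligned with rankings."""
    total = 0.0
    for candidates, answer in zip(rankings, gold):
        for rank, page in enumerate(candidates, start=1):
            if page == answer:
                total += 1.0 / rank  # only the first correct hit counts
                break
    return total / len(rankings)

# Toy example with made-up candidate lists:
rankings = [
    ["Albatross (bird)", "Grumman HU-16 Albatross"],  # correct at rank 1
    ["Jaguar Cars", "Jaguar"],                        # correct at rank 2
]
gold = ["Albatross (bird)", "Jaguar"]
print(mean_reciprocal_rank(rankings, gold))  # (1/1 + 1/2) / 2 = 0.75
```

An MRR above 0.6, as reported in the abstract, thus means the correct Wikipedia page is typically found at rank 1 or 2.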