<conference paper>
Using Text Classification to Improve Annotation Quality by Improving Annotator Consistency

Abstract: This paper presents results of experiments in which annotators were asked to selectively reexamine their decisions when those decisions seemed inconsistent. The annotation task was binary topic classification. To operationalize the concept of annotation consistency, a text classifier was trained on all manual annotations made during a complete first pass and then used to automatically recode every document. Annotators were then asked to perform a second manual pass, limiting their attention to cases in which their first annotation disagreed with the text classifier. On average across three annotators, each working independently, 11% of first pass annotations were reconsidered, 46% of reconsidered annotations were changed in the second pass, and 71% of changed annotations agreed with decisions made independently by an experienced fourth annotator. The net result was that for an 11% average increase in annotation cost it was possible to increase overall chance corrected agreement with the annotation decisions of an experienced annotator (as measured by kappa) from 0.70 to 0.75.
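The disagreement-flagging step the abstract describes can be sketched as follows. This is a minimal illustration, not the paper's implementation: the paper does not specify a classifier, so scikit-learn's TF-IDF features with logistic regression stand in for it, and the function name `flag_for_review` and the toy data are assumptions made for this example.

```python
# Sketch of the consistency check described in the abstract:
# train a classifier on all first-pass labels, automatically recode
# every document, and flag documents where annotator and classifier
# disagree for a second manual pass.
# Classifier choice (TF-IDF + logistic regression) is an assumption,
# not taken from the paper.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline


def flag_for_review(docs, first_pass_labels):
    """Return indices of documents whose first-pass label disagrees
    with a classifier trained on the complete first pass."""
    clf = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
    clf.fit(docs, first_pass_labels)      # train on all first-pass annotations
    recoded = clf.predict(docs)           # automatically recode every document
    return [i for i, (human, model) in enumerate(zip(first_pass_labels, recoded))
            if human != model]
```

In the study, only the flagged indices were shown to annotators in the second pass, which is why the reconsideration cost stayed at roughly 11% of the first-pass effort.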


