Extracting Plausible Explanations of Anomalous Data

Abstract

We present a perspective on theory revision that characterizes the resulting revisions as explanations of anomalous data (i.e., data that contradict a given model). In addition, we emphasize the plausibility of these explanations rather than the performance of the revised model. We constructed an explanation generator, implementing (part of) John Stuart Mill's Methods of Induction, that divides the available data into meaningful subsets to better resolve the anomalies, and a domain expert judged the plausibility of the resulting explanations. We found that using relevant subsets of the data can yield plausible explanations that are not generated when all the data are used, and that identifying plausible explanations can help select among otherwise equally acceptable revisions.
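The abstract does not spell out the mechanism, but a rough reading of Mill's Method of Difference suggests a sketch like the following: split the data into a subset that exhibits the anomaly and one that does not, then report attributes whose values are uniform within each subset yet differ between them as candidate explanations. All names, attributes, and data here are hypothetical illustrations, not the report's actual implementation.

```python
# A minimal, hypothetical sketch of Mill's Method of Difference applied to
# anomalous data: attributes that take one value in every anomalous record
# and a different value in every normal record are offered as candidate
# explanations. Names and data are illustrative only.

from typing import Any


def method_of_difference(anomalous: list[dict[str, Any]],
                         normal: list[dict[str, Any]]) -> list[str]:
    """Return attributes constant within each subset but different across subsets."""
    candidates = []
    attributes = set(anomalous[0]) & set(normal[0])
    for attr in attributes:
        anomalous_values = {record[attr] for record in anomalous}
        normal_values = {record[attr] for record in normal}
        # Method of Difference: the attribute must be uniform in each subset
        # and take different values across the two subsets.
        if (len(anomalous_values) == 1 and len(normal_values) == 1
                and anomalous_values != normal_values):
            candidates.append(attr)
    return candidates


# Toy example: records that contradict a model versus records that fit it.
anomalous = [{"medium": "broth", "temp": "37C"}, {"medium": "broth", "temp": "30C"}]
normal = [{"medium": "agar", "temp": "37C"}, {"medium": "agar", "temp": "30C"}]
print(method_of_difference(anomalous, normal))  # ['medium']
```

Restricting the comparison to a relevant subset of the data, as the abstract describes, would amount to filtering the records passed to such a routine before the difference test is applied.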

Publication
Technical Report CS-TR-03-105, Computer Science Department, University of Pittsburgh.
Will Bridewell