Classifier Labels as Language Grounding for Explanations
14 pages • Published: September 17, 2018

Abstract
Advances in state-of-the-art techniques, including convolutional neural networks (CNNs), have led to improved perception in autonomous robots. However, these new techniques make a robot's decision-making process obscure, even for experts. Our goal is to automatically generate natural language explanations of a robot's perception-based inferences in order to help people understand which features contribute to these classification predictions. Generating natural language explanations is particularly challenging for perception and other high-dimensional classification tasks because (1) we lack a mapping from features to language and (2) there are a large number of features that could be explained. We present a novel approach to generating explanations that first finds the important features that most affect the classification prediction, and then uses a secondary detector, which can identify and label multiple parts of the features, to label only those important features. Those labels serve as the natural language groundings used in our explanations. We demonstrate our explanation algorithm on the floor-identification classifier of our mobile service robot.

Keyphrases: dependable robots, human robot interaction, interpretability, language and vision

In: Daniel Lee, Alexander Steen and Toby Walsh (editors). GCAI-2018. 4th Global Conference on Artificial Intelligence, vol 55, pages 148-161.
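The two-step approach described in the abstract can be sketched as follows. This is a minimal illustrative toy, not the authors' implementation: the linear classifier, the leave-one-feature-out importance measure, and the feature names are all assumptions made for the sketch, and the "secondary detector" is stubbed as a list of precomputed labels.

```python
# Step 1: find the features that most affect the classification prediction.
# Step 2: label only those features (here, via a stubbed secondary detector)
#         and use the labels as language groundings in a templated explanation.

def classify(features, weights):
    """Toy scoring classifier: weighted sum of feature activations."""
    return sum(w * f for w, f in zip(weights, features))

def important_features(features, weights, top_k=2):
    """Rank features by how much zeroing each one changes the score."""
    base = classify(features, weights)
    deltas = []
    for i in range(len(features)):
        perturbed = list(features)
        perturbed[i] = 0.0
        deltas.append((abs(base - classify(perturbed, weights)), i))
    return [i for _, i in sorted(deltas, reverse=True)[:top_k]]

def explain(features, weights, detector_labels, prediction):
    """Ground only the important features in language and fill a template."""
    idxs = important_features(features, weights)
    labels = [detector_labels[i] for i in idxs]
    return (f"I predicted '{prediction}' mainly because I saw "
            + " and ".join(labels) + ".")

# Hypothetical floor-identification example in the spirit of the paper's demo.
features = [0.9, 0.1, 0.7, 0.2]   # activations of four image regions
weights = [2.0, 0.5, 1.5, 0.3]    # toy classifier weights
labels = ["a red carpet", "a window", "an elevator sign", "a chair"]
print(explain(features, weights, labels, "7th floor"))
# → I predicted '7th floor' mainly because I saw a red carpet and an elevator sign.
```

The key design point the sketch captures is that the detector labels every region it can, but the explanation mentions only the regions that actually drove the prediction, keeping the generated sentence short and relevant.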