Abstract: The Lucid methods described by Olah et al. (2018) provide a way to inspect the inner workings of neural networks trained on image classification tasks using feature visualization. Such methods have generally been applied to networks trained on visually rich, large-scale image datasets like ImageNet, whose visual diversity enables them to produce enticing feature visualizations. To investigate these methods further, we applied them to classifiers trained to perform the much simpler (in terms of dataset size and visual richness), yet challenging, task of distinguishing between different kinds of white blood cells in microscope images. Such a task makes generating useful feature visualizations difficult, as the discriminative features are inherently hard to identify and interpret. We address this by presenting the "Illuminated Decision Tree" approach, in which we use a neural network trained on the task as a feature extractor, learn a decision tree based on these features, and provide Lucid visualizations for each node in the tree. We demonstrate our approach with several examples, showing how it could be useful both in model development and debugging, and when explaining model outputs to non-experts.
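The pipeline the abstract describes (network as feature extractor, decision tree fit on those features, one Lucid visualization per split) can be sketched as follows. This is a minimal illustration under stated assumptions: the synthetic activations, the number of channels, and the class count are placeholders, not the paper's actual data or architecture, and the Lucid visualization step is indicated only by the channel indices a real implementation would visualize.

```python
# Sketch of the "Illuminated Decision Tree" idea: penultimate-layer
# activations of a trained classifier act as features, a shallow decision
# tree is learned on them, and each split channel would then be rendered
# with Lucid's feature visualization. All data below is synthetic.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)

# Placeholder for CNN penultimate-layer activations: 200 cell images,
# 64 channels (in the paper these would come from the trained network).
features = rng.normal(size=(200, 64))
labels = rng.integers(0, 4, size=200)  # e.g. 4 white-blood-cell classes

# A shallow tree keeps the explanation small enough to illuminate.
tree = DecisionTreeClassifier(max_depth=3, random_state=0)
tree.fit(features, labels)

# Each internal node splits on one activation channel; these channel
# indices are what a Lucid channel objective would visualize per node.
split_channels = tree.tree_.feature[tree.tree_.feature >= 0]
print(sorted(set(split_channels.tolist())))
```

The key design point is that interpretability comes from the tree's structure: instead of explaining the full network, each decision is tied to a single learned feature, which can be shown to a non-expert as an image.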

DOI: https://dx.doi.org/10.5244/C.33.335
Comments: Presented at BMVC 2019: Workshop on Interpretable and Explainable Machine Vision, Cardiff, UK.
Source: Paper (Local PDF): 1909.05644.pdf
Source: Paper (arXiv): arXiv:1909.05644v1