We demonstrate an interactive visualization system that promotes the interpretability of convolutional neural networks (CNNs). Interpretation of deep learning models acts at the interface between increasingly complex model architectures and model architects, providing an understanding of how a model operates, where it fails, and why it succeeds. Based on preliminary expert interviews and a careful literature review, we design the system to comprehensively support architects across four visual dimensions.