Dense Associative Memory Through the Lens of Random Features
Benjamin Hoover, Duen Horng Chau, et al.
NeurIPS 2024
Neural sequence-to-sequence models have proven to be accurate and robust for many sequence prediction tasks, and have become the standard approach for automatic translation of text. The models work with a five-stage blackbox pipeline that begins with encoding a source sequence to a vector space and then decoding out to a new target sequence. This process is now standard, but like many deep learning methods remains quite difficult to understand or debug. In this work, we present a visual analysis tool that allows interaction and 'what if'-style exploration of trained sequence-to-sequence models through each stage of the translation process. The aim is to identify which patterns have been learned, to detect model errors, and to probe the model with counterfactual scenarios. We demonstrate the utility of our tool through several real-world sequence-to-sequence use cases on large-scale models.
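For readers unfamiliar with the encode-then-decode pipeline the abstract refers to, the following is a minimal sketch, not code from the paper, written here in PyTorch with an assumed toy class name (TinySeq2Seq) and hypothetical vocabulary sizes, showing how a source sequence is compressed into a vector representation and then unrolled into a target sequence.

import torch
import torch.nn as nn

class TinySeq2Seq(nn.Module):
    def __init__(self, src_vocab, tgt_vocab, hidden=64):
        super().__init__()
        self.src_emb = nn.Embedding(src_vocab, hidden)
        self.tgt_emb = nn.Embedding(tgt_vocab, hidden)
        self.encoder = nn.GRU(hidden, hidden, batch_first=True)
        self.decoder = nn.GRU(hidden, hidden, batch_first=True)
        self.out = nn.Linear(hidden, tgt_vocab)

    def forward(self, src, tgt):
        # Encode: compress the source token sequence into a hidden state
        # (the "vector space" the abstract mentions).
        _, state = self.encoder(self.src_emb(src))
        # Decode: unroll over the target sequence conditioned on that state.
        dec_out, _ = self.decoder(self.tgt_emb(tgt), state)
        # Per-step logits over the target vocabulary.
        return self.out(dec_out)

# Usage with random token ids standing in for a (source, target) sentence pair.
model = TinySeq2Seq(src_vocab=100, tgt_vocab=120)
logits = model(torch.randint(0, 100, (1, 7)), torch.randint(0, 120, (1, 9)))
print(logits.shape)  # torch.Size([1, 9, 120])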
Philippe Schwaller, Benjamin Hoover, et al.
Science Advances
Emma Beauxis-Aussalet, Michael Behrisch, et al.
IEEE CG&A
Daniel Karl I. Weidele, Gaetano Rossiello, et al.
ISWC 2023