Rational Deep Machines: From Interpretable Classification to Controllable Generation
Online (Zoom meeting)
Zoom meeting link: https://zoom.us/j/98677788515?pwd=blw45UJbL1FaFxhKCMfR9t4JasXXrF.1
Meeting ID: 986 7778 8515
Passcode: 765582
Abstract
Deep neural networks achieve remarkable performance yet remain opaque, a critical liability in high-stakes applications. This talk traces a research journey toward building machines that are not only accurate but also explainable, robust, and ultimately controllable. Beginning with gradient-based saliency methods, including the widely adopted Grad-CAM++ (WACV 2018), the work evolved to ground attributions in causal theory (ICML 2019), to harden explanation maps against adversarial manipulation (AAAI 2021), and to achieve adversarial robustness without adversarial training (NeurIPS 2021). A parallel line of work developed concept-based ante-hoc explanations (CVPR 2022). Recognizing that generative models raise equally pressing questions of opacity, the research has expanded into sequence generation, with early work dissecting what these models learn about regulatory DNA. At CSHL's Koo Lab, this has yielded D3, a discrete diffusion framework for designing regulatory DNA sequences with tunable activity. Together, these contributions chart a unified path from interpretable classification to controllable, biologically grounded generation.