
Rational Deep Machines: From Interpretable Classification to Controllable Generation

By Anirban Sarkar, NY, USA

Online (Zoom meeting)
Zoom meeting link: https://zoom.us/j/98677788515?pwd=blw45UJbL1FaFxhKCMfR9t4JasXXrF.1

Meeting ID: 986 7778 8515
Passcode: 765582 

Abstract 

Deep neural networks achieve remarkable performance yet remain opaque, a critical liability in high-stakes applications. This talk traces a research journey toward building machines that are not only accurate but also explainable, robust, and ultimately controllable. Beginning with gradient-based saliency methods, including the widely adopted Grad-CAM++ (WACV 2018), the work evolved to ground attributions in causal theory (ICML 2019), harden explanation maps against adversarial manipulation (AAAI 2021), and achieve adversarial robustness without adversarial training (NeurIPS 2021). A parallel line of work developed concept-based ante-hoc explanations (CVPR 2022). Recognizing that generative models raise equally pressing questions about opacity, the research has expanded into sequence generation, with early work beginning to dissect what these models learn about regulatory DNA. At CSHL's Koo Lab, this has yielded D3, a discrete diffusion framework for designing regulatory DNA sequences with tunable activity. Together, these contributions chart a unified path from interpretable classification to controllable, biologically grounded generation.
