Mechanism for feature learning in neural networks and backpropagation-free machine learning models
Seminar Room No. 24, 1st floor, Main Building.
Abstract
Understanding how neural networks learn features, that is, relevant patterns in data used for prediction, is necessary for their reliable use in technological and scientific applications. In this work, we present a unifying mathematical mechanism, known as the average gradient outer product (AGOP), that characterizes feature learning in neural networks. We provide empirical evidence that AGOP captures features learned by various neural network architectures, including transformer-based language models, convolutional networks, multilayer perceptrons, and recurrent neural networks. Moreover, we demonstrate that AGOP, which is backpropagation-free, enables feature learning in machine learning models, such as kernel machines, that a priori cannot identify task-specific features. Overall, we establish a fundamental mechanism that captures feature learning in neural networks and enables feature learning in general machine learning models.
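For context, the AGOP of a trained predictor f is the data average of the outer product of its input gradient with itself, M = (1/n) * sum_i grad f(x_i) grad f(x_i)^T; its top eigenvectors indicate the input directions the model relies on. The following minimal Python sketch is illustrative only and not taken from the talk: the toy single-index predictor and the helper name agop are hypothetical, chosen to show the top eigenvector of the AGOP recovering the relevant direction.

    # Minimal AGOP sketch (illustrative, not the speaker's code).
    import numpy as np

    def agop(grad_f, X):
        """Average outer product of input gradients over the rows of X (n x d)."""
        G = np.stack([grad_f(x) for x in X])   # n x d matrix of per-example gradients
        return G.T @ G / len(X)                # d x d AGOP matrix

    # Toy predictor f(x) = (w . x)^2, whose gradient is 2 (w . x) w,
    # so the AGOP is rank one and aligned with the single relevant direction w.
    rng = np.random.default_rng(0)
    d, n = 5, 1000
    w = rng.normal(size=d)
    grad_f = lambda x: 2.0 * (w @ x) * w
    X = rng.normal(size=(n, d))

    M = agop(grad_f, X)
    # Top eigenvector of M recovers w up to sign and scale (alignment close to 1).
    eigvals, eigvecs = np.linalg.eigh(M)
    print(np.abs(eigvecs[:, -1] @ w) / np.linalg.norm(w))

In the backpropagation-free setting the abstract alludes to, a matrix of this form, estimated from a kernel machine's input gradients, can be used to reweight input coordinates, giving such models a way to learn task-specific features.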
Bio: Parthe Pandit is an Assistant Professor with the Center for Machine Intelligence and Data Science (C-MInDS) at IIT Bombay, where he also holds the Thakur Family Chair. He was a Simons Postdoctoral Fellow at the Halıcıoğlu Data Science Institute at UC San Diego and obtained his Ph.D. from UCLA.