
TUTORIAL – FULL DAY SESSION
Bridging Language in Machines and Language in the Brain

Abstract : Can we obtain insights into language processing in the brain using language models? How is the information in deep learning models related to brain recordings? Can we improve language models with the help of brain recordings? Such questions can be tackled by studying brain recordings such as functional magnetic resonance imaging (fMRI). As a first step, the neuroscience community has contributed several large cognitive neuroscience datasets based on passive reading and listening of narrative stories. Encoding and decoding models built on recent advances in deep learning have opened new opportunities for modelling brain activity and for exploring the convergent representations underlying language comprehension in the human brain and in neural language models (LMs). Using encoding and decoding, what insights can we draw from recent, largely task-free neuroimaging datasets for theories of language and the brain? This tutorial will provide a working knowledge of small language models, popular naturalistic neuroscience datasets, and state-of-the-art methods for brain encoding and decoding.
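
To make the encoding idea above concrete, here is a minimal sketch (not the tutorial's own code) of a voxel-wise encoding model: language-model features of the stimulus are mapped to fMRI responses with ridge regression and evaluated by prediction correlation on held-out data. The arrays, dimensions, and hyperparameters are illustrative assumptions, and the example uses NumPy and scikit-learn, which an Anaconda installation typically provides.

    # Minimal sketch of a brain encoding model (placeholder data; all shapes are assumptions).
    import numpy as np
    from sklearn.linear_model import Ridge
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    n_trs, n_features, n_voxels = 500, 768, 1000      # hypothetical sizes
    X = rng.standard_normal((n_trs, n_features))      # LM features per fMRI time point (placeholder)
    Y = rng.standard_normal((n_trs, n_voxels))        # fMRI responses per time point (placeholder)

    X_train, X_test, Y_train, Y_test = train_test_split(X, Y, test_size=0.2, random_state=0)

    encoder = Ridge(alpha=1.0)                        # one regularized linear map per voxel
    encoder.fit(X_train, Y_train)
    Y_pred = encoder.predict(X_test)

    # Voxel-wise Pearson correlation between measured and predicted responses
    corr = [np.corrcoef(Y_test[:, v], Y_pred[:, v])[0, 1] for v in range(n_voxels)]
    print("mean prediction correlation:", np.mean(corr))

Decoding reverses the mapping: brain responses are used to predict the stimulus features.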

Outline of the Tutorial :

Time | Speaker | Topic
10.00 AM – 11.30 AM | Prof. Bapi Raju S | Neuro-AI alignment: Introduction
11.30 AM – 11.45 AM | – | Coffee Break
11.45 AM – 1.00 PM | Dr. Subba Reddy Oota | Similarities and differences between language processing in brains and machines
1.00 PM – 2.30 PM | – | Lunch Break
2.30 PM – 3.45 PM | Dr. Mounika Marreddy | Recent trends in LLMs
3.45 PM – 4.00 PM | – | Coffee Break
4.00 PM – 5.30 PM | Dr. Oota & Dr. Marreddy | Hands-on session

Expected Length of the Tutorial : 7.5 hours

Target Audience and Prerequisites : The tutorial should benefit researchers from academia and industry, particularly those with an interest in how deep learning methods can be applied to obtain insights into language processing in the brain.

Whether any hands-on proposed : Yes. The hands-on session explores how the brain processes semantics while listening to stories, using the BERT model to obtain insights into brain activity (a short sketch follows the library list below).

Required Libraries : Python 3.7 or higher, Anaconda, Transformers, and Nilearn
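
Below is a minimal sketch of the kind of feature extraction used in the hands-on session: sentence-level BERT features obtained with the Transformers library (with PyTorch as the backend, an assumption; the actual notebooks and stimuli may differ). Vectors like these can serve as the stimulus features for an encoding model such as the one sketched under the abstract.

    # Minimal sketch: sentence-level BERT features via the Transformers library
    # (illustrative only; the hands-on notebooks may use a different setup).
    import torch
    from transformers import BertTokenizer, BertModel

    tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
    model = BertModel.from_pretrained("bert-base-uncased")
    model.eval()

    sentence = "The boy listened to the story."       # placeholder stimulus text
    inputs = tokenizer(sentence, return_tensors="pt")

    with torch.no_grad():
        outputs = model(**inputs)

    # Mean-pool the final hidden layer to obtain one feature vector per sentence;
    # stacking such vectors over the stimulus gives the feature matrix of an encoding model.
    features = outputs.last_hidden_state.mean(dim=1)
    print(features.shape)                             # torch.Size([1, 768])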

Instructors :

Prof. Bapi Raju S (Senior Member, IEEE) received a B.E. degree in electrical engineering from Osmania University, Hyderabad, India, in 1983, and an M.S. degree in biomedical engineering and a Ph.D. degree in mathematical sciences (computer science) from The University of Texas at Arlington, USA. He worked with BHEL, India, the University of Plymouth, U.K., and the ATR Research Laboratory, Kyoto, Japan, before joining the University of Hyderabad, India, in 1999. He is currently a Professor with the Cognitive Science Laboratory, Kohli Centre for Intelligent Systems (KCIS), IIIT Hyderabad, India. His research interests include the practical applications of various neural network and machine learning techniques, investigation of biological neural architectures, neuroimaging, and cognitive modeling. He is a member of the Society for Neuroscience, USA, the Cognitive Science Society, USA, and the Association for Computing Machinery (ACM).

Email: raju.bapi@iiit.ac.in

Dr. Subba Reddy Oota is a Postdoctoral Researcher at TU Berlin, Germany, under the supervision of Prof. Fatma Deniz. He defended his doctoral thesis at Inria, France, under the supervision of Dr. Xavier Hinaut and Prof. Frédéric Alexandre. He is also a visiting scholar at the Max Planck Institute for Software Systems, Germany, under the supervision of Dr. Mariya Toneva. Previously, he was a Master's student at IIIT Hyderabad, India. His research interests are: computational neuroscience, bridging AI & neuroscience, language analysis in the brain, brain encoding & decoding (fMRI, MEG, EEG), natural language processing, and deep learning.

Dr. Mounika Marreddy is a Postdoctoral Researcher at the University of Bonn, Germany, under the supervision of Prof. Lucie Flek. She completed her PhD at IIIT Hyderabad, India, under the supervision of Dr. Radhika Mamidi. Her postdoctoral research primarily focuses on Large Language Models as human-like annotators and their ability to capture the fluctuation of opinions among users in social media conversations. She utilizes both closed- and open-source Large Language Models in her research. Her research interests are: LLMs as human-like annotators, user-level opinion fluctuations, black-box NLP (interpretability of representations, robustness, and behavioral interpretability), and low-resource and multilingual NLP.
