Special Session on
Explainable AI in Healthcare: Progress and Challenges

Aim and Scope

The integration of artificial intelligence into healthcare has led to significant improvements in diagnostics, treatment planning, and patient outcomes. However, many AI models, particularly deep learning systems, operate as opaque “black boxes,” limiting their acceptance in high-stakes medical environments. Explainability is therefore not merely a desirable property but a fundamental requirement: clinicians, regulators, and patients must be able to trust, interpret, and validate AI-driven decisions. This session addresses the urgent need for transparency, accountability, and ethical compliance in medical AI systems.

This session will cover a broad range of topics, including but not limited to:

  • Methods and techniques for interpretable and explainable AI in healthcare
  • Evaluation metrics for explainability and trustworthiness
  • Case studies of XAI applications in clinical decision support, medical imaging, and personalized medicine
  • Human-AI interaction and usability in healthcare settings
  • Regulatory, ethical, and legal considerations surrounding explainable AI
  • Trade-offs between model performance and interpretability
  • Bias detection, fairness, and robustness in healthcare AI systems

Paper Submission

Prospective authors may submit their manuscripts for presentation consideration through the ICETCI 2026 submission system at https://edas.info/N34670, following the conference guidelines. All submissions will undergo a peer-review process. To submit a paper to this special session, please select our special session title on the submission page.

Important Dates

Last Date for Paper Submission: Mar 20, 2026
Apr 05, 2026
Apr 20, 2026
Final Notification of Review Outcomes: Jun 15, 2026
Submission of Final Paper: Jun 30, 2026

Organizers

Dr. Pankaj Kumar Jain
is an Assistant Professor at Sai University, Chennai, India, with expertise in biomedical engineering, medical imaging, and artificial intelligence. He obtained his PhD from the Indian Institute of Technology (BHU), Varanasi, where his research focused on deep learning-based stroke risk assessment using carotid ultrasound imaging. His research lies at the intersection of radiomics, machine learning, and precision medicine, with a strong emphasis on developing predictive models from multimodal imaging data such as MRI, PET, and ultrasound. He has contributed to advancing radiomics-driven prediction of clinical outcomes, including treatment response in oncology and cardiovascular risk stratification.
Dr. Jain has authored numerous peer-reviewed publications in leading journals in medical imaging and biomedical engineering, with significant citation impact. His work includes multicenter studies and the development of deep learning frameworks for plaque characterization, cancer prediction, and imaging-based risk assessment. He has held research positions at Washington University in St. Louis and Jio Institute, and actively contributes to the scientific community as a reviewer for multiple international journals and as a technical program committee member for leading conferences such as ISBI and IJCNN. His current research focuses on radiomics, multimodal data integration, and clinically translatable AI systems for disease risk prediction and decision support in precision medicine.
E-mail: pankaj.j@saiuniversity.edu.in