Special Session on
Explainable AI for Computational Intelligence

Aim and Scope

The rapid adoption of AI in critical applications such as healthcare, finance, cybersecurity, and autonomous systems has led to an increasing need for transparency and interpretability in AI models. Explainable AI (XAI) aims to enhance the trustworthiness of machine learning models by making their decision-making processes more transparent and interpretable. This special session seeks to explore innovative approaches, techniques, and real-world applications of XAI, focusing on how computational intelligence can benefit from improved model explainability.

Topics of interest include, but are not limited to:

  • Post-hoc and intrinsic explainability methods in AI
  • SHAP, LIME, and other feature attribution techniques
  • Explainable AI in deep learning and reinforcement learning
  • Applications of XAI in healthcare, finance, and cybersecurity
  • Trust, fairness, and bias mitigation in AI models
  • Adversarial robustness and interpretability
  • Causal reasoning and model transparency
  • Regulatory and ethical considerations in XAI
  • Human-AI interaction and user-centric explanations

Paper Submission

Prospective authors may submit their manuscripts for presentation consideration through the ICETCI 2025 submission system at https://edas.info/N33232, following the conference guidelines. All submissions will go through a peer-review process. To submit your paper to this special session, please select our special session title on the submission page.

Important Dates

Last Date for Paper Submission: Mar 21, 2025
Notification of First Review Outcomes: Apr 15, 2025
Final Notification of Review Outcomes: Jun 15, 2025
Submission of Final Paper: Jun 30, 2025

Organizer

Dr. S. Deepika is an Assistant Professor in the Department of Computer Science and Engineering at Anurag University, Hyderabad. She has over 12 years of experience in academia and research, specializing in AI, Machine Learning, and Explainable AI. She has published numerous research papers in high-impact journals and has actively contributed to the field through international conferences and workshops. Her expertise includes model interpretability, trustworthy AI, and data-driven decision-making.

Email ID: deepikajaiswal9963@gmail.com