The rapid adoption of AI in critical applications such as healthcare, finance, cybersecurity, and autonomous systems has led to an increasing need for transparency and interpretability in AI models. Explainable AI (XAI) aims to enhance the trustworthiness of machine learning models by making their decision-making processes more transparent and interpretable. This special session seeks to explore innovative approaches, techniques, and real-world applications of XAI, focusing on how computational intelligence can benefit from improved model explainability.
Topics of interest include, but are not limited to:
Prospective authors may submit their manuscripts electronically through the ICETCI 2025 submission system at https://edas.info/N33232, following the conference guidelines. All submissions will undergo a peer-review process. To submit a paper to this special session, select our special session title on the submission page.
Paper submission deadline: Apr 15, 2025
Final notification of review outcomes: Jun 15, 2025
Final paper submission: Jun 30, 2025
Dr. S. Deepika is an Assistant Professor in the Department of Computer Science and Engineering (CSE) at Anurag University, Hyderabad. She has over 12 years of experience in academia and research, specializing in AI, machine learning, and explainable AI. She has published numerous research papers in high-impact journals and has contributed actively to the field through international conferences and workshops. Her expertise includes model interpretability, trustworthy AI, and data-driven decision-making.
Email: deepikajaiswal9963@gmail.com