Special Session on
Fairness, Interpretability, and Security in AI - FIS-AI 2026

5 - 7 March, 2026 - Marbella, Spain

Within the 18th International Conference on Agents and Artificial Intelligence - ICAART 2026


CO-CHAIRS

Meghana Kshirsagar
University of Limerick
Ireland

 
Brief Bio
Dr. Meghana Kshirsagar is an Associate Professor in Artificial Intelligence and Machine Learning at the University of Limerick, where she co-leads the Biocomputing and Developmental Systems Research Group. Her work sits at the intersection of intelligent systems and societal impact, with a focus on explainable AI, digital twins, and large language models for healthcare and data governance. She is particularly passionate about building equitable and transparent AI ecosystems, and her projects have contributed to advancing the UN Sustainable Development Goals. Dr. Kshirsagar has secured significant competitive funding and led multi-institutional collaborations with partners including Fidelity Investments and the Health Service Executive in Ireland.
Irene Anthi
Cardiff University
United Kingdom

 
Brief Bio
Dr Eirini (Irene) Anthi is a Senior Lecturer in Data Science and Cybersecurity at Cardiff University, where she leads the Cybersecurity Laboratory. Her research focuses on using AI to develop intelligent and robust cyber defences, particularly for IoT and cyber-physical systems. She also works on securing AI itself, with an emphasis on adversarial machine learning and trustworthiness. Her contributions to AI safety have been recognised with an Elsevier Award for advancing the United Nations Sustainable Development Goals. Dr Anthi has published extensively and collaborated with industry leaders including Airbus, Thales, and PwC. She is also the co-founder of TrustAI, a startup focused on evaluating and enhancing the trustworthiness of AI models.
Gauri Vaidya
School of Medicine, University of Limerick
Ireland
https://www.linkedin.com/in/gauriivaidya/
 
Brief Bio
Dr. Gauri Vaidya is a Postdoctoral Researcher with the National Kidney Surveillance System at the School of Medicine, University of Limerick, Ireland, where she advances chronic disease surveillance through data analytics and machine learning. She recently completed her PhD in Artificial Intelligence at the University of Limerick, where her research focused on developing resource-efficient hyperparameter optimization strategies for trustworthy and sustainable AI. Her expertise lies in optimization, knowledge graphs, and blockchain technologies, and she has applied these approaches across domains such as healthcare, sustainability, and system optimization. Dr. Vaidya has authored peer-reviewed publications in high-impact journals and leading conferences, and her contributions to digital health and AI have been recognized in Business Post Ireland’s 30 Under 30: Top People in Tech (2025).

SCOPE

Artificial intelligence is transforming decision-making and operational efficiency across critical domains, from finance to healthcare. Yet, as AI systems increasingly shape outcomes in high-stakes environments, concerns about fairness, interpretability, and security are becoming ever more urgent.

Inherently interpretable approaches, such as decision trees for credit scoring, rule-based data mining algorithms for clinical decision support, sparse linear models for hospital readmission risk, and symbolic reasoning frameworks that encode human-understandable rules, allow stakeholders to trace a model's reasoning directly. Black-box models, such as deep neural networks, can instead be explained post hoc with tools like saliency maps or counterfactuals (e.g., showing that income stability drove a loan rejection). Both interpretability and explainability are valuable, but the distinction between them is critical for building trust.
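
To make this distinction concrete, the minimal sketch below contrasts the two: a shallow decision tree whose fitted rules are themselves the explanation, versus a post-hoc counterfactual search against the same model treated as a black box. It assumes scikit-learn and NumPy; the loan-approval features, data, and the counterfactual helper are hypothetical, for illustration only.

```python
# A minimal sketch, assuming scikit-learn and NumPy are available.
# The loan-approval data and feature names below are hypothetical.
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

# Hypothetical applicants: [income_stability, debt_ratio]; 1 = approve.
X = np.array([[0.90, 0.2], [0.30, 0.7], [0.80, 0.7], [0.20, 0.3],
              [0.70, 0.3], [0.40, 0.8], [0.85, 0.6], [0.10, 0.4]])
y = np.array([1, 0, 1, 0, 1, 0, 1, 0])

# Inherently interpretable: the fitted tree itself is the explanation.
tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)
print(export_text(tree, feature_names=["income_stability", "debt_ratio"]))

# Post-hoc explanation of any black-box classifier: find the smallest
# increase in income_stability that flips a rejection to an approval.
def counterfactual(model, x, feature=0, step=0.05, upper=1.0):
    x_cf = x.copy()
    while model.predict([x_cf])[0] == 0 and x_cf[feature] < upper:
        x_cf[feature] += step
    return x_cf

rejected = np.array([0.30, 0.7])
print("counterfactual applicant:", counterfactual(tree, rejected))
```

Here the tree's printed rules are auditable end to end, while the counterfactual explains only one decision of a model whose internals remain opaque.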

Fairness-aware algorithms can reduce racial bias in credit scoring or address inequities in healthcare outcomes by adjusting for underrepresented patient groups. Security likewise intersects with fairness and interpretability in important ways. In fraud detection, interpretable rule-based models not only catch anomalies but also make their reasoning clear to investigators, avoiding opaque false positives that disproportionately affect marginalized users. In credit scoring, fairness-aware linear models or symbolic reasoning frameworks can provide transparent, auditable decision rules.
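
As one concrete flavour of the audits this implies, the short sketch below computes a demographic parity gap between two groups from a model's decisions. It assumes only NumPy; the predictions, group labels, and the demographic_parity_gap helper are hypothetical.

```python
# A minimal sketch of a bias audit, assuming only NumPy.
# Predictions and protected-group labels are hypothetical.
import numpy as np

y_pred = np.array([1, 0, 1, 1, 0, 0, 1, 0, 1, 0])              # 1 = approve
group = np.array(["A", "A", "A", "B", "B", "B", "A", "B", "B", "A"])

def demographic_parity_gap(y_pred, group):
    """Absolute difference in approval rates between the two groups."""
    rates = {str(g): float(y_pred[group == g].mean())
             for g in np.unique(group)}
    return abs(rates["A"] - rates["B"]), rates

gap, rates = demographic_parity_gap(y_pred, group)
print(f"approval rates: {rates}, demographic parity gap: {gap:.2f}")
```

Fairness-aware training goes further than such a post-hoc audit by folding the constraint into the learning objective itself, e.g., penalising the parity gap during optimisation.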

By centering fairness, interpretability, and security, this session will showcase innovations that make AI more equitable, transparent, and resilient, addressing the pressing challenges of deploying AI in high-stakes environments. We invite submissions spanning algorithms, audits, and interdisciplinary approaches that advance these goals and strengthen the development of trustworthy AI.

TOPICS OF INTEREST

Topics of interest include, but are not limited to:
  • Secure Learning: Enhancing robustness under noisy, incomplete, or poisoned data conditions
  • Privacy-Preserving Techniques: Leveraging differential privacy and federated learning to protect sensitive information (a small illustrative sketch follows this list)
  • Fairness-Aware Algorithms: Designing models that proactively mitigate bias and promote equity
  • AI for Equity: Applying AI to reduce disparities in high-stakes environments
  • Bias Audits: Presenting real-world methodologies for detecting and correcting bias in deployed systems
  • AI for Security: Addressing fraud detection, credit scoring, and risk assessment in financial applications
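
As a small taste of the privacy-preserving techniques mentioned above, the sketch below applies the Laplace mechanism, the textbook building block of differential privacy, to a counting query. It assumes only NumPy; the records, epsilon value, and laplace_count helper are illustrative.

```python
# A minimal sketch of the Laplace mechanism, assuming only NumPy.
# The records and the epsilon value are illustrative.
import numpy as np

def laplace_count(data, epsilon=1.0, seed=0):
    """Release a count query with epsilon-differential privacy.

    A counting query has sensitivity 1 (adding or removing one record
    changes the count by at most 1), so the noise scale is 1/epsilon.
    """
    rng = np.random.default_rng(seed)
    return len(data) + rng.laplace(loc=0.0, scale=1.0 / epsilon)

records = list(range(1000))  # hypothetical sensitive records
print(f"noisy count: {laplace_count(records, epsilon=0.5):.1f}")
```

Smaller epsilon means more noise and stronger privacy; federated learning complements this by keeping raw records on-device altogether.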

IMPORTANT DATES

Paper Submission: December 17, 2025
Authors Notification: January 14, 2026
Camera Ready and Registration: January 22, 2026

SPECIAL SESSION PROGRAM COMMITTEE

Available soon.

PAPER SUBMISSION

Prospective authors are invited to submit papers on any of the topics listed above.
Instructions for preparing the manuscript (in Word and LaTeX formats) are available at: Paper Templates
Please also check the Guidelines.
Papers must be submitted electronically via the web-based submission system using the appropriate button on this page.

PUBLICATIONS

After thorough review by the special session program committee, all accepted papers will be published in a special section of the conference proceedings book - under an ISBN reference and on digital support - and submitted for indexation by SCOPUS, Google Scholar, DBLP, Semantic Scholar, EI and Web of Science / Conference Proceedings Citation Index.
SCITEPRESS is a member of CrossRef (http://www.crossref.org/) and every paper is given a DOI (Digital Object Identifier).
All papers presented at the conference venue will be available at the SCITEPRESS Digital Library.

SECRETARIAT CONTACTS

ICAART Special Sessions - FIS-AI 2026
e-mail: icaart.secretariat@insticc.org