Conference:
32nd Minisymposium of the Department of Artificial Intelligence and Systems Engineering of the Budapest University of Technology and Economics, 3–4 February 2025, Budapest, Hungary
Authors:
Deé-Lukács A, Földvári A, Pataricza A.
Abstract:
The growing reliance on embedded AI components in critical systems demands robust mechanisms for explainability and reliability. These systems often integrate highly complex, opaque models whose decision-making processes are difficult to interpret, posing significant challenges to debugging and trustworthiness. This paper introduces an approach for examining regions identified through model comparison, specifically areas where interpretable surrogate models and opaque models diverge or produce inconsistent outputs. Analyzing these regions yields actionable insights for identifying edge cases and mitigating risks associated with model inaccuracies.
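As a concrete illustration of the model-comparison step, the minimal sketch below fits an interpretable decision-tree surrogate to the predictions of an opaque model and flags the inputs on which the two disagree as candidate divergence regions. The scikit-learn model types, the synthetic dataset, and all parameter choices are illustrative assumptions, not the implementation used in the paper.

```python
# Hypothetical sketch of surrogate-vs-opaque model comparison.
# Model choices and data are assumptions for illustration only.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=2000, n_features=8, random_state=0)

# Opaque model standing in for an embedded AI component.
opaque = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

# Interpretable surrogate trained on the opaque model's outputs, not on
# the ground-truth labels, so it approximates the opaque model's behavior.
surrogate = DecisionTreeClassifier(max_depth=4, random_state=0)
surrogate.fit(X, opaque.predict(X))

# Divergence regions: samples where surrogate and opaque model disagree.
disagree = surrogate.predict(X) != opaque.predict(X)
print(f"Disagreement rate (fidelity gap): {disagree.mean():.1%}")
print(f"{disagree.sum()} samples flagged for closer inspection")
```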
This paper leverages qualitative abstraction techniques to translate complex model behavior into comprehensible representations, enabling a systematic evaluation of discrepancies. By focusing on the intersection of model behavior and system-level impact, the proposed methodology offers a scalable approach to enhancing both the dependability and interpretability of AI-enabled systems. The findings advance the state of explainable AI and contribute to the development of safer, more transparent applications in critical domains.
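To illustrate the qualitative abstraction idea, the hedged sketch below maps continuous model outputs onto a small set of ordinal qualitative levels and compares two models at that abstracted level. The thresholds, level encoding, and sample values are hypothetical assumptions, not taken from the paper.

```python
# Hedged illustration of qualitative abstraction: continuous outputs are
# binned into a few qualitative levels so discrepancies can be judged by
# behavior class rather than raw numbers. Thresholds are assumed.
import numpy as np

def qualitative(scores: np.ndarray, thresholds=(0.33, 0.66)) -> np.ndarray:
    """Abstract continuous scores in [0, 1] into ordinal levels 0/1/2."""
    return np.digitize(scores, thresholds)

# Hypothetical predicted probabilities from an opaque model and a surrogate.
p_opaque = np.array([0.05, 0.40, 0.62, 0.91])
p_surrogate = np.array([0.10, 0.70, 0.60, 0.95])

q_opaque, q_surrogate = qualitative(p_opaque), qualitative(p_surrogate)

# A qualitative mismatch signals a behaviorally relevant divergence even
# when the raw numerical difference between the two outputs is small.
for po, ps, qo, qs in zip(p_opaque, p_surrogate, q_opaque, q_surrogate):
    mark = "DIVERGES" if qo != qs else "agrees"
    print(f"opaque={po:.2f} (level {qo})  surrogate={ps:.2f} (level {qs})  -> {mark}")
```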