Research in eXplainable Artificial Intelligence (XAI) has emphasized the need to build models on domain knowledge so that they are explainable from their users' perspective. A significant portion of current work focuses on designing AI models that integrate domain experts' semantics and ways of reasoning. Drawing inspiration from Concept-Based Models and Neuro-Symbolic AI, we propose a hybrid architecture for building AI pipelines that use Machine Learning to extract domain concepts and symbolic reasoning to predict an explainable classification. The core of this proposal is the OntoClassifier, a module that uses domain ontologies to automatically generate ontologically explainable classifiers. We describe the proposed approach and architecture, detailing the implementation and capabilities of the OntoClassifier. The solution is applied to Computer Vision and illustrated using the Pizzaïolo Dataset.
Keywords: XAI, hybrid model, knowledge-based system, ontology, computer vision
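The hybrid pipeline described in the abstract (an ML stage extracting domain concepts, followed by symbolic reasoning over ontology class definitions) can be sketched as follows. This is a hypothetical illustration only, loosely inspired by the pizza domain of the Pizzaïolo Dataset: the concept names, class definitions, and stubbed extractor are assumptions, not the actual OntoClassifier, which the paper generates automatically from OWL domain ontologies.

```python
# Hypothetical sketch of the hybrid pipeline: a (stubbed) ML stage extracts
# domain concepts from an input, then a symbolic stage classifies the input
# using ontology-like class definitions over those concepts.

def extract_concepts(image):
    """ML stage (stubbed): in the real pipeline, a neural network would
    detect domain concepts in the image; here we return fixed predictions."""
    return {
        "hasTopping_Mozzarella": True,
        "hasTopping_Tomato": True,
        "hasTopping_Ham": False,
    }

# Symbolic stage: each target class is a logical condition over concepts,
# mirroring how an OWL class expression restricts object properties.
CLASS_DEFINITIONS = {
    "MargheritaPizza": lambda c: (c["hasTopping_Mozzarella"]
                                  and c["hasTopping_Tomato"]
                                  and not c["hasTopping_Ham"]),
    "HamPizza": lambda c: c["hasTopping_Ham"],
}

def classify(image):
    """Return the matching classes plus the detected concepts, which serve
    as the ontological explanation of the prediction."""
    concepts = extract_concepts(image)
    labels = [name for name, definition in CLASS_DEFINITIONS.items()
              if definition(concepts)]
    return labels, concepts

labels, concepts = classify(image=None)
```

The explanation emerges for free from this design: a prediction is justified by exactly the detected concepts that satisfy (or violate) each class definition, in the vocabulary of the domain ontology.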
Grégory Bourguin; Arnaud Lewandowski; Mourad Bouneffa
CC-BY 4.0
@article{ROIA_2025__6_1-2_35_0,
author = {Gr\'egory Bourguin and Arnaud Lewandowski and Mourad Bouneffa},
title = {Hybridation de mod\`eles {d{\textquoteright}IA} avec des classifieurs ontologiquement explicables},
journal = {Revue Ouverte d'Intelligence Artificielle},
pages = {35--57},
year = {2025},
publisher = {Association pour la diffusion de la recherche francophone en intelligence artificielle},
volume = {6},
number = {1-2},
doi = {10.5802/roia.92},
language = {fr},
url = {https://roia.centre-mersenne.org/articles/10.5802/roia.92/}
}
Grégory Bourguin; Arnaud Lewandowski; Mourad Bouneffa. Hybridation de modèles d’IA avec des classifieurs ontologiquement explicables. Revue Ouverte d'Intelligence Artificielle, Post-actes de la conférence Ingénierie des Connaissances (IC 2021-2022-2023), Volume 6 (2025) no. 1-2, pp. 35-57. doi: 10.5802/roia.92