Hybridation de modèles d’IA avec des classifieurs ontologiquement explicables
Revue Ouverte d'Intelligence Artificielle, Post-actes de la conférence Ingénierie des Connaissances (IC 2021-2022-2023), Volume 6 (2025) no. 1-2, pp. 35-57

La recherche en Intelligence Artificielle Explicable (XAI) a souligné la nécessité de créer des modèles basés sur les connaissances du domaine pour qu’ils soient explicables du point de vue de leurs utilisateurs. Une part importante des travaux actuels se concentre sur la conception de modèles d’IA qui intègrent la sémantique et le mode de raisonnement des experts du domaine. S’inspirant des Modèles Basés sur les Concepts et des approches Neuro-Symboliques, nous proposons une architecture hybride pour construire des pipelines d’IA qui utilisent l’Apprentissage Automatique pour extraire les concepts du domaine et un raisonnement symbolique pour prédire une classification explicable. Le cœur de cette proposition est l’OntoClassifier, un module qui utilise des ontologies de domaine pour générer automatiquement des classifieurs ontologiquement explicables. Nous décrivons l’approche et l’architecture proposées en détaillant l’implémentation et les capacités de l’OntoClassifier. La solution est appliquée en Vision par Ordinateur et est illustrée à l’aide du Pizzaïolo Dataset.

Research in eXplainable Artificial Intelligence (XAI) has emphasized the need to build models on domain knowledge so that they are explainable from their users’ perspective. A significant portion of current work focuses on designing AI models that integrate the semantics and reasoning of domain experts. Drawing inspiration from Concept-Based Models and Neuro-Symbolic AI, we propose a hybrid architecture for building AI pipelines that use Machine Learning to extract domain concepts and symbolic reasoning to predict an explainable classification. The core of this proposal is the OntoClassifier, a module that uses domain ontologies to automatically generate ontologically explainable classifiers. We describe the proposed approach and architecture, detailing the implementation and capabilities of the OntoClassifier. The solution is applied to Computer Vision and illustrated with the Pizzaïolo Dataset.
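The abstract's two-stage pipeline can be pictured with a minimal, purely illustrative sketch (not the authors' implementation): a placeholder for the neural concept extractor, followed by a hand-written stand-in for the OntoClassifier. The extract_concepts stub, the DefinedClass structure, and the two pizza-style class definitions below are assumptions made for illustration only; in the actual approach the classifier is generated automatically from an OWL domain ontology (e.g., with a library such as Owlready2 [20]).

from dataclasses import dataclass

# Stage 1 (Machine Learning): stand-in for the neural concept extractor.
# In the real pipeline a vision model detects domain concepts in the image;
# here the detections are hard-coded for illustration.
def extract_concepts(image) -> set:
    return {"TomatoTopping", "MozzarellaTopping", "BasilTopping"}

# Stage 2 (symbolic): toy stand-in for the OntoClassifier.
# Each target class mimics an OWL defined class: concepts it requires and
# concepts it excludes. These two definitions are invented for the example.
@dataclass
class DefinedClass:
    name: str
    required: set
    forbidden: set

ONTO_CLASSES = [
    DefinedClass("MargheritaPizza",
                 required={"TomatoTopping", "MozzarellaTopping"},
                 forbidden={"MeatTopping", "FishTopping"}),
    DefinedClass("VegetarianPizza",
                 required=set(),
                 forbidden={"MeatTopping", "FishTopping"}),
]

def classify(concepts):
    """Return every matching class with an explanation stated in domain concepts."""
    results = []
    for cls in ONTO_CLASSES:
        missing = cls.required - concepts
        present_forbidden = cls.forbidden & concepts
        if not missing and not present_forbidden:
            explanation = (f"required concepts {sorted(cls.required)} detected; "
                           f"no forbidden concept among {sorted(cls.forbidden)}")
            results.append((cls.name, explanation))
    return results

if __name__ == "__main__":
    detected = extract_concepts(image=None)   # placeholder input
    for label, why in classify(detected):
        print(f"{label}: {why}")

Because the decision is expressed directly in terms of ontology concepts, the resulting explanation is given in the users' own vocabulary rather than in terms of pixels or feature attributions.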

DOI: 10.5802/roia.92
Mots-clés : IA explicable, modèle hybride, système à base de connaissance, ontologie, vision par ordinateur
Keywords: XAI, hybrid model, knowledge based system, ontology, computer vision

Grégory Bourguin 1; Arnaud Lewandowski 1; Mourad Bouneffa 1

1 LISIC, 50 rue Ferdinand Buisson, 62228 Calais (France)
License: CC-BY 4.0
Copyright: The authors retain their rights
@article{ROIA_2025__6_1-2_35_0,
     author = {Gr\'egory Bourguin and Arnaud Lewandowski and Mourad Bouneffa},
     title = {Hybridation de mod\`eles {d{\textquoteright}IA}  avec des classifieurs  ontologiquement explicables},
     journal = {Revue Ouverte d'Intelligence Artificielle},
     pages = {35--57},
     year = {2025},
     publisher = {Association pour la diffusion de la recherche francophone en intelligence artificielle},
     volume = {6},
     number = {1-2},
     doi = {10.5802/roia.92},
     language = {fr},
     url = {https://roia.centre-mersenne.org/articles/10.5802/roia.92/}
}
Grégory Bourguin; Arnaud Lewandowski; Mourad Bouneffa. Hybridation de modèles d’IA avec des classifieurs ontologiquement explicables. Revue Ouverte d'Intelligence Artificielle, Post-actes de la conférence Ingénierie des Connaissances (IC 2021-2022-2023), Volume 6 (2025) no. 1-2, pp. 35-57. doi: 10.5802/roia.92

[1] Anton Agafonov; Andrew Ponomarev An Experiment on Localization of Ontology Concepts in Deep Convolutional Neural Networks, The 11th International Symposium on Information and Communication Technology, SoICT 2022, Hanoi, Vietnam, December 1-3, 2022, ACM (2022), pp. 82-87 | DOI

[2] Arjun R. Akula; Song-Chun Zhu Attention Cannot Be an Explanation (2022) (Accessed 2024-10-18 https://arxiv.org/abs/2201.11194)

[3] Alejandro Barredo Arrieta; Natalia Díaz Rodríguez; Javier Del Ser; Adrien Bennetot; Siham Tabik; Alberto Barbado; Salvador García; Sergio Gil-Lopez; Daniel Molina; Richard Benjamins; Raja Chatila; Francisco Herrera Explainable Artificial Intelligence (XAI) : Concepts, Taxonomies, Opportunities and Challenges toward Responsible AI, Inf. Fusion, Volume 58 (2020), pp. 82-115 | DOI

[4] Pietro Barbiero; Gabriele Ciravegna; Francesco Giannini; Mateo Espinosa Zarlenga; Lucie Charlotte Magister; Alberto Tonda; Pietro Lio; Frédéric Precioso; Mateja Jamnik; Giuseppe Marra Interpretable Neural-Symbolic Concept Reasoning, International Conference on Machine Learning, ICML 2023, 23-29 July 2023, Honolulu, Hawaii, USA (Andreas Krause; Emma Brunskill; Kyunghyun Cho; Barbara Engelhardt; Sivan Sabato; Jonathan Scarlett, eds.) (Proceedings of Machine Learning Research), Volume 202, PMLR (2023), 103649, pp. 1801-1825 (Accessed 2023-10-16) | DOI | Zbl | MR

[5] Matthieu Bellucci; Nicolas Delestre; Nicolas Malandain; Cecilia Zanni-Merk Combining an Explainable Model Based on Ontologies with an Explanation Interface to Classify Images, Knowledge-Based and Intelligent Information & Engineering Systems : Proceedings of the 26th International Conference KES-2022, Verona, Italy and Virtual Event, 7-9 September 2022 (Matteo Cristani; Carlos Toro; Cecilia Zanni-Merk; Robert J. Howlett; Lakhmi C. Jain, eds.) (Procedia Computer Science), Volume 207, Elsevier (2022), pp. 2395-2403 (Accessed 2024-01-08) | DOI

[6] Grégory Bourguin; Arnaud Lewandowski Pizzaïolo Dataset : Des Images Synthétiques Ontologiquement Explicables, Explain’AI Workshop, at 24e Conférence Francophone Sur l’Extraction et La Gestion Des Connaissances, EGC 2024. (2024) (Accessed 2024-03-04) | DOI

[7] Grégory Bourguin; Arnaud Lewandowski; Mourad Bouneffa; Adeel Ahmad Vers Des Classifieurs Ontologiquement Explicables, IC 2021 : 32e Journées Francophones d’Ingénierie Des Connaissances (Proceedings of the 32nd French Knowledge Engineering Conference), Bordeaux, France, June 30 – July 2, 2021 (Maxime Lefrançois, ed.) (2021), pp. 89-97 (Accessed 2023-10-16) | DOI

[8] Aditya Chattopadhay; Anirban Sarkar; Prantik Howlader; Vineeth N Balasubramanian Grad-CAM++ : Generalized Gradient-Based Visual Explanations for Deep Convolutional Networks, 2018 IEEE Winter Conference on Applications of Computer Vision (WACV) (2018), pp. 839-847 (Accessed 2024-10-24) | DOI

[9] Michael Chromik; Andreas Butz Human-XAI Interaction : A Review and Design Principles for Explanation User Interfaces, Human-Computer Interaction – INTERACT 2021 – 18th IFIP TC 13 International Conference, Bari, Italy, August 30 – September 3, 2021, Proceedings, Part II (Carmelo Ardito; Rosa Lanzilotti; Alessio Malizia; Helen Petrie; Antonio Piccinno; Giuseppe Desolda; Kori Inkpen, eds.) (Lecture Notes in Computer Science), Volume 12933, Springer (2021), pp. 619-640 (Accessed 2023-10-16) | DOI

[10] R. Confalonieri; T. Weyde; T. R. Besold; F. Moscoso del Prado Martín Trepan Reloaded : A Knowledge-driven Approach to Explaining Artificial Neural Networks, 24th European Conference on Artificial Intelligence (ECAI 2020), Volume 325, IOS Press (2020), pp. 2457-2464 (Accessed 2023-10-16) | DOI

[11] Davide Conigliaro; Roberta Ferrario; Céline Hudelot; Daniele Porello Integrating Computer Vision Algorithms and Ontologies for Spectator Crowd Behavior Analysis, Group and Crowd Behavior for Computer Vision, 1st Edition (Vittorio Murino; Marco Cristani; Shishir Shah; Silvio Savarese, eds.), Academic Press, 2017, pp. 297-319 | DOI

[12] Manuel de Sousa Ribeiro; João Leite Aligning Artificial Neural Networks and Ontologies towards Explainable AI, Thirty-Fifth AAAI Conference on Artificial Intelligence, AAAI 2021, Thirty-Third Conference on Innovative Applications of Artificial Intelligence, IAAI 2021, The Eleventh Symposium on Educational Advances in Artificial Intelligence, EAAI 2021, Virtual Event, February 2-9, 2021, AAAI Press (2021), pp. 4932-4940 | DOI

[13] Zheyuan Ding; Li Yao; Bin Liu; Junfeng Wu Review of the Application of Ontology in the Field of Image Object Recognition, Proceedings of the 11th International Conference on Computer Modeling and Simulation, ICCMS 2019, North Rockhampton, QLD, Australia, January 16-19, 2019, ACM (2019), pp. 142-146 | DOI

[14] Derek Doran; Sarah Schulz; Tarek R. Besold What Does Explainable AI Really Mean ? A New Conceptualization of Perspectives, Proceedings of the First International Workshop on Comprehensibility and Explanation in AI and ML 2017 Co-Located with 16th International Conference of the Italian Association for Artificial Intelligence (AI*IA 2017), Bari, Italy, November 16th and 17th, 2017 (Tarek R. Besold; Oliver Kutz, eds.) (CEUR Workshop Proceedings), Volume 2071, CEUR-WS.org (2017) (Accessed 2023-10-16) | DOI

[15] Filip Karlo Dosilovic; Mario Brcic; Nikica Hlupic Explainable Artificial Intelligence : A Survey, 41st International Convention on Information and Communication Technology, Electronics and Microelectronics, MIPRO 2018, Opatija, Croatia, May 21-25, 2018 (Karolj Skala; Marko Koricic; Tihana Galinac Grbac; Marina Cicin-Sain; Vlado Sruk; Slobodan Ribaric; Stjepan Gros; Boris Vrdoljak; Mladen Mauher; Edvard Tijan; Predrag Pale; Matej Janjic, eds.), IEEE (2018), pp. 210-215 | DOI

[16] Thomas Fel; Agustin Picard; Louis Béthune; Thibaut Boissin; David Vigouroux; Julien Colin; Rémi Cadènc; Thomas Serre CRAFT : Concept Recursive Activation FacTorization for Explainability, IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR 2023, Vancouver, BC, Canada, June 17-24, 2023, IEEE (2023), pp. 2711-2721 | DOI

[17] Abel Gonzalez-Garcia; Davide Modolo; Vittorio Ferrari Do Semantic Parts Emerge in Convolutional Neural Networks ?, Int. J. Comput. Vis., Volume 126 (2018) no. 5, pp. 476-494 | DOI

[18] Bryce Goodman; Seth R. Flaxman European Union Regulations on Algorithmic Decision-Making and a “Right to Explanation”, AI Mag., Volume 38 (2017) no. 3, pp. 50-57 (Accessed 2024-10-18) | DOI

[19] Matthew Horridge Protégé OWL Tutorial | OWL Research at the University of Manchester, http://owl.cs.manchester.ac.uk/publications/talks-and-tutorials/protg-owl-tutorial/, 2011 (Accessed 2023-11-16) | DOI

[20] Jean-Baptiste Lamy Ontologies with Python : Programming OWL 2.0 Ontologies with Python and Owlready2, Apress, 2020 | DOI

[21] Max Maria Losch; Mario Fritz; Bernt Schiele Interpretability Beyond Classification Output : Semantic Bottleneck Networks (2019) (Accessed 2023-10-16 https://arxiv.org/abs/1907.10882) | DOI

[22] Robin Manhaeve; Sebastijan Dumancic; Angelika Kimmig; Thomas Demeester; Luc De Raedt DeepProbLog : Neural Probabilistic Logic Programming, Advances in Neural Information Processing Systems 31 : Annual Conference on Neural Information Processing Systems 2018, NeurIPS 2018, December 3-8, 2018, Montréal, Canada (Samy Bengio; Hanna M. Wallach; Hugo Larochelle; Kristen Grauman; Nicolò Cesa-Bianchi; Roman Garnett, eds.) (2018), pp. 3753-3763 (Accessed 2024-10-01)

[23] Emanuele Marconato; Stefano Teso; Antonio Vergari; Andrea Passerini Not All Neuro-Symbolic Concepts Are Created Equal : Analysis and Mitigation of Reasoning Shortcuts, Advances in Neural Information Processing Systems 36 : Annual Conference on Neural Information Processing Systems 2023, NeurIPS 2023, New Orleans, LA, USA, December 10 - 16, 2023 (Alice Oh; Tristan Naumann; Amir Globerson; Kate Saenko; Moritz Hardt; Sergey Levine, eds.) (2023) (Accessed 2024-10-18)

[24] Daniel Omeiza; Skyler Speakman; Celia Cintas; Komminist Weldemariam Smooth Grad-CAM++ : An Enhanced Inference Level Visualization Technique for Deep Convolutional Neural Network Models (2019) (Accessed 2024-10-24 https://arxiv.org/abs/1908.01224)

[25] Marco Túlio Ribeiro; Sameer Singh; Carlos Guestrin “Why Should I Trust You ?” : Explaining the Predictions of Any Classifier, Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, San Francisco, CA, USA, August 13-17, 2016 (Balaji Krishnapuram; Mohak Shah; Alexander J. Smola; Charu C. Aggarwal; Dou Shen; Rajeev Rastogi, eds.), ACM (2016), pp. 1135-1144 | DOI

[26] Cynthia Rudin Stop Explaining Black Box Machine Learning Models for High Stakes Decisions and Use Interpretable Models Instead, Nat. Mach. Intell., Volume 1 (2019) no. 5, pp. 206-215 | DOI

[27] Ramprasaath R. Selvaraju; Michael Cogswell; Abhishek Das; Ramakrishna Vedantam; Devi Parikh; Dhruv Batra Grad-CAM : Visual Explanations from Deep Networks via Gradient-Based Localization, Int. J. Comput. Vis., Volume 128 (2020) no. 2, pp. 336-359 | DOI

[28] Karen Simonyan; Andrew Zisserman Very Deep Convolutional Networks for Large-Scale Image Recognition, 3rd International Conference on Learning Representations, ICLR 2015, San Diego, CA, USA, May 7-9, 2015, Conference Track Proceedings (Yoshua Bengio; Yann LeCun, eds.) (2015) (Accessed 2023-10-16) | DOI

[29] Jan Stodt; Manav Madan; Christoph Reich; Luka Filipovic; Tomi Ilijas A Study on the Reliability of Visual XAI Methods for X-Ray Images, Healthcare Transformation with Informatics and Artificial Intelligence, ICIMTH 2023, 21st International Conference on Informatics, Management, and Technology in Healthcare, Athens, Greece, from 1-3 July 2023 (John Mantas; Parisis Gallos; Emmanouil Zoulias; Arie Hasman; Mowafa S. Househ; Martha Charalampidou; Andriana Magdalinou, eds.) (Studies in Health Technology and Informatics), Volume 305, IOS Press (2023), pp. 32-35 | DOI

[30] William R. Swartout; Cécile Paris; Johanna D. Moore Explanations in Knowledge Systems : Design for Explainable Expert Systems, IEEE Expert, Volume 6 (1991) no. 3, pp. 58-64 | DOI

[31] Germán Vidal Explaining Explanations in Probabilistic Logic Programming (2024) (Accessed 2024-10-21 https://arxiv.org/abs/2401.17045) | DOI

[32] Mateo Espinosa Zarlenga; Pietro Barbiero; Gabriele Ciravegna; Giuseppe Marra; Francesco Giannini; Michelangelo Diligenti; Zohreh Shams; Frédéric Precioso; Stefano Melacci; Adrian Weller; Pietro Lió; Mateja Jamnik Concept Embedding Models : Beyond the Accuracy-Explainability Trade-Off, NeurIPS (2022) (Accessed 2023-10-16) | DOI

[33] Quanshi Zhang; Ying Nian Wu; Song-Chun Zhu Interpretable Convolutional Neural Networks, 2018 IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2018, Salt Lake City, UT, USA, June 18-22, 2018, Computer Vision Foundation / IEEE Computer Society (2018), pp. 8827-8836 (Accessed 2024-10-18) | DOI
