Computational Intelligence and Learning Systems (CILS) Research Group

CILS is dedicated to advancing cutting-edge research in artificial intelligence, with a focus on developing adaptive systems that enhance decision-making across complex environments. At CILS, we specialize in integrating machine learning, deep learning, and neuro-symbolic approaches to tackle challenges including early disease detection and privacy-preserving AI. Our team is committed to creating equitable, robust, and interpretable AI systems, pushing the boundaries of computational intelligence to meet real-world needs and benefit society.


MSc Computer Science Defence

Student: Excellence Sowunmi

Title: Bridging External Symbols with Internal Subsymbolic Learning and Reasoning Using Cross-Modality Autoencoders.

Date and time: Friday, April 11th, 9:30am

Location: On Teams

Join the meeting now

Meeting ID: 216 795 347 159

Passcode: p3A6bg9Q

 

Abstract:

Neuro-symbolic AI seeks to integrate the strengths of subsymbolic learning and symbolic reasoning. A key challenge in this integration lies in developing a unified internal representation that supports both perceptual understanding and abstract inference across modalities. While neural networks are effective at processing sensory data such as images and text, they often lack symbolic interpretability. Conversely, symbolic systems are limited in their ability to handle complex, high-dimensional inputs. This thesis addresses this gap by introducing a cross-modal autoencoder architecture that learns subsymbolic bidirectional transformations between images and text. These learned representations are then used to interpret external symbols (text) or generate them as explanations for concepts (images).

 

Grounded in the neuro-symbolic framework proposed by Silver and Mitchell, the model distinguishes between conceptual representations (conrep), which capture internal subsymbolic features from perceptual inputs, and symbolic representations (symrep), which enable external symbolic communication. The architecture encodes an image into a conrep and transforms it into a symrep for symbolic decoding; conversely, it can map textual input into a symrep before decoding it as a conrep for image reconstruction. This bidirectional mapping supports both learning and reasoning across modalities.
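As a rough illustration of the bidirectional conrep/symrep mapping described above, the sketch below wires an image pathway and a text pathway around a learned bridge between the two representations. It is a minimal PyTorch sketch under assumed inputs (flattened images and dense text embeddings); the module names, layer sizes, and dimensions are illustrative assumptions and do not come from the thesis.

import torch
import torch.nn as nn

class CrossModalAutoencoder(nn.Module):
    """Hypothetical sketch: image <-> conrep <-> symrep <-> text."""
    def __init__(self, img_dim=784, txt_dim=300, conrep_dim=64, symrep_dim=32):
        super().__init__()
        # Perceptual (subsymbolic) pathway: image <-> conceptual representation
        self.img_encoder = nn.Sequential(nn.Linear(img_dim, 256), nn.ReLU(),
                                         nn.Linear(256, conrep_dim))
        self.img_decoder = nn.Sequential(nn.Linear(conrep_dim, 256), nn.ReLU(),
                                         nn.Linear(256, img_dim), nn.Sigmoid())
        # Symbolic pathway: text embedding <-> symbolic representation
        self.txt_encoder = nn.Sequential(nn.Linear(txt_dim, 128), nn.ReLU(),
                                         nn.Linear(128, symrep_dim))
        self.txt_decoder = nn.Sequential(nn.Linear(symrep_dim, 128), nn.ReLU(),
                                         nn.Linear(128, txt_dim))
        # Bidirectional bridge between conceptual and symbolic representations
        self.con_to_sym = nn.Linear(conrep_dim, symrep_dim)
        self.sym_to_con = nn.Linear(symrep_dim, conrep_dim)

    def image_to_text(self, img):
        conrep = self.img_encoder(img)    # image -> internal conceptual code
        symrep = self.con_to_sym(conrep)  # conceptual -> symbolic code
        return self.txt_decoder(symrep)   # symbolic code -> external symbols

    def text_to_image(self, txt):
        symrep = self.txt_encoder(txt)    # external symbols -> symbolic code
        conrep = self.sym_to_con(symrep)  # symbolic -> conceptual code
        return self.img_decoder(conrep)   # conceptual code -> reconstructed image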

 

To support robust multimodal reasoning, the system employs a shared reasoning layer, selective input masking, and a curriculum-based training strategy. The model is first trained on unimodal tasks before progressing to cross-modal generation, stabilizing learning and promoting better symrep–conrep alignment. Empirical evaluations, including standard testing, cross-validation, and qualitative inspection, demonstrate that the model produces stable and accurate reconstructions even when one modality is masked.
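The following is a minimal sketch of how the curriculum and selective masking might be organized, reusing the hypothetical CrossModalAutoencoder from the sketch above; the two-phase schedule, the 0.5 masking probability, and the mean-squared-error losses are assumptions for illustration, not the procedure used in the thesis.

import torch
import torch.nn.functional as F

def train_step(model, img, txt, optimizer, phase):
    """One training step of a two-phase curriculum (hypothetical)."""
    optimizer.zero_grad()
    if phase == "unimodal":
        # Early curriculum phase: each modality reconstructs itself.
        img_recon = model.img_decoder(model.img_encoder(img))
        txt_recon = model.txt_decoder(model.txt_encoder(txt))
        loss = F.mse_loss(img_recon, img) + F.mse_loss(txt_recon, txt)
    else:
        # Cross-modal phase: mask one modality and generate it from the other.
        if torch.rand(1).item() < 0.5:
            loss = F.mse_loss(model.image_to_text(img), txt)  # text input masked
        else:
            loss = F.mse_loss(model.text_to_image(txt), img)  # image input masked
    loss.backward()
    optimizer.step()
    return loss.item()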

 

This work contributes a unified neuro-symbolic learning framework that integrates symbolic and conceptual representations through structured training and shared latent reasoning. The proposed approach enables interpretable and resilient cross-modal inference, with applications in computer vision, natural language processing, and multimodal reasoning tasks.

 

Committee:

Drs. Danny Silver & Andrew McIntyre, Supervisors

Dr. Frank Rudzicz, Dalhousie University, External Examiner

Dr. Sazia Mahfuz, Internal Examiner

Dr. Darcy Benoit, Director of the School of Computer Science

Dr. Holger Teismann, Chair of the defence

 
