Explainable transfer learning for activity recognition in smart buildings

Location: Room 3
Time: August 27, 11:15 am–11:30 am

Excessive waste of electrical energy in buildings has recently highlighted the critical need for energy optimization. Advanced methodologies such as Activity Recognition (AR) have emerged to refine energy distribution in smart buildings, particularly within HVAC systems. The goal is to achieve the optimal comfort level with minimal energy consumption.

Traditionally, supervised machine learning has been employed for AR in the smart building sector, where the model is trained on data collected in the same environment. This approach, however, frequently suffers from insufficient labeled data, owing to high labeling costs, extensive time requirements, and privacy concerns. Moreover, such a model will not generalize well to related domains because of shifts in the data distribution. We therefore employ Transfer Learning (TL), which leverages pre-existing knowledge from a well-labeled source domain to improve model performance in a target domain. Instead of starting the learning process from scratch, TL allows models to reuse patterns and features learned from large source-domain datasets to improve their predictions in new but similar domains.
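As a minimal illustration of this idea (a sketch, not the method evaluated in this work), one simple form of transfer with a decision tree is instance-based: pool abundant labeled source-domain samples with a handful of up-weighted target-domain samples before fitting. The data below is synthetic and all names are illustrative:

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)

# Synthetic source domain: abundant labeled sensor feature vectors.
X_src = rng.normal(0.0, 1.0, size=(1000, 4))
y_src = (X_src[:, 0] + X_src[:, 1] > 0).astype(int)

# Target domain: same underlying task, shifted distribution, few labels.
X_tgt = rng.normal(0.5, 1.0, size=(30, 4))
y_tgt = (X_tgt[:, 0] + X_tgt[:, 1] > 0).astype(int)

# Instance-based transfer: pool both domains and up-weight the scarce
# target samples so the tree adapts toward the target distribution.
X_pool = np.vstack([X_src, X_tgt])
y_pool = np.concatenate([y_src, y_tgt])
weights = np.concatenate([np.ones(len(y_src)), 10.0 * np.ones(len(y_tgt))])

clf = DecisionTreeClassifier(max_depth=4, random_state=0)
clf.fit(X_pool, y_pool, sample_weight=weights)
```

The source data gives the tree enough examples to learn the shared decision boundary, while the weighting keeps the few target labels from being drowned out, rather than training on the 30 target samples alone.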

Moreover, previous work on AR has largely neglected explainability. Consequently, the predictions made by AR models are often not interpretable, leaving no insight into the decision-making processes of these black-box models. This hampers our ability to verify, validate, and trust the outputs, as it is difficult to understand how specific predictions are generated. Model performance can also drift because production data differs from training data, which makes continuous monitoring of the models crucial for responsible AI. This lack of transparency can lead to unintended biases and errors, compromising the reliability and safety of automated systems in smart buildings.

This research addresses the identified challenges by adapting, improving, and evaluating various transfer learning approaches based on Decision Trees (DT), chosen for their human-readable decision rules, computational efficiency, and robustness to outliers. This study conducts an analytical comparison of the rule sets derived from these models. We utilize Explainable AI methods to interpret the models’ decision-making mechanisms on unseen data. The framework’s efficacy is tested across various benchmark datasets for AR, where it consistently achieves high accuracy while notably advancing the clarity and comprehensibility of the transfer learning mechanisms. The results underscore the potential of these explainable transfer learning models to enhance user trust and facilitate broader adoption in practical settings, thus contributing to the development of more accountable and transparent Building Management Systems (BMS).
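Because the underlying models are decision trees, their decision rules can be rendered in human-readable form; a minimal sketch using scikit-learn's `export_text`, with the Iris dataset standing in for real building-sensor AR data:

```python
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

# Stand-in dataset; in the AR setting, features would be sensor readings
# and classes would be occupant activities.
X, y = load_iris(return_X_y=True)
clf = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)

# export_text renders the fitted tree as nested if/else threshold rules
# that a human can audit directly.
rules = export_text(
    clf, feature_names=["sepal_len", "sepal_wid", "petal_len", "petal_wid"]
)
print(rules)
```

Rule sets extracted this way from source-trained and target-adapted trees can then be compared side by side, which is the kind of analytical comparison of rules described above.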
