Detecting human behavior models from multimodal observation in a smart home

Citation

Brdiczka, O.; Langet, M.; Maisonnasse, J.; Crowley, J. L. Detecting human behavior models from multimodal observation in a smart home. IEEE Transactions on Automation Science and Engineering. 2009 October; 6 (4): 588-597.

Abstract

This article addresses the learning and recognition of human behavior models from multimodal observation in a smart home environment. The proposed approach is part of a framework for acquiring a high-level contextual model of human behavior in an augmented environment. A 3D video tracking system creates and tracks entities (persons) in the scene. A speech activity detector analyzes the audio streams coming from headset microphones and determines, for each entity, whether that entity is speaking. An ambient sound detector detects noises in the environment. An individual role detector derives basic activities, such as walking or interacting with a table, from the entity properties extracted by the 3D tracker. From the derived multimodal observations, different situations, such as "aperitif" or "presentation", are learned and detected using statistical models (hidden Markov models, HMMs). The objective of the proposed framework is twofold: the automatic offline analysis of recordings of human behavior and the online detection of learned human behavior models. To evaluate the approach, several multimodal recordings showing different situations were conducted. The results, in particular for offline analysis, are very good, showing that both multimodality and multi-person observation generation are beneficial for situation recognition.
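The abstract describes detecting learned situations with HMMs over multimodal observations. As a rough illustration of that general idea (not the authors' implementation), the Python sketch below scores a discrete observation sequence against one HMM per situation using the forward algorithm and returns the best-scoring situation. The symbol encoding, parameter values, and situation names are invented for the example.

import numpy as np

def forward_log_likelihood(obs, pi, A, B):
    """Log-likelihood of a discrete observation sequence under an HMM,
    computed with the forward algorithm in the log domain for stability.

    obs : sequence of integer observation symbols
    pi  : (n_states,) initial state distribution
    A   : (n_states, n_states) transitions, A[i, j] = P(state j | state i)
    B   : (n_states, n_symbols) emissions, B[i, k] = P(symbol k | state i)
    """
    log_alpha = np.log(pi) + np.log(B[:, obs[0]])
    for o in obs[1:]:
        # log-sum-exp over previous states, then absorb the new emission
        m = log_alpha.max()
        log_alpha = m + np.log(np.exp(log_alpha - m) @ A) + np.log(B[:, o])
    m = log_alpha.max()
    return m + np.log(np.exp(log_alpha - m).sum())

def detect_situation(obs, models):
    """Pick the situation whose HMM assigns the sequence the highest likelihood."""
    return max(models, key=lambda name: forward_log_likelihood(obs, *models[name]))

# Hypothetical two-situation, two-state, two-symbol setup. A symbol might
# encode a joint multimodal event, e.g. 0 = "standing + silent",
# 1 = "seated + speaking"; real models would be trained from recordings.
models = {
    "aperitif": (np.array([0.6, 0.4]),
                 np.array([[0.7, 0.3], [0.4, 0.6]]),
                 np.array([[0.8, 0.2], [0.3, 0.7]])),
    "presentation": (np.array([0.2, 0.8]),
                     np.array([[0.9, 0.1], [0.2, 0.8]]),
                     np.array([[0.1, 0.9], [0.6, 0.4]])),
}
print(detect_situation([1, 1, 0, 1, 1], models))

Maximum-likelihood selection among per-situation HMMs, as sketched here, corresponds to the detection step; offline analysis of whole recordings would additionally segment the observation stream before scoring.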
