User-centric Environment Discovery
Automatic environment discovery facilitates the deployment of camera networks in smart homes. In this project, objects in a smart home are identified from observed user interactions through a data integration and reasoning technique. This approach is complementary to traditional appearance-based object recognition methods, which often demand large training sets for detecting each object of interest.
In our approach, object recognition is achieved in a semantic way by linking object behaviors to the pose and activity of the person using the objects. The complex relations between objects and user activities are modeled with a Markov logic network (MLN). A Markov logic network compactly encodes commonsense knowledge of the complex relations associated with an object in the syntax of first-order logic. As a probabilistic graphical model, it also handles ambiguities in these relations as well as uncertainties in the outputs of vision processing.
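As a toy illustration of this style of inference (the predicates, rules, and weights below are invented for the example and do not reflect the project's actual knowledge base), a small grounded MLN can be queried exactly by enumerating possible worlds:

    import itertools
    import math

    # Minimal Markov-logic-style inference by exhaustive enumeration.
    # Worlds are truth assignments to the query atoms; evidence atoms are fixed.
    # P(world) is proportional to exp(sum of weights of satisfied ground formulas).

    # Hypothetical weighted rules (weight, formula evaluated on a world dict).
    RULES = [
        # If the user sits next to the object, it is likely a chair:
        # Sitting(u) AND NextTo(u, o)  =>  IsChair(o)
        (2.0, lambda w: not (w["sitting"] and w["next_to"]) or w["is_chair"]),
        # An object is rarely both a chair and a table.
        (3.0, lambda w: not (w["is_chair"] and w["is_table"])),
        # Weak prior that any given object is a table.
        (0.5, lambda w: w["is_table"]),
    ]

    QUERY_ATOMS = ["is_chair", "is_table"]          # unknown truth values
    EVIDENCE = {"sitting": True, "next_to": True}   # from the vision pipeline

    def world_score(world):
        """Sum of weights of rules satisfied in this world (log-potential)."""
        return sum(wt for wt, f in RULES if f(world))

    def query_marginal(atom):
        """Exact marginal P(atom=True | evidence) by enumerating query worlds."""
        z = p_true = 0.0
        for values in itertools.product([False, True], repeat=len(QUERY_ATOMS)):
            world = dict(EVIDENCE, **dict(zip(QUERY_ATOMS, values)))
            p = math.exp(world_score(world))
            z += p
            if world[atom]:
                p_true += p
        return p_true / z

    for atom in QUERY_ATOMS:
        print(f"P({atom} | evidence) = {query_marginal(atom):.3f}")

Enumeration is exponential in the number of ground atoms, so practical MLN systems use approximate inference (e.g., MCMC-based samplers); the enumeration above only serves to make the weighted-logic semantics concrete.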
We present a system consisting of feature extraction and activity analysis based on a camera network, and object inference with the MLN. Each camera in the network produces a local description of the user's location and activity; these descriptions are fused in a collaborative processing module to calculate the user's location in 3D space and to estimate the activity. The activity and location information is sent to the MLN to infer object probabilities. Differences between the inference methods of MLNs and general graphical models are discussed. An embodiment of the proposed approach
in a multi-camera smart home environment is developed.
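The 3D localization step can be illustrated with a standard least-squares ray intersection. The sketch below assumes calibrated cameras whose image detections have already been back-projected into viewing rays; the camera positions and target point are invented for the demonstration:

    import numpy as np

    def fuse_rays(centers, directions):
        """Least-squares 3D point closest to a set of camera viewing rays.

        Each camera contributes a ray (center c_i, unit direction d_i) obtained
        by back-projecting the user's image location through the calibration.
        Minimizes sum_i ||(I - d_i d_i^T)(x - c_i)||^2, which is linear in x.
        """
        A = np.zeros((3, 3))
        b = np.zeros(3)
        for c, d in zip(centers, directions):
            d = d / np.linalg.norm(d)
            P = np.eye(3) - np.outer(d, d)   # projector orthogonal to the ray
            A += P
            b += P @ c
        return np.linalg.solve(A, b)

    # Hypothetical setup: three cameras observing a user standing near (2, 1, 1).
    target = np.array([2.0, 1.0, 1.0])
    centers = [np.array([0.0, 0.0, 2.5]),
               np.array([4.0, 0.0, 2.5]),
               np.array([2.0, 4.0, 2.5])]
    directions = [target - c for c in centers]   # ideal, noise-free rays
    print(fuse_rays(centers, directions))        # ~ [2. 1. 1.]

With noisy detections the same linear system simply returns the point of best compromise among the rays, which is why adding cameras tightens the location estimate.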
Energy-Efficient User-centric Light Control in Smart Homes
Smart homes are conceptualized as environments responsive to the user's presence and actions and adaptive to user preferences and behavior models. Visual information plays an enabling role in smart home applications such as interfaces and gesture control. This project is based on the use of cameras and a distributed processing method for automated control of lights in a smart home. The implemented optimization formulations maintain the user's comfort while reducing the energy cost of lighting. Information from camera sensors provides occupancy reasoning and human activity analysis. By employing the user's position, activities, and preferences as constraints, the system optimizes the light settings for the user's satisfaction in the occupied area.
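One plausible form of such an optimization is a linear program that minimizes lamp power subject to a minimum illuminance in the occupied zone. The gain matrix, power ratings, and preference value below are illustrative assumptions, not measurements from the project:

    import numpy as np
    from scipy.optimize import linprog

    # Sketch of the light-setting optimization as a linear program.
    power = np.array([60.0, 60.0, 40.0])   # watts at full output, per luminaire

    # gain[z, l]: illuminance (lux) zone z receives from luminaire l at full output
    gain = np.array([[300.0, 120.0,  40.0],    # zone 0
                     [120.0, 300.0,  80.0]])   # zone 1

    occupied = 0          # zone the camera network reports as occupied
    preference = 250.0    # user's preferred illuminance (lux) from the profile

    # Minimize total power subject to: illuminance in the occupied zone meets
    # the preference, and dimmer levels stay in [0, 1].
    res = linprog(
        c=power,
        A_ub=-gain[[occupied]],       # -G x <= -pref  <=>  G x >= pref
        b_ub=[-preference],
        bounds=[(0.0, 1.0)] * len(power),
    )
    print("dimmer levels:", res.x.round(3))
    print("energy cost (W):", round(res.fun, 1))

Keeping the formulation linear means the light settings can be re-solved in real time as the user moves between zones or the preference changes.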
Distributed Vision-Based Smart Home Care
In this project we develop a distributed vision-based smart home care system aimed at monitoring elderly persons and patients remotely.
The cameras can continuously monitor the user, or be triggered by a broadcast from a user badge when the badge senses an accidental fall or other alert conditions. Tracking the user's approximate position with the camera network allows the cameras with the best views to be triggered. Distributed scene analysis modules analyze the user's posture and head location; this information is merged through a collaborative reasoning module, which makes a final decision about the type of report to be prepared. The developed prototype also allows for a
voice channel to be established between the user badge and
a call center over the phone line.
Making use of multiple camera views reduces the number of false alarms, making the system more reliable and more efficient.
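A minimal sketch of how multiple views can suppress single-camera false alarms, assuming each camera reports a fall confidence calibrated under a uniform prior and that views are conditionally independent given the event (the scores and threshold are invented for the example):

    import math

    def fuse_fall_scores(camera_probs, threshold=0.9):
        """Log-odds fusion of per-camera fall confidences.

        Under the stated assumptions, per-view log-odds simply add, so a
        single confident view is outvoted by disagreeing views with better
        vantage points, suppressing single-camera false alarms.
        """
        logit = lambda p: math.log(p / (1.0 - p))
        fused = sum(logit(p) for p in camera_probs)
        prob = 1.0 / (1.0 + math.exp(-fused))
        return prob, prob >= threshold

    # One camera sees a fall-like posture; two cameras with clearer views disagree.
    print(fuse_fall_scores([0.85, 0.05, 0.10]))  # (~0.03, False): no alarm
    # All three views agree on the fall.
    print(fuse_fall_scores([0.85, 0.70, 0.90]))  # (~0.99, True): raise the alarm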
Prompting Interactive Mobile Engagement (PRIME) System
The goal of this project is to create an adaptive intention inference and prompting solution in a smart home environment, using a network of cameras and a mobile device to assist individuals with cognitive impairments in successfully completing regular daily tasks. The project integrates the multi-modal interfacing and data communication technologies of emerging mobile phones with the sensor networking and home automation technologies envisioned for future smart homes. It employs the mobile phone as the interface between the patient's physical world (home sensor network, user's activities and tasks) and the digital world (knowledge base, user activity profile, health status reports), using its versatile communication and interfacing capabilities. The project offers its solution by incorporating innovative methods of:
(1) interactive learning for inference of complex and intertwined multi-tasks based on statistical relational models,
(2) adaptation of the user activity profile for adjusting intervention decisions,
(3) data processing with rich sources of information, such as multi-camera and multi-modal sensor networks, for recognition of the user's location, pose, activity, gesture, facial expression, and contextual data in an unobtrusive setting, and
(4) reinforcement learning for customizing the system's interaction and prompting services to the user's response skills (a minimal sketch of this idea follows below).
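As a minimal sketch of item (4), prompt customization can be framed as a multi-armed bandit over prompting modalities, with reward 1 when the user completes the prompted step. The modalities, response rates, and epsilon value below are illustrative assumptions:

    import random

    class PromptBandit:
        """Epsilon-greedy bandit over prompt modalities (illustrative sketch).

        Each arm is one way of prompting the user; the reward is 1 if the user
        completes the prompted task step, 0 otherwise. Over time the policy
        concentrates on the modality this particular user responds to best.
        """

        def __init__(self, arms, epsilon=0.1):
            self.arms = list(arms)
            self.epsilon = epsilon
            self.counts = {a: 0 for a in self.arms}
            self.values = {a: 0.0 for a in self.arms}  # running mean reward

        def choose(self):
            if random.random() < self.epsilon:          # explore
                return random.choice(self.arms)
            return max(self.arms, key=self.values.get)  # exploit best estimate

        def update(self, arm, reward):
            self.counts[arm] += 1
            n = self.counts[arm]
            self.values[arm] += (reward - self.values[arm]) / n

    # Simulated user who responds best to spoken prompts (probabilities invented).
    response_rate = {"text": 0.3, "audio": 0.8, "picture": 0.5}
    bandit = PromptBandit(response_rate)
    for _ in range(500):
        arm = bandit.choose()
        reward = 1 if random.random() < response_rate[arm] else 0
        bandit.update(arm, reward)
    print({a: round(v, 2) for a, v in bandit.values.items()})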
Publications
Hierarchical Preference Learning for Light Control from User Feedback
A. Khalili, C. Wu and H. Aghajan,
Workshop on Human Communicative Behavior Analysis, CVPR, June 2010.
User-centric Environment Discovery with Camera Networks in Smart Homes
C. Wu and H. Aghajan,
IEEE Trans. on Systems, Man, and Cybernetics Part A, 2010.
Using Context with Statistical Relational Models - Object Recognition from Observing User Activity in Home Environment
C. Wu and H. Aghajan,
Workshop on Use of Context in Vision Processing (UCVP), ICMI-MLMI, Nov. 2009.
Distributed Vision-based Accident Management for Assisted Living
H. Aghajan, J. Augusto, C. Wu, P. McCullagh, and J. Walkden,
Int. Conf. on Smart Homes and Health Telematics (ICOST), June 2007.
Distributed Vision-Based Reasoning for Smart Home Care
A. Keshavarz, A. Maleki-Tabar, and H. Aghajan,
ACM SenSys Workshop on Distributed Smart Cameras (DSC), Oct. 2006.
Smart Home Care Network using Sensor Fusion and Distributed Vision-Based Reasoning
A. Maleki-Tabar, A. Keshavarz, and H. Aghajan,
ACM Multimedia Workshop On Video Surveillance and Sensor Networks (VSSN), Oct. 2006.