Extraction and Interpretation of Deep Autoencoder-based Temporal Features from Wearables for Forecasting Personalized Mood, Health, and Stress
Master of Science
High-resolution wearable sensor data contain physiological and behavioral information that can be used to predict, and ultimately improve, human health and wellbeing. We propose a semi-supervised deep neural network framework that automatically learns features from passively collected multi-modal sensor data. This process can be personalized by fine-tuning the general features with participant-specific data. Using the learned features, we then performed personalized prediction of subjective wellbeing scores with high precision. We also provide visual explanations and statistical interpretations of the automatically learned features and the prediction models. In this study, we explored multiple implementations of our framework, including a locally connected linear network, a convolutional neural network, a recurrent neural network, and a visual attention network. The framework was evaluated using wearable sensor data and wellbeing labels collected from college students (6,391 total days from N=239). Sensor data include skin temperature, skin conductance, and acceleration; wellbeing scores include self-reported mood, health, and stress rated from 0 to 100. Compared to prediction based on hand-crafted features, the proposed framework achieved higher precision with fewer features. Our results show the promising potential of accurately predicting self-reported mood, health, and stress using an interpretable deep learning framework, ultimately toward real-time health and wellbeing monitoring and intervention systems that can benefit diverse populations.
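The pipeline described above (unsupervised feature learning, participant-specific fine-tuning, then supervised prediction of a 0-100 score) can be illustrated with a minimal sketch. This is not the thesis's actual architecture; it assumes a single-hidden-layer linear autoencoder trained by gradient descent, synthetic stand-in sensor windows, and a ridge-regression prediction head, purely to show how the three stages connect.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in data: daily windows of 3 sensor channels
# (skin temperature, skin conductance, acceleration), flattened.
n_days, n_channels, n_steps = 200, 3, 24
X = rng.normal(size=(n_days, n_channels * n_steps))
# Mock self-reported wellbeing score on a 0-100 scale.
y = np.clip(50 + 10 * X[:, 0] + rng.normal(scale=2, size=n_days), 0, 100)

def train_autoencoder(X, n_hidden=8, lr=1e-3, epochs=200, W=None, V=None):
    """Single-hidden-layer linear autoencoder, plain gradient descent
    on mean squared reconstruction error."""
    n_in = X.shape[1]
    if W is None:
        W = rng.normal(scale=0.1, size=(n_in, n_hidden))  # encoder weights
        V = rng.normal(scale=0.1, size=(n_hidden, n_in))  # decoder weights
    for _ in range(epochs):
        H = X @ W              # latent features
        R = H @ V - X          # reconstruction residual
        gV = H.T @ R / len(X)  # gradient w.r.t. decoder
        gW = X.T @ (R @ V.T) / len(X)  # gradient w.r.t. encoder
        W -= lr * gW
        V -= lr * gV
    return W, V

# 1) Learn general features on pooled data from all participants.
W, V = train_autoencoder(X)

# 2) Personalize: fine-tune the same weights on one participant's days.
X_p, y_p = X[:30], y[:30]
W, V = train_autoencoder(X_p, W=W, V=V, epochs=50)

# 3) Supervised head: ridge regression from learned features to the score.
H = X_p @ W
A = H.T @ H + 1e-3 * np.eye(H.shape[1])
beta = np.linalg.solve(A, H.T @ y_p)
pred = H @ beta
mae = float(np.mean(np.abs(pred - y_p)))  # mean absolute error, 0-100 scale
```

In the actual framework the autoencoder is replaced by the deeper variants named in the abstract (locally connected, convolutional, recurrent, or attention-based encoders), but the same train-general / fine-tune-personal / predict structure applies.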
Representation learning; Wearable sensors; Recurrent autoencoders; Personalized prediction; Network interpretability