
Multimodal Smart-Home Dataset and Foundation Model for Human-Centered Intelligence

Project Type

University-Industry Cooperation

Date

2025-2026

In collaboration with Xiaomi, this project builds a large-scale multimodal smart-home dataset integrating vision, audio, mmWave radar, and IMU signals to enable privacy-preserving human perception and behavior understanding in indoor environments. The system captures natural daily activities, such as sleeping, sitting, and interacting with furniture, under diverse lighting and spatial conditions. Each modality is precisely synchronized and labeled with identity, location, and action semantics to support cross-user and cross-scene generalization. Using this dataset, we are developing a foundation model for embodied perception, capable of identity recognition, spatial localization, and activity classification through multimodal fusion. The long-term goal is to establish an open benchmark for privacy-preserving embodied AI and to advance next-generation intelligent living spaces that perceive, adapt, and coexist seamlessly with humans.
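
To make the dataset design concrete, below is a minimal sketch of how one synchronized, labeled multimodal sample might be represented. All field names, array shapes, and sampling rates here are illustrative assumptions, not the project's actual schema.

from dataclasses import dataclass
import numpy as np

@dataclass
class MultimodalSample:
    """One synchronized window of smart-home sensor data (hypothetical schema)."""
    timestamp: float                 # shared clock across modalities, seconds
    rgb_frame: np.ndarray            # assumed (H, W, 3) camera frame
    audio: np.ndarray                # assumed (num_samples,) mono waveform
    mmwave_points: np.ndarray        # assumed (N, 4) radar points: x, y, z, Doppler
    imu: np.ndarray                  # assumed (T, 6) accelerometer + gyroscope
    subject_id: str                  # identity label
    location: tuple                  # assumed (x, y) room coordinates, meters
    action: str                      # activity label, e.g. "sitting"

# Example: a single labeled window filled with placeholder data.
sample = MultimodalSample(
    timestamp=12.50,
    rgb_frame=np.zeros((480, 640, 3), dtype=np.uint8),
    audio=np.zeros(16000, dtype=np.float32),      # 1 s at an assumed 16 kHz
    mmwave_points=np.zeros((64, 4), dtype=np.float32),
    imu=np.zeros((100, 6), dtype=np.float32),     # 1 s at an assumed 100 Hz
    subject_id="user_01",
    location=(2.3, 1.1),
    action="sitting",
)
print(sample.action, sample.mmwave_points.shape)

Keying every modality to a shared timestamp, as in this sketch, is what lets a fusion model consume aligned windows of vision, audio, radar, and IMU data directly.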
