Ruiqi Wang

Ph.D. student in Computer Science & Engineering at WashU.

I am a final-year Ph.D. student advised by Dr. Chenyang Lu.

I am a member of the AI for Health Institute and the Cyber-Physical Systems Laboratory. I earned my BSE degrees in ECE and CE from the University of Michigan, Ann Arbor and Shanghai Jiao Tong University (SJTU, 上海交通大学). I am a recipient of the 2025 Google PhD Fellowship.

My research lies at the intersection of Machine Learning Systems, Embedded Systems, Computer Vision, and Human Action Recognition, with a focus on impactful real-world applications.

  • In Embedded Systems and Edge Computing, I develop efficient algorithms for machine learning inference on resource-constrained devices, addressing complex tasks such as image classification and video-based action recognition. My work includes optimizing offloading strategies that balance accuracy and latency under strict deadlines and limited resources in real-time systems; a minimal sketch of this kind of deadline-aware offloading decision appears after this list.

  • In AI for Health, my Smart Kitchen project uses computer vision and action recognition to help individuals with cognitive impairments by detecting and correcting action sequencing errors in daily tasks like cooking. I also apply deep learning and computer vision to identify blood cancers from microscopic images, improving diagnostic accuracy.
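As noted above, here is a minimal sketch of a deadline-aware offloading decision. Everything in it is an illustrative assumption rather than the actual policy from my papers: the encoding options, the profiled numbers, and the simple latency model all stand in for what a real system would learn or measure. The core idea is to pick the highest-accuracy encoding whose estimated transmission-plus-server latency still fits the deadline, and to fall back to on-device inference otherwise.

```python
from dataclasses import dataclass

@dataclass
class EncodingOption:
    name: str
    size_kb: float        # compressed payload size (illustrative)
    expected_acc: float   # offline-profiled server-side accuracy (illustrative)

def choose_offload(options, bandwidth_kbps, server_ms, deadline_ms, local_acc):
    """Return the most accurate encoding that meets the deadline, or None to run locally."""
    for opt in sorted(options, key=lambda o: o.expected_acc, reverse=True):
        tx_ms = opt.size_kb * 8.0 / bandwidth_kbps * 1000.0  # estimated transmission time
        if tx_ms + server_ms <= deadline_ms and opt.expected_acc > local_acc:
            return opt
    return None  # no offloading option beats local inference within the deadline

# Hypothetical profile: at 2 Mbps with a 30 ms server budget and a 100 ms
# deadline, only the smallest encoding fits (tx = 10 KB * 8 / 2000 kbps = 40 ms).
options = [
    EncodingOption("high", size_kb=120.0, expected_acc=0.92),
    EncodingOption("mid",  size_kb=40.0,  expected_acc=0.88),
    EncodingOption("low",  size_kb=10.0,  expected_acc=0.80),
]
print(choose_offload(options, bandwidth_kbps=2000.0, server_ms=30.0,
                     deadline_ms=100.0, local_acc=0.75))
```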

Through these efforts, I aim to develop cutting-edge, deployable solutions for smart environments and healthcare, leveraging embedded systems and AI to make meaningful societal contributions. One of these projects earned the Best Student Paper Award at the IEEE Real-Time Systems Symposium (RTSS '23).

news

Dec 05, 2025 📄 My work with Jingwen Zhang, PhD (first author), Addressing Cohort Variability with Adaptive Fusion of Wearable and Clinical Data: A Case Study in Predicting Pancreatic Surgery Outcomes, has been accepted to ACM Transactions on Computing for Healthcare (HEALTH).
Paper Summary
  • We identify substantial cohort variability, exemplified by pre- vs. post-COVID surgical populations, and show that wearable and clinical features provide differing predictive value across patients, undermining fixed-weight multimodal models.
  • To address this, we introduce AdaMoE, an adaptive Mixture-of-Experts framework with diversity regularization that dynamically fuses wearable and clinical data, achieving more robust and accurate post-surgical outcome prediction than existing fusion methods.
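A minimal PyTorch-style sketch of the adaptive-fusion idea is below. It assumes a simple two-expert gate and an entropy-based diversity term; the actual AdaMoE architecture, its regularizer, and all module names and dimensions here are simplified illustrations, not the paper's implementation.

```python
import torch
import torch.nn as nn

class AdaptiveFusion(nn.Module):
    """Toy mixture-of-experts fusion over wearable and clinical features."""

    def __init__(self, wear_dim=64, clin_dim=32, hidden=128):
        super().__init__()
        self.wear_expert = nn.Sequential(nn.Linear(wear_dim, hidden), nn.ReLU())
        self.clin_expert = nn.Sequential(nn.Linear(clin_dim, hidden), nn.ReLU())
        self.gate = nn.Linear(wear_dim + clin_dim, 2)  # per-patient modality weights
        self.head = nn.Linear(hidden, 1)               # post-surgical outcome logit

    def forward(self, wear, clin):
        w = torch.softmax(self.gate(torch.cat([wear, clin], dim=-1)), dim=-1)
        fused = w[:, :1] * self.wear_expert(wear) + w[:, 1:] * self.clin_expert(clin)
        # Entropy-style diversity term: minimizing it (negative entropy) keeps
        # the gate from collapsing onto a single modality for every patient.
        div_loss = (w * torch.log(w + 1e-8)).sum(dim=-1).mean()
        return self.head(fused), div_loss
```

The gate adapts the fusion weights per patient, which is what lets a model of this shape lean on clinical features for some cohorts and wearable features for others, instead of committing to one fixed weighting.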
Oct 23, 2025 🏆 I received the 2025 Google PhD Fellowship, which recognizes and supports my research on AI for health. The news was also featured by WashU Engineering.
Sep 18, 2025 📄 CHEF-VL: Detecting Cognitive Sequencing Errors in Cooking with Vision-Language Models, accepted to the Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies (IMWUT), will be presented at UbiComp / ISWC 2026 in October 2026 in Shanghai.
Paper Summary
  • Two-Year Real-World Data Collection:
    Developed a Smart Kitchen environment and collected a large, richly annotated dataset from over 100 participants performing a standardized cooking task, enabling research grounded in authentic human behavior.
  • A New AI Framework for Cognitive Support:
    Introduced CHEF-VL, a dual–Vision-Language Model system that integrates action recognition, environmental state detection, and action–state refinement to accurately detect sequencing errors in real time.
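As a toy illustration of the detection task itself (not the CHEF-VL method, which uses vision-language models rather than a hand-written recipe graph), the check below flags any recognized action whose prerequisite steps have not yet been observed. The recipe and action names are hypothetical.

```python
# Hypothetical precedence constraints for a simple cooking task.
RECIPE_PREREQS = {
    "boil_water":  set(),
    "add_pasta":   {"boil_water"},
    "drain_pasta": {"add_pasta"},
    "add_sauce":   {"drain_pasta"},
}

def find_sequencing_errors(observed_actions):
    """Return (action, missing prerequisites) pairs in the observed order."""
    done, errors = set(), []
    for step in observed_actions:
        missing = RECIPE_PREREQS.get(step, set()) - done
        if missing:
            errors.append((step, sorted(missing)))
        done.add(step)
    return errors

# Adding pasta before the water boils is flagged as a sequencing error:
print(find_sequencing_errors(["add_pasta", "boil_water", "drain_pasta", "add_sauce"]))
# -> [('add_pasta', ['boil_water'])]
```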
Jul 12, 2025 📄 Real-Time Video-Based Human Action Recognition on Embedded Platforms, accepted to appear at ACM/IEEE EMSOFT @ ESWEEK 2025.
Jun 01, 2025 💼 Research Engineer Intern at Plus (May 2025 – Aug 2025, Santa Clara, CA): developing vision-language models for autonomous driving and curating datasets from real-world and synthetic sources.

selected publications

  1. UbiComp 2026
    CHEF-VL: Detecting Cognitive Sequencing Errors in Cooking with Vision-Language Models
    Ruiqi Wang, Peiqi Gao, Patrick John Lynch, Tingjun Liu, Yejin Lee, Carolyn Baum, Lisa Tabor Connor, and Chenyang Lu
    Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies, Dec 2025
  2. EMSOFT 2025
    Real-Time Video-Based Human Action Recognition on Embedded Platforms
    Ruiqi Wang, Zichen Wang, Peiqi Gao, Mingzhen Li, Jaehwan Jeong, Yihang Xu, Yejin Lee, Carolyn Baum, Lisa Tabor Connor, and Chenyang Lu
    ACM Transactions on Embedded Computing Systems (Special Issue: ESWEEK 2025), Sep 2025
  3. RTSS 2023
    Progressive Neural Compression for Adaptive Image Offloading Under Timing Constraints
    Ruiqi Wang, Hanyang Liu, Jiaming Qiu, Moran Xu, Roch Guérin, and Chenyang Lu
    In 2023 IEEE Real-Time Systems Symposium (RTSS), Sep 2023
    Best Student Paper Award
  4. EMSOFT 2022
    Adaptive Edge Offloading for Image Classification Under Rate Limit
    Jiaming Qiu, Ruiqi Wang, Ayan Chakrabarti, Roch Guérin, and Chenyang Lu
    IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems, Nov 2022