Ruiqi Wang

Ph.D. student in Computer Science & Engineering @ WashU.

I am a final-year Ph.D. student advised by Dr. Chenyang Lu.

I am a member of the AI for Health Institute and the Cyber-Physical Systems Laboratory. I earned my BSE degree in ECE and CE from the University of Michigan, Ann Arbor, and Shanghai Jiao Tong University (SJTU, 上海交通大学). I am a recipient of the 2025 Google PhD Fellowship.

My research lies at the intersection of Machine Learning Systems, Embedded Systems, Computer Vision, and Human Action Recognition, with a focus on impactful real-world applications.

  • In Embedded Systems and Edge Computing, I develop efficient algorithms for machine learning inference on resource-constrained devices, addressing complex tasks such as image classification and video-based action recognition. My work includes optimizing offloading strategies that balance accuracy and latency under strict deadlines and limited resources in real-time systems; a toy sketch of this trade-off appears after this list.

  • In AI for Health, my Smart Kitchen project uses computer vision and action recognition to help individuals with cognitive impairments by detecting and correcting action sequencing errors in daily tasks like cooking. I also apply deep learning and computer vision to identify blood cancers from microscopic images, improving diagnostic accuracy.
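
To give a concrete flavor of the offloading trade-off in the first bullet, below is a minimal, illustrative sketch of a deadline-aware offloading decision in Python. It is not the system from my papers: the names (`Tier`, `choose_offloading_plan`) and all numbers are hypothetical, and it only shows how a device might choose between on-device inference and uploading a progressively encoded input under a hard deadline.

```python
# Illustrative sketch only: deadline-aware edge offloading with a progressive encoding.
from dataclasses import dataclass

@dataclass
class Tier:
    size_bytes: int           # bytes this encoding tier adds to the upload
    expected_accuracy: float  # estimated server-side accuracy if decoded up to this tier

def choose_offloading_plan(tiers, bandwidth_bps, server_latency_s,
                           deadline_s, local_accuracy):
    """Return the deepest tier whose upload + server inference fits the deadline,
    or "local" if on-device inference is the better (or only) option."""
    best = None
    sent_bytes = 0
    for tier in tiers:
        sent_bytes += tier.size_bytes
        total_latency_s = sent_bytes * 8 / bandwidth_bps + server_latency_s
        if total_latency_s <= deadline_s:
            best = tier   # this tier still meets the deadline
        else:
            break         # later tiers only add more bytes, so stop
    if best is None or best.expected_accuracy <= local_accuracy:
        return "local"    # offloading cannot beat on-device inference in time
    return best

# Hypothetical numbers: three tiers, 5 Mbps uplink, 30 ms server inference, 100 ms deadline.
tiers = [Tier(8_000, 0.72), Tier(16_000, 0.81), Tier(32_000, 0.88)]
plan = choose_offloading_plan(tiers, bandwidth_bps=5e6, server_latency_s=0.03,
                              deadline_s=0.10, local_accuracy=0.70)
print(plan)  # Tier(size_bytes=16000, expected_accuracy=0.81) under these assumptions
```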

Through these efforts, I aim to develop cutting-edge, deployable solutions for smart environments and healthcare, leveraging embedded systems and AI to make meaningful societal contributions. One of my projects earned the Best Student Paper Award at the IEEE Real-Time Systems Symposium (RTSS ’23).

news

Oct 23, 2025 🏆 I have received the 2025 Google PhD Fellowship, recognizing and supporting my research on AI for health. The news was also featured by WashU Engineering.
Sep 18, 2025 📄 CHEF-VL: Detecting Cognitive Sequencing Errors in Cooking with Vision-Language Models, accepted to the Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies (IMWUT) and to be presented at UbiComp / ISWC 2026.
Jul 12, 2025 📄 Real-Time Video-Based Human Action Recognition on Embedded Platforms, accepted to appear at ACM/IEEE EMSOFT @ ESWEEK 2025.
Jun 01, 2025 💼 Research Engineer Intern at Plus (May 2025 – Aug 2025, Santa Clara, CA): developing vision-language models for autonomous driving and curating datasets from real-world and synthetic sources.
Dec 04, 2024 📄 Jiaming Qiu, Ruiqi Wang, et al., Optimizing Edge Offloading Decisions for Object Detection, published at ACM/IEEE SEC 2024. Code available on GitHub; paper available on IEEE Xplore.

selected publications

  1. UbiComp 2026
    CHEF-VL: Detecting Cognitive Sequencing Errors in Cooking with Vision-Language Models
    Ruiqi Wang, Peiqi Gao, Patrick John Lynch, Tingjun Liu, Yejin Lee, Carolyn Baum, Lisa Tabor Connor, and Chenyang Lu
    Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies, 2025
  2. EMSOFT 2025
    Real-Time Video-Based Human Action Recognition on Embedded Platforms
    Ruiqi Wang, Zichen Wang, Peiqi Gao, Mingzhen Li, Jaehwan Jeong, Yihang Xu, Yejin Lee, Carolyn Baum, Lisa Connor, and Chenyang Lu
    ACM Transactions on Embedded Computing Systems (ESWEEK 2025 Special Issue), Sep 2025
  3. RTSS 2023
    Progressive Neural Compression for Adaptive Image Offloading Under Timing Constraints
    Ruiqi Wang, Hanyang Liu, Jiaming Qiu, Moran Xu, Roch Guérin, and Chenyang Lu
    In 2023 IEEE Real-Time Systems Symposium (RTSS), Sep 2023
    Best Student Paper Award
  4. EMSOFT 2022
    Adaptive Edge Offloading for Image Classification Under Rate Limit
    Jiaming Qiu, Ruiqi Wang, Ayan Chakrabarti, Roch Guérin, and Chenyang Lu
    IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems, Nov 2022