Welcome to my homepage. My name is Lei Zhao.
I received my Ph.D. from the University of Pittsburgh in 2022. Before that, I received my B.S. degree in Software Engineering in 2011 and my M.S. degree in Computer Science in 2014, both from Northwestern Polytechnical University in China. After graduation, I worked as a postdoctoral research scientist at Meta Reality Labs from 2022 to 2023. In 2023, I joined the Artificial Intelligence Research Lab at HPE.
My research interests include non-volatile memory technologies, processing-in-memory accelerators, and secure and privacy-preserving machine learning. I am also enthusiastic about self-supervised learning (contrastive learning in particular) and multimodal learning.
Research
My research interests lie in the fields of nonvolatile memory (NVM) technology, machine learning (ML), and accelerators for ML algorithms.
Nonvolatile memory
Nonvolatile memory was the focus of my earliest research projects. Among the various types of NVM, ReRAM's crossbar structure provides the capability to perform multiply-accumulate (MAC) operations directly in memory. So, besides designing efficient NVM-based systems for general computing, I am also interested in applying NVM to machine learning acceleration.
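The in-memory MAC idea can be sketched as follows. In a ReRAM crossbar, a weight matrix is stored as cell conductances; applying input voltages to the rows produces, by Kirchhoff's current law, a per-column current equal to a dot product. This minimal C++ simulation (an illustration, not a device model; all names are hypothetical) shows the computation the crossbar performs in one analog step:

```cpp
#include <array>
#include <cstddef>

// Simulate a ReRAM crossbar of R rows x C columns. The weight matrix is
// stored as conductances G[i][j]; applying voltages V[i] to the rows yields
// column currents I[j] = sum_i V[i] * G[i][j], i.e., a matrix-vector
// multiply (MAC) that the physical crossbar evaluates in a single step.
template <std::size_t R, std::size_t C>
std::array<double, C> crossbar_mac(
    const std::array<double, R>& voltage,
    const std::array<std::array<double, C>, R>& conductance) {
    std::array<double, C> current{};  // accumulated column currents
    for (std::size_t i = 0; i < R; ++i)
        for (std::size_t j = 0; j < C; ++j)
            current[j] += voltage[i] * conductance[i][j];
    return current;
}
```

In the real device all R rows contribute simultaneously, which is what makes crossbars attractive for accelerating the dense matrix-vector products at the heart of neural network inference.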
Machine learning
Because many acceleration techniques require modifying the lowest levels of an ML implementation, a clean and easy-to-modify ML framework helps researchers prototype new ideas quickly. To that end, I developed a minimal ML framework in C++ and CUDA (github). I am also an active learner of new ML techniques, which I continually try to implement in the framework. You are very welcome to try it out and use it in your own projects.
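To illustrate what "easy to modify at the lowest level" can mean, here is a hypothetical sketch (not the actual framework's API) of the kind of minimal layer interface a small C++ ML framework might expose, so a researcher can override the compute kernel directly when prototyping a new acceleration idea:

```cpp
#include <cstddef>
#include <vector>

// Hypothetical minimal layer interface: each layer is just a forward()
// over flat float buffers, so swapping in a custom or accelerated kernel
// means overriding one virtual function.
struct Layer {
    virtual std::vector<float> forward(const std::vector<float>& x) = 0;
    virtual ~Layer() = default;
};

// Example layer: elementwise ReLU, y[i] = max(0, x[i]).
struct ReLU : Layer {
    std::vector<float> forward(const std::vector<float>& x) override {
        std::vector<float> y(x.size());
        for (std::size_t i = 0; i < x.size(); ++i)
            y[i] = x[i] > 0.0f ? x[i] : 0.0f;
        return y;
    }
};
```

Keeping the interface this thin is what makes fast prototyping possible: a new in-memory or hardware-aware kernel drops in as another `Layer` subclass without touching the rest of the stack.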
Secure ML accelerator
ML algorithms, especially deep neural networks (DNNs), have been widely used in commercial applications. Training DNNs demands both high computing capability and massive (often private) datasets, so a trained commercial ML model should be treated as intellectual property (IP). My research in this area uses hardware characteristics to protect trained machine learning models deployed on ML accelerators.
Publications
A full publication list is available HERE.
Conference Papers
Speeding Up Crossbar Resistive Memory by Exploiting In-memory Data Patterns
Wen Wen, Lei Zhao, Youtao Zhang, and Jun Yang
International Conference On Computer Aided Design (ICCAD), 2017.
[Paper]
Exploit Common Source-Line to Construct Energy Efficient Domain Wall Memory based Caches
Xianwei Zhang, Lei Zhao, Youtao Zhang, and Jun Yang
International Conference on Computer Design (ICCD), 2015.
[Paper]
Journal Papers
Privacy-preserving Time Series Medical Images Analysis Using a Hybrid Deep Learning Framework
Zijie Yue, Shuai Ding, Lei Zhao, Youtao Zhang, Zehong Cao, M. Tanveer, Alireza Jolfaei and Xi Zheng
ACM Transactions on Internet Technology, 2020.
[Paper]
Exploiting In-memory Data Patterns for Performance Improvement on Crossbar Resistive Memory
Wen Wen, Lei Zhao, Youtao Zhang, and Jun Yang
IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems, 2019.
[Paper]
Contact Me
210 S. Bouquet Street
Department of Computer Science
University of Pittsburgh
Pittsburgh, PA 15260-9161, USA
Office: 6514
E-mail: leizhao@cs.pitt.edu