Pan Li joined Georgia Tech in Spring 2023. Before that, he was an assistant professor in the Computer Science Department at Purdue University from Fall 2020 to Spring 2023. Prior to joining Purdue, he was a postdoctoral researcher in the Computer Science Department at Stanford University from 2019 to 2020. He received his Ph.D. in Electrical and Computer Engineering from the University of Illinois Urbana-Champaign. He has received the NSF CAREER Award, the Best Paper Award of the Learning on Graphs Conference, the Sony Faculty Innovation Award, and the JPMorgan Faculty Award.
- Ph.D., Electrical and Computer Engineering, University of Illinois Urbana–Champaign, 2019
Prof. Li's research centers on Machine Learning on Graphs, studying graph‑structured data to address challenges in scalability, reasoning, and reliability in modern AI.
His work focuses on two major frontiers:
- Graphs for Trustworthy & Efficient LLMs — methods for machine unlearning, jailbreak defense, structural bias insertion, and reasoning enhancement.
- Geometric Deep Learning & AI for Science — architectures respecting physical symmetries, geometric foundation models, and diffusion models for structured scientific data.
Prof. Li's teaching interests lie at the intersection of machine learning, optimization, and their applications in scientific discovery. He provides rigorous instruction in the mathematical foundations of data science while exposing students to cutting-edge research in geometric deep learning and AI for science.
His course areas include:
- Optimization and Numerical Methods
- Convex Optimization (ECE 6270)
- Optimization in Information Systems (ECE 3251)
- Computational Methods in Optimization
He also teaches machine learning on graphs and mentors interdisciplinary VIP teams applying geometric deep learning to scientific problems.
- NSF CAREER Award, 2023
- Best Paper Award, Learning on Graphs Conference, 2022
- Sony Faculty Innovation Award, 2021
- JPMorgan Faculty Award, 2021
- ECE Distinguished Research Fellowship, UIUC, 2018
- Graph‑KV: Breaking Sequence via Injecting Structural Biases into LLMs, NeurIPS 2025
- Do LLMs Really Forget? Evaluating Unlearning..., NeurIPS 2025
- The Trojan Knowledge: Bypassing LLM Guardrails..., arXiv:2512.01353, 2025
- Siqi Miao et al., Locality‑Sensitive Hashing‑Based Efficient Point Transformer, ICML 2024 (Oral)
- Pan Li et al., Distance Encoding: More Powerful GNNs, NeurIPS 2020