Large language models encode vast amounts of world knowledge, yet we lack a deep understanding of how this knowledge is stored, retrieved, and evolves over time. Updating or correcting knowledge in deployed models also remains challenging without costly retraining.
My research focuses on knowledge mechanisms and editing for LLMs. I aim to understand how knowledge interacts within these models and design post-training methods to manipulate memory and reasoning for agentic systems, enabling models to evolve continuously while preserving their core capabilities.

Baohua Yan, Qingyuan Liu, Jennifer Kava, Xuan Di
Preprint
Proposed a Fourier-domain diffusion framework that explicitly models frequency consistency across classes and diffusion timesteps to mitigate information loss in standard DDPMs. Implemented frequency-aware denoising and adaptive spectral regularization to stabilize the generative process and improve cross-domain generalization.
# Controllable Diffusion
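The frequency-consistency idea can be sketched as a regularizer that compares the low-frequency spectra of a denoised image and its target. This is a minimal illustration assuming a simple low-pass band comparison; the function name, cutoff, and loss form are hypothetical, not the paper's actual objective.

```python
import numpy as np

def spectral_regularizer(denoised, target, cutoff=8):
    """Hypothetical frequency-consistency term: penalize mismatch between
    the low-frequency spectra of a denoised image and its target."""
    # 2-D FFT of both images, shifted so low frequencies sit at the center.
    F_d = np.fft.fftshift(np.fft.fft2(denoised))
    F_t = np.fft.fftshift(np.fft.fft2(target))
    h, w = denoised.shape
    cy, cx = h // 2, w // 2
    # Keep only a low-frequency square of side 2*cutoff around the center.
    low_d = F_d[cy - cutoff:cy + cutoff, cx - cutoff:cx + cutoff]
    low_t = F_t[cy - cutoff:cy + cutoff, cx - cutoff:cx + cutoff]
    # Mean squared magnitude difference over the retained band.
    return float(np.mean(np.abs(low_d - low_t) ** 2))

rng = np.random.default_rng(0)
img = rng.standard_normal((32, 32))
print(spectral_regularizer(img, img))  # identical inputs -> 0.0
```

In a training loop, a term like this would be added to the standard denoising loss at each timestep, weighted by an adaptive coefficient.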

Qingyuan Liu*, Jiachen Gu*, Yunzhi Yao, Hong Wang, Nanyun Peng
In The Fourteenth International Conference on Learning Representations (ICLR). 2026.
Top-1.1% in Transfer/Meta/Lifelong Learning track
[TL;DR] [Paper] [Code] [EasyEdit] [Project Page]
Developed SPHERE (Sparse Projection for Hyperspherical Energy-Regularized Editing), which projects new knowledge onto sparse hyperspherical subspaces to preserve representation uniformity and editing stability, with rigorous proofs; SPHERE achieves +16.4% higher editing capability while best preserving general performance on LLaMA3-8B and Qwen2.5-7B.
# Model Editing # Knowledge Mechanisms # Lifelong Learning
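The core editing move can be sketched in a few lines: sparsify the weight update, then renormalize so the edited row stays on the unit hypersphere. This is a simplified one-row illustration under assumed interfaces; the top-k sparsification and unit-norm projection here stand in for SPHERE's actual subspace construction.

```python
import numpy as np

def sparse_hyperspherical_update(w, delta, k=2):
    """Hypothetical SPHERE-style edit: keep only the k largest-magnitude
    components of the update (sparse projection), then renormalize so the
    edited weight row stays on the unit hypersphere, preserving the
    angular structure among rows."""
    # Sparse projection: zero out all but the top-k entries of delta.
    sparse = np.zeros_like(delta)
    idx = np.argsort(np.abs(delta))[-k:]
    sparse[idx] = delta[idx]
    # Apply the sparsified update, then project back to unit norm.
    edited = w + sparse
    return edited / np.linalg.norm(edited)

w = np.array([1.0, 0.0, 0.0, 0.0])
out = sparse_hyperspherical_update(w, np.array([0.0, 0.5, 0.1, 0.05]), k=1)
print(out)
```

Keeping the update sparse limits interference with unrelated knowledge, while the norm projection keeps the row on the hypersphere where uniformity is measured.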

Baohua Yan, Qingyuan Liu, Zhaobin Mo, Kangrui Ruan, Xuan Di
The Thirteenth International Conference on Learning Representations, ICLR DeLTa workshop. 2025.
Proposed a controllable diffusion generation framework that balances the latent space via guiding signals, enabling the generation of counterfactual data while preserving factual consistency. Experiments on MNIST demonstrate its potential for counterfactual data generation.
# Controllable Diffusion
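A guiding signal of this kind can be sketched as a classifier-free-guidance-style blend of score estimates: the sampler is pushed toward the counterfactual class while staying anchored on the unconditional estimate. The function and scale parameter below are illustrative assumptions, not the paper's exact mechanism.

```python
import numpy as np

def guided_score(score_uncond, score_cond, scale):
    """Hypothetical guidance step: interpolate/extrapolate between an
    unconditional and a class-conditional score estimate. scale=0 ignores
    the condition; scale=1 recovers the conditional score; scale>1
    amplifies the counterfactual signal."""
    return score_uncond + scale * (np.asarray(score_cond) - score_uncond)

s_uncond = np.array([0.0, 0.0])
s_cond = np.array([1.0, -1.0])
print(guided_score(s_uncond, s_cond, scale=1.0))
```

At sampling time, a blend like this would be applied at every diffusion step to steer generation toward the counterfactual class.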

Qingyuan Liu, Yun-Yun Tsai, Ruijian Zha, Pengyuan Shi, Victoria Li, Chengzhi Mao, Junfeng Yang
Preprint
Designed LAVID, an agentic LVLM-based framework that integrates external tools (e.g., SAM) for AI-generated video detection, boosting F1 scores by 6.2%–30.2% across GPT-4o, Gemini-1.5-Pro, Qwen-VL, and LLaVA.
# AI-Synthetic # Agentic Framework
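The agentic loop can be sketched as: the LVLM first selects which external tools to run on the frames, then reasons over the tool outputs to reach a verdict. All interfaces here (`select_tools`, `classify`, the tool callables) are illustrative stand-ins, not LAVID's actual API.

```python
def agentic_detect(frames, lvlm, tools):
    """Hypothetical agentic detection loop over a video's frames."""
    # Step 1: ask the LVLM which auxiliary signals would help.
    chosen = lvlm.select_tools(frames, available=list(tools))
    # Step 2: run the chosen tools (e.g., a segmenter) to extract evidence.
    evidence = {name: tools[name](frames) for name in chosen}
    # Step 3: final judgment conditioned on frames plus tool evidence.
    return lvlm.classify(frames, evidence)

class DummyLVLM:
    """Toy stand-in so the loop runs without a real LVLM backend."""
    def select_tools(self, frames, available):
        return available[:1]
    def classify(self, frames, evidence):
        return "ai-generated" if evidence else "real"

tools = {"segment": lambda frames: ["mask"] * len(frames)}
print(agentic_detect(["f1", "f2"], DummyLVLM(), tools))
```

The point of the loop is that tool selection is delegated to the model itself rather than hard-coded, which is what makes the framework agentic.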

Zhaobin Mo*, Qingyuan Liu*, Baohua Yan, Longxiang Zhang, Xuan Di
Proceedings of the 27th IEEE International Conference on Intelligent Transportation Systems (ITSC). 2024.
Designed the Causal Adjacency Learning (CAL) framework, improving prediction performance on the ODD dataset; applied a heuristic that combines correlation calculation with conditional independence testing; achieved a +26.7% average RMSE improvement on the SafeGraph dataset over distance-, correlation-, and attention-matrix-based baselines.
# Graph Neural Networks
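The two-stage heuristic can be sketched as: screen node pairs by marginal correlation, then drop edges whose partial correlation (a simple conditional-independence proxy) is small. Thresholds and the precision-matrix-based test below are illustrative assumptions in the spirit of CAL, not its exact procedure.

```python
import numpy as np

def learn_adjacency(X, corr_thresh=0.3, ci_thresh=0.1):
    """Hypothetical CAL-style adjacency learner: keep edge (i, j) only if
    both the marginal correlation and the partial correlation given all
    other nodes exceed their thresholds."""
    n = X.shape[1]
    corr = np.corrcoef(X, rowvar=False)
    # Partial correlations from the inverse covariance (precision) matrix.
    prec = np.linalg.pinv(np.cov(X, rowvar=False))
    d = np.sqrt(np.diag(prec))
    partial = -prec / np.outer(d, d)
    A = np.zeros((n, n), dtype=int)
    for i in range(n):
        for j in range(i + 1, n):
            if abs(corr[i, j]) > corr_thresh and abs(partial[i, j]) > ci_thresh:
                A[i, j] = A[j, i] = 1
    return A

rng = np.random.default_rng(0)
x = rng.standard_normal(500)
# Column 1 is a noisy copy of column 0; column 2 is independent.
X = np.column_stack([x, x + 0.1 * rng.standard_normal(500), rng.standard_normal(500)])
print(learn_adjacency(X))  # expect an edge only between columns 0 and 1
```

The learned adjacency matrix would then replace distance-, correlation-, or attention-based matrices as the graph structure fed to the GNN predictor.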

Qingyuan Liu, Pengyuan Shi, Yun-Yun Tsai, Chengzhi Mao, Junfeng Yang
IEEE / CVF Computer Vision and Pattern Recognition Conference, GenAI Workshop. 2024
Columbia Engineering Research Highlight
Developed a Diffusion Reconstruction Error (DIRE) method for AI-generated video detection, leveraging video generation models with temporal cues to achieve up to 93.7% accuracy on Stable Video Diffusion, Sora, Pika, and Gen-2 datasets.
# AI-Synthetic
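The detection principle can be sketched as a per-frame reconstruction-error score: content sampled from a diffusion model tends to be reconstructed by that model with lower error than real footage. The `reconstruct` callable below is a placeholder for the actual inversion-and-sampling pipeline, and the score aggregation is an illustrative assumption.

```python
import numpy as np

def reconstruction_error_score(frames, reconstruct):
    """Hypothetical DIRE-style score: mean squared reconstruction error
    across a video's frames. Lower scores suggest AI-generated content."""
    errors = [np.mean((f - reconstruct(f)) ** 2) for f in frames]
    return float(np.mean(errors))

rng = np.random.default_rng(1)
frames = [rng.standard_normal((8, 8)) for _ in range(4)]
perfect = reconstruction_error_score(frames, lambda f: f)      # ideal reconstruction
noisy = reconstruction_error_score(frames, lambda f: f + 0.5)  # imperfect reconstruction
print(perfect, noisy)
```

A detector would threshold (or learn a classifier on) this score; the temporal cues mentioned above could enter by reconstructing short frame windows jointly rather than independently.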

Qingyuan Liu, Yuxuan Zhou, Shuai Bao
International Conference on Cloud Computing, Internet of Things, and Computer Applications (CICA). 2022.
Improved GAN-based face swapping under the CycleGAN framework, enabling training without paired images. The model is trained on over 1,000 images and can process video inputs to generate swapped-face videos within minutes. Experiments on Hillary and Trump videos demonstrate competitive quality and speed compared with other GAN-based methods.
# AI-Synthetic
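The property that enables training without paired images is cycle consistency: mapping a face to the other identity and back should recover the original. A minimal sketch of that loss, with placeholder generators `G` and `F` standing in for the trained networks:

```python
import numpy as np

def cycle_consistency_loss(x, G, F):
    """Hypothetical CycleGAN-style cycle loss: L1 gap between x and its
    round-trip reconstruction F(G(x))."""
    return float(np.mean(np.abs(F(G(x)) - x)))

x = np.ones((4, 4))
# Generators that exactly invert each other give zero cycle loss.
print(cycle_consistency_loss(x, lambda a: a * 2.0, lambda a: a / 2.0))  # -> 0.0
```

In the full model this term is summed over both directions and added to the adversarial losses of the two discriminators.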