Biography
I am a first-year Ph.D. student in Computer Science at Tsinghua University, under the supervision of Prof. Hongning Wang. I am currently interning at Zhipu AI.
My research focuses on reinforcement learning (RL) for LLMs, especially agentic RL and efficient RL training. I am particularly interested in improving long-horizon planning and memory-aware training for LLM agents, and in optimizing training efficiency under real-world infrastructure constraints.
Beyond research, I actively contribute to open-source LLM RL systems, including slime and SGLang. I am one of the core contributors to both GLM-4.5 and GLM-5.
Previously, I was a research intern at Shanghai AI Lab and a research assistant at HKU under the supervision of Prof. Difan Zou. I was also a visiting student at KAUST, where I worked with Dr. Guohao Li and Prof. Bernard Ghanem.
If you would like to get in touch, feel free to reach out via email (xiechengxing34@gmail.com) or WeChat.
Open-Source Contributions
- Core contributor to GLM-5 (Technical Report)
- Core contributor to GLM-4.5 (Technical Report)
- Core contributor to slime (an LLM post-training framework for scalable RL; Repository)
- Contributor to SGLang (primarily working on RL-related features; Repository)
Education
Doctor of Philosophy (August 2025 – Present), College of AI, Tsinghua University. Supervised by Prof. Hongning Wang.
Bachelor's Degree (September 2021 – June 2025), School of Computer Science and Technology, Xidian University, Xi'an, China.
Selected Publications
- GLM-5: from Vibe Coding to Agentic Engineering
- I am one of the core contributors to GLM-5.
- GLM-4.5: Agentic, Reasoning, and Coding (ARC) Foundation Models
- I am one of the core contributors to GLM-4.5.
- Can Large Language Model Agents Simulate Human Trust Behavior?
- Authors: Chengxing Xie, Canyu Chen, Feiran Jia, Ziyu Ye, Shiyang Lai, Kai Shu, Jindong Gu, Adel Bibi, Ziniu Hu, David Jurgens, James Evans, Philip Torr, Bernard Ghanem, Guohao Li
- Accepted at NeurIPS 2024, with 200+ citations. The code is available here.
- SWE-Fixer: Training Open-Source LLMs for Effective and Efficient GitHub Issue Resolution
- Authors: Chengxing Xie, Bowen Li, Chang Gao, He Du, Wai Lam, Difan Zou, Kai Chen
- Accepted to Findings of ACL 2025. The code is available here.
