Biography
I am a first-year Ph.D. student in Computer Science at Tsinghua University, under the supervision of Prof. Hongning Wang. I am also currently interning at Zhipu AI.
My research primarily focuses on LLM RL, with a particular interest in Agentic RL and in how to enhance LLMs to effectively perform tasks in real-world environments. Beyond research, I am also passionate about open-source contributions and am actively working on LLM RL infrastructure projects (e.g., slime, SGLang).
My past research experience includes working as a research intern at the Shanghai AI Lab. I also served as a research assistant at HKU, under the supervision of Prof. Difan Zou. In addition, I was a visiting research student at KAUST, where I worked with Dr. Guohao Li and Prof. Bernard Ghanem.
If you would like to get in touch with me, feel free to reach out via my email: xiechengxing34@gmail.com or via WeChat.
Open-Source Contributions
- Core contributor to GLM-4.5 (Technical Report)
- Main contributor to slime (an LLM post-training framework aiming for RL scaling. Repository)
- Contributor to SGLang (primarily working on RL-related features. Repository)
Education
Doctor of Philosophy (August 2025 – Present)
College of AI, Tsinghua University.
Supervised by Prof. Hongning Wang.
Bachelor (September 2021 – June 2025)
School of Computer Science and Technology, Xidian University, Xi’an, China.
Selected Publications
- Can Large Language Model Agents Simulate Human Trust Behavior?
- Authors: Chengxing Xie, Canyu Chen, Feiran Jia, Ziyu Ye, Shiyang Lai, Kai Shu, Jindong Gu, Adel Bibi, Ziniu Hu, David Jurgens, James Evans, Philip Torr, Bernard Ghanem, Guohao Li
- Accepted at NeurIPS 2024, with 100+ citations. The code is available here.
- GLM-4.5: Agentic, Reasoning, and Coding (ARC) Foundation Models
- I’m one of the core contributors to GLM-4.5.
- GLM-4.5 is a state-of-the-art MoE LLM with 355B total parameters, achieving top performance on agentic, reasoning, and coding tasks while using fewer parameters than competing models.
- SWE-Fixer: Training Open-Source LLMs for Effective and Efficient GitHub Issue Resolution
- Authors: Chengxing Xie, Bowen Li, Chang Gao, He Du, Wai Lam, Difan Zou, Kai Chen
- Accepted to Findings of ACL 2025. The code is available here.