The Homepage of Banghua Zhu

CTO & Co-founder of RadixArk. Incoming Assistant Professor at University of Washington.


I’m the CTO & co-founder of RadixArk, where we’re bringing frontier-level AI infrastructure to everyone. We believe deeply in building in the open.

Our team is composed of the creators and core developers of SGLang, one of the leading open-source LLM inference engines, and of Miles, our open-source reinforcement learning framework.

Featured: Check out our short course on Post-training of LLMs, co-taught with Andrew Ng on DeepLearning.AI.

If you’re interested in what we’re building, come join our SGLang Slack and help shape the ecosystem.

We’re hiring. If you want to push the frontier of open AI infrastructure, see open roles at RadixArk.

Background

I’m also an incoming assistant professor at the University of Washington, where I lead the Foundation Model and Reinforcement Learning Research Lab (FMRL2).

Previously, I co-founded Nexusflow AI in 2023, providing reliable AI agent solutions for enterprise use cases.

I received my PhD from the Department of EECS at UC Berkeley, advised by Prof. Jiantao Jiao and Prof. Michael I. Jordan. I am a recipient of the 2023 David J. Sakrison Memorial Prize from Berkeley EECS for "truly outstanding PhD research."


Select Past Work

  • Starling-7B — Open 7B model, ranked #1 among Mistral-based 7B models on Chatbot Arena.
  • Athene-V2-72B — Open model competitive with GPT-4o, ranked only behind DeepSeek V3/R1 (671B) among non-reasoning open models on Chatbot Arena.
  • Chatbot Arena & Arena-Hard-Auto — Core contributor to the most widely-used LLM evaluation platforms.
  • S-LoRA — Scalable serving of thousands of LoRA adapters.
  • Fundamental Limits of RLHF — Identified theoretical limits and near-optimal algorithms for RLHF.
  • LLM Watermarking — Statistically near-optimal algorithm for LLM watermarking.