
I’m a Ph.D. candidate at Stanford CS, advised by Chelsea Finn and part of the IRIS lab. I am affiliated with SAIL, CRFM, and the ML Group at Stanford. My research is generously supported through grants and fellowships from OpenAI and KFAS.
My name (윤호) is pronounced approximately like ‘you-know’ said quickly, with the stress on ‘you’.
My research focuses on establishing text as an explicit and editable substrate for knowledge, complementing the implicit information stored in neural network weights. Instead of relying solely on weights, we can store and update knowledge directly in discrete text form, modified through mutations guided by rich experiential feedback.
The core vision is to enable models to extract massive amounts of information from direct experience (e.g., raw observations, expert feedback, experiment results). As we deploy models on increasingly complex, long-horizon tasks, the scalar-reward bottleneck of reinforcement learning will prove increasingly limiting. Learning through text offers a way forward, allowing models to learn from richer signals that scale naturally with task complexity.
Selected Papers
ICML 2025 workshops: AI for Math, PRAL, ES-FoMo
ICML 2025 Workshop PUT
UIST 2024, NeurIPS 2023 workshops XAIA and ICBINB
ICLR 2024 (spotlight)