
I’m a Ph.D. candidate at Stanford CS, advised by Chelsea Finn and part of the IRIS lab. I am affiliated with SAIL, CRFM, and the ML Group at Stanford. My research is generously supported through grants and fellowships from OpenAI and KFAS.

I’m developing a new machine learning paradigm where text serves as a primary substrate for storing and updating knowledge. Instead of encoding knowledge solely in neural network weights, I build systems that store and update knowledge directly in text form, modified through text mutations based on rich experiential feedback.

The core vision: enable models to extract massive amounts of information from direct experience (e.g. raw observations, expert feedback, experiment results). As we deploy models on complex, long-horizon tasks, RL’s scalar reward bottleneck will become increasingly limiting. I believe that learning through text can address this by allowing models to learn from a richer set of signals that scale naturally with task complexity.

To this end, I have developed methods for encoding and selecting among a small set of hypotheses about the world [1,2,3] and for efficiently fine-tuning model weights [4,5]. I created an interface that enables non-experts to teach vision models via natural language feedback [6]. Most recently, I developed a hierarchical RL framework in which LLMs discover and leverage textual “abstractions” to solve complex reasoning tasks [7].

My name (윤호) is pronounced roughly like ‘you-know’ said quickly, with stress on ‘you’.

Selected Papers

[1] Test-Time Alignment via Hypothesis Reweighting

Yoonho Lee, Jonathan Williams, Henrik Marklund, Archit Sharma, Eric Mitchell, Anikait Singh, Chelsea Finn

ICML 2025 workshop PUT

[2] Project and Probe: Sample-Efficient Domain Adaptation by Interpolating Orthogonal Features

Annie S. Chen*, Yoonho Lee*, Amrith Setlur, Sergey Levine, Chelsea Finn

ICLR 2024 (spotlight)

[4] AutoFT: Learning an Objective for Robust Fine-Tuning

Caroline Choi*, Yoonho Lee*, Annie S. Chen, Allan Zhou, Aditi Raghunathan, Chelsea Finn

NeurIPS 2023 workshop DistShift

[5] Surgical Fine-Tuning Improves Adaptation to Distribution Shifts

Yoonho Lee*, Annie S. Chen*, Fahim Tajwar, Ananya Kumar, Huaxiu Yao, Percy Liang, Chelsea Finn

ICLR 2023

[6] Clarify: Improving Model Robustness with Natural Language Corrections

Yoonho Lee, Michelle Lam, Helena Vasconcelos, Michael S. Bernstein, Chelsea Finn

UIST 2024, NeurIPS 2023 workshops XAIA and ICBINB

[7] Learning to Discover Abstractions for LLM Reasoning

Yuxiao Qu*, Anikait Singh*, Yoonho Lee*, Amrith Setlur, Ruslan Salakhutdinov, Chelsea Finn, Aviral Kumar

ICML 2025 workshops AI for Math, PRAL, ES-FoMo