I’m a Ph.D. candidate in Computer Science at Stanford, advised by Chelsea Finn, and a member of the IRIS lab. I am affiliated with SAIL, CRFM, and the ML Group at Stanford. My research is generously supported by grants and fellowships from OpenAI and KFAS.
During my mandatory military service in South Korea, I worked as a research scientist at Kakao and AITRICS, where I collaborated with Juho Lee. I hold a master’s degree in Computer Science from POSTECH, advised by Seungjin Choi.
Here are some key questions that guide my research:
- Teaching strong models: Pre-trained models already possess much of what we aim to teach them, so post-training is more about eliciting existing capabilities than instilling new information. How can we develop more effective paradigms for “teaching” that draw on these capabilities?
- Underspecification: No dataset fully specifies its intended task. How can we help models recognize and represent the multiple valid interpretations consistent with given data? How do we best leverage this diversity of hypotheses?
- Understanding information: Within data lies an underlying essence (“information”) that exists independently of its specific representation. How can we better conceptualize this notion of information and understand how machine learning models extract, process, and communicate it?
Selected Papers
Test-Time Alignment via Hypothesis Reweighting
arXiv preprint
Clarify: Improving Model Robustness with Natural Language Corrections
UIST 2024; XAIA and ICBINB workshops at NeurIPS 2023
AutoFT: Learning an Objective for Robust Fine-Tuning
DistShift workshop at NeurIPS 2023
Project and Probe: Sample-Efficient Domain Adaptation by Interpolating Orthogonal Features
ICLR 2024 (spotlight)
DetectGPT: Zero-Shot Machine-Generated Text Detection using Probability Curvature
ICML 2023 (long oral)