I'm a graduate student in Computer Science at UC Davis. My research lies at the intersection of foundation models, reasoning, and interpretability: I explore how large language models (LLMs) and vision–language models (VLMs) can be made more transparent, aligned, and context-aware. My work focuses on developing context-adaptive embeddings, building explainable AI agents, and designing multimodal interfaces that translate raw model capacity into reliable, human-aligned intelligence for high-stakes domains.
If you’re interested in my research, would like to discuss related topics, or want to explore potential collaborations, please feel free to get in touch.
Research Interests: Reasoning, LLMs, VLMs, Embeddings, Interpretability
Google Scholar / GitHub / LinkedIn / Twitter / Email