Human-centered AI
Designing AI around human judgment, human needs, and human values instead of treating people as the cleanup layer for automation.
Assistant Professor at Clemson University
I study how people actually work with AI in consequential settings, with a focus on human-AI collaboration, human-AI teaming, trust, training, and performance.
My goal is to design AI that supports human expertise rather than flattening it, and to understand what makes mixed human-AI teams resilient, legible, and effective.
I lead the BIG CAT Research Group, where we examine how human-AI teams become more trustworthy, more coordinated, and more effective in practice.
I work across three tightly connected areas, all centered on making AI workable inside human systems rather than apart from them:

- Human-centered AI: designing AI around human judgment, needs, and values rather than treating people as the cleanup layer for automation.
- Human-AI collaboration: understanding how communication, coordination, and shared work change when intelligent systems become part of everyday practice.
- Human-AI teaming: studying what helps mixed teams become effective in high-stakes environments, including trust, training, autonomy, and resilience.
My work draws on HCI, CSCW, and human factors to produce research that is grounded in theory and useful in practice.