On the Common Pitfalls of Designing and Communicating Within-Subjects Experiments in HCI
London Bielicke, Emery D. Berger, Adam Chlipala, and 1 more author
In Extended Abstracts of the 2026 CHI Conference on Human Factors in Computing Systems, 2026
Well-designed experiments are essential for drawing valid statistical conclusions. In studies with limited resources (e.g., limited access to human participants), researchers often assign multiple conditions to the same participant (i.e., within-subjects experiments). Although such designs can increase statistical power, dependencies across trials within a participant may threaten the validity of the experiment. Unfortunately, domain-specific assumptions about these dependencies are often left implicit when conducting, analyzing, and communicating results from within-subjects experiments. We show that some common within-subjects experiments in the HCI community make assumptions that may not actually hold, and we provide a formal representation for precisely encoding these assumptions. We hope these results and examples will motivate changes to how we as a community reason about, design, and communicate experiments.