Co-Optimizing Human-System Performance in VR/AR

Abstract: Virtual and Augmented Reality enables unprecedented possibilities for displaying virtual content, sensing physical surroundings, and tracking human behaviors with high fidelity. However, we have yet to create "superhumans" who can outperform their physical-reality counterparts, nor have we built a "perfect" XR system that delivers infinite battery life or fully realistic sensation. In this talk, I will discuss some of our recent research on leveraging eye/muscular sensing and learning to model human perception, reaction, and sensation in virtual environments. Based on this knowledge, we create just-in-time visual content that jointly optimizes human performance (such as reaction speed to events) and system performance (such as reduced display power consumption) in XR.

Bio: Qi Sun is an assistant professor at New York University. Before joining NYU, he was a research scientist at Adobe Research. He received his PhD from Stony Brook University. His research interests lie in perceptual computer graphics, VR/AR, computational cognition, and visual optics. He is a recipient of the IEEE Virtual Reality Best Dissertation Award, and his research has been recognized with Best Paper Awards at ACM SIGGRAPH and IEEE ISMAR. His research is funded by NASA, NSF, DARPA, NVIDIA, and Adobe.