Kyungjun's headshot photo, taken in Joshua Tree National Park, California, USA.

Kyungjun Lee

Ph.D. Candidate in Computer Science @ University of Maryland, College Park

I am interested in understanding how humans interact with computer systems and in translating those interactions into inclusive designs for such systems. In particular, by enabling AI to understand human intentions through interaction, I design and develop intelligent systems that augment users' experiences and capabilities.

kyungjun@umd.edu | Google Scholar | Twitter

About me

Kyungjun Lee is a 5th-year Ph.D. candidate in Computer Science at the University of Maryland, College Park, and a member of the Human-Computer Interaction Lab and the Intelligent Assistant Machines Lab, advised by Hernisa Kacorri. Kyungjun has been exploring human interactions with AI, AR, and wearable cameras to design systems that can understand the user's intent. His Ph.D. dissertation focuses on designing intelligent camera systems that help blind people access their visual surroundings. He has also worked in the HCI group at Snap Research and collaborated with the Cognitive Assistance Lab at Carnegie Mellon University.

News

Selected publications

a blind user wearing smart glasses to detect a pedestrian

Pedestrian Detection with Wearable Cameras for the Blind: A Two-way Perspective
Kyungjun Lee, Daisuke Sato, Saki Asakawa, Hernisa Kacorri, Chieko Asakawa
Proceedings of ACM CHI Conference on Human Factors in Computing Systems (CHI), 2020
ACM | arXiv | local | resources | talk@CHI | talk@HCIL


multiple photos of objects, such as soda bottles, cereal boxes, and soda cans, from crowdworkers

Crowdsourcing the Perception of Machine Teaching
Jonggi Hong, Kyungjun Lee, June Xu, Hernisa Kacorri
Proceedings of ACM CHI Conference on Human Factors in Computing Systems (CHI), 2020
ACM | arXiv


original image, hand mask, and object center heatmap blob used to train a hand-primed object localization model

Hand-Priming in Object Localization for Assistive Egocentric Vision
Kyungjun Lee, Abhinav Shrivastava, Hernisa Kacorri
Proceedings of IEEE Winter Conference on Applications of Computer Vision (WACV), 2020
Best Paper Award, Applications
CVF | arXiv | local


hand-guided sonification feedback using stereophonic sound to distinguish the location of the object on the horizontal axis and different sinusoidal waves to indicate how far the object is positioned from the center of the camera frame

Revisiting Blind Photography in the Context of Teachable Object Recognizers
Kyungjun Lee, Jonggi Hong, Simone Pimento, Ebrima Jarjue, Hernisa Kacorri
Proceedings of ACM SIGACCESS Conference on Computers and Accessibility (ASSETS), 2019
ACM | local


the pipeline of hand-guided object recognition: (1) input, (2) hand recognition, (3) object localization, (4) object classification

Hands Holding Clues for Object Recognition in Teachable Machines
Kyungjun Lee, Hernisa Kacorri
Proceedings of ACM CHI Conference on Human Factors in Computing Systems (CHI), 2019
ACM | local | dataset | talk@CHI


Web interface of teachable object recognition with an example of a plastic bottle

Exploring Machine Teaching for Object Recognition with the Crowd
Jonggi Hong, Kyungjun Lee, June Xu, Hernisa Kacorri
Extended Abstracts of ACM CHI Conference on Human Factors in Computing Systems (CHI), 2019
ACM