Human-Computer Interaction Research

The human-computer interaction research group at NAVER AI Lab is a vibrant team demonstrating how contemporary AI technologies can be beautifully embedded in computing systems and investigating how we should design AI technologies to benefit end-users. Our research interests include, but are not limited to:

  • AI-infused interactive systems

  • Digital health and well-being applications

  • Accessibility and safety of AI

  • Large Language Model-driven computing systems and empathetic agents

Call For Open-Rank Research Scientists

We invite applications from self-motivated research scientists in the field of HCI.

Location: In-person, NAVER main office in Seongnam, Gyeonggi, South Korea

We expect you to do the following:

  • Execute academic research agendas at the intersection of HCI and AI.

  • Actively collaborate with other researchers at NAVER AI Lab to demonstrate the capabilities of AI technologies in designing novel HCI systems.

  • Lead a wide range of research activities including but not limited to interactive prototyping, user studies, surveys, design sprints, literature reviews, and deployment studies.

  • Disseminate research outcomes at top-tier academic venues such as conferences and journals.

Working Environment:

  • You can pursue your research visions in a bottom-up research environment where you can propose a research agenda and organize the team on your own.

  • You can collaborate with researchers on other teams at NAVER or at external academic institutions.

  • We support various forms of collaboration, including research internships and joint research centers (e.g., with Seoul National University, KAIST, and the University of Toronto).

  • You will have opportunities to collaborate with product teams at NAVER, which develop a wide range of in-the-wild services on platforms such as web, mobile, desktop, and smart speakers.

Minimum Qualifications

  • Holds a PhD degree (or is expected to receive one within 3 months) in an HCI-related discipline such as Computer Science, Information Science, or Industrial Design

  • 4 primary-authored (1st or corresponding author) main-track full papers at CHI, UIST, CSCW, or IMWUT within the last 6 years, at least 2 of them at CHI (note that our research interns hold about 2 or more CHI papers on average when applying)

  • Expertise in quantitative and qualitative HCI research methods

  • Proficiency in verbal and written communication in English

Preferred Qualifications

  • Knowledge of Machine Learning, Computer Vision, or NLP technologies to streamline collaboration with AI researchers

  • Rich experience in designing and developing AI-infused interactive systems

How to Apply to our Recruitment Pool

Please submit your application via the recruitment platform (available in Korean/English; sign-in required) to register for our Talent Pool.

  • Application category: Tech > Common > Common > AI (Full-time)

  • Please be advised that the hiring process may take up to three months. If you have a deadline for another offer, we encourage you to reach out to HR for further assistance.


Selected Publications (2023-)

NAVER AI Lab members (full-time employees and interns) are shown in bold text.

2024

ChaCha: Leveraging Large Language Models to Prompt Children to Share Their Emotions about Personal Events Woosuk Seo, Chanmo Yang, and Young-Ho Kim ACM CHI 2024 (PDF)

MindfulDiary: Harnessing Large Language Model to Support Psychiatric Patients' Journaling Taewan Kim, Seolyeong Bae, Hyun Ah Kim, Su-woo Lee, Hwajung Hong, Chanmo Yang*, and Young-Ho Kim* (*co-corresponding) ACM CHI 2024 (PDF)

Understanding the Impact of Long-Term Memory on Self-Disclosure with Large Language Model-Driven Chatbots for Public Health Intervention Eunkyung Jo, Yuin Jeong, SoHyun Park, Daniel A. Epstein, and Young-Ho Kim ACM CHI 2024 (PDF)

DiaryMate: Understanding User Perceptions and Experience in Human-AI Collaboration for Personal Journaling Taewan Kim, Donghoon Shin, Young-Ho Kim, and Hwajung Hong ACM CHI 2024 (PDF)

GenQuery: Supporting Expressive Visual Search with Generative Models Kihoon Son, DaEun Choi, Tae Soo Kim, Young-Ho Kim, and Juho Kim ACM CHI 2024 (PDF)

EvalLM: Interactive Evaluation of Large Language Model Prompts on User-Defined Criteria Tae Soo Kim, Yoonjoo Lee, Jamin Shin, Young-Ho Kim, and Juho Kim ACM CHI 2024 (PDF)

Leveraging Large Language Models to Power Chatbots for Collecting User Self-Reported Data Jing Wei, Sungdong Kim, Hyunhoon Jung, and Young-Ho Kim PACM HCI (CSCW 2024)

2023

The Bot on Speaking Terms: The Effects of Conversation Architecture on Perceptions of Conversational Agents Christina Wei, Young-Ho Kim, and Anastasia Kuzminykh ACM CUI 2023 (PDF)

Designing a Direct Feedback Loop between Humans and Convolutional Neural Networks through Local Explanations Tong Sun, Yuyang Gao, Shubham Khaladkar, Sijia Liu, Liang Zhao, Young-Ho Kim, and Sungsoo Ray Hong PACM HCI (CSCW 2023) (PDF)

[CHI Best Paper Award] Understanding the Benefits and Challenges of Deploying Conversational AI Leveraging Large Language Models for Public Health Intervention Eunkyung Jo, Daniel A. Epstein, Hyunhoon Jung, and Young-Ho Kim ACM CHI 2023 (PDF)

AVscript: Accessible Video Editing with Audio-Visual Scripts Mina Huh, Saelyne Yang, Yi-Hao Peng, Xiang 'Anthony' Chen, Young-Ho Kim, and Amy Pavel ACM CHI 2023 (PDF)

DataHalo: A Customizable Notification Visualization System for Personalized and Longitudinal Interactions Guhyun Han, Jaehun Jung, Young-Ho Kim*, and Jinwook Seo* (*co-corresponding) ACM CHI 2023 (PDF)

DAPIE: Interactive Step-by-Step Explanatory Dialogues to Answer Children's Why and How Questions Yoonjoo Lee, Tae Soo Kim, Sungdong Kim, Yohan Yun, and Juho Kim ACM CHI 2023
