# Human-Computer Interaction Research

The Human-Computer Interaction research group at NAVER AI Lab is a vibrant research group that demonstrates how contemporary AI technologies can be beautifully embedded in computing systems and explores how we should design AI technologies to benefit end-users. Our research interests include, but are not limited to:

* AI-infused interactive systems
* Digital health and well-being applications
* Accessibility and safety of AI
* Large Language Model-driven computing systems and empathetic agents

## Call For Open-Rank Research Scientists

We invite applications from self-motivated research scientists in the field of HCI.

**Location:** In-person, NAVER main office in Seongnam, Gyeonggi, South Korea

### We expect you to do the following:

* Execute academic research agendas at the intersection of HCI and AI.
* Actively collaborate with other researchers at NAVER AI Lab to demonstrate the capabilities of AI technologies in designing novel HCI systems.
* Lead a wide range of research activities including but not limited to interactive prototyping, user studies, surveys, design sprints, literature reviews, and deployment studies.
* Disseminate research outcomes at top-tier academic venues such as conferences and journals.

### Working Environment:

* You can pursue your research visions in a bottom-up research environment where you can propose a research agenda and organize the team on your own.
* You can collaborate with researchers on other teams at NAVER or at other academic institutes.
* We provide various forms of collaboration, including research internships.
* You will have opportunities to collaborate with product teams at NAVER, which develop numerous in-the-wild services on platforms such as web, mobile, desktop, and smart speakers.

## Minimum Qualifications

* Holds a PhD degree (or is expected to receive one within 3 months) in an HCI-related discipline such as Computer Science, Information Science, or Industrial Design
* **3 primary-authored (1st or corresponding) main-track full papers at \[CHI, UIST, CSCW, or IMWUT]** within the last *6 years*, **at least 2 of them at CHI.**
* Expertise in technical prototyping of interactive computing artifacts
* Expertise in quantitative and qualitative HCI research methods
* Proficient verbal and written communication in English

## Preferred Qualifications

* Knowledge of Machine Learning, Computer Vision, or NLP technologies to streamline collaboration with AI researchers
* Rich experience in designing and developing AI-infused interactive systems


***

## Selected Publications (2023-)

Names of NAVER AI Lab employees (full-time and interns) are shown in bold text.

### 2024

**ChaCha: Leveraging Large Language Models to Prompt Children to Share Their Emotions about Personal Events**\
**Woosuk Seo**, Chanmo Yang, and **Young-Ho Kim**\
ACM CHI 2024 ([PDF](http://younghokim.net/files/papers/chacha-chi24-preprint-240219.pdf))

**MindfulDiary: Harnessing Large Language Model to Support Psychiatric Patients' Journaling**\
**Taewan Kim**, Seolyeong Bae, Hyun Ah Kim, Su-woo Lee, Hwajung Hong, Chanmo Yang\*, and **Young-Ho Kim**\* (\*co-corresponding)\
ACM CHI 2024 ([PDF](http://younghokim.net/files/papers/kim_mindfuldiary_chi24_preprint_240222.pdf))

**Understanding the Impact of Long-Term Memory on Self-Disclosure with Large Language Model-Driven Chatbots for Public Health Intervention**\
**Eunkyung Jo**, Yuin Jeong, SoHyun Park, Daniel A. Epstein, and **Young-Ho Kim**\
ACM CHI 2024 ([PDF](http://younghokim.net/files/papers/carecall-ltm-chi24-preprint-240216.pdf))

**DiaryMate: Understanding User Perceptions and Experience in Human-AI Collaboration for Personal Journaling**\
**Taewan Kim**, Donghoon Shin, **Young-Ho Kim**, and Hwajung Hong\
ACM CHI 2024 ([PDF](http://younghokim.net/files/papers/diarymate-chi24.pdf))

**GenQuery: Supporting Expressive Visual Search with Generative Models**\
Kihoon Son, DaEun Choi, Tae Soo Kim, **Young-Ho Kim**, and Juho Kim\
ACM CHI 2024 ([PDF](http://younghokim.net/files/papers/genquery-chi24.pdf))

**EvalLM: Interactive Evaluation of Large Language Model Prompts on User-Defined Criteria**\
Tae Soo Kim, Yoonjoo Lee, **Jamin Shin**, **Young-Ho Kim**, and Juho Kim\
ACM CHI 2024 ([PDF](http://younghokim.net/files/papers/evallm-chi24.pdf))

Leveraging Large Language Models to Power Chatbots for Collecting User Self-Reported Data\
**Jing Wei**, **Sungdong Kim**, Hyunhoon Jung, and **Young-Ho Kim**\
PACM HCI (CSCW 2024)

### 2023

**The Bot on Speaking Terms: The Effects of Conversation Architecture on Perceptions of Conversational Agents**\
Christina Wei, **Young-Ho Kim**, and Anastasia Kuzminykh\
ACM CUI 2023 ([PDF](http://younghokim.net/files/papers/bot-on-speaking-terms-wei-cui-2023.pdf))

**Designing a Direct Feedback Loop between Humans and Convolutional Neural Networks through Local Explanations**\
Tong Sun, Yuyang Gao, Shubham Khaladkar, Sijia Liu, Liang Zhao, **Young-Ho Kim,** and Sungsoo Ray Hong\
PACM HCI (CSCW 2023) ([PDF](http://younghokim.net/files/papers/deepfuse_sun_cscw2023.pdf))

\[CHI Best Paper Award]\
**Understanding the Benefits and Challenges of Deploying Conversational AI Leveraging Large Language Models for Public Health Intervention**\
**Eunkyung Jo**, Daniel A. Epstein, Hyunhoon Jung, and **Young-Ho Kim**\
ACM CHI 2023 ([PDF](http://younghokim.net/files/papers/jo-carecall-chi2023.pdf))

**AVscript: Accessible Video Editing with Audio-Visual Scripts**\
Mina Huh, Saelyne Yang, Yi-Hao Peng, Xiang 'Anthony' Chen, **Young-Ho Kim**, and Amy Pavel\
ACM CHI 2023 ([PDF](http://younghokim.net/files/papers/huh-avscript-chi2023.pdf))

**DataHalo: A Customizable Notification Visualization System for Personalized and Longitudinal Interactions** \
Guhyun Han, Jaehun Jung, **Young-Ho Kim**\*, and Jinwook Seo\* (\*co-corresponding)\
ACM CHI 2023 ([PDF](http://younghokim.net/files/papers/han-datahalo-chi2023.pdf))

**DAPIE: Interactive Step-by-Step Explanatory Dialogues to Answer Children’s Why and How Questions**\
Yoonjoo Lee, Tae Soo Kim, **Sungdong Kim**, Yohan Yun, and Juho Kim\
ACM CHI 2023
