Backbone Research
About Us
Our team is devoted to a comprehensive exploration of fundamental topics in deep learning and artificial intelligence. We have a strong passion for developing efficient and powerful deep neural network models and rigorously training them to establish a solid foundation. Our focus lies in advancing large language models from a machine-learning perspective toward multimodal learning and beyond. Our research interests are summarized as follows:
Research topics - we do everything related to LLMs and VLMs
New Foundation Models/Elements: We focus on modeling the foundational architecture itself and its underlying architectural elements in large language models and multimodal models.
New Training Methods: Our team is also developing novel training methodologies that enhance the learning efficiency and effectiveness of our models.
Cost-Efficiency Improvements: We explore various avenues to increase overall efficiency, addressing a wide range of needs and challenges.
Data-Related Topics: We examine critical issues related to training data to optimize training and performance.
All Machine Learning (ML)-perspective topics: We work on any fundamental topic that can be framed from an ML perspective.
Hiring Research Scientists
We are seeking globally competitive research scientists committed to pursuing generation-defining research. As a valued team member, you will have the opportunity to lead your own projects, select your research topics, and collaborate with other members and our network of affiliated centers. We place a high priority on the proactivity and initiative of our new members, viewing these qualities as essential to a thriving research environment. Ideal candidates should possess excellent communication skills and a strong desire to innovate and collaborate, contributing to our dynamic and forward-thinking team.
Requirements
Publication record: Proven track record of publications in multiple top-tier conferences and journals in AI/ML (prerequisite).
Communication skills: Exceptional ability to communicate effectively and engage in open discussions.
Analytical abilities: Deep insight and intuition, complemented by outstanding problem-solving abilities.
Academic qualification: Ph.D. (or expected within six months) in Computer Science, Electrical Engineering, or equivalent research experience in these fields.
Research experience: Significant experience in research collaboration and academic writing in relevant domains.
Hiring Research Interns
We are seeking interns passionate about the research topics we focus on. While alignment with our mentioned topics is not mandatory, preference will be given to candidates who are actively studying closely related areas. A key prerequisite for the internship is excellent communication skills, which are necessary for clearly and effectively articulating research progress and findings. This is a full-time, in-person internship role.
Requirements
Communication skills: Strong communication skills and openness to constructive discussions and feedback (a prerequisite for internship candidates).
Educational background: Currently pursuing a PhD or equivalent in Computer Science (CS), Electrical Engineering (EE), Mathematics, or other relevant fields (not mandatory but preferred).
Research contributions: At least one first-authored paper at an AI/ML-related conference.
Collaboration experience: Proven experience in research collaborations and academic writing in related fields.
How to Apply
Recent selected publications (from 2023 to 2025)
Below is a list of recent publications that reflect our areas of interest and ongoing projects. NAVER AI Lab members are displayed in bold text. Corresponding authors and those who made equal contributions are denoted by *, and internship members or visiting researchers who worked at NAVER AI Lab are denoted by ✝︎.
2025
Less is Not Worse: Effective Reasoning Without Complete Reasoning Chains, Jaehui Hwang* (NAVER AI Lab), Sangdoo Yun (NAVER AI Lab), Byeongho Heo (NAVER AI Lab), Dongyoon Han* (NAVER AI Lab), NeurIPS Workshop on Efficient Reasoning 2025.
Token Bottleneck: One Token to Remember Dynamics, Taekyung Kim (NAVER AI Lab), Dongyoon Han (NAVER AI Lab), Byeongho Heo (NAVER AI Lab), Jeongeun Park✝︎ (Korea Univ.), Sangdoo Yun (NAVER AI Lab), NeurIPS 2025.
NegMerge: Consensual Weight Negation for Strong Machine Unlearning, Hyoseo Kim✝︎ (Sogang Univ.), Dongyoon Han* (NAVER AI Lab), Junsuk Choe* (Sogang Univ.), ICML 2025.
MaskRIS: Semantic Distortion-aware Data Augmentation for Referring Image Segmentation, Minhyun Lee✝︎ (KAIST), Seungho Lee (KAIST), Song Park (NAVER AI Lab), Dongyoon Han (NAVER AI Lab), Byeongho Heo (NAVER AI Lab), Hyunjung Shim (KAIST), TMLR 2025.
Peri-LN: Revisiting Layer Normalization in the Transformer Architecture, Jeonghoon Kim (NAVER Cloud, KAIST), Byeongchan Lee (KAIST), Cheonbok Park (NAVER Cloud, KAIST), Yeontaek Oh (NAVER Cloud), Beomjun Kim (KAIST), Taehwan Yoo (NAVER Cloud), Seongjin Shin (NAVER Cloud), Dongyoon Han (NAVER AI Lab), Jinwoo Shin (KAIST), Kang Min Yoo (NAVER Cloud), ICML 2025.
Masking meets Supervision: A Strong Learning Alliance, Byeongho Heo (NAVER AI Lab), Taekyung Kim (NAVER AI Lab), Sangdoo Yun (NAVER AI Lab), Dongyoon Han (NAVER AI Lab), CVPR 2025.
Self-supervised Visual State Representation Learning for robotics from Dynamic Scenes, Taekyung Kim (NAVER AI Lab), Dongyoon Han (NAVER AI Lab), Byeongho Heo (NAVER AI Lab), Jeongeun Park✝︎ (Korea Univ.), Sangdoo Yun (NAVER AI Lab), ICLR Robot Learning Workshop 2025.
Token-Supervised Value Models for Enhancing Mathematical Problem-Solving Capabilities of Large Language Models, Jung Hyun Lee (NAVER Cloud), June Yong Yang (KAIST), Byeongho Heo (NAVER AI Lab), Dongyoon Han (NAVER AI Lab), Kyungsu Kim (SNU), Eunho Yang (KAIST), Kang Min Yoo (NAVER Cloud), ICLR 2025.
DaWin: Training-free Dynamic Weight Interpolation for Robust Adaptation, Changdae Oh✝︎ (UW-Madison), Yixuan Li (UW-Madison), Kyungwoo Song* (Yonsei University), Sangdoo Yun* (NAVER AI Lab), Dongyoon Han* (NAVER AI Lab), ICLR 2025.
Morphing Tokens Draw Strong Masked Image Models, Taekyung Kim* (NAVER AI Lab), Byeongho Heo (NAVER AI Lab), Dongyoon Han* (NAVER AI Lab), ICLR 2025.
2024
Rotary Position Embedding for Vision Transformer, Byeongho Heo, Song Park, Dongyoon Han, Sangdoo Yun, ECCV 2024.
Learning with Unmasked Tokens Drives Stronger Vision Learners, Taekyung Kim*, Sanghyuk Chun, Byeongho Heo, Dongyoon Han*, ECCV 2024.
DenseNets Reloaded: Paradigm Shift Beyond ResNets and ViTs, Donghyun Kim*, Byeongho Heo, Dongyoon Han*, ECCV 2024.
SeiT++: Masked Token Modeling Improves Storage-efficient Training, Minhyun Lee*, Song Park*, Byeongho Heo, Dongyoon Han, Hyunjung Shim, ECCV 2024.
Leveraging Temporal Contextualization for Video Action Recognition, Minji Kim, Dongyoon Han, Taekyung Kim*, Bohyung Han*, ECCV 2024.
Model Stock: All we need is just a few fine-tuned models, Dong-Hwan Jang✝︎, Sangdoo Yun*, Dongyoon Han*, ECCV 2024 (oral presentation).
HYPE: Hyperbolic Entailment Filtering for Underspecified Images and Texts, Wonjae Kim, Sanghyuk Chun, Taekyung Kim, Dongyoon Han, Sangdoo Yun, ECCV 2024 (oral presentation).
Similarity of Neural Architectures using Adversarial Attack Transferability, Jaehui Hwang✝︎, Dongyoon Han, Byeongho Heo, Song Park, Sanghyuk Chun, Jong-Seok Lee, ECCV 2024.
2023
Neglected Free Lunch - Learning Image Classifiers Using Annotation Byproducts, Dongyoon Han*, Junsuk Choe*, Seonghyeok Chun, John Joon Young Chung, Minsuk Chang, Sangdoo Yun, Jean Y. Song, Seong Joon Oh, ICCV 2023.
Switching Temporary Teachers for Semi-Supervised Semantic Segmentation, Jaemin Na✝︎, Jung-Woo Ha, Hyung Jin Chang, Dongyoon Han*, Wonjun Hwang*, NeurIPS 2023.
Gramian Attention Heads are Strong yet Efficient Vision Learners, Jongbin Ryu*, Dongyoon Han*, Jongwoo Lim, ICCV 2023.
Scratching Visual Transformer's Back with Uniform Attention, Nam Hyeon-Woo✝︎, Kim Yu-Ji, Byeongho Heo, Dongyoon Han, Seong Joon Oh, Tae-Hyun Oh, ICCV 2023.
Generating Instance-level Prompts for Rehearsal-free Continual Learning, Dahuin Jung✝︎, Dongyoon Han, Jiwhan Bang, Hwanjun Song, ICCV 2023 (oral presentation).
SeiT: Storage-Efficient Vision Training with Tokens Using 1% of Pixel Storage, Song Park*, Sanghyuk Chun*, Byeongho Heo, Wonjae Kim, Sangdoo Yun, ICCV 2023.
The Devil is in the Points: Weakly Semi-Supervised Instance Segmentation via Point-Guided Mask Representation, Beomyoung Kim, Joonhyun Jeong, Dongyoon Han, Sung Ju Hwang, CVPR 2023.
What Do Self-Supervised Vision Transformers Learn?, Namuk Park✝︎, Wonjae Kim, Byeongho Heo, Taekyung Kim, Sangdoo Yun, ICLR 2023.