Backbone Research
About Us
Our team is devoted to a comprehensive exploration of fundamental topics within the fields of deep learning and artificial intelligence. We have a strong passion for developing efficient and powerful models for deep neural networks and rigorously training them to establish solid foundation models. Initially focusing on computer vision and machine learning, we have recently broadened our scope to include language modeling and multimodal learning. Our research interests are summarized as follows:
Research topics
New Foundation Models/Elements: We design and build innovative foundation models and architectural elements for vision, vision-language, and language models.
New Training Methods: Our team is also dedicated to developing novel training methodologies that enhance the learning efficiency and effectiveness of our models.
Cost-efficiency Improvements: We explore various avenues to improve overall cost-efficiency, addressing a wide range of needs and challenges.
Data-Related Topics: We examine critical issues related to training data to optimize training and performance.
Hiring Research Scientists
We are seeking globally competitive research scientists committed to conducting research with generational impact. As a valued team member, you will have the opportunity to lead your own projects, select your research topics, and collaborate with other members and our network of affiliated centers. We place a high priority on the proactivity and initiative of our new members, viewing these qualities as essential to driving our research environment forward. Ideal candidates should possess excellent communication skills and a strong desire to innovate and collaborate, contributing to our dynamic and forward-thinking team.
Requirements
Strong insight and intuition, coupled with excellent problem-solving skills.
A PhD degree or equivalent (or one expected within 6 months) in Computer Science (CS), Electrical Engineering (EE), or other closely related fields.
Demonstrated publication record in top-tier conferences and journals within AI/ML.
Extensive experience in research collaborations and academic writing in relevant fields.
Strong communication skills and openness to discussions.
Hiring Research Interns
We are seeking interns who are passionate about the research topics we focus on. While alignment with the topics above is not mandatory, preference will be given to candidates who are actively studying closely related areas. A key prerequisite for the internship is excellent communication skills, needed to articulate research progress and findings clearly and effectively. This is a full-time, in-person internship role.
Requirements
Strong communication skills, and openness to constructive discussions and feedback.
Pursuing a PhD or equivalent in Computer Science (CS), Electrical Engineering (EE), Mathematics, or other relevant fields (not mandatory but preferred).
At least one first-author paper at an AI/ML-related conference.
Proven experience in research collaborations and academic writing in related fields.
How to Apply
Recent publications (2023–2024)
Below is a list of recent publications that reflect our areas of interest and ongoing projects. NAVER AI Lab members are shown in bold text. Corresponding authors and equal contributors are denoted by *, and internship members or visiting researchers who worked at NAVER AI Lab are denoted by ✝︎.
2024
Leveraging Temporal Contextualization for Video Action Recognition, Minji Kim✝︎, Dongyoon Han, Taekyung Kim*, Bohyung Han*, arXiv 2024.
Model Stock: All we need is just a few fine-tuned models, Dong-Hwan Jang✝︎, Sangdoo Yun*, Dongyoon Han*, arXiv 2024.
DenseNets Reloaded: Paradigm Shift Beyond ResNets and ViTs, Donghyun Kim*, Byeongho Heo, Dongyoon Han*, arXiv 2024.
Rotary Position Embedding for Vision Transformer, Byeongho Heo, Song Park, Dongyoon Han, Sangdoo Yun, arXiv 2024.
Morphing Tokens Draw Strong Masked Image Models, Taekyung Kim*, Byeongho Heo, Dongyoon Han*, arXiv 2024.
2023
SeiT++: Masked Token Modeling Improves Storage-efficient Training, Minhyun Lee✝︎*, Song Park*, Byeongho Heo, Dongyoon Han, Hyunjung Shim, arXiv 2023.
Learning with Unmasked Tokens Drives Stronger Vision Learners, Taekyung Kim*, Sanghyuk Chun, Byeongho Heo, Dongyoon Han*, arXiv 2023.
Masking Augmentation for Supervised Learning, Byeongho Heo, Taekyung Kim, Sangdoo Yun, Dongyoon Han, arXiv 2023.
Neglected Free Lunch - Learning Image Classifiers Using Annotation Byproducts, Dongyoon Han*, Junsuk Choe*, Seonghyeok Chun, John Joon Young Chung, Minsuk Chang, Sangdoo Yun, Jean Y. Song, Seong Joon Oh, ICCV 2023.
Switching Temporary Teachers for Semi-Supervised Semantic Segmentation, Jaemin Na, Jung-Woo Ha, Hyung Jin Chang, Dongyoon Han*, Wonjun Hwang*, NeurIPS 2023.
Gramian Attention Heads are Strong yet Efficient Vision Learners, Jongbin Ryu*, Dongyoon Han*, Jongwoo Lim, ICCV 2023.
Scratching Visual Transformer's Back with Uniform Attention, Nam Hyeon-Woo✝︎, Kim Yu-Ji, Byeongho Heo, Dongyoon Han, Seong Joon Oh, Tae-Hyun Oh, ICCV 2023.
Generating Instance-level Prompts for Rehearsal-free Continual Learning, Dahuin Jung✝︎, Dongyoon Han, Jiwhan Bang, Hwanjun Song, ICCV 2023 (oral presentation).
SeiT: Storage-Efficient Vision Training with Tokens Using 1% of Pixel Storage, Song Park*, Sanghyuk Chun*, Byeongho Heo, Wonjae Kim, Sangdoo Yun, ICCV 2023.
The Devil is in the Points: Weakly Semi-Supervised Instance Segmentation via Point-Guided Mask Representation, Beomyoung Kim, Joonhyun Jeong, Dongyoon Han, Sung Ju Hwang, CVPR 2023.
What Do Self-Supervised Vision Transformers Learn?, Namuk Park✝︎, Wonjae Kim, Byeongho Heo, Taekyung Kim, Sangdoo Yun, ICLR 2023.