Backbone Research

About Us

Our team is devoted to a comprehensive exploration of fundamental topics in deep learning and artificial intelligence. We have a strong passion for developing efficient and powerful deep neural network models and rigorously training them to establish a solid foundation. Our focus lies in advancing large language models from a machine-learning perspective toward multimodal learning and beyond. Our research interests are summarized as follows:

  • New Foundation Models/Elements: We focus on the foundational architectures of large language models and multimodal models, as well as their underlying architectural elements.

  • New Training Methods: Our team is also developing novel training methodologies that enhance the learning efficiency and effectiveness of our models.

  • Cost-Efficiency Improvements: We explore various avenues to increase overall efficiency, addressing a wide range of needs and challenges.

  • Data-Related Topics: We examine critical issues related to training data to optimize training and performance.

  • All Machine Learning (ML)-perspective topics: We work on all fundamental topics you can imagine from an ML perspective.

Hiring Research Scientists

We are seeking globally competitive research scientists committed to advancing next-generation research. As a valued team member, you will have the opportunity to lead your own projects, select your research topics, and collaborate with other members and our network of affiliated centers. We place a high priority on the proactivity and initiative of new members, viewing these qualities as essential to our research environment. Ideal candidates should possess excellent communication skills and a strong desire to innovate and collaborate, contributing to our dynamic and forward-thinking team.

Requirements

  • Publication record: Proven track record of publications in multiple top-tier conferences and journals in AI/ML (prerequisite).

  • Communication skills: Exceptional ability to communicate effectively and engage in open discussions.

  • Analytical abilities: Deep insight and intuition, complemented by outstanding problem-solving abilities.

  • Academic qualification: Ph.D. (or expected within six months) in Computer Science or Electrical Engineering, or equivalent research experience in these fields.

  • Research experience: Significant experience in research collaboration and academic writing in relevant domains.

Hiring Research Interns

We are seeking interns passionate about the research topics we focus on. While alignment with the topics listed above is not mandatory, preference will be given to candidates who are actively studying closely related areas. A key prerequisite for the internship is excellent communication skills, which are necessary for clearly and effectively articulating research progress and findings. This is a full-time, in-person internship role.

Requirements

  • Communication skills: Strong communication skills and openness to constructive discussions and feedback (prerequisite for internship candidates).

  • Educational background: Currently pursuing a PhD or equivalent in Computer Science (CS), Electrical Engineering (EE), Mathematics, or other relevant fields (not mandatory but preferred).

  • Research contributions: At least one first-authored paper at an AI/ML-related conference.

  • Collaboration experience: Proven experience in research collaborations and academic writing in related fields.

How to Apply

  • Please submit your application via this platform to register for our Talent Pool (available in Korean/English), where sign-in is required.

    • Application category: Tech > Common > Common > AI (Intern)

    • Fall/Winter Internship may start on September 2nd.

Selected recent publications (2023–2025)

Below is a list of recent publications that reflect our areas of interest and ongoing projects. NAVER AI Lab members are displayed in bold text. Corresponding authors and those who made equal contributions are denoted by *, and internship members or visiting researchers who worked at NAVER AI Lab are denoted by ✝︎.

2025

2024

2023
