
Backbone Research

About Us

Our team is devoted to exploring fundamental topics in deep learning and artificial intelligence. We are passionate about developing efficient, powerful deep neural network architectures and rigorously training them to establish solid foundation models. Initially focused on computer vision and machine learning, we have recently broadened our scope to include language modeling and multimodal learning. Our research interests are summarized as follows:

Research topics

  • New Foundation Models/Elements: We focus on designing foundational architectures themselves, as well as the underlying architectural elements of language, vision, and vision-language models.

  • New Training Methods: Our team is also developing novel training methodologies that enhance the learning efficiency and effectiveness of our models.

  • Cost-efficiency Improvements: We explore various avenues to increase overall efficiency, addressing a wide range of needs and challenges.

  • Data-Related Topics: We examine critical issues related to training data to optimize training and performance.

Hiring Research Scientists

We are seeking globally competitive research scientists committed to advancing generation-defining research. As a valued team member, you will have the opportunity to lead your own projects, choose your research topics, and collaborate with other members and our network of affiliated centers. We place a high priority on the proactivity and initiative of new members, viewing these qualities as essential to driving our research environment forward. Ideal candidates possess excellent communication skills and a strong desire to innovate and collaborate, contributing to our dynamic and forward-thinking team.

Requirements

  • Publication record: Proven track record of publications in multiple top-tier conferences and journals in AI/ML (prerequisite).

  • Communication skills: Exceptional ability to communicate effectively and engage in open discussions.

  • Analytical abilities: Deep insight and intuition, complemented by outstanding problem-solving abilities.

  • Academic qualification: Ph.D. (or expected within six months) in Computer Science, Electrical Engineering, or equivalent research experience in these fields.

  • Research experience: Significant experience in research collaboration and academic writing in relevant domains.

Hiring Research Interns

We are seeking internship members who are passionate about the research topics we focus on. While alignment with the topics above is not mandatory, preference will be given to candidates who are actively studying closely related areas. A key prerequisite for the internship is excellent communication skills, which are necessary for articulating research progress and findings clearly and effectively. This is a full-time, in-person internship role.

Requirements

  • Communication skills: Strong communication skills and openness to constructive discussions and feedback (prerequisite).

  • Educational background: Currently pursuing a PhD or equivalent in Computer Science (CS), Electrical Engineering (EE), Mathematics, or other relevant fields (not mandatory but preferred).

  • Research contributions: At least one first-author paper at an AI/ML-related conference.

  • Collaboration experience: Proven experience in research collaborations and academic writing in related fields.

How to Apply

  • Please submit your application via this platform to register for our Talent Pool (available in Korean/English), where sign-in is required.

    • Application category: Tech > Common > Common > AI (Intern)

    • Fall/Winter Internship may start on September 2nd.

Recent selected publications (from 2023 to 2024)

Below is a list of recent publications that reflect our areas of interest and ongoing projects. NAVER AI Lab members are displayed in bold text. Corresponding authors and authors who made equal contributions are denoted by *, and internship members or visiting researchers who worked at NAVER AI Lab are denoted by ✝︎.

2024

Rotary Position Embedding for Vision Transformer, Byeongho Heo, Song Park, Dongyoon Han, Sangdoo Yun, ECCV 2024.

Learning with Unmasked Tokens Drives Stronger Vision Learners, Taekyung Kim*, Sanghyuk Chun, Byeongho Heo, Dongyoon Han*, ECCV 2024.

DenseNets Reloaded: Paradigm Shift Beyond ResNets and ViTs, Donghyun Kim*, Byeongho Heo, Dongyoon Han*, ECCV 2024.

SeiT++: Masked Token Modeling Improves Storage-efficient Training, Minhyun Lee✝︎*, Song Park*, Byeongho Heo, Dongyoon Han, Hyunjung Shim, ECCV 2024.

Leveraging Temporal Contextualization for Video Action Recognition, Minji Kim✝︎, Dongyoon Han, Taekyung Kim*, Bohyung Han*, ECCV 2024.

Model Stock: All we need is just a few fine-tuned models, Dong-Hwan Jang✝︎, Sangdoo Yun*, Dongyoon Han*, ECCV 2024 (oral presentation).

HYPE: Hyperbolic Entailment Filtering for Underspecified Images and Texts, Wonjae Kim, Sanghyuk Chun, Taekyung Kim, Dongyoon Han, Sangdoo Yun, ECCV 2024 (oral presentation).

Similarity of Neural Architectures using Adversarial Attack Transferability, Jaehui Hwang✝︎, Dongyoon Han, Byeongho Heo, Song Park, Sanghyuk Chun, Jong-Seok Lee, ECCV 2024.

2023

Neglected Free Lunch - Learning Image Classifiers Using Annotation Byproducts, Dongyoon Han*, Junsuk Choe*, Seonghyeok Chun, John Joon Young Chung, Minsuk Chang, Sangdoo Yun, Jean Y. Song, Seong Joon Oh, ICCV 2023.

Switching Temporary Teachers for Semi-Supervised Semantic Segmentation, Jaemin Na✝︎, Jung-Woo Ha, Hyung Jin Chang, Dongyoon Han*, Wonjun Hwang*, NeurIPS 2023.

Gramian Attention Heads are Strong yet Efficient Vision Learners, Jongbin Ryu*, Dongyoon Han*, Jongwoo Lim, ICCV 2023.

Scratching Visual Transformer's Back with Uniform Attention, Nam Hyeon-Woo✝︎, Kim Yu-Ji, Byeongho Heo, Dongyoon Han, Seong Joon Oh, Tae-Hyun Oh, ICCV 2023.

Generating Instance-level Prompts for Rehearsal-free Continual Learning, Dahuin Jung✝︎, Dongyoon Han, Jiwhan Bang, Hwanjun Song, ICCV 2023 (oral presentation).

SeiT: Storage-Efficient Vision Training with Tokens Using 1% of Pixel Storage, Song Park*, Sanghyuk Chun*, Byeongho Heo, Wonjae Kim, Sangdoo Yun, ICCV 2023.

The Devil is in the Points: Weakly Semi-Supervised Instance Segmentation via Point-Guided Mask Representation, Beomyoung Kim, Joonhyun Jeong, Dongyoon Han, Sung Ju Hwang, CVPR 2023.

What Do Self-Supervised Vision Transformers Learn?, Namuk Park✝︎, Wonjae Kim, Byeongho Heo, Taekyung Kim, Sangdoo Yun, ICLR 2023.