Language Research

Research Scientists & Research Interns @ Language Research, NAVER AI Lab

Join Our Team!

About Us

The Language Research Team at NAVER AI Lab is dedicated to understanding humanity and society, and to advancing language models and Artificial Intelligence that are not only human-like but also trustworthy and safe. As a team operating in both academic and industrial environments, we strive to tackle problems that are both fundamental and relevant to the real world.

Our current Research Mission and Interests are centered around building trustworthy and safe Large Language Models (LLMs), with a focus on:

  • Datasets, Benchmarks, and Evaluation Metrics for LLMs

  • LLM Security: Attacks, Defenses & Detections

  • Safety Alignment, Learning, and Inference Algorithms

  • LLM Agents, (Multi-)Agent Interactions, Decision-making, and Autonomous Agents

Check out our latest papers (see the Selected Papers section below).


About the Research Scientists

About the Role

We are looking for Research Scientists to join our team and contribute to the research and development of safe and trustworthy Language Models and AI. Research Scientists are encouraged to lead and/or support research projects collaboratively within the team, across the research field, and with other teams and external organizations.

Specifically, the research topics include, but are not limited to:

  • Red-teaming, Adversarial Attack, Security Attack

  • Watermarking

  • Training Data/Privacy Probing & Leakage

  • Model/Data/Task Contamination

  • Robustness

  • Safety Alignment

  • Model Unlearning

  • AI Explainability & Interpretability

  • Causality

  • Societal Impact of LLM Applications

Key Responsibilities

  • Undertake pioneering research by formulating challenging research questions and devising problem-solving methods.

  • Lead a wide range of research activities including but not limited to the ideation and development of safe and trustworthy AI systems, and authoring research papers.

  • Communicate research progress and findings clearly and effectively.

  • Actively collaborate with other researchers.

  • Report and present the research findings and developments at top-tier academic venues.

Requirements

  • A PhD degree or equivalent (or expected to be received within 6 months) in Computer Science (CS), Electrical Engineering (EE), Mathematics, or other relevant fields.

  • An academic publication record at top-tier conferences in Natural Language Processing (e.g., *ACL), Machine Learning (e.g., NeurIPS, ICLR), and others (e.g., FAccT).

  • Experience in research collaborations and academic writing in related fields.

    • (Preferred) Global research/industrial collaboration experiences.

  • Excellent analytical and problem-solving skills.

  • Strong communication skills, openness to constructive discussion, and receptiveness to feedback.

How to apply

  • Please apply here (Korean / English) (sign-in required). This is a full-time, in-person role at NAVER 1784 (Seongnam-si, Gyeonggi-do, South Korea).

  • Application category: Tech > Common > Common > AI Safety (Full-time)

  • Hiring process

    • Application screening → Coding test → Job talk → Interview → (optional) Second Interview → Notification


About the Research Interns

About the Internship

Our team is offering research intern positions for 2024 Fall and 2025 Winter. As an intern, you'll be actively involved in developing and conducting research on trustworthy and safe large language models.

Before your internship starts, we will work closely with you to refine and develop your research plan. This process ensures that your proposal aligns with our mutual research interests. We strongly support your initiative to lead your main project while also engaging in other research projects. This approach offers a balanced experience in both research leadership and collaboration.

A key goal of this internship is to produce academic papers suitable for submission to top-tier conferences or journals. Additionally, we anticipate that the outcomes of the project will make meaningful contributions to real-world applications.

  • The office may be changed to NAVER Green Factory or another nearby building.

  • This internship offers a flexible starting date.

Key Responsibilities

  • Undertake pioneering research by formulating challenging research questions and devising problem-solving methods. This includes implementing and evaluating models, as well as authoring research papers.

  • Communicate research progress and findings clearly and effectively.

  • Demonstrate proactivity and the ability to successfully complete projects.

Requirements

  • Pursuing a PhD or equivalent in Computer Science (CS), Electrical Engineering (EE), Mathematics, or other relevant fields.

  • At least one first-author paper at an AI/ML-related conference.

    • (Preferred) A strong academic publication record at top-tier conferences in Natural Language Processing (e.g., *CL), Machine Learning (e.g., NeurIPS, ICLR), and others.

  • Experience in research collaborations and academic writing in related fields.

  • Excellent analytical and problem-solving skills.

  • Strong communication skills, openness to constructive discussion, and receptiveness to feedback.

How to apply

  • Please apply here (Korean / English) (sign-in required).

  • Application category: Tech > Common > Common > AI (Intern)

  • Your application should include the following:

    • CV

    • Brief research interests and research plans

      • including research questions and goals, with a few related works.

      • including a brief idea and direction for solving the problem (it does not need to be perfect!).

  • Hiring process

    • Application screening → Coding test → Job talk → Interview → (optional) Second Interview → Notification

  • Note

    • Please submit your application by Sept. 6.

    • This position may close early once it is filled.

We look forward to your application and the possibility of you joining our team. If you have any questions, please contact us! 🤗


Selected Papers

MAQA: Evaluating Uncertainty Quantification in LLMs Regarding Data Uncertainty, Yongjin Yang, Haneul Yoo, Hwaran Lee, arXiv, 2024 [dataset & benchmark, uncertainty]

CSRT: Evaluation and Analysis of LLMs using Code-Switching Red-Teaming Dataset, Haneul Yoo, Yongjin Yang, Hwaran Lee, arXiv, 2024 [dataset & benchmark, llm-security]

AdvisorQA: Towards Helpful and Harmless Advice-seeking Question Answering with Collective Intelligence, Minbeom Kim, Hwanhee Lee, Joonsuk Park, Hwaran Lee, Kyomin Jung, arXiv, 2024 [alignment, dataset & benchmark]

Who Wrote this Code? Watermarking for Code Generation, Taehyun Lee, Seokhee Hong, Jaewoo Ahn, Ilgee Hong, Hwaran Lee, Sangdoo Yun, Jamin Shin, Gunhee Kim, ACL, 2024 [llm-security]

Calibrating Large Language Models Using Their Generations Only, Dennis Thomas Ulmer, Martin Gubri, Hwaran Lee, Sangdoo Yun, Seong Joon Oh, ACL, 2024 [uncertainty]

TRAP: Targeted Random Adversarial Prompt Honeypot for Black-Box Identification, Martin Gubri, Dennis Thomas Ulmer, Hwaran Lee, Sangdoo Yun, Seong Joon Oh, ACL Findings, 2024 [llm-security]

KorNAT: LLM Alignment Benchmark for Korean Social Values and Common Knowledge, Jiyoung Lee, Minwoo Kim, Seungho Kim, Junghwan Kim, Seunghyun Won, Hwaran Lee, Edward Choi, ACL Findings, 2024 [dataset & benchmark]

TimeChara: Evaluating Point-in-Time Character Hallucination of Role-Playing Large Language Models, Jaewoo Ahn, Taehyun Lee, Junyoung Lim, Jin-Hwa Kim, Sangdoo Yun, Hwaran Lee, Gunhee Kim, ACL Findings, 2024 [dataset & benchmark]

LifeTox: Unveiling Implicit Toxicity in Life Advice, M Kim, J Koo, H Lee, J Park, H Lee, K Jung, NAACL (Short) [dataset & benchmark]

Prometheus: Inducing Fine-grained Evaluation Capability in Language Models, S Kim, J Shin, Y Cho, J Jang, S Longpre, H Lee, S Yun, S Shin, S Kim, J Thorne, M Seo, ICLR 2024 [dataset, evaluation]

EvalLM: Interactive Evaluation of Large Language Model Prompts on User-Defined Criteria, TS Kim, Y Lee, J Shin, YH Kim, J Kim, arXiv preprint arXiv:2309.13633 [evaluation]

KoBBQ: Korean Bias Benchmark for Question Answering, J Jin, J Kim, N Lee, H Yoo, A Oh, H Lee, TACL [dataset & benchmark]

Revealing User Familiarity Bias in Task-Oriented Dialogue via Interactive Evaluation, T Kim, J Shin, YH Kim, S Bae, S Kim, arXiv preprint arXiv:2305.13857 [evaluation]

The CoT Collection: Improving Zero-shot and Few-shot Learning of Language Models via Chain-of-Thought Fine-Tuning, S Kim, SJ Joo, D Kim, J Jang, S Ye, J Shin, M Seo, EMNLP 2023 [dataset]

Aligning Large Language Models through Synthetic Feedback, S Kim, S Bae, J Shin, S Kang, D Kwak, KM Yoo, M Seo, EMNLP 2023 [alignment]

ProPILE: Probing Privacy Leakage in Large Language Models, S Kim, S Yun, H Lee, M Gubri, S Yoon, SJ Oh, NeurIPS 2023 (spotlight) [llm-security]

Who Wrote this Code? Watermarking for Code Generation, T Lee, S Hong, J Ahn, I Hong, H Lee, S Yun, J Shin, G Kim, arXiv preprint arXiv:2305.15060 [llm-security]

KoSBi: A Dataset for Mitigating Social Bias Risks Towards Safer Large Language Model Application, H Lee, S Hong, J Park, T Kim, G Kim, JW Ha, ACL 2023 (industry track) [dataset & benchmark]

SQuARe: A Large-Scale Dataset of Sensitive Questions and Acceptable Responses Created Through Human-Machine Collaboration, H Lee, S Hong, J Park, T Kim, M Cha, Y Choi, BP Kim, G Kim, EJ Lee, Y Lim, A Oh, S Park, JW Ha, ACL 2023 (best paper nominated) [dataset & benchmark]

Query-Efficient Black-Box Red Teaming via Bayesian Optimization, D Lee, JY Lee, JW Ha, JH Kim, SW Lee, H Lee, HO Song, ACL 2023 [llm-security]

Critic-Guided Decoding for Controlled Text Generation, M Kim, H Lee, KM Yoo, J Park, H Lee, K Jung, ACL 2023 (Findings) [learning & inference]

ClaimDiff: Comparing and Contrasting Claims on Contentious Issues, M Ko, I Seong, H Lee, J Park, M Chang, M Seo, ACL 2023 (Findings) [dataset & benchmark]
