ML Research
Job description for the ML research team of NAVER AI Lab (location: Seongnam, South Korea, or fully remote)
We hire full-time regular research scientists and research interns. As a research scientist on the ML Research team at NAVER AI Lab, your mission will be to publish at top AI venues and to contribute to NAVER and the broader AI community through impactful research.
Here are our current teams and some of our works you may be interested in:
- Visual backbones. We aim to build strong vision backbones through better optimization techniques and architectural innovation. Our interests include data augmentation (CutMix [Yun 2019], VideoMix [Yun 2020]), novel neural architecture design (ReXNet [Han 2021], PiT [Heo 2021b], MaxPoolNet [Han 2022], DemystifyingNTK [Mok 2022]), and transformer-based object detection (multimodal detection [Song 2021a], ViDT [Song 2022a]).
- Machine learning optimization. We seek better optimization methods for arbitrary machine learning tasks. Our publications include novel optimizers (AdamP [Heo 2021a], SWAD [Cha 2021]), knowledge distillation (Overhaul KD [Heo 2019], Show Attend Distill [Ji 2021], ReLabel [Yun 2021]), learning from large-scale data with insufficient annotations (e.g., weakly-, semi-, or self-supervised learning) (WSOL Evaluation [Choe 2020], RDAP [Choe 2021], IVR [Kim 2021a], W-OoD [Lee 2022], CGL [Jung 2022]), and learning with noisy labels or long-tailed classes (MORPH [Song 2021b], noisy-label survey [Song 2022b], CMO [Park 2022]).
- Multi-modal learning. Learning unified global representations for different modalities is a challenging task. We tackle complicated joint optimization across various modalities, including vision, language, and audio. Our recent projects mostly focus on vision-and-language (ViLT [Kim 2021c], PCME [Chun 2021a], ECCV Caption [Chun 2022]), multi-modal generation (LF-Font [Park 2021a], MX-Font [Park 2021b]), and audio-visual representation learning.
- Trustworthy AI. Existing machine learning models make predictions without understanding the problem itself; our research aims to expand machine knowledge from "just prediction" to "logical reasoning". In particular, we are interested in solving challenging tasks such as de-biasing and shortcut learning (ReBias [Bahng 2020], WCST-ML [Scimeca 2022]), domain generalization (SWAD [Cha 2021], MIRO [Cha 2022]), algorithmic fairness (CGL [Jung 2022]), and robust learning against natural or adversarial corruptions. We have also worked on proper uncertainty estimation (PCME [Chun 2021a], TAUFE [Park 2021c]) and explainable AI (CALM [Kim 2021b]). Because correctly evaluating machine reliability is difficult, we have also built fair evaluation benchmarks and metrics (RegEval [Chun 2019], WSOL Evaluation [Choe 2020], PRDC [Naeem 2020], ECCV Caption [Chun 2022]).
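The data-augmentation work above is easiest to see in code. Below is a minimal NumPy sketch of the CutMix idea [Yun 2019]: paste a random patch from one image into another and mix the one-hot labels in proportion to the patch area. Function and argument names are illustrative, not taken from the paper's released code.

```python
import numpy as np

def cutmix(img_a, label_a, img_b, label_b, alpha=1.0, rng=None):
    """Paste a random patch of img_b into img_a; mix labels by patch area.

    img_*: (H, W, C) arrays; label_*: one-hot vectors. alpha parameterizes
    the Beta distribution that samples the mixing ratio.
    """
    rng = np.random.default_rng() if rng is None else rng
    h, w = img_a.shape[:2]
    lam = rng.beta(alpha, alpha)              # target area ratio kept from img_a
    cut = np.sqrt(1.0 - lam)                  # patch side-length ratio
    ph, pw = int(h * cut), int(w * cut)
    cy, cx = int(rng.integers(h)), int(rng.integers(w))       # patch centre
    top, bottom = np.clip([cy - ph // 2, cy + ph // 2], 0, h)
    left, right = np.clip([cx - pw // 2, cx + pw // 2], 0, w)
    mixed = img_a.copy()
    mixed[top:bottom, left:right] = img_b[top:bottom, left:right]
    lam = 1.0 - (bottom - top) * (right - left) / (h * w)     # exact pasted area
    return mixed, lam * label_a + (1.0 - lam) * label_b
```

The paper applies this per mini-batch during training; the sketch shows a single image pair for clarity.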
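The knowledge-distillation line of work ([Heo 2019], [Ji 2021]) builds on the classic logit-matching objective, sketched below for orientation. Those papers actually distill intermediate features, so treat this as background rather than their method.

```python
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def kd_loss(student_logits, teacher_logits, temperature=4.0):
    """Temperature-scaled KL(teacher || student) over a batch of logits.

    The T^2 factor keeps gradient magnitudes comparable across temperatures.
    """
    p = softmax(teacher_logits / temperature)
    log_q = np.log(softmax(student_logits / temperature))
    kl = (p * (np.log(p) - log_q)).sum(axis=-1)
    return float(kl.mean() * temperature ** 2)
```

The loss is zero when the student reproduces the teacher's logits and positive otherwise.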
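For the multi-modal learning direction, a common starting point is a symmetric contrastive (InfoNCE) objective over matched image-text pairs. The sketch below is that generic objective, not the specific losses of ViLT [Kim 2021c] or PCME [Chun 2021a], which add image-text matching and probabilistic embeddings, respectively.

```python
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def contrastive_loss(img_emb, txt_emb, temperature=0.07):
    """Symmetric InfoNCE: row i of img_emb matches row i of txt_emb."""
    img = img_emb / np.linalg.norm(img_emb, axis=-1, keepdims=True)
    txt = txt_emb / np.linalg.norm(txt_emb, axis=-1, keepdims=True)
    logits = img @ txt.T / temperature                     # cosine similarities
    idx = np.arange(logits.shape[0])
    log_p_i2t = np.log(softmax(logits, axis=1))[idx, idx]  # image -> text
    log_p_t2i = np.log(softmax(logits, axis=0))[idx, idx]  # text -> image
    return float(-(log_p_i2t.mean() + log_p_t2i.mean()) / 2.0)
```

Correctly aligned pairs yield a lower loss than mismatched ones, which is what drives the two encoders toward a shared embedding space.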
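SWAD [Cha 2021], cited under both optimization and domain generalization, rests on a simple mechanism: averaging weights along the training trajectory to land in flatter minima. Below is a minimal running-average sketch of that mechanism (SWAD itself additionally chooses the averaging window densely, based on validation loss); the class name is ours, not from the paper's code.

```python
import numpy as np

class WeightAverager:
    """Incremental mean of model weights sampled during training."""

    def __init__(self):
        self.mean = None
        self.count = 0

    def update(self, weights):
        """weights: dict mapping parameter name -> np.ndarray."""
        self.count += 1
        if self.mean is None:
            self.mean = {k: v.astype(float).copy() for k, v in weights.items()}
        else:
            for k, v in weights.items():
                self.mean[k] += (v - self.mean[k]) / self.count  # running mean

    def averaged(self):
        return self.mean
```

In practice the averaged weights are loaded back into the model for evaluation, after re-estimating batch-norm statistics if the architecture uses them.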
- Research scientist
- Qualifications
- Strong track record of publications at top-tier conferences in machine learning and computer vision, e.g., NeurIPS, ICLR, CVPR, ICCV, ECCV, ICML, AAAI.
- Relevant work experience, e.g., as a laboratory researcher or a full-time industrial researcher.
- Preferred
- Ph.D. in CS, EE, mathematics, or other related technical fields, or equivalent work experience.
- Strong programming skills in Python (PyTorch).
- Experience serving as an active member of the research community (e.g., reviewing, tutorial and workshop organization, and research code contributions).
- Responsibilities
- Organize and execute one’s own research agenda.
- Lead and collaborate on ambitious research projects.
- Research intern
- Qualifications
- Experience in research collaborations and paper writing in related fields.
- Proficient programming skills in Python (PyTorch).
- Preferred
- Currently in an MS or Ph.D. program in CS, EE, mathematics, or other related technical fields.
- Strong track record of publications at top-tier conferences in machine learning, computer vision, natural language processing, audio, HCI, or speech.
Please check https://naver-career.gitbook.io/en/positions/ai-ml#application-process for more details.
- [Bahng 2020] Learning De-biased Representations with Biased Representations. Hyojin Bahng, Sanghyuk Chun, Sangdoo Yun, Jaegul Choo, Seong Joon Oh, ICML 2020.
- [Cha 2021] SWAD: Domain Generalization by Seeking Flat Minima. Junbum Cha, Sanghyuk Chun, Kyungjae Lee, Han-Cheol Cho, Seunghyun Park, Yunsung Lee, Sungrae Park, NeurIPS 2021.
- [Cha 2022] Domain Generalization by Mutual-Information Regularization with Pre-trained Models. Junbum Cha, Kyungjae Lee, Sungrae Park, Sanghyuk Chun, arXiv preprint.
- [Choe 2020] Evaluating Weakly Supervised Object Localization Methods Right. Junsuk Choe, Seong Joon Oh, Seungho Lee, Sanghyuk Chun, Zeynep Akata, Hyunjung Shim, CVPR 2020.
- [Choe 2021] Region-based dropout with attention prior for weakly supervised object localization. Junsuk Choe, Dongyoon Han, Sangdoo Yun, Jung-Woo Ha, Seong Joon Oh, Hyunjung Shim, Pattern Recognition 2021.
- [Chun 2019] An Empirical Evaluation on Robustness and Uncertainty of Regularization Methods. Sanghyuk Chun, Seong Joon Oh, Sangdoo Yun, Dongyoon Han, Junsuk Choe, Youngjoon Yoo, ICML Workshop 2019.
- [Chun 2021a] Probabilistic Embeddings for Cross-Modal Retrieval. Sanghyuk Chun, Seong Joon Oh, Rafael Sampaio de Rezende, Yannis Kalantidis, Diane Larlus, CVPR 2021.
- [Chun 2021b] StyleAugment: Learning Texture De-biased Representations by Style Augmentation without Pre-defined Textures. Sanghyuk Chun, Song Park, arXiv preprint.
- [Chun 2022] ECCV Caption: Correcting False Negatives by Collecting Machine-and-Human-verified Image-Caption Associations for MS-COCO. Sanghyuk Chun, Wonjae Kim, Song Park, Minsuk Chang, Seong Joon Oh, arXiv preprint.
- [Han 2021] Rethinking Channel Dimensions for Efficient Model Design. Dongyoon Han, Sangdoo Yun, Byeongho Heo, YoungJoon Yoo, CVPR 2021.
- [Han 2022] Learning Features with Parameter-Free Layers. Dongyoon Han, YoungJoon Yoo, Beomyoung Kim, Byeongho Heo, ICLR 2022.
- [Heo 2019] A Comprehensive Overhaul of Feature Distillation. Byeongho Heo, Jeesoo Kim, Sangdoo Yun, Hyojin Park, Nojun Kwak, Jin Young Choi, ICCV 2019.
- [Heo 2021a] AdamP: Slowing Down the Slowdown for Momentum Optimizers on Scale-invariant Weights. Byeongho Heo, Sanghyuk Chun, Seong Joon Oh, Dongyoon Han, Sangdoo Yun, Gyuwan Kim, Youngjung Uh, Jung-Woo Ha. ICLR 2021.
- [Heo 2021b] Rethinking spatial dimensions of vision transformers. Byeongho Heo, Sangdoo Yun, Dongyoon Han, Sanghyuk Chun, Junsuk Choe, Seong Joon Oh, ICCV 2021.
- [Ji 2021] Show, attend and distill: Knowledge distillation via attention-based feature matching. Mingi Ji, Byeongho Heo, Sungrae Park, AAAI 2021.
- [Jung 2022] Learning Fair Classifiers with Partially Annotated Group Labels. Sangwon Jung, Sanghyuk Chun, Taesup Moon, CVPR 2022.
- [Kim 2021a] Normalization Matters in Weakly Supervised Object Localization. Jeesoo Kim, Junsuk Choe, Sangdoo Yun, Nojun Kwak, ICCV 2021.
- [Kim 2021b] Keep CALM and Improve Visual Feature Attribution. Jae Myung Kim, Junsuk Choe, Zeynep Akata, Seong Joon Oh, ICCV 2021.
- [Kim 2021c] ViLT: Vision-and-Language Transformer Without Convolution or Region Supervision. Wonjae Kim, Bokyung Son, Ildoo Kim, ICML 2021.
- [Lee 2022] Weakly Supervised Semantic Segmentation using Out-of-Distribution Data. Jungbeom Lee, Seong Joon Oh, Sangdoo Yun, Junsuk Choe, Eunji Kim, Sungroh Yoon, CVPR 2022.
- [Mok 2022] Demystifying the Neural Tangent Kernel from a Practical Perspective: Can it be trusted for Neural Architecture Search without training? Jisoo Mok, Byunggook Na, Ji-Hoon Kim, Dongyoon Han, Sungroh Yoon, ICLR 2022.
- [Naeem 2020] Reliable Fidelity and Diversity Metrics for Generative Models. Muhammad Ferjad Naeem, Seong Joon Oh, Youngjung Uh, Yunjey Choi, Jaejun Yoo, ICML 2020.
- [Park 2021a] Few-shot Font Generation with Localized Style Representations and Factorization. Song Park, Sanghyuk Chun, Junbum Cha, Bado Lee, Hyunjung Shim, AAAI 2021.
- [Park 2021b] Multiple Heads are Better than One: Few-shot Font Generation with Multiple Localized Experts. Song Park, Sanghyuk Chun, Junbum Cha, Bado Lee, Hyunjung Shim, ICCV 2021.
- [Park 2021c] Task-Agnostic Undesirable Feature Deactivation Using Out-of-Distribution Data. Dongmin Park, Hwanjun Song, Minseok Kim, Jae-Gil Lee, NeurIPS 2021.
- [Park 2022] The Majority Can Help The Minority: Context-rich Minority Oversampling for Long-tailed Classification. Seulki Park, Youngkyu Hong, Byeongho Heo, Sangdoo Yun, Jin Young Choi, CVPR 2022.
- [Scimeca 2022] Which shortcut cues will DNNs choose? a study from the parameter-space perspective. Luca Scimeca, Seong Joon Oh, Sanghyuk Chun, Michael Poli, Sangdoo Yun, ICLR 2022.
- [Song 2021a] Exploiting Scene Depth for Object Detection with Multimodal Transformers. Hwanjun Song, Eunyoung Kim, Varun Jampani, Deqing Sun, Jae-Gil Lee, Ming-Hsuan Yang, BMVC 2021.
- [Song 2021b] Robust Learning by Self-Transition for Handling Noisy Labels. Hwanjun Song, Minseok Kim, Dongmin Park, Yooju Shin, Jae-Gil Lee, KDD 2021.
- [Song 2022a] ViDT: An Efficient and Effective Fully Transformer-based Object Detector. Hwanjun Song, Deqing Sun, Sanghyuk Chun, Varun Jampani, Dongyoon Han, Byeongho Heo, Wonjae Kim, Ming-Hsuan Yang, ICLR 2022.
- [Song 2022b] Learning from Noisy Labels with Deep Neural Networks: A Survey. Hwanjun Song, Minseok Kim, Dongmin Park, Yooju Shin, Jae-Gil Lee, TNNLS 2022.
- [Yun 2019] CutMix: Regularization Strategy to Train Strong Classifiers with Localizable Features. Sangdoo Yun, Dongyoon Han, Seong Joon Oh, Sanghyuk Chun, Junsuk Choe, Youngjoon Yoo, ICCV 2019.
- [Yun 2020] VideoMix: Rethinking Data Augmentation for Video Classification. Sangdoo Yun, Seong Joon Oh, Byeongho Heo, Dongyoon Han, Jinhyung Kim, arXiv preprint.
- [Yun 2021] Re-labeling ImageNet: from single to multi-labels, from global to localized labels. Sangdoo Yun, Seong Joon Oh, Byeongho Heo, Dongyoon Han, Junsuk Choe, Sanghyuk Chun, CVPR 2021.