Job description: ML Research Team, NAVER AI Lab (Location: Seongnam, South Korea, or fully remote)
We hire full-time research scientists and research interns. As a research scientist on the ML Research team at NAVER AI Lab, your mission will be to publish at top AI venues and to contribute to NAVER and the broader AI community through impactful research.
Research topics (including, but not limited to)
Recent work from our team that you may be interested in:
[Mok 2022] Demystifying the Neural Tangent Kernel from a Practical Perspective: Can it be trusted for Neural Architecture Search without training? Jisoo Mok, Byunggook Na, Ji-Hoon Kim, Dongyoon Han, Sungroh Yoon, ICLR 2022.
[Naeem 2020] Reliable Fidelity and Diversity Metrics for Generative Models. Muhammad Ferjad Naeem, Seong Joon Oh, Youngjung Uh, Yunjey Choi, Jaejun Yoo, ICML 2020.
[Park 2021a] Few-shot Font Generation with Localized Style Representations and Factorization. Song Park, Sanghyuk Chun, Junbum Cha, Bado Lee, Hyunjung Shim, AAAI 2021.
[Park 2021b] Multiple Heads are Better than One: Few-shot Font Generation with Multiple Localized Experts. Song Park, Sanghyuk Chun, Junbum Cha, Bado Lee, Hyunjung Shim, ICCV 2021.
[Park 2021c] Task-Agnostic Undesirable Feature Deactivation Using Out-of-Distribution Data. Dongmin Park, Hwanjun Song, Minseok Kim, Jae-Gil Lee, NeurIPS 2021.
[Park 2022] The Majority Can Help The Minority: Context-rich Minority Oversampling for Long-tailed Classification. Seulki Park, Youngkyu Hong, Byeongho Heo, Sangdoo Yun, Jin Young Choi, CVPR 2022.
[Scimeca 2022] Which Shortcut Cues Will DNNs Choose? A Study from the Parameter-Space Perspective. Luca Scimeca, Seong Joon Oh, Sanghyuk Chun, Michael Poli, Sangdoo Yun, ICLR 2022.
[Song 2021a] Exploiting Scene Depth for Object Detection with Multimodal Transformers. Hwanjun Song, Eunyoung Kim, Varun Jampani, Deqing Sun, Jae-Gil Lee, Ming-Hsuan Yang, BMVC 2021.
[Song 2021b] Robust Learning by Self-Transition for Handling Noisy Labels. Hwanjun Song, Minseok Kim, Dongmin Park, Yooju Shin, Jae-Gil Lee, KDD 2021.
[Song 2022a] ViDT: An Efficient and Effective Fully Transformer-based Object Detector. Hwanjun Song, Deqing Sun, Sanghyuk Chun, Varun Jampani, Dongyoon Han, Byeongho Heo, Wonjae Kim, Ming-Hsuan Yang, ICLR 2022.
[Song 2022b] Learning from Noisy Labels with Deep Neural Networks: A Survey. Hwanjun Song, Minseok Kim, Dongmin Park, Yooju Shin, Jae-Gil Lee, TNNLS 2022.