NAVER AI LAB
As NAVER's mid- and long-term AI research team, NAVER AI LAB aims to achieve results that surprise the world through impactful research, thereby contributing to the global AI research community.
Introducing NAVER AI LAB
Meet Research Scientists at NAVER AI LAB
[NAVER AI LAB Job Description]
Research topics
Including, but not limited to:
Next-generation backbones for image, video, and audio recognition.
Multimodal hyperscale representation
Novel neural architecture design (e.g., NAS).
Object recognition (e.g., classification, detection, segmentation, retrieval, etc.).
Lightweight and energy-efficient models (e.g., pruning, quantization, compression).
Learning from large-scale data with insufficient annotations (e.g., weakly- / self- / semi-supervised learning).
Novel learning algorithms for NNs (e.g., normalization, optimization, etc.).
Generative models for image, video, text, and audio
Unconditional & conditional image generation
Image-to-image and video-to-video translation
Disentanglement and controllability
Cross-modal generation
Audio and music generation
Effective learning algorithms for generative models
Style transfer and super-resolution for images and videos
Neural rendering (NeRF) and super-resolution
Hyperscale language models and their extensions
Controllable LMs and hallucination
Prompt optimization
Multi-modal and multi-lingual extensions
New evaluation metrics
Extensions to dialogue, QA, summarization, content generation, etc.
Human-computer interaction and interactive AI.
Accessibility
Computational Interaction
Computational Social Science and Social Computing
Data-driven Interface Design
Human Computation
Visualization
Representation learning for semi-structured or structured data.
Graph representation learning
Time-series prediction and representation learning
Trustworthy AI.
Explainable AI and causal inference.
Robust machine learning (adversarial robustness, domain generalization).
De-biased and fair machine learning.
Proper uncertainty estimation (e.g., prediction calibration, probabilistic machine learning).
Privacy-preserving AI (e.g., differential privacy, federated learning, etc.).
Audio recognition.
Large-scale representation learning for automatic speech recognition (ASR).
Audio-visual speech recognition.
Healthcare AI
EMR/EHR-based foundation models (large-scale pre-trained language models) for healthcare
Clinical predictive modeling with EMR/EHR (e.g., disease prediction, ICD code mapping)
Clinical decision support system
Medical image analysis for otorhinolaryngology & dentistry
Interpretability of AI models (XAI)
Causal inference in machine learning & intervention modeling for healthcare services
Other topics.
AI for social good.
Reinforcement learning in the wild.
AI Research with External Collaboration
SNU-NAVER Hyperscale AI Center
Professors: Byung-Gon Chun, Gunhee Kim, Seungwon Hwang, Hyunoh Song, Byoung-Tak Zhang, Taesup Moon, Sang-Goo Lee, Kyomin Jung, Kyoung Mu Lee
Main topics (not limited to)
Advanced hyperscale language models (multimodal, multi-lingual)
Reliable and efficient distributed training
Overcoming limitations of current hyperscale LMs (hallucination, prompt optimization, bias)
Advanced large-scale self-supervised learning
Some members will contribute as adjunct professors at SNU.
KAIST-NAVER Hypercreative AI Center
Professors: Jaegul Choo, Jinwoo Shin, Sung Ju Hwang, Eunho Yang, Jae-Sik Choi, Juho Lee, Kee-Eung Kim, Alice Oh, Juho Kim, Edward Choi, Minjoon Seo
Main topics (not limited to)
Multi- and cross-modal content generation
Generation controllability and quality measurement
Representation learning for content generation
Some members will contribute as adjunct professors at KAIST.
Academic Advisors
Kyunghyun Cho (NYU): NLP, hyperscale LM
Andrew Zisserman (U. of Oxford): Speech recognition and audio-visual representation learning
Jun-Yan Zhu (CMU): Generative models
Jonghyun Choi (GIST): Continual and online learning
Joseph Lim (USC): Reinforcement learning in the wild
Requirements
Research intern
Experience in research collaboration and paper writing in related fields.
Proficient programming skills in Python (PyTorch).
Preferred
Currently in an MS or PhD programme in CS, EE, mathematics, or other related technical fields.
Strong track record of publications at top-tier conferences in machine learning, computer vision, natural language processing, audio, HCI, and speech.
Hiring process:
Research intern:
Algorithm coding test > Paper implementation or tech talk > Job interview