HyperscaleFAccT @ FAccT 2022

Fairness, Accountability, and Transparency in Hyperscale Language Models (HyperscaleFAccT)

Summary

Extremely large-scale pretrained generative language models (LMs), called hyperscale LMs, such as GPT-3, PanGu-α, Jurassic-1, Gopher, and HyperCLOVA, show astonishing performance on various natural language generation tasks under in-context few-shot or zero-shot settings. However, although hyperscale LMs contribute greatly to many aspects of both research and real-world business, many researchers have also raised concerns about their severe side effects. From the FAccT perspective in particular, many researchers argue that hyperscale LMs carry potential risks to fairness, accountability, and transparency, raising AI-ethics issues such as data and model bias, toxic content generation, malicious usage, and intellectual-property-related legal issues.

Our CRAFT session, entitled “Fairness, Accountability, and Transparency in Hyperscale Language Models (HyperscaleFAccT)”, addresses these limitations and potential risks in developing hyperscale LMs and applying them to real-world applications for users. The ultimate goal of the HyperscaleFAccT CRAFT is to explore what efforts we need to make to solve these issues. Experts with diverse backgrounds, such as machine learning, software engineering, business, law, AI ethics, and social computing, participate in our CRAFT as contributors.

Three twenty-minute presentations deal with legal and ethical issues, bias problems, and data transparency in hyperscale LMs. Seven panelists then discuss the presented topics in depth and seek solutions to mitigate the potential risks from the viewpoints of both research and application deployment. In particular, we aim to derive detailed execution policies and action items for better and safer hyperscale LM applications. The discussion at our CRAFT will be a helpful reference for many other research groups and AI companies that want to leverage hyperscale AI in the world.

Schedule

  • Time: 21st of June, 2022 (Tue), 11:00 AM – 2:30 PM
  • Timezone: Korea Standard Time (GMT+9)

Speakers

  • Haksoo Ko (Professor, School of Law, Seoul National University)
  • Kyunghyun Cho (Associate Professor, New York University)
  • Margaret Mitchell (Founder, Ethical AI)
  • Alice Oh (Professor, School of Computer Science, KAIST)
  • Sangchul Park (Assistant Professor, School of Law, Seoul National University)
  • Nako Sung (Executive Director, NAVER CLOVA)
  • Hwijung Ryu (Principal Senior Researcher, KT)
  • Deep Ganguli (Research Scientist, Anthropic)
  • Meeyoung Cha (Institute for Basic Science)

How to attend and participate in the workshop

This workshop will be held at ACM FAccT 2022 in Seoul, South Korea on the 21st of June, 2022.

  • Only attendees who registered for the conference can join this event.
  • In-person: You can register for the ACM FAccT 2022 Conference (Registration Page).
  • Online: Join the online session via Hopin.

Venue: Room #205, Coex, Seoul, South Korea

Live Stream: Join the online session via Hopin

📅 Add to your calendar

Presentation video

👉 Open in google drive

Organizers

  • Jung-Woo Ha (Research Head, NAVER CLOVA)
  • Hwaran Lee (Research Scientist, NAVER CLOVA)
  • Matthias Galle (Group Leader, NAVER LABS Europe)
  • Sangchul Park (Assistant Professor, School of Law, Seoul National University)
  • Meeyoung Cha (Chief Investigator, Data Science Group, Institute for Basic Science)

Sponsors