HyperscaleFAccT @ FAccT 2022
Fairness, Accountability, and Transparency in Hyperscale Language Models (HyperscaleFAccT)
Extremely large-scale pretrained generative language models (LMs), called hyperscale LMs, such as GPT-3, PanGu-α, Jurassic-1, Gopher, and HyperCLOVA, show astonishing performance on various natural language generation tasks under in-context few-shot or zero-shot settings. However, although hyperscale LMs contribute greatly to both research and real-world business, many researchers also have concerns about their severe side effects. From the perspective of FAccT in particular, many researchers argue that hyperscale LMs pose potential risks to fairness, accountability, and transparency in AI ethics, such as data and model bias, toxic content generation, malicious use, and intellectual property-related legal issues.
Our CRAFT, entitled “Fairness, Accountability, and Transparency in Hyperscale Language Models (HyperscaleFAccT),” addresses these limitations and potential risks in developing hyperscale LMs and applying them to real-world applications for users. The ultimate goal of the HyperscaleFAccT CRAFT is to explore what efforts we need to make to resolve these issues. Experts with diverse backgrounds, such as machine learning, software engineering, business, law, AI ethics, and social computing, participate in our CRAFT as contributors.
Three twenty-minute presentations deal with legal and ethical issues, bias problems, and data transparency in hyperscale LMs. Seven panelists discuss the presented topics in depth and seek solutions to alleviate the potential risks from the viewpoints of both research and application deployment. In particular, we aim to derive detailed execution policies and action items for better and safer hyperscale LM applications. The discussions of our CRAFT will be a helpful reference for other research groups and AI companies that want to leverage hyperscale AI in the real world.
This workshop will be held at ACM FAccT 2022 in Seoul, South Korea on the 21st of June, 2022.
Time: 21st of June, 2022 (Tue), 11:00 AM - 2:30 PM
Live Stream: Join the online session via Hopin
Timezone: Korean Standard Time - GMT+9
Haksoo Ko (Professor, School of Law, Seoul National University)
Kyunghyun Cho (Associate Professor, New York University)
Margaret Mitchell (Founder, Ethical AI)
Alice Oh (Professor, School of Computer Science, KAIST)
Sangchul Park (Assistant Professor, School of Law, Seoul National University)
Nako Sung (Executive Director, NAVER CLOVA)
Hwijung Ryu (Principal Senior Researcher, KT)
Deep Ganguli (Research Scientist, Anthropic)
Meeyoung Cha (Chief Investigator, Data Science Group, Institute for Basic Science)
In-person: Register for the ACM FAccT 2022 Conference (Registration Page)
Online: Join the online session via Hopin
Only attendees who have registered for the conference can join this event.
Jung-Woo Ha (Research Head, NAVER CLOVA)
Hwaran Lee (Research Scientist, NAVER CLOVA)
Matthias Galle (Group Leader, NAVER LABS Europe)
Sangchul Park (Assistant Professor, School of Law, Seoul National University)
Meeyoung Cha (Chief Investigator, Data Science Group, Institute for Basic Science)