HyperscaleFAccT @ FAccT 2022
Fairness, Accountability, and Transparency in Hyperscale Language Models (HyperscaleFAccT)
Extremely large-scale pretrained generative language models (LMs), called hyperscale LMs, such as GPT-3, PanGu-α, Jurassic-1, Gopher, and HyperCLOVA, show astonishing performance on various natural language generation tasks under in-context few-shot or zero-shot settings. However, although hyperscale LMs contribute greatly to both research and real-world business, many researchers have raised concerns about their serious side effects. From a FAccT perspective in particular, many researchers argue that hyperscale LMs carry potential risks to fairness, accountability, and transparency in AI ethics, such as data and model bias, toxic content generation, malicious use, and intellectual property-related legal issues.
Our CRAFT, entitled “Fairness, Accountability, and Transparency in Hyperscale Language Models (HyperscaleFAccT),” addresses these limitations and potential risks in developing hyperscale LMs and applying them to real-world applications for users. The ultimate goal of the HyperscaleFAccT CRAFT is to explore what efforts we need to make to solve these issues. Experts with diverse backgrounds, including machine learning, software engineering, business, law, AI ethics, and social computing, participate in our CRAFT as contributors.
Three presentations, twenty minutes each, deal with legal and ethical issues, bias problems, and data transparency in hyperscale LMs. Seven panelists discuss the presented topics in depth and seek solutions to alleviate the potential risks from the viewpoints of both research and application deployment. In particular, we aim to derive detailed execution policies and action items for better and safer hyperscale LM applications. The discussion of our CRAFT will be a helpful reference for other research groups and AI companies that want to leverage hyperscale AI.
Time: June 21, 2022 (Tue), 11:00 AM - 2:30 PM
Timezone: Korean Standard Time (GMT+9)
Only attendees who have registered for the conference can join this event.
This workshop will be held in Seoul, South Korea, on June 21, 2022.
Venue:
Live Stream:
(Professor, School of Law, Seoul National University)
(Associate Professor, New York University)
(Founder, Ethical AI)
(Professor, School of Computer Science, KAIST)
(Assistant Professor, School of Law, Seoul National University)
(Executive Director, NAVER CLOVA)
(Principal Senior Researcher, KT)
(Research Scientist, Anthropic)
(Institute for Basic Science)
In-person: You can register for
Online:
(Research Head, NAVER CLOVA)
(Research Scientist, NAVER CLOVA)
(Group Leader, NAVER LABS Europe)
(Assistant Professor, School of Law, Seoul National University)
(Chief Investigator, Data Science Group, Institute for Basic Science)