Extremely large-scale pretrained generative language models (LMs), called hyperscale LMs, such as GPT-3, PanGu-α, Jurassic-1, Gopher, and HyperCLOVA, achieve remarkable performance on various natural language generation tasks under in-context few-shot or zero-shot settings. However, although hyperscale LMs contribute substantially to both research and real-world business, many researchers have also raised concerns about their severe side effects. From the perspective of fairness, accountability, and transparency (FAccT), in particular, many researchers argue that hyperscale LMs pose potential risks to AI ethics, including data and model bias, toxic content generation, malicious use, and intellectual property-related legal issues.