As reported by the South China Morning Post, China is preparing for a future shaped by artificial intelligence (AI) and is laying out a framework to regulate the technology. Beijing intends to have legislation in place before the end of this year.
The Cyberspace Administration of China (CAC) seeks to implement protocols that would require companies to obtain a license before releasing generative AI models. The mandate tightens draft regulations announced in April, which stated that groups would be given 10 working days to register a product with authorities after launch.
The 2023 legislation plan of the State Council, China’s cabinet, includes the submission of a draft AI law, among more than 50 measures up for review by the National People’s Congress (NPC) Standing Committee, according to a document published on the council’s website this week.
The Standing Committee – the permanent body of the NPC, China’s national legislature – would review the draft of a new law three times “in normal cases” before putting the measure to a vote, according to the NPC’s website. It said this process could be extended and entail more reviews if there remain “significant issues to be further studied”.
According to a report by the Financial Times, “It is the first time that authorities in China find themselves having to do a trade-off” between the Communist party’s twin goals of sustaining AI leadership and controlling information, said Matt Sheehan, a fellow at the Carnegie Endowment for International Peace.
One person close to the CAC’s deliberations said: “If Beijing intends to completely control and censor the information created by AI, they will require all companies to obtain prior approval from the authorities.”
But “the regulation must avoid stifling domestic companies in the tech race”, the person added. Authorities “are wavering”, the person said.
China is aiming to develop its regulatory approach to AI technology, which can swiftly generate humanlike text, images and other material, before it becomes widespread.
According to Financial Times, the Cyberspace Administration of China’s draft rules published in April said AI content should “embody core socialist values” and not contain anything that “subverts state power, advocates the overthrow of the socialist system, incites splitting the country or undermines national unity”.
The CAC needed to ensure that AI was “reliable and controllable”, its director Zhuang Rongwen said recently.
The draft regulations also required that the data used by companies to train generative AI models should ensure “veracity, accuracy, objectivity and diversity”.
Companies such as Baidu and Alibaba, which released AI applications this year, have communicated with regulators in recent months to confirm their AI did not violate any rules, said two other people close to the regulators.
Angela Zhang, associate professor of law at the University of Hong Kong, said: “China’s regulatory measures primarily center on content control.”
Other governments and authorities are moving to legislate against potential misuse of the technology. The EU has put forward some of the most stringent regulation globally, prompting concern among the region’s companies and executives, while Washington has been deliberating over procedures to oversee AI and the UK is initiating a review.
The quality of the data used to train AI models is a crucial area of regulatory scrutiny.
Because AI can fabricate content, Beijing has set a higher standard for what AI should offer its users. In turn, Chinese companies will need to devote more effort to filtering the data used to train their models.
A shortage of data that meets those requirements has created a bottleneck, hindering some companies from building and refining large language models, the technology underlying chatbots such as OpenAI’s ChatGPT and Google’s Bard.
Businesses are likely to be “more cautious and conservative about what AI they build” because the consequences of violating the rules could be severe, said Helen Toner, director of strategy and foundational research grants at Georgetown University’s Center for Security and Emerging Technology.
Chinese authorities implied in their draft regulations that tech groups making an AI model would be almost fully responsible for any content created. That would “make companies less willing to make their models available since they might be held responsible for problems outside their control”, said Toner.