
Chinese regulators give AI firms a helping hand


Published: Mon 9 Oct 2023, 10:35 PM

By Angela Huyue Zhang


If a Chinese tech firm wants to venture into generative artificial intelligence, it is bound to face significant hurdles arising from stringent government control, at least according to popular perceptions. China was, after all, among the first countries to introduce legislation regulating the technology. But a closer look at the so-called interim measures on AI indicates that, far from hampering the industry, China’s government is actively seeking to bolster it.

This should not be surprising. Already a global leader in AI (trailing only the United States), China has big ambitions in the sector – and the means to ensure that its legal and regulatory landscape encourages and facilitates indigenous innovation.

The interim measures on generative AI reflect this strategic motivation. To be sure, a preliminary draft of the legislation released by the Cyberspace Administration of China (CAC) included some encumbering provisions. For example, it would have required providers of AI services to ensure that the training data and the model outputs be “true and accurate,” and it gave firms just three months to recalibrate foundational models producing prohibited content.

But these rules were watered down significantly in the final legislation. The interim measures also significantly narrowed the scope of application, targeting only public-facing companies and mandating content-based security assessment solely for those wielding influence over public opinion.

While securing approval from the regulatory authorities does entail additional costs and a degree of uncertainty, there is no reason to think that Chinese tech giants – with their deep pockets and strong capacity for compliance – will be deterred. Nor is there reason to think that the CAC would seek to create unnecessary roadblocks: just two weeks after the interim measures went into effect, the agency gave the green light to eight companies, including Baidu and SenseTime, to launch their chatbots.

Overall, the interim measures advance a cautious and tolerant regulatory approach, which should assuage industry concerns over potential policy risks. The legislation even includes provisions explicitly encouraging collaboration among major stakeholders in the AI supply chain, reflecting a recognition that technological innovation depends on exchanges between government, industry, and academia.

So, while China was an early mover in regulating generative AI, it is also highly supportive of the technology and the companies developing it. Chinese AI firms might even have a competitive advantage over their American and European counterparts, which are facing strong regulatory headwinds and proliferating legal challenges.

In the European Union, the Digital Services Act, which entered into force last year, imposes a raft of transparency and due-diligence obligations on large online platforms, with massive penalties for violators. The General Data Protection Regulation – the world’s toughest data-privacy and security law – is also threatening to trip up AI firms. Already, OpenAI – the company behind ChatGPT – is under scrutiny in France, Ireland, Italy, Poland, and Spain for alleged breaches of GDPR provisions, with the Italian authorities earlier this year going so far as to halt the firm’s operations temporarily.

The EU’s AI Act, which is expected to be finalized by the end of 2023, is likely to saddle firms with a host of onerous pre-launch commitments for AI applications. For example, the latest draft endorsed by the European Parliament would require firms to provide a detailed summary of the copyrighted material used to train models – a requirement that could leave AI developers vulnerable to lawsuits.

American firms know firsthand how burdensome such legal proceedings can be. The US federal government has yet to introduce comprehensive AI regulation, and existing state and sectoral regulation is patchy. But prominent AI companies such as OpenAI, Google, and Meta are grappling with private litigation related to everything from copyright infringement to data-privacy violations, defamation, and discrimination.

The potential costs of losing these legal battles are high. Beyond hefty fines, firms might have to adjust their operations to meet stringent remedies. In an effort to preempt further litigation, OpenAI is already seeking to negotiate content-licensing agreements with leading news outlets for AI training data.

Chinese firms, by contrast, can probably expect both regulatory agencies and courts – following official directives from the central government – to take a lenient approach to AI-related legal infringements. That is what happened when the consumer tech industry was starting out.

None of this is to say that China’s growth-centric regulatory approach is the right one. On the contrary, the government’s failure to protect the legitimate interests of Chinese citizens could have long-term consequences for productivity and growth, and shielding large tech firms from accountability threatens to further entrench their dominant market positions, ultimately stifling innovation. Nonetheless, it appears clear that, at least in the short term, Chinese regulation will act as an enabler, rather than an impediment, for the country’s AI firms. — Project Syndicate

Angela Huyue Zhang, Associate Professor of Law and Director of the Center for Chinese Law at the University of Hong Kong, is the author of Chinese Antitrust Exceptionalism: How the Rise of China Challenges Global Regulation (Oxford University Press, 2021).


