It seems ironic that China is moving to restrict AI technologies even as the country invests heavily in advanced fields like big data, the internet of things, and artificial intelligence. Beijing's likely justification for these restrictions is concern that the technologies could be used for surveillance or subversive purposes. Other countries, such as the United States, are also moving aggressively into AI, so it will be important to watch how the technology develops and whether it stays within international norms and regulations.
While the initial reception of Chinese tech companies' general-purpose large language models seems to lag slightly behind that of their western counterparts, the industry there is evidently just as dedicated to developing these capabilities. The latest example is SenseTime's release of its own general-purpose large language model, which it claims can converse in natural language and carry out a surprising range of tasks. While it is still some way off from rivals such as OpenAI's GPT-4 and Google's Bard, it's clear that Chinese tech companies are intent on building these systems into their platforms, signalling that they may soon become even more prominent on the global stage.
Tongyi Qianwen, the AI model Alibaba has released for public use, faces the same headwinds: the restrictions proposed by the Cyberspace Administration of China could smother relevant innovation, and the Chinese AI industry's ambitions along with it. Tongyi Qianwen and similar models risk being stifled without adequate justification, and if the trend continues, it could hinder technological innovation and growth and weigh on China's economy as a whole.
While this draft rule seems aimed at preventing AI from being used to subvert state power and authority, the technology could still be put to other nefarious ends, such as disrupting national unity or undermining the legitimacy of government. Regulators will need to monitor AI closely to ensure that it does not undermine democratic processes or erode trust in government institutions.
One potential pitfall of using AI to create moral codes is that machines cannot escape the biases of their creators. Any code produced this way is likely to be tainted with the biases of whoever creates it, and will be unenforceable in a world where people hold different values. This could lead to troubling situations in which computers dictate what is admired or frowned upon, potentially undermining traditional systems of morality.
Chinese technology firms are often quick to seize on newly developed intellectual property, especially when it can help them secure an edge in lucrative international markets. Although there have been cases of Chinese companies copying patented intellectual property from other countries, it is more common for China's tech sector to develop its own versions of existing products and services. China's government has long provided generous financial backing for R&D, making the country a leading innovator in many cutting-edge fields, and as a result Chinese companies are often near the front of the pack in developing new technologies.
Many experts believe that developing artificial intelligence in countries with stringent privacy laws and restrictions on technology exports is difficult, if not impossible. Countries like China and Russia, which have ambitions to lead in AI development, often have policies that inhibit the sharing of sensitive information between companies and scientists. Additionally, many AI experts believe that such countries are struggling to maintain their pace of progress under these constraints.
As the output of AI models becomes harder to trace and verify, regulators and providers alike are beginning to demand greater transparency and accountability from those who build and use these models. The CAC's draft rules reflect this concern: providers must assume liability for their models' training data, verify that users are real people, protect personal information and reputation, label generated content as such, and comply with many other restrictions.
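To make the labeling requirement concrete, here is a minimal, hypothetical sketch of how a provider might tag model output as machine-generated before returning it to users. The draft rules do not prescribe any particular format; the function name and metadata fields below are illustrative assumptions only.

```python
# Hypothetical sketch: wrapping model output in a disclosure envelope,
# one plausible way a provider might label content as AI-generated.
# The field names and structure are illustrative, not from the rules.
import json
from datetime import datetime, timezone

def label_generated_content(text: str, model_name: str) -> str:
    """Return the model output wrapped in a JSON envelope that
    discloses it as AI-generated, with basic provenance metadata."""
    envelope = {
        "content": text,
        "disclosure": "This content was generated by an AI model.",
        "model": model_name,
        "generated_at": datetime.now(timezone.utc).isoformat(),
    }
    return json.dumps(envelope, ensure_ascii=False)

# Example usage with a placeholder model name and output:
print(label_generated_content("Example model answer.", "demo-llm"))
```

In practice a provider might embed such a disclosure as a visible watermark or response header rather than a JSON field, but the compliance idea is the same: the generated text never leaves the service without being marked as machine-produced.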
Many experts believe that a responsible AI industry will entail requirements that are very difficult, perhaps impossible, to implement. One is the need for companies to obtain permission from the rights holders of the text and media they use in their models. OpenAI has succeeded partly because it operates in an almost complete regulatory vacuum; if the law had required the company to, say, obtain permission from the rights holders of the text and media used to train its models, it would probably still be waiting to build GPT-2.
Some AI startups and companies may decide to forgo China altogether in light of these new regulations. A fast-moving industry can be quickly derailed by a setback like this, so startups and companies operating in China should be cautiously optimistic at best about their prospects. Even those that manage to stay within the law may struggle to sustain growth given how quickly the industry moves.
Alibaba is no stranger to regulatory scrutiny. Its business practices have been examined since it first began operating, and its founder, Jack Ma, has repeatedly faced accusations of running afoul of Chinese regulators. Yet Zhang's assertion that every company is on the same starting line may show that Alibaba is adapting to the new regulations while still pushing boundaries to stay ahead in a rapidly changing industry.
China's proposed draft rules for regulating online expression are likely to stifle free speech and criticism of the government, prompting concern from rights groups. The rules would require providers to obtain government approval before publishing any content, restrict the kinds of content that can be published, and mandate that users verify their identities when posting comments or messages on websites. The rules are open for comment (by parties in China, obviously) for the next month, after which they may or may not be revised. If adopted as written, they could come into effect later this year.