
As the new generation of artificial intelligence (AI) technology booms, countries including China are rolling out ethical principles to guide AI development and application. However, it remains unclear how authorities will "translate" these abstract principles into practical instructions for companies to execute, experts said at the World Economic Forum in Dalian, China on Wednesday.
"AI for good, ethical AI, and responsible AI – All these terms have been thrown around," said PwC global artificial intelligence lead Anand S. Rao at the forum. "There are a number of organizations that have come up with ethical principles. Those principles, I think, broadly state the business accent. But how do you translate those ethics into something that different companies at the frontline can execute?"
The question came after China issued a series of principles in June to regulate the research and application of AI, attempting to ensure the "safe, controllable and responsible use" of the technology amid rising privacy concerns in the country.
The principles comprise eight tenets: harmony and friendliness, fairness and justice, inclusiveness and sharing, respect for privacy, security and controllability, shared responsibility, open cooperation, and agile governance. These abstract tenets, however, offer little detailed guidance for the technological research and business operations of AI companies in China, even though the country is already home to the largest number of AI unicorns, a term for private startups valued at US$1 billion or more.
Beijing introduced a national AI plan in 2017 that encourages Chinese AI researchers to lead an industry expected to be worth over US$150 billion by 2030. Venture financing deals in China's AI sector already numbered 496 in 2018, with a total deal value of US$15.7 billion.
"Technology for good is a beautiful vision, as well as a direction of efforts for us. It requires authorities to take actions and draft corresponding laws and regulations," said Chen Liming, chairman of Greater China Group at IBM Corporation, during a panel at the Annual Meeting of the New Champions co-organized by MIT Technology Review. "The European Union (EU) and many other countries have introduced relevant regulations including GDPR. But to my knowledge, many other countries haven’t set up any laws and regulations in this regard."
The GDPR, or General Data Protection Regulation, took effect in late May 2018 and stipulates data protection and privacy for all individual citizens of the European Union and the European Economic Area. It also addresses the export of personal data outside the EU and EEA.
The government needs to be "agile" in making progress, setting rules, and bringing changes to regulations to keep pace with the "day-to-day evolution" of technologies, said Satsuki Katayama, Minister of State for Regional Revitalization in Japan's Cabinet Office, during the same panel.
Currently, Chinese AI firms keep a tight hold on the reins of the vast amounts of data generated by consumers in the country. But the absence of effective legal governance and supervision has heightened concerns about how Chinese state-owned and private companies collect, safeguard, and use the trillions of data points they gather every day.
"There is a feeling [among the public] that you shouldn’t trust any corporations whatsoever. Some people have that feeling, and some people particularly don’t trust the big tech giants who know so much and are getting so much money," said Joanna Bryson, associate professor at the Department of Computer Science from the University of Bath.
Bryson cited the example of the British government, which established the Select Committee on AI in June 2017 to consider the economic, ethical, and social implications of advances in AI and to make recommendations. The committee published a 183-page report, "AI in the UK: ready, willing and able?", in April 2018, encouraging the British government to introduce a national AI strategy and proposing an "AI Code" with five principles.
"The most important principle, which is the fourth one, is that machine nature should always be transparent. We should know how the system works," said Bryson.
She added that the government should also communicate about accountability. The people or companies who build the systems should be obliged to show they are "using best practice and due diligence." They need to keep good records of who changes the code, who runs the machine learning, and what changes are made, so the government can attribute blame if any consumer data is used in the wrong way.
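Bryson's point about record-keeping maps naturally onto an append-only audit trail. The Python sketch below is purely illustrative; every class, field, and name is invented for this example rather than drawn from any system mentioned in the article. It shows one way such records could be made tamper-evident, by hash-chaining each entry to the one before it so that later alterations are detectable:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
import hashlib
import json

@dataclass
class AuditEvent:
    """One record of who changed or ran a model, and what changed."""
    actor: str      # person or service making the change
    action: str     # e.g. "code_change", "training_run", "data_access"
    details: dict   # free-form description of what changed
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat())

class AuditLog:
    """Append-only log; each entry embeds the hash of the previous one,
    so tampering with any earlier record breaks the chain."""
    def __init__(self):
        self._entries = []
        self._last_hash = "0" * 64  # sentinel hash for the first entry

    def record(self, event: AuditEvent) -> str:
        payload = json.dumps(
            {"actor": event.actor, "action": event.action,
             "details": event.details, "timestamp": event.timestamp,
             "prev": self._last_hash},
            sort_keys=True)
        entry_hash = hashlib.sha256(payload.encode()).hexdigest()
        self._entries.append((entry_hash, payload))
        self._last_hash = entry_hash
        return entry_hash

    def verify(self) -> bool:
        """Recompute the chain; False if any record was altered."""
        prev = "0" * 64
        for entry_hash, payload in self._entries:
            if json.loads(payload)["prev"] != prev:
                return False
            if hashlib.sha256(payload.encode()).hexdigest() != entry_hash:
                return False
            prev = entry_hash
        return True

# Example: record a code change and a training run, then verify the chain.
log = AuditLog()
log.record(AuditEvent("alice@example.com", "code_change",
                      {"commit": "abc123", "file": "model.py"}))
log.record(AuditEvent("bob@example.com", "training_run",
                      {"dataset": "consumer_q2", "model_version": "1.4"}))
assert log.verify()
```

A chained log like this is one plausible reading of "keep good records": it does not prevent misuse, but it lets a regulator attribute responsibility after the fact, which is exactly the accountability Bryson describes.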
Alternatively, control over consumer data could be handed to the government, or more ideally to individuals themselves, which could potentially break up data monopolies.
"The fundamental idea is that we want the people or companies who generate the data to allow individuals to own the data. Individuals can, based on their evaluation and decision, give aspects of their data to a specific company for a specific purpose at a specific time," said Anand S. Rao, referring to a potential solution that is yet to be developed.
"For example, I can give my healthcare data to a life insurance company just today for them to give me a life insurance quote. After today, it vanishes, and they cannot use it for any other purposes," he said.