UNESCO highlights the need to set human rights and ethical standards for Artificial Intelligence at RightsCon 2018

29 May 2018

“Without human rights and ethics, uncontrolled Artificial Intelligence technology might do more harm than good”. Reflecting on current debates on developing and applying big data and AI technologies, UNESCO held an interactive session during RightsCon 2018. Participants stressed the urgent need to set human rights and ethical standards for these evolving technologies.

“UNESCO has a great interest in the topic of Artificial Intelligence because of its great potential in promoting human rights, democracy, and sustainable development,” said Xianhong Hu from UNESCO during the session “Harnessing Big Data and Artificial Intelligence to Advance Knowledge Societies and Sustainable Development”, held on 18 May 2018 in Toronto, Canada. The session, held during RightsCon 2018, aimed to trigger debate and reflection on the technological, ethical, political, social and legal implications of applying big data and artificial intelligence to building inclusive knowledge societies and achieving the 2030 Sustainable Development Goals.

Opening and moderating the session, Xianhong Hu also pointed out the potential negative consequences of improperly developed AI technology and raised questions on freedom of expression, privacy, data safety, transparency, Internet governance, and digital literacy. “As these evolving technologies are currently being designed, we need to urgently set human rights standards for them to achieve their full potential,” she added before giving the floor to the panelists.

“There is a role for government to make sure that technologies which impact all aspects of our lives are properly designed. In the context of Canada chairing the G7 this year, we are trying to move the international conversation forward about what it means for AI to be rights-respecting,” said Tara Denham, Director of the Democracy Unit at Global Affairs Canada. “A lot of work still needs to be done to think about how AI and other technology will impact human rights, and also how governments should be engaging in this regard,” Tara Denham added.
 
“Most of the research on AI and big data has been done in high-income countries. We also see a concentration of men in the field. At the same time, many tools are being developed in developing countries but governments aren’t sufficiently involved,” stressed Dhanaraj Thakur from the Web Foundation, who underlined the need to address these issues through a more representative and inclusive approach. Talking about the future of big data and AI, Dhanaraj Thakur added that “we need to ensure data protection is in place and data sets are open as these technologies are being developed.”
 
“Algorithms are already changing our social structures and we need to promote education and open source as it implies that people – and particularly children – should have the capacity to be empowered, to understand algorithms and coding,” highlighted Marie-Hélène Parizeau from Université Laval (Québec, Canada), an expert of UNESCO’s World Commission on the Ethics of Scientific Knowledge and Technology (COMEST). She stressed “the importance of human responsibility throughout design and production of robotics processes.”
 
“Existing inequalities, stereotypes, biases are being fed into the technology and this is problematic because this technology is being used already by police or industries. Hence the need to address imbalances which are entrenched in technologies,” said Joana Varon from Coding Rights. “Geopolitical dominance on the issue is also a problem. In the current situation, there is a monopoly of global data sets which are all located in the global North,” she added.
 
Talking about the privacy dimension of AI, Ramy Raoof, Senior Research Technologist at the Egyptian Initiative for Personal Rights, and Research Fellow with Citizen Lab, shared his concerns regarding AI as a way to “boost surveillance capabilities, and decide what happens in our lives.” “The only way to guarantee a truly multistakeholder approach is to have a proper open source solution to AI,” he said. Talking about improving transparency to achieve algorithmic accountability, Ramy Raoof added that “we can’t achieve open source status without adopting policies which can promote peer-to-peer review of the technology and open source.”

The moderator then gave the floor to a highly engaged audience, who discussed governmental involvement in the design of AI principles, business behaviors and regulations, as well as the accountability of designers. Xianhong Hu concluded the session by referring to UNESCO’s publications in the area (the COMEST report in 2017 and Human Decisions: Thoughts on AI in 2018), as well as UNESCO’s new publication What if we all governed the Internet? Advancing multi-stakeholder participation in Internet governance, which serves as a useful reference on how to foster an open, inclusive multistakeholder process in formulating AI-related policies and norms at national and international levels.