UNESCO hosts a workshop on Artificial Intelligence for Human Rights and SDGs at 2018 Internet Governance Forum

16 November 2018


UNESCO Workshop on “Artificial Intelligence for Human Rights and SDGs: Fostering Multi-Stakeholder, Inclusive and
Open Approaches”
© UNESCO

On Wednesday 14 November 2018, UNESCO organized a workshop to explore the knowledge divide, openness and inclusiveness, and the ethical standards and policies that could guide the development and application of Artificial Intelligence (AI), given that these technologies may profoundly shape humanity’s path to sustainable development, access to information and knowledge, communication, and the practice of journalism.

“AI has great potential to foster open and inclusive knowledge societies and promote openness in education and scientific processes, digital inclusion, and cultural diversity, (which in turn) can contribute to strengthening democracy and peace and help achieve the Sustainable Development Goals. However, AI could also exacerbate inequalities and widen the digital divide”, said Indrajit Banerjee, UNESCO Director of the Knowledge Societies Division, at the beginning of the session.

Organized in the spirit of a multi-stakeholder framework, UNESCO’s workshop brought together actors from academia, civil society and governmental bodies, including the Government of Mexico and the Council of Europe, who discussed why a multi-stakeholder, inclusive and open mechanism is needed to address a number of key issues surrounding Artificial Intelligence.

H.E. Mr. Federico Salas Lofte, Ambassador Extraordinary and Plenipotentiary and Permanent Delegate of Mexico, stressed that AI technologies have “implications at the social, economic, ethical and legal levels, which have to be addressed now in order to maximize their benefits and to minimize the risks”. He underlined the need to strengthen and promote alliances among all stakeholders and to “foster the investment in AI infrastructure, in order to improve the quality of data, Internet connectivity and data protection”.

“AI (has to be used) for the best of humanity and to challenge the worst in humanity”, said Nnenna Nwakanma, Interim Policy Director at the World Wide Web Foundation. She drew attention to concerns about access, trust and data availability. Thomas Hughes, Executive Director of ARTICLE 19, highlighted issues such as lack of respect for the rule of law, lack of transparency, lack of accountability, and privacy concerns regarding data collection and use. He also proposed testing UNESCO’s ROAM principles to see whether they encompass the development and application of AI technologies.

Silvia Grundmann, from the Council of Europe, underlined the need to place human rights and the SDGs at the centre of AI-related discussions, adding that AI technologies should always comply with human rights, democratic principles and the rule of law. She noted the Council of Europe’s pioneering study on the human rights implications of the use of algorithms, as well as a number of policy recommendations that could be translated into concrete guidelines for European AI policy and standards.

“Since 2010, there has been an explosion of AI because of the development of deep learning”, stated Marko Grobelnik, from the AI Lab at the Jožef Stefan Institute. “AI builds on itself like Lego bricks because of permanent innovation”. This innovation has led to the emergence of a number of invisible threats, such as the possibility for automatically generated content “to influence society and manipulate mindsets”. “AI does not produce quality content”, according to the speaker, “but speed”.

In a video message, Mila Romanoff, data privacy and data protection legal expert at UN Global Pulse, stressed the importance of using AI and Big Data in an accountable way to support the implementation of the Sustainable Development Goals. She pointed to a UN initiative aimed at addressing some of the challenges posed by the development of AI technologies: the report on “Building Ethics into Privacy Frameworks for Big Data and AI”.

Addressing the challenges posed by the use of AI technology in journalism, Elodie Vialle, Head of the Journalism & Technology Desk at Reporters Sans Frontières (Reporters Without Borders), stated that AI and other new technologies were amplifying online threats, with bots being used to drown out reliable journalistic reporting. In the face of an upcoming “AI winter”, she said, a number of guarantees are needed to protect the public information space, which is shaped by private actors.

The session ended with Guy Berger, UNESCO Director for Freedom of Expression and Media Development, suggesting that the ROAM principles framework could help address the challenges posed by AI technologies in a way that effectively contributes to and maximizes sustainable development. He said that assessments could be made of AI’s impact on digital rights, openness and accessibility.

More than 3,000 representatives from governments, United Nations agencies, the technology sector, regulators, civil society and academia attended the 13th Annual Meeting of the Internet Governance Forum (IGF), hosted by UNESCO and organized by the Government of France from 12 to 14 November 2018. Convened by the UN Secretary-General, the IGF sought to highlight open and inclusive discussions on the need to uphold human rights in the digital environment.