UNESCO organised a panel discussion on the role of AI in the information sphere as part of the discussions on AI, Human Dignity and Inclusive Societies at the AI for Good Summit in Geneva on 29th May 2019. The discussion was organised within the context of changing norms of knowledge generation, access and use as well as their implications for human rights, openness and inclusive access to information.
Bhanu Neupane, UNESCO, invoked two contrasting but equally dystopian visions of the future from George Orwell’s 1984 and Aldous Huxley’s Brave New World. Orwell painted a future of mass surveillance, with telescreens keeping an eye on people in a manner similar to how personal data is used today to create digital identities. In contrast, Huxley’s vision was of a society where no censorship would be needed because, in the glut of information, truth would be drowned in a ‘sea of irrelevance’. Both visions reflect aspects of the reality that exists today, and the discussion focussed on how to address the challenges of openness, transparency and human rights in the information sphere.
A summary was presented of UNESCO’s report “Steering AI and Advanced ICTs for Knowledge Societies”, which analyses AI’s implications for freedom of expression, the right to equality, the right to privacy, openness and transparency, access to information, and multistakeholder engagement. While these challenges remain, digital technologies have brought significant advantages: universal access to information is empowering people to hold their governments accountable, access educational resources online, and transfer and validate knowledge seamlessly.
Kathleen Siminyu, Co-Founder of the Nairobi Women in Machine Learning and Data Science community, highlighted that we do not own the data about ourselves and that there is a need to educate people about the collection of data and its use by governments and other organizations alike. She underlined that the choice to participate in national registries, such as those in Kenya and India, is important for people to take control of their data and digital identities.
Nick Bradshaw, from Cortex Ventures, Africa’s first AI-focused VC fund, and the “Click-2-Pitch” Challenge, pointed to the lack of knowledge about AI and spoke about the need to bridge the African AI narrative with the Northern Hemisphere narrative and create more opportunities for ideas and “good news” stories to cross over. He is working with his team to strengthen access to information about AI in Africa and has recently launched the Artificial Intelligence Africa Wiki.
Nigel Hickson, Vice President, IGO Engagement at ICANN, provided a historical perspective, drawing lessons from how the Internet started as a technical project and how, once its societal implications were realized, different stakeholders came together in its governance. He highlighted UNESCO’s role in humanizing the Internet and supporting its development through the Internet Governance Forums, and noted that UNESCO recently launched its Internet Universality Indicators based on the ROAM approach. A similar role for UNESCO in shaping AI principles therefore seemed appropriate.
Frits Bussemaker, Chair of the Institute for Accountability in the Digital Age, raised concerns regarding accountability and digital technologies. Who, he asked, is to be held accountable if something goes wrong with decisions taken by AI? He stressed that current legal systems are ill-equipped to address the emerging accountability challenges posed by AI.
Continuing the discussion on principles for AI, Francesca Rossi, IBM AI Ethics Global Leader, discussed her work as part of the European Commission’s High-Level Expert Group on Artificial Intelligence, which developed guidelines for trustworthy AI. She underlined that AI should be lawful, ethical and robust for it to be trustworthy. She also unpacked the oft-discussed concept of openness with respect to AI: she stressed the importance of open access to scientific knowledge and delineated the difference between ‘transparency’ and ‘explainability’. Transparency, she said, does not necessarily mean communicating the data or algorithms, but rather being open about the design and the context within which AI is used. Explainability, by contrast, is the capability to justify decisions or recommendations made by an AI system. However, this is useful only if the person affected by such decisions can understand the explanation; explanations therefore need to be tailored to the terminology and the level of abstraction required by the recipient.
Discussion on AI in the information sphere organized by UNESCO at the AI for Good Summit 2019 in Geneva. Panel members discussing AI from a Rights, Openness, Access and Multistakeholder perspective. © ITU/R.Farrell
Eileen Donahoe, Executive Director of the Global Digital Policy Incubator at Stanford University, emphasized the existing digital divide and new dependencies as a reality check, noting the need not to lose sight of the fact that in several parts of the world even access to computers remains limited.
The participants engaged in several rounds of questions on issues concerning disinformation, elections, freedom of expression, online content moderation, transparency and accountability. The session concluded with the following messages:
- Respect for human rights is central to the development of AI
- Openness, transparency and explainability need to be encouraged and institutionalized in the development and deployment of AI
- Urgent steps need to be taken to strengthen access to information and AI to bridge the digital divide
- Partnerships at all levels are needed to address the challenges of AI.
Within its mandate of promoting freedom of expression and building inclusive knowledge societies, and recognizing AI as an opportunity to achieve the 2030 Sustainable Development Goals, UNESCO has launched many initiatives to reflect and deliberate on the benefits and pitfalls of AI.
More info on UNESCO and AI: https://en.unesco.org/artificial-intelligence
Contact: Bhanu Neupane, Knowledge Societies Division