It was recently announced that the Organisation for Economic Co-operation and Development (OECD) is to host the Secretariat of the new Global Partnership on AI (GPAI).
The GPAI comprises 14 countries (Australia, Canada, France, Germany, India, Italy, Japan, Mexico, New Zealand, Korea, Singapore, Slovenia, the UK and the USA) along with the European Union. It aims to “guide the responsible development and use of AI, grounded in human rights, inclusion, diversity, innovation, and economic growth.” To that end, the GPAI brings together experts from industry, civil society, governments and academia to work across four broad themes:
- Responsible AI
- Data governance
- The future of work
- Innovation and commercialisation
One of its short-term goals will be to investigate how artificial intelligence (AI) can be leveraged to better respond to and recover from the Covid-19 pandemic. Commenting on the announcement, OECD Secretary-General Angel Gurría said: “AI is a truly transformational technology that could play a catalysing role in our response to Covid-19 and other global challenges provided it is developed and used with trust, transparency and accountability”.
The hosting arrangement means that GPAI’s governance bodies, consisting of a Council and a Steering Committee, will be supported by a Secretariat housed at the OECD in Paris, as well as by two Centres of Expertise, one in Montréal and one in Paris.
The choice to base the GPAI’s Secretariat at the OECD is no doubt due, in part, to the OECD’s earlier work in creating its set of five Principles on Artificial Intelligence, namely:
- AI should benefit people and the planet by driving inclusive growth, sustainable development and well-being.
- AI systems should be designed in a way that respects the rule of law, human rights, democratic values and diversity, and they should include appropriate safeguards – for example, enabling human intervention where necessary – to ensure a fair and just society.
- There should be transparency and responsible disclosure around AI systems to ensure that people understand AI-based outcomes and can challenge them.
- AI systems must function in a robust, secure and safe way throughout their life cycles and potential risks should be continually assessed and managed.
- Organisations and individuals developing, deploying or operating AI systems should be held accountable for their proper functioning in line with the above principles.
It should be noted, however, that a well-publicised AI ethics board set up in 2019 was abandoned less than two weeks after being launched.