The past 18 months have seen growing public interest in the advance of Artificial Intelligence through society. As the pace of the AI technology roll-out quickens, the rate at which questions are asked and answered lags comparatively behind. Promises of positive change are heralded, from more efficient services and sustainable business operations to a wider range of creature comforts that will make our lives better. However, AI technology is being embedded in society with little governance in place to understand and safeguard the public against potential ethical issues and pitfalls, a matter highlighted by social scientists Crawford & Calo (2016).

In January 2018, at the World Economic Forum in Davos, Theresa May's speech focused on the global transformation being driven by AI. Alongside highlighting the many benefits, she acknowledged the pressing need for a governance framework to ensure the safe delivery of the technology into society.

The impact of AI is felt across society and transcends national boundaries. Any governance framework needs to be relevant at a social level – respectfully embracing all demographics: cultures, ethnicities, genders and ages. It's a tall order.

Understanding the relationship between AI and society will play a critical role in the creation of policies, and efforts are being made to do so. In September 2016, the Partnership on AI to Benefit People and Society was founded, and more recently, in October 2017, DeepMind launched DeepMind Ethics and Society, recognising the company's “responsibility to support open research and investigation into the wider impacts of [their] work, in order to secure its safety, accountability, and potential for social good”.

While it is early days, much about the intentions and approach of DeepMind's Ethics and Society group can be gleaned from its website. An ‘About Us’ statement outlines the mission to “conduct interdisciplinary research that brings together experts from the humanities, social sciences and beyond, along with voices from civil society and technical insights”. There is a breakdown of the ‘Key Ethical Challenges’ faced and of the guiding ‘Principles’ intended to ensure the “rigour, transparency and social accountability” of the group's work.

A review of the group's purpose shows that it has been formed from an Interactive perspective. The expected audience (the public) and the field-expert sources (academics) are clearly defined, with emphasis on the intention to educate and inform the public. Research will be interdisciplinary and include public participation. There is also a Fellows panel – an advisory team tasked with providing oversight, critical feedback and guidance on the research strategy and work programme. This has a Traditional-perspective structure: a team of six exceptional, world-leading experts. However, to achieve the goals the group has set itself, a Co-productionist perspective would seem more apt.

The lack of an existing AI governance framework presents a unique opportunity for DeepMind, as its findings will provide valuable insight for policy-makers. Lessons can be learned from the public's reception of previous disruptive technologies as they came to market, such as genetically modified foods and nanotechnology. Here, Wilsdon and Willis's (2004) ‘See-through Science’ makes pertinent reading. The delivery of science is an iterative process, a to-and-fro between society and scientists striving for refinement (Jasanoff, 2004). Upstream public engagement – undertaken not to educate but to gain public knowledge and input from the outset – can help shift technology development from a binary process towards a co-evolutionary one, in which science and society advance together (Guston & Sarewitz, 2002). We need only look at the RCUK case studies to see how research can be greatly enriched by scientists working with the public. A Co-productionist construct between scientists and the public would improve the progress of the technology.

Huge benefits stand to be gained if DeepMind shifts beyond the proposed interdisciplinary Interactive model towards one where the end-user, the public, helps shape the agenda – such that society is not told or sold a predetermined outcome, but is instead an integral cog in the technology (and policy) creation process.

While DeepMind promotes an Interactive concept, its setup and presentation to date have a Traditional structure. Yet to achieve the primary aim “to bring together the technical insights … and the diverse range of people who could be affected by AI”, a Co-productionist approach seems more fitting. With this in mind, one question remains – where on the advisory panel is Joe Bloggs, the average member of the public who will unquestionably be affected by AI? Surely their contribution is as valuable and necessary as that of the experts; without it, it seems impossible for DeepMind to achieve its objective.

 

Crawford, K., & Calo, R. (2016). There is a blind spot in AI research. Nature, 538(7625), 311–313.

Guston, D. H., & Sarewitz, D. (2002). Real-time technology assessment. Technology in Society, 24(1–2), 93–109.

Jasanoff, S. (2004). The idiom of co-production. In S. Jasanoff (Ed.), States of Knowledge: The Co-production of Science and Social Order. London: Routledge.

Wilsdon, J., & Willis, R. (2004). See-through Science: Why public engagement needs to move upstream. London: Demos.
