MPs this week debated establishing a ‘Hippocratic oath’ for those working in the field of AI and automation, to help ensure that AI tools are used responsibly and ethically, with the aim of reducing bias and discrimination.
Jo Swinson, MP for East Dunbartonshire, and Margot James, the recently appointed Minister of State for Digital, Culture, Media and Sport, also called for new rules and regulations to govern AI technologies, in a bid to control the direction and use of rapidly evolving algorithms.
Swinson kicked off the debate by referencing Microsoft’s AI Twitter bot, Tay, which had to be taken offline after it started making sexual references and declarations such as “Hitler did nothing wrong”. Swinson pointed to Tay as an example of how AI can be unleashed with unknown consequences. The MP said:
I will focus on four important ethical requirements that should guide our policy making in this area: transparency, accountability, privacy and fairness. I stress that the story of Tay is not an anomaly; it is one example of a growing number of deeply disturbing instances that offer a window into the many and varied ethical challenges posed by advances in AI.
How should we react when we hear that an algorithm used by a Florida county court to predict the likelihood of criminals reoffending, and therefore to influence sentencing decisions, was almost twice as likely to wrongly flag black defendants as future criminals?
We have heard about a beauty contest judged by robots that did not like the contestants with darker skin. A report by PwC suggests that up to three in 10 jobs in this country could be automated by the early 2030s. We have read about children watching a video on YouTube of Peppa Pig being tortured at the dentist, which had been suggested by the website’s autoplay algorithm. In every one of those cases, we have a right to be concerned.
AI systems are making decisions that we find shocking and unethical. Many of us will feel a lack of trust and a loss of control.
Swinson added that major companies such as Deutsche Bank and Citigroup are turning to machine learning algorithms to streamline their recruitment processes, but highlighted that many such tools have been proven to be biased towards candidates of a particular race and gender. She noted that if the algorithm is “opaque” it is hard to work out whether employment law is being broken. She continued:
We must ensure that when things go wrong, people can be held accountable, rather than shrugging and responding that the computer says “don’t know”.
The MP also rightly noted that algorithms are trained using historical data to develop a template of characteristics to target – the problem being that historical data itself often reveals pre-existing biases. For example, just a quarter of FTSE 350 directors are women and fewer than one in 10 are from an ethnic minority. Swinson said that it is therefore easy to see how the “characteristics of their leaders might reinforce existing gender and race imbalances”.
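The mechanism Swinson describes can be sketched in a few lines of code. The example below is purely illustrative and not from the debate: it uses synthetic “historical hiring” data, skewed by gender, and a deliberately naive screening model that learns each group’s past hire rate as its score for new candidates. The data, proportions, and model are all invented for illustration.

```python
# Illustrative sketch of how a model trained on biased historical data
# reproduces that bias. All records here are synthetic.
import random

random.seed(0)

# Synthetic historical records: (gender, hired). Past hiring favoured men.
history = [("male", random.random() < 0.7) for _ in range(500)] + \
          [("female", random.random() < 0.3) for _ in range(500)]

def train(records):
    """Learn the historical hire rate per group -- a crude stand-in for
    the 'template of characteristics' described above."""
    rates = {}
    for group in {g for g, _ in records}:
        outcomes = [hired for g, hired in records if g == group]
        rates[group] = sum(outcomes) / len(outcomes)
    return rates

model = train(history)

# The learned scores mirror the historical imbalance, so the model would
# systematically rank one group above the other for otherwise identical CVs.
print(model)
```

The point of the sketch is that nothing in the code is explicitly discriminatory: the bias enters entirely through the training data, which is exactly why the MP argues it is easy for algorithms to “reinforce existing gender and race imbalances”.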
She also expressed concern that the major Government-commissioned report, “Growing the artificial intelligence industry in the UK”, published in October, entirely omitted ethical questions. It specifically said: “Resolving ethical and societal questions is beyond the scope and the expertise of this industry-focused review, and could not in any case be resolved in our short time-frame.”
Swinson then raised the prospect of a Hippocratic-style oath for those working with AI. The MP said:
I say very strongly that ethical questions should not be an afterthought. They should not be an add-on or a “nice to have”. Ethical discourse should be properly embedded in policy thinking. It should be a fundamental part of growing the AI industry, and it must therefore be a key job of the centre for data ethics and innovation.
Regulation is important, and there are probably some gaps in it that we need to fill and get right, but this issue cannot be solved by regulation alone. I am interested in the Minister’s thoughts about that. Every doctor who enters the medical profession must swear the Hippocratic oath. Perhaps a similar code or oath of professional ethics could be developed for people working in AI—let me float the idea that it could be called the Lovelace oath in memory of the mother of modern computing—to ensure that they recognise their responsibility to embed ethics in every decision they take.
That needs to become part and parcel of the way industry works.
The Minister’s response
Margot James, Minister of State for Digital, Culture, Media and Sport, who was just appointed last week in the Prime Minister’s reshuffle, pointed to the government’s recently announced Office for AI as an indication of Whitehall’s commitment to taking on the challenge of automation. She said:
We are building the capacity to address the issues that accompany these technological advancements: issues of trust, ethics and governance; effective take-up by business and consumers; and the transition of skills and labour requirements.
James noted that the uses of data in AI and machine learning are “developing in valuable but potentially unsettling ways, because of the pace of adoption”. She suggested that specific answers to the challenges will vary by sector, and that each sector will need to foster the necessary level of trust. James said:
We must ensure that these new technologies work for the benefit of everyone: citizens, businesses and wider society. We are therefore integrating strong privacy protections and accountability into how automated decisions affect users. A strong, effective regulatory regime is therefore vital.
We will introduce a digital charter, which will underpin the policies and actions needed to drive innovation and growth while making the UK the safest and fairest place to be online. A key pillar of the charter will be the centre for data ethics and innovation, which will look ahead to advise Government and regulators on the best means of stewarding ethical, safe and innovative uses of AI and all data, not just personal data. It will be for the chair of the centre to decide how they should engage with their stakeholders and build a wider discussion.
We expect that they will want to engage with academia, industry, civil society and indeed the wider public to build the future frameworks in which AI technology can thrive and innovate safely.
James said that the government needs to identify and understand the ethical and governance challenges posed by AI and then determine how best to identify appropriate rules, establish new norms and evolve policy and regulations.
The Minister added that a huge part of the challenge of AI will be developing the appropriate skills in the UK to match the fast pace of adoption. She said:
A study from last year suggests that digital technologies including AI can create a net total of 80,000 new jobs annually for a country such as the UK. We want people to be able to capitalise on those opportunities, as my hon. Friend suggested. We already have a resilient and diverse labour market, which has adapted well to automation, creating more, higher paying jobs at low risk of automation.
However, as the workplace continues to change, people must be equipped to adapt to it easily. Many roles, rather than being directly replaced, will evolve to incorporate new technologies.
Undeniably, substantial changes lie ahead. Therefore, in terms of enabling people to reskill and take advantage of the changes and opportunities in the workplace, a national retraining scheme will help people. We also have plans to upskill 8,000 computer science teachers and work with industry to set up a new national centre for computing education, with a brief to encourage more girls to take advantage of the new technologies in their learning.