The government’s AI ambitions cannot ignore diversity and data bias


The British government has launched a sector deal for AI, but huge challenges remain if it is to be a success – notably, bias and skills.

In diginomica/government’s earlier post, we discussed the ambitions for the UK government’s new ‘sector deal’ for AI, which is due to be launched today by Business Secretary Greg Clark. But what are its key challenges?

The Sector Deal for Artificial Intelligence aims to take “immediate, tangible actions” to advance the UK’s ambitions in AI and the data-driven economy, in line with the new Industrial Strategy.

It builds on – and is an extension of – the 2017 review by Professor Dame Wendy Hall and Jérôme Pesenti, ‘Growing the Artificial Intelligence industry in the UK’, which was commissioned last spring and published in the autumn.

News of the imminent launch was leaked by Hall herself at a Westminster eForum seminar event last week in central London. ‘Artificial Intelligence and Robotics: Innovation, Funding, and Policy Priorities’ brought together a range of speakers from academia, business, and government to discuss the challenges facing the UK in these hyper-competitive sectors.

So what were Hall and Pesenti’s guiding principles? Hall said:

We had lots of workshops. We tried to talk to as many stakeholders as possible, including lots of small companies. This was all about how we help small companies, startups in AI, to grow.

Out of scope was job losses; lots of reports were written about that and it’s extremely hard to forecast which jobs are going to be affected and in what numbers. We were asked to look at the creation of jobs and how we can best use this new technology to help the UK economy grow and to keep our legacy. We have the most fantastic legacy in AI, including of course Alan Turing himself.

We are second probably only to the Americans in terms of the track record of AI research and development, with China of course coming up on the outside.

We also ruled out work on ethics, because in parallel to us, the Royal Society and the British Academy were running the data governance report, which I happened to be a member of, and that report was released in June. And that recommended what has become the Centre for Data Ethics and Innovation.

Bias in and bias out

Diversity is core to the AI debate, and must remain so as a strategic issue, added Hall:

I’m passionate about diversity, because in this world, we used to say ‘garbage in, garbage out’, but with AI, it’s ‘bias in and bias out’.

You have bias potentially everywhere in terms of the data sets that you use. A lot of the world we live in is still very gender specific. And it isn’t just about gender, it’s about ageism and racism and sexual orientation, all these things the machines will pick up on from existing data sets that might not be neutral in any or some respects. And then it’s how do you write the algorithm.

And I’m passionate about creating inter-disciplinary teams, and not just people who can do the extreme coding involved in developing the algorithms. And getting all companies to sign up to some kind of code of practice for that.

It’s very important to get people other than mathematicians and computer scientists – who are not very diverse to start with – involved in this industry. Getting people from the humanities, from economics, from philosophy, from psychology, from all or any disciplines really.

Numbers will be a critical factor, she said:

The next big thing was that industry said to us, ‘we need people now’; we call them oven-ready programmers, people who can go into companies and work in them straight away. We have lots of good AI courses in the UK, but we need more people doing them. And industry will pay for this, and we have lots of offers from companies who will sponsor these types of degrees. And once we’ve launched the Sector Deal we can get on with that.

Completing a quartet of wise women at the event was techUK’s Sue Daley, who picked up on Hall’s core theme of diversity. She said:

These technologies are going to represent and interact in a society and culture that we live in. If they are all developed, produced, and written by a particular section of our society, are they going to be able to respond to the whole of our society and diversity? We need to get more women and more diversity in the AI community.

The question then becomes how do we realise the full potential of these technologies. Are we brave enough and willing enough to embrace the change that AI brings, how can we make this happen? We need confidence. The UK is leading the charge in these areas. The UK has a great position in the AI market, but we need to build on that. But we also need vigilance, because there are challenges out there.

My take

Indeed. Both the eForum debate and this week’s official debut of the Office for AI and the new Sector Deal between government and industry are welcome moves by the UK.

However, not everyone at the event was convinced by the rhetoric. One questioner in the audience – who said she worked in central government – said she was at the event “out of desperation” to try to get some clarity on policy. Hopefully she went away satisfied.

Meanwhile, a prominent civil servant was less than impressed. Speaking to me candidly over coffee afterwards, the eminent gentleman (who I won’t name) said, “There are so many organisations in the UK now dealing with UK AI policy, aren’t there. It doesn’t seem very joined-up to me!”

Shortly before heading out into the snow he added, “By the way, the UK is now actively being excluded from scientific research discussions by our partners in Europe.”

Let’s hope these new initiatives don’t just look inward, but also out to the wider world, post-Brexit. Because while policymakers might be huddled together and congratulating each other – for good reason in some cases – it will be pretty chilly out there.

Image credit: Pixabay
