Artificial intelligence – and why we’re a very long way from building Skynet
Professor Shannon Vallor has been hunting for sun lamps online. A born-and-bred Californian, she is preparing to leave the warm climes of America’s ‘Golden State’ for a new life in Edinburgh. “It will be an adjustment,” she says, by way of understatement.
Helpfully, she has yet to meet a single person for whom the charms of Auld Reekie have not more than compensated for the weather, though February, when she arrives, will be a test. “I will come prepared with my sun lamps and my winter gear, but in all seriousness I have never met a person who has spent any time living in Edinburgh who wasn’t absolutely enchanted with the city,” she says.
Prof. Vallor’s work in the field of the ethics of data and artificial intelligence (AI) has helped some of the world’s biggest tech companies – including Google, for whom she has been a visiting researcher and consulting AI ethicist – to ensure that their technologies are rolled out in a way that is consistent with the ethical principles underpinning them.
Having worked with the company for just over a year, she is understandably guarded about the detail of that work, but at Santa Clara University’s Markkula Center for Applied Ethics she has been influential in shaping the training courses adopted by Google, particularly for its cloud service, and by other large Silicon Valley companies.
How you figure out what those principles might be is, no doubt, the question of most interest, but we start by discussing her new role at the Edinburgh Futures Institute, one of five data-driven innovation (DDI) centres attached to the University of Edinburgh. The institute will come to play a large role in the £1.3bn City Region Deal, an economic and social stimulus programme signed off by the Scottish and UK Governments.
She will be the first holder of the Baillie Gifford Chair in the Ethics of Data and Artificial Intelligence, and she is excited about the prospect not only of developing a new academic programme, but also of working with the public sector as it grapples with how services might be delivered using AI, and of helping local industry partners in much the same way she has worked with tech companies in the US.
“I’m thrilled and couldn’t be more excited,” she says. “It’s a fantastic opportunity and one that I have been looking forward to for a couple of months now, ever since the position was announced.” The multidisciplinary research base at EFI also drew Prof. Vallor to the role: she will be working with a cohort of academics that spans computer science, machine learning and AI, combines the social sciences, arts and humanities, and is informed by statutory and legal frameworks, in a way that is highly experimental and has perhaps not been attempted before.
It is that kind of configuration that is designed to answer pressing societal questions which cannot be resolved from within the resources of any one academic field. What comes out of the mix will inevitably inform the way technology is applied in new and existing industries, such as automation in medicine or financial services, but for Vallor the most important thing is that it is grounded in an ethical framework.
Coming back to what that will look like, she gives a few examples of where technology has gone wrong. She mentions a case in the US where an AI used to inform bail decisions produced biases against certain racial and gender groups, and a project at DeepMind, the AI company owned by Alphabet, in which engineers developed an AI to make earlier diagnoses of acute kidney injury. The problem was that the training data came from military veterans and was heavily skewed towards male subjects, so the AI was far better at predicting the disease in men than in women.
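For technically minded readers, the toy sketch below shows that mechanism at work. It is a minimal illustration, not the DeepMind system: the data are synthetic, the 90/10 split and every other number are invented for the example, and it assumes scikit-learn is available.

```python
# Toy illustration of dataset bias: a model trained on data skewed
# towards one group performs worse on the under-represented group.
# Everything here is synthetic; it is not the DeepMind system.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_group(n, offset):
    """Simulate patients: one clinical feature whose link to the
    outcome differs by group via a group-specific offset."""
    x = rng.normal(size=(n, 1))
    p = 1.0 / (1.0 + np.exp(-(2.0 * x[:, 0] + offset)))
    y = rng.binomial(1, p)
    return x, y

# Training data skewed 90/10 towards men, echoing the male-heavy
# veterans' records described in the article.
x_men, y_men = make_group(9000, offset=0.0)
x_women, y_women = make_group(1000, offset=1.5)
X = np.vstack([x_men, x_women])
y = np.concatenate([y_men, y_women])

model = LogisticRegression().fit(X, y)

# Evaluate on fresh, equally sized samples from each group: the fit
# tracks the majority group and misses the minority group's pattern.
for name, offset in [("men", 0.0), ("women", 1.5)]:
    x_t, y_t = make_group(5000, offset)
    print(f"accuracy on {name}: {model.score(x_t, y_t):.2%}")
```

Run as written, the model should score noticeably worse on the under-represented group, even though nobody programmed it to treat the two groups differently; the skew in the training data does that on its own.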
Vallor says: “One of the areas that has become centrally important is understanding automated decision-making, especially when it is driven by the large amounts of data used to train machine learning or AI models. We’ve seen there are serious challenges in ensuring those decisions don’t amplify or perpetuate unfair biases that exist in society, whether those are gender biases, race biases, biases against certain age groups, against people who come from a particular region, or against people with disabilities.”
But Vallor is keen to add the “qualification” that the unfairness doesn’t come, per se, from an unhealthy attitude among the people training the algorithms; quite often it is simply an unintended consequence of their actions, and it can be mitigated by, for example, greater diversity both among the technologists and among the subjects of the data.
She adds: “It’s really important that design teams be inclusive and diverse so that we have all of the perspectives on, for example, who the user of a technology might be, so that we are really able to cover one another’s blind spots, so to speak. We all have blind spots and all designers and developers of technology are familiar with the world through a certain lens, and sometimes when you have a team who are only looking through one narrow lens it’s easier to miss some of the risks that a technology might pose.”
Vallor arrives in Edinburgh at an interesting time for AI. The Scottish Government has recently embarked on a national AI strategy, due to come to fruition in September next year, and roughly half the investment in the City Region Deal is focused on data-driven innovation.
These public sector initiatives will be among her “first priorities” on taking up the new post, she says, as she believes government has a big role to play in shaping new AI technologies, as well as in using them to deliver its own services in future. She will also help identify promising areas for her PhD candidates to focus their research on, and is excited about filling a professional gap, as demand for AI ethicists in industry grows. She thinks Scotland is well placed to learn from some of the “mistakes” made by Silicon Valley in its desire to ‘move fast and break things’. A more cautious, considered and inclusive approach to AI, grounded in consultation and a “public conversation” about its risks and benefits, will be key.
But above all, as we approach the end of our Skype call, she is keen to present a (fittingly) sunnier vision of AI, one that isn’t necessarily going to lead to the rise of the machines. A member of the generation that grew up with the Terminator movies, she insists AI is not about to ‘develop into Skynet’, and that our jobs are not going to be taken over by robots any time soon. As someone who also grew up with Arnie as a sci-fi hero, it is an optimism I’m happy to share. But I’m sure we’ll be back.
This article appeared in the Autumn issue of FutureScot Magazine, distributed in The Times Scotland on Saturday, November 23.