Gone phishing – new £1m project aims to guard against cyber-attacks

Aberdeen University said today that it is part of a three-year research project aiming to prevent organisations and businesses from falling victim to cyber-attacks.

The UK Engineering and Physical Sciences Research Council (EPSRC) has awarded the university £756,000 to support its ‘Supporting Security Policy with Effective Digital Intervention’ (SSPEDI) project.

Large-scale attacks, such as the WannaCry ransomware which severely affected the NHS, pose an increasing threat to organisations and businesses, and prevention is a key priority for the UK Government and research funding bodies.

Scientists at the university’s Department of Computing Sciences will aim to find ways to prevent hackers from enticing people into downloading malware, for example through phishing emails.

Dr Matthew Collinson, who is the principal investigator on the project, explained: “If we look at most cyber security attacks, there is a weakness relating to human behaviour that hackers seek to exploit.

“Their most common approach, and the one we are most familiar with, is the use of phishing emails to entice a user to download malware onto their computer.

“One of the main problems faced by companies and organisations is getting computer users to follow existing security policies, and the main aim of this project is to develop methods to ensure that people are more likely to do so.”

The project coincides with the launch of a new Masters degree in Artificial Intelligence at the university, and experts will be using their skills in this area as part of the project.

“The project applies our world-leading expertise in both artificial intelligence and human-computer interaction,” Dr Collinson explained.

“In the case of human-computer interaction, this specifically relates to the field of persuasive technologies, which are designed to encourage behaviour change and are more commonly applied in healthcare, for example to encourage patients to follow medical advice.

“In terms of AI, we will investigate how intelligent programs can be constructed which can use dialogue to explain security policies to users, and utilise persuasion techniques to nudge users to comply.

“In addition we will be using sentiment analysis to detect people’s attitudes to security policies through natural language, for example through their email correspondence.

“Ultimately we are looking to employ all of these techniques to identify the issues that make us less likely to follow security advice, and make recommendations as to how these can be overcome.”
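To illustrate the kind of sentiment analysis mentioned above, here is a minimal Python sketch using a simple word-lexicon approach. The word lists, function name and scoring rule are illustrative assumptions for this example only, not the SSPEDI project's actual methods:

```python
# Toy lexicon-based sentiment scorer: a simple illustration of detecting
# attitudes towards security policies in free text (e.g. email wording).
# The word lists below are illustrative assumptions, not project data.

POSITIVE = {"helpful", "clear", "easy", "useful", "sensible"}
NEGATIVE = {"annoying", "confusing", "pointless", "slow", "tedious"}

def sentiment_score(text: str) -> int:
    """Return (positive word count) - (negative word count) for the text."""
    words = [w.strip(".,!?").lower() for w in text.split()]
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

# A complaint about a password policy scores as negative:
print(sentiment_score("The new password policy is confusing and tedious."))  # -2
```

Real systems would use trained models rather than fixed word lists, but the underlying idea is the same: map natural language to a signal about users' attitudes, which can then inform how a policy is explained or enforced.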