How Generative AI Can Lend a Helping Hand to IT Security Teams

The State of Play

As security teams continue their constant battle against evolving threats and attackers, growing numbers are exploring the assistance that can be provided by new generative AI tools.

Capable of creating everything from text and images to computer code and analytics, the tools are rapidly improving. Significant investments by companies such as Microsoft and Google will ensure the pace of this evolution does not slow.

There are several clear ways in which generative AI tools can add value when it comes to IT security, but realising that value means first confronting some practical hurdles.

Potential Challenges

While generative AI tools and the large language models (LLMs) that power them can deliver significant value, they also pose some clear challenges.

Topping the list of concerns for many IT teams is data privacy. They want to know how they can be confident that data fed into an LLM remains secure and private.
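
One common mitigation is to strip or mask sensitive values before any text leaves the organisation. The sketch below is a minimal illustration in Python; the patterns and placeholders are far from exhaustive, and a production deployment would rely on a proper data-loss-prevention toolkit:

```python
import re

# Patterns for data that should not leave the organisation.
# Purely illustrative -- real deployments need far broader coverage.
REDACTIONS = [
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"), "<EMAIL>"),
    (re.compile(r"\b(?:\d{1,3}\.){3}\d{1,3}\b"), "<IP_ADDRESS>"),
    (re.compile(r"\b(?:\d[ -]?){13,16}\b"), "<CARD_NUMBER>"),
]

def redact(text: str) -> str:
    """Replace sensitive values with placeholders before the text
    is included in an LLM prompt."""
    for pattern, placeholder in REDACTIONS:
        text = pattern.sub(placeholder, text)
    return text

log_line = "Failed login for alice@example.com from 203.0.113.7"
print(redact(log_line))
# Failed login for <EMAIL> from <IP_ADDRESS>
```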

There are also concerns about the accuracy of outputs. LLMs are known to hallucinate, producing outputs that are simply incorrect, so procedures must be put in place to ensure that all outputs are checked before they are relied upon.
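
Part of that checking can itself be automated. The sketch below assumes the model has been instructed to reply in a fixed JSON schema (the field names are illustrative); any response that fails to parse and validate is rejected rather than passed downstream:

```python
import json

ALLOWED_VERDICTS = {"benign", "suspicious", "malicious"}

def validate_triage_output(raw: str) -> dict:
    """Parse and sanity-check an LLM triage response before anything
    downstream is allowed to act on it."""
    data = json.loads(raw)  # raises a ValueError subclass on malformed JSON
    if data.get("verdict") not in ALLOWED_VERDICTS:
        raise ValueError(f"unexpected verdict: {data.get('verdict')!r}")
    confidence = data.get("confidence")
    if not isinstance(confidence, (int, float)) or not 0 <= confidence <= 1:
        raise ValueError("confidence must be a number between 0 and 1")
    if not isinstance(data.get("rationale"), str) or not data["rationale"].strip():
        raise ValueError("rationale missing")
    return data
```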

Another challenge is the need for users to become adept at creating effective prompts for querying LLMs. The quality of the output relies heavily on the quality of the request, so time needs to be invested in learning how best to construct these queries.
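
The difference is often as simple as supplying a role, some context and an explicit output contract, rather than asking an open-ended question. The template below is hypothetical, but shows the general shape:

```python
# A hypothetical prompt template for log triage. The structure -- role,
# context, explicit task, constrained output format -- typically yields
# far more usable answers than an open-ended "is this suspicious?".
TRIAGE_PROMPT = """You are assisting a SOC analyst.

Context:
- Environment: corporate Windows estate, UK business hours
- Log excerpt:
{log_excerpt}

Task: classify the activity as benign, suspicious or malicious.

Respond with JSON only, using exactly these keys:
{{"verdict": "...", "confidence": 0.0, "rationale": "..."}}
"""

prompt = TRIAGE_PROMPT.format(
    log_excerpt="4625 failed logon x40 for 'svc-backup' from 10.0.8.14"
)
```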

Some security teams are also finding it difficult to assess the performance of AI tools and LLMs because of their lack of repeatability. The tools can return different outputs for exactly the same request, which makes validation difficult.
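
Most providers expose settings that reduce, though rarely eliminate, this variability. A minimal sketch, assuming the OpenAI Python SDK (the model name is illustrative, and other vendors offer similar parameters): it issues the same request several times at temperature zero with a fixed seed, then counts the distinct answers as a rough measure of repeatability:

```python
from collections import Counter
from openai import OpenAI  # assumes the OpenAI Python SDK; other vendors differ

client = OpenAI()

def sample_outputs(prompt: str, runs: int = 5) -> Counter:
    """Issue the same request several times with temperature=0 and a
    fixed seed, then count distinct answers. Determinism is still not
    guaranteed, but the spread indicates how repeatable the tool is."""
    answers = Counter()
    for _ in range(runs):
        resp = client.chat.completions.create(
            model="gpt-4o-mini",  # illustrative model name
            messages=[{"role": "user", "content": prompt}],
            temperature=0,
            seed=42,
        )
        answers[resp.choices[0].message.content.strip()] += 1
    return answers
```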

Cybercriminal Usage

As well as being embraced by organisations, AI tools and LLMs are also attracting increasing attention from cybercriminals. The ways in which they are being used are evolving rapidly and security teams need to understand the particular threats being generated.

One example, dubbed BlackMamba, involves the use of polymorphic malware. The code is judged to be benign when it first appears in an IT infrastructure; however, it has the ability to reach out to an LLM such as ChatGPT and dynamically generate new, malicious code, which is then introduced into the targeted infrastructure.
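
Because this class of malware must phone home to a hosted model, unexpected outbound traffic to LLM endpoints is itself a useful detection signal. A minimal sketch, assuming a CSV proxy log with host and process columns; the endpoint list and the process allow-list are both hypothetical and would be maintained from threat intelligence in practice:

```python
import csv

# Endpoints of popular hosted LLMs. Illustrative and far from complete.
LLM_API_HOSTS = {
    "api.openai.com",
    "api.anthropic.com",
    "generativelanguage.googleapis.com",
}

# Processes expected to call LLM APIs (hypothetical allow-list).
APPROVED_PROCESSES = {"approved-ai-gateway.exe"}

def flag_llm_egress(proxy_log_path: str):
    """Scan a proxy log (assumed CSV with 'host' and 'process' columns)
    for unexpected outbound calls to LLM endpoints."""
    with open(proxy_log_path, newline="") as fh:
        for row in csv.DictReader(fh):
            if row["host"] in LLM_API_HOSTS and row["process"] not in APPROVED_PROCESSES:
                yield row

for hit in flag_llm_egress("proxy.csv"):  # hypothetical log file
    print(f"ALERT: {hit['process']} contacted {hit['host']}")
```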

Another example, known as DeepLocker, makes use of AI capabilities to delay the launch of an attack until certain pre-defined conditions are met. This could be when an individual’s face or voice is recognised, or when a certain data-transfer sequence is initiated. At that point, the attack starts without warning.

A third example is the use of so-called Generative Adversarial Networks (GANs). These have been designed to present code to a detection tool used by a targeted organisation. If the tool rejects the code as malicious, the GAN adjusts the code and tries again, repeating the process until the code is accepted by the detection tool and can enter the targeted IT infrastructure.
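
The feedback loop at the heart of this technique is easy to picture. The toy below involves no malware at all, just a string and a deliberately naive substring "detector", but it shows why a purely static check can eventually be worn down by automated retries:

```python
import random
import string

SIGNATURE = "EVIL_MARKER"  # stand-in for a static detection signature

def toy_detector(payload: str) -> bool:
    """A deliberately naive 'detector' that only matches a fixed string."""
    return SIGNATURE in payload

def mutate(payload: str) -> str:
    """Randomly perturb one character -- a stand-in for the generator."""
    i = random.randrange(len(payload))
    return payload[:i] + random.choice(string.ascii_letters) + payload[i + 1:]

payload = f"header-{SIGNATURE}-footer"
attempts = 0
while toy_detector(payload):
    payload = mutate(payload)
    attempts += 1
print(f"evaded naive signature after {attempts} mutations")
```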

Achieving System-wide Resilience

Faced with these threats, security teams should aim for what can be termed system-wide resilience.

The first step in this strategy is already being taken by cloud providers, who are working hard to add AI capabilities and components across their platforms. This is making it easier for clients to run their own LLMs in a secure environment and generate valuable outputs.
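
For teams that cannot send data to a public endpoint at all, self-hosting has also become straightforward. A minimal sketch, assuming a model served locally through Ollama's HTTP API (the endpoint and model name are that tool's defaults, not a recommendation of any particular platform):

```python
import json
import urllib.request

def ask_local_llm(prompt: str, model: str = "llama3") -> str:
    """Query a model served inside the organisation's own environment
    (here via Ollama's local HTTP API), so prompts never leave the network."""
    body = json.dumps({"model": model, "prompt": prompt, "stream": False}).encode()
    req = urllib.request.Request(
        "http://localhost:11434/api/generate",
        data=body,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["response"]

print(ask_local_llm("Summarise the MITRE ATT&CK tactic 'lateral movement' in two sentences."))
```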

A second step is to use AI to automate as many processes as possible. Anywhere there is an operating script, AI-based automation can be introduced.
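
One low-risk pattern is to leave the existing script doing the deterministic work and layer the model on top, purely to draft the human-facing output. A sketch under the same local-model assumption as above; the script name is hypothetical:

```python
import json
import subprocess
import urllib.request

def summarise(text: str) -> str:
    """Draft a human-readable summary via a locally hosted model
    (Ollama's default endpoint; swap in whichever LLM the team has approved)."""
    body = json.dumps({
        "model": "llama3",
        "prompt": f"Summarise for a duty analyst:\n{text}",
        "stream": False,
    }).encode()
    req = urllib.request.Request(
        "http://localhost:11434/api/generate",
        data=body,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["response"]

# The existing script keeps doing the deterministic work unchanged;
# the model only drafts the report from its output.
result = subprocess.run(
    ["./check_failed_logins.sh"],  # hypothetical existing operational script
    capture_output=True, text=True, check=True,
)
print(summarise(result.stdout))
```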

Security teams can also take advantage of the growing number of AI-powered security tools that are being developed. Constantly evolving, these tools can add significant capabilities to an existing security infrastructure.

AI is delivering both challenges and opportunities to organisations. By understanding its capabilities as well as its limitations, they can maximise the business benefits achieved while minimising any associated risks.
