At this year’s Futurescot Digital Justice and Policing conference, I explored how artificial intelligence can help Scotland’s justice system tackle two linked pressures: growing backlogs and increased administrative load. With the surge in digital evidence requests, teams face more data to collect, review and summarise than ever before.

The question is no longer “can AI do useful work?” but “how do we deploy it safely, credibly and quickly to free capacity where it matters most?”

Knowledge vs. Action: Two types of AI that matter

To deploy AI well, we must first distinguish between two types:

  1. Knowledge-based AI: This boosts personal productivity. Think of Microsoft Copilot drawing on SharePoint content, general tools like ChatGPT, or task-specific tools for coding (Claude) and legal drafting (Harvey). These systems are improving rapidly in reasoning and planning. A recent study from Harvard Business School shows that AI acts as a “skills leveller”, narrowing the performance gap between higher- and lower-performing teams.
  2. Action-based AI (Agents): This is designed to do work, not just advise on it. Agents can follow policies, call internal systems, transform data, transcribe audio/video, extract entities, triage requests and generate standard documents. They can then decide the next step based on those results. This is a promising, though early, field for relieving routine tasks that slow progression of cases in the justice sector.
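
The “decide the next step based on those results” loop is the heart of an agent. As a minimal sketch, it can be written as a policy function over the current state. Everything here is illustrative: the request fields and action names are hypothetical, and a real agent would call out to transcription, extraction and document-generation services rather than return labels.

```python
from dataclasses import dataclass, field

@dataclass
class CaseRequest:
    """A hypothetical inbound request an agent might triage."""
    kind: str                        # e.g. "audio" or "document"
    complete: bool = True            # are the required details present?
    entities: list = field(default_factory=list)

def agent_step(request: CaseRequest) -> str:
    """Decide the next action from the current state, policy-style.

    Each branch would normally invoke an internal system and re-enter
    the loop with the updated state.
    """
    if not request.complete:
        return "request_missing_details"
    if request.kind == "audio":
        return "transcribe"
    if not request.entities:
        return "extract_entities"
    return "generate_standard_document"
```

Calling `agent_step(CaseRequest(kind="audio"))` yields `"transcribe"`; once entities have been extracted, the same request falls through to document generation.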

How we’re applying Action-based AI

At Storm ID, we focus on delivering these action-based solutions in three ways:

  • Custom AI Workflows built on public cloud providers like Azure.
  • AI Agents built with Microsoft’s low-code Copilot Studio for processes centred around Microsoft 365.
  • Private AI solutions, which run models and apps inside an organisation’s secure perimeter so sensitive data never leaves the network.

Our recent projects in healthcare and local government underscore the importance of explainability and service design. In customer-service settings, for example, agents can categorise inbound mail, gather missing details, route queries or steer people to self-service. These patterns map neatly to justice workflows.

Why Private AI is now a viable option

Private AI gives organisations full control over data, identity, logging and retention. This not only reduces the risk of data leakage but also offers predictable economics for high-volume work: a higher up-front investment in infrastructure, but a very low marginal cost per task. In this scenario, organisations can bring models to the data rather than pushing data to the models in the cloud.
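
The economics reduce to simple break-even arithmetic. The figures below are invented for illustration only, not real prices:

```python
def break_even_tasks(upfront_cost, cloud_cost_per_task, private_cost_per_task):
    """Task count at which self-hosting overtakes per-task cloud pricing.

    Solves: upfront + n * private_cost = n * cloud_cost, for n.
    """
    return upfront_cost / (cloud_cost_per_task - private_cost_per_task)

# Illustrative figures only: £50,000 of infrastructure, £0.05 per task
# in the cloud vs £0.002 marginal cost per task self-hosted.
n = break_even_tasks(50_000, cloud_cost_per_task=0.05, private_cost_per_task=0.002)
# n comes out at a little over a million tasks
```

Above that volume, each additional task costs almost nothing, which is exactly the profile of high-volume evidence processing.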

A concrete example: Storm AI Workbench

To make this tangible, we demonstrated Storm AI Workbench, a self-hosted, project-centric platform that runs entirely within an organisation’s own estate.

Teams create secure workspaces, bring together global knowledge (like legislation) and sensitive case files, and then use AI to query, summarise and produce outputs to defined standards.

In our policing demo, the AI Workbench analysed statements, interview transcripts and evidence logs of a specific case to:

  • Produce a concise executive summary with 24-hour time references.
  • Extract key people and roles (accused, complainer, witnesses) and build a source-linked timeline, flagging inconsistencies.
  • Draft a Standard Prosecution Report and assess evidence against legal points to prove, noting corroboration and disclosure cues.
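
The source-linked timeline with inconsistency flagging can be sketched in a few lines. The data shapes and example events are invented, not taken from the demo; a real pipeline would feed in entities extracted by a model rather than hand-written tuples.

```python
from collections import defaultdict

def build_timeline(events):
    """Order extracted events and flag cross-source inconsistencies.

    `events` holds (time, description, source) tuples, the shape a
    hypothetical extraction step might produce. The same description
    appearing with different times in different sources is flagged.
    """
    timeline = sorted(events)  # "HH:MM" strings sort chronologically
    times_by_event = defaultdict(set)
    for time, desc, _source in events:
        times_by_event[desc].add(time)
    flags = [d for d, times in times_by_event.items() if len(times) > 1]
    return timeline, flags

# Invented example data:
events = [
    ("21:40", "accused leaves bar", "witness A statement"),
    ("22:15", "incident reported", "call log"),
    ("21:55", "accused leaves bar", "witness B statement"),
]
timeline, flags = build_timeline(events)  # flags: ["accused leaves bar"]
```

Keeping the source alongside every event is what makes each timeline entry, and each flagged inconsistency, traceable back to a specific statement or log.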

Crucially, this process keeps a human in the loop, providing reasoning summaries, confidence scores and versioned outputs for review.
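
One simple way confidence scores can support that review step is to route outputs into review lanes, with nothing bypassing a human. A minimal sketch, with an invented threshold:

```python
def review_priority(confidence: float, threshold: float = 0.8) -> str:
    """Pick a review lane for an AI output; a human sees it either way.

    The 0.8 threshold is an illustrative placeholder, not a recommendation.
    """
    return "standard_review" if confidence >= threshold else "detailed_review"
```

Low-confidence outputs get closer scrutiny; high-confidence ones still pass through human review, just with a lighter touch.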

Practical guidance for adopting AI in justice

  1. Start narrow and high value. Choose contained admin tasks with measurable impact and clear human review points.
  2. Keep humans in the loop. Expose the “why” (the reasoning) behind an AI’s output, not just the “what” (the answer). This builds trust and aids verification.
  3. Be model agnostic. The landscape moves fast. Design your systems so that the underlying AI models can be swapped out as better ones emerge.
  4. Do the service design. New capabilities are useless without new playbooks, training and change management to support them.
  5. Increase autonomy as trust grows. We advise starting with low-impact, human-supervised tasks and expanding deliberately as both the technology and the team mature.
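
Point 3, being model agnostic, usually comes down to a thin interface between workflow code and any particular model. The class and method names below are hypothetical; a minimal Python sketch:

```python
from typing import Protocol

class TextModel(Protocol):
    """The only surface the workflow code is allowed to depend on."""
    def complete(self, prompt: str) -> str: ...

class LocalModel:
    """Stand-in for a self-hosted model. A cloud-backed class could
    implement the same method and drop in without workflow changes."""
    def complete(self, prompt: str) -> str:
        return f"[local] {prompt}"

def summarise(model: TextModel, document: str) -> str:
    # Depends only on the TextModel interface, so the underlying
    # model can be swapped as better ones emerge.
    return model.complete(f"Summarise: {document}")
```

Because `summarise` accepts anything satisfying the protocol, swapping models is a one-line change at the call site, not a rewrite of the workflow.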

Augmenting judgement, not replacing it

Applied well, AI won’t replace professional judgement. It will free up individuals and teams to work on higher-value tasks.

For justice organisations facing overwhelming caseloads with limited resources, the imperative to innovate is clear. AI provides the means to move beyond incremental gains, enabling fundamental improvements in the speed and quality of justice.

This technology can surface critical insights from mountains of digital evidence, automate complex disclosure and redaction to clear backlogs, and provide data-driven tools for sentencing and risk assessment. Whether you begin with public cloud tools or a Private AI stack for your most sensitive workloads, the starting line is here. This is the opportunity to deliver faster resolutions, more equitable outcomes, and a justice system built for the modern age.