The Scottish Government has confirmed that its staff are permitted to use the new generative AI platform DeepSeek – despite privacy and data concerns.

There are “no restrictions” on employees using the application, developed by the Chinese artificial intelligence company, and they are able to download it to government devices.

Ministers confirmed, however, that its policy is guided by the UK Government’s framework on AI, which sets out 10 principles for the safe and responsible use of the technology.

A Scottish Government spokesperson said: “The Scottish Government’s AI policy adheres to the framework published by the UK Government which sets out 10 principles to guide the safe, responsible and effective use of generative AI in government organisations.

“This includes compliance with the law including intellectual property, equalities implications, and fairness and data protection implications and security. It also includes the initial UK Government guidance on GenAI use which prohibits the use of classified or sensitive data in these tools.”

In a background briefing, the Scottish Government added that, in line with UK Government policy, there are currently ‘no restrictions’ on staff use of DeepSeek or on downloading the app on Scottish Government devices, although the mobile app cannot access official data, and that discussions on AI tools policy are ‘ongoing’ with UK Government cybersecurity teams and others.

Regarding the use of AI more broadly, beyond generative AI, the Scottish Government adheres to the principles set out in Scotland’s AI Strategy, which is committed to ‘trustworthy, ethical and inclusive AI’.

A Cabinet Office spokesperson added: “National security is our foremost priority. Most government devices are highly restricted, and we have robust rules in place for the use of apps, including keeping new technologies under constant review.”

However, it emerged this week that the Irish data protection watchdog has written to DeepSeek seeking privacy assurances, and the Australian science minister has also raised concerns about the new application, which sent US AI stock prices tumbling when it was released last week. The fall came after the company claimed to have built its technology at a fraction of the cost of industry-leading models such as OpenAI’s, because it requires fewer advanced chips.

Nvidia, the market-leading chip manufacturer, was particularly affected, losing almost $600bn (£482bn) of its market value on Monday – the biggest one-day loss in US stock market history.

In a detailed piece written by biometrics experts in the US, serious concerns were raised about DeepSeek, founded in December 2023 by Liang Wenfeng, and its ‘close ties’ to the Chinese Communist Party (CCP).

The article pointed to China’s ‘surveillance infrastructure and relaxed data privacy laws’, which ‘give it a significant advantage in training AI models like DeepSeek’.

As well as raising concerns about China’s aggressive pursuit of global AI dominance, and allegations of impropriety around business espionage, the article warned that the platform could be mobilised to carry out sophisticated cyberattacks.

Israeli threat intelligence firm Kela, which worked with Futurescot to uncover a large data dump of Scottish public sector information in 2021, warned this week that whilst very powerful, the platform – which uses the R1 reasoning model – is significantly more vulnerable than the similar ChatGPT.

In a blog post, the company said: “KELA’s AI Red Team was able to jailbreak the model across a wide range of scenarios, enabling it to generate malicious outputs, such as ransomware development, fabrication of sensitive content, and detailed instructions for creating toxins and explosive devices. To address these risks and prevent potential misuse, organizations must prioritize security over capabilities when they adopt GenAI applications. Employing robust security measures, such as advanced testing and evaluation solutions, is critical to ensuring applications remain secure, ethical, and reliable.”

The technical test also revealed that the so-called ‘evil jailbreak’ – a prompt technique that turns the platform into a malicious tool – could be applied to DeepSeek, whereas it has been patched in OpenAI’s technology.

When asked to write malware for cyber exploit tasks, the platform easily complied, with researchers finding: “The response also included additional suggestions, encouraging users to purchase stolen data on automated marketplaces such as Genesis or RussianMarket, which specialise in trading stolen login credentials extracted from computers compromised by infostealer malware.”

Censorship campaigners have also hit out at the platform’s apparent censorship of historical events such as the Tiananmen Square protests and massacre, as well as politically sensitive subjects such as the status of Taiwan.

According to DeepSeek’s own privacy policy, it collects large amounts of personal information from users, which is then stored “in secure servers” in China, including phone models, operating systems, IP addresses and keystroke patterns. Email addresses, phone numbers, dates of birth, and user text and audio inputs, as well as chat histories, are also stored.

DeepSeek’s Android app took the number one spot for downloads on the Google Play Store this week, days after the company’s chatbot app did the same on Apple’s App Store.