The UK government is inviting public input on a fresh set of voluntary guidelines for AI cybersecurity. The 'AI Cyber Security Code of Practice' will offer developers recommendations for safeguarding their AI products and services against hacking, sabotage, and tampering. Speaking at the CYBERUK conference, technology minister Saqib Bhatti said the new guidelines will serve as the foundation for a global standard in AI cybersecurity, helping to protect British businesses from cyber attacks.
Ensuring the Security of AI Systems
Bhatti emphasized the need for a secure environment that enables the digital economy to flourish. The new measures aim to build resilience against adversaries in from the design phase of AI models onwards. Drawing on the National Cyber Security Centre's (NCSC) guidelines for secure AI systems published last year, the Department for Science, Innovation and Technology (DSIT) and the NCSC have developed a code of practice for the development of cyber-secure AI systems.
Publication of the Draft AI Cyber Security Code of Practice
This publication comes at a time of mixed news for the UK cybersecurity landscape. Government figures indicate that the sector grew by 13% last year. However, half of all businesses and almost a third of charities experienced breaches during the same period.
Countering Emerging Cyber Threats
The increasing demand for generative AI among businesses is likely to open up new avenues for cybercriminals to launch attacks. GenAI systems are particularly vulnerable to data poisoning and model theft, warns Kevin Curran, a cybersecurity professor at Ulster University and a senior member of the Institute of Electrical and Electronics Engineers. A lack of accountability and transparency in how these systems work and reach their conclusions also poses challenges and exposes organizations to potential risks.
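To make the data-poisoning threat concrete: one basic mitigation (a sketch, not something prescribed by the code of practice) is to screen incoming training data for statistical outliers before it reaches the model, since crudely poisoned samples often sit far from the benign distribution. The data values and threshold below are illustrative assumptions.

```python
import statistics

def flag_outliers(values: list[float], z_threshold: float = 2.5) -> list[int]:
    """Return indices of values more than z_threshold standard
    deviations from the mean - candidates for manual review
    before the data is used for training."""
    mean = statistics.fmean(values)
    stdev = statistics.pstdev(values)
    if stdev == 0:  # all values identical: nothing to flag
        return []
    return [i for i, v in enumerate(values) if abs(v - mean) / stdev > z_threshold]

# Mostly benign measurements with one injected extreme value at index 9:
samples = [1.0, 1.1, 0.9, 1.05, 0.95, 1.02, 0.98, 1.03, 0.97, 50.0]
print(flag_outliers(samples))  # → [9]
```

Real defences are more involved (provenance checks, per-source rate limits, robust training), but the principle is the same: validate data before it can influence the model.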
Addressing these challenges, the new AI cybersecurity guidelines will provide businesses with a set of best practices and recommendations. Felicity Oswald, CEO of the NCSC, stated that these codes of practice will assist the cybersecurity industry in designing AI models and software that are secure and resilient against malicious attacks. Oswald emphasized that establishing security standards will enhance collective resilience, and commended organizations for adhering to these requirements to keep the UK safe online. While the call for public input remains open, companies dealing with AI applications can already take steps to enhance their security, as Curran recommended.
In addition to following the new guidelines, organizations should engage with data protection experts and stay updated on regulatory practices. This not only helps avoid legal issues but also fosters consumer trust by promoting ethical AI practices and data integrity. Other recommended best practices include minimizing and anonymizing data use, establishing data governance policies, securing data environments, providing staff with ongoing security training, and conducting regular impact assessments and audits.
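As a concrete illustration of the "minimize and anonymize data use" recommendation above, the sketch below pseudonymises a user record before it enters an AI pipeline: fields the model does not need are dropped, and the direct identifier is replaced with a salted hash. The field names and salt handling are assumptions for the example, not part of the code of practice.

```python
import hashlib

def pseudonymise(record: dict, salt: str) -> dict:
    """Drop fields the model does not need (data minimisation) and
    replace the direct identifier with a salted hash (pseudonymisation)."""
    keep = {"age_band", "region"}  # illustrative: only the fields the model uses
    out = {k: v for k, v in record.items() if k in keep}
    digest = hashlib.sha256((salt + record["email"]).encode()).hexdigest()
    out["user_ref"] = digest[:16]  # stable reference without exposing the email
    return out

record = {"email": "alice@example.com", "age_band": "30-39",
          "region": "NI", "postcode": "BT1 1AA"}
print(pseudonymise(record, salt="per-deployment-secret"))
```

In practice the salt would be a managed secret, and truly anonymous data would drop the reference entirely; salted hashing only pseudonymises, which still counts as personal data under UK GDPR.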
The call for public opinions on both codes of practice should be seen within the broader context of the Conservative government’s efforts on AI safety, as highlighted by the Minister for AI and Intellectual Property, Viscount Camrose. While the opposition Labour Party’s specific schemes have yet to be defined, shadow DSIT secretary Peter Kyle pledged that the party will unveil its views on AI in the coming weeks as part of a policy push leading up to the general election.