A Guide to the Government's Code of Practice for the Cyber Security of AI - Cobweb

A Guide to the Government’s Code of Practice for the Cyber Security of AI

Home » Content Hub » A Guide to the Government’s Code of Practice for the Cyber Security of AI

AI is evolving fast, and so are the security risks that come with it. Keeping AI systems secure isn’t just a job for developers – it’s a shared responsibility across IT teams, security professionals, and business leaders. That’s why the government has introduced a Code of Practice for the Cyber Security of AI – a practical guide to help organisations protect their AI systems from threats.

Whether you’re setting up AI tools, managing infrastructure, or overseeing compliance, this Code provides clear steps for IT departments to boost security at every stage for your business – from initial planning to monitoring and retirement.

For the full Code of Practice, please see here.


The Structure of the Code of Practice:

PrinciplePrinciple Description
1Raise awareness of AI security threats and risks
2Design your AI system for security as well as functionality and performance
3Evaluate the threats and manage the risks to your AI system
4Enable human responsibility for AI systems
5Identify, track and protect your assets
6Secure your infrastructure
7Secure your supply chain
8Document your data, models and prompts
9Conduct appropriate testing and evaluation
10Communication and processes associated with End-users and Affected Entities
11Maintain regular security updates, patches and mitigations
12Monitor your system’s behaviour
13Ensure proper data and model disposal

Secure Design

Principle 1: Raise Awareness of AI Security Threats and Risks

AI security threats are constantly evolving. IT leaders should make AI security a key part of regular cybersecurity training and updates. Whether it’s newsletters, internal briefings, or hands-on workshops, everyone – from security teams to decision-makers – needs to stay in the loop.

Principle 2: Design your AI system for security as well as functionality and performance

Security shouldn’t be an afterthought. Before rolling out AI systems, IT teams need to assess risks, involve key stakeholders, and plan for potential security challenges. If you’re using third-party AI tools, conduct a proper risk assessment before signing off.

Principle 3: Evaluate Threats and Manage Risks

Threats like data manipulation and model poisoning within AI are real. IT security teams should continuously evaluate risks and put safeguards in place. If certain risks fall under external vendors, make sure you have clear security agreements in place.

Principle 4: Enable Human Responsibility for AI Systems

AI can enhance efficiency, but humans should always have the final say in critical decisions. IT teams must ensure AI systems are transparent, with clear (human!) oversight mechanisms in place.


Secure Development

Principle 5: Identify, track and protect your assets

AI-related assets (like models, datasets, and APIs) should be logged, secured, and protected from unauthorised access. IT leaders need to ensure that disaster recovery plans specific to AI are in place as well as backup plans in case of data loss or security breaches.

Principle 6: Secure Your Infrastructure

AI systems are only as secure as the infrastructure they run on. IT teams should enforce strict access controls, separate test and production environments, and have a clear vulnerability disclosure policy and AI-specific incident response plans.

Principle 7: Secure Your Supply Chain

AI systems rely on third-party models, datasets, and software components, so if you’re using external AI models or datasets, do your homework. Vendors should meet your security standards, and IT teams should regularly review third-party tools for risks.

Principle 8: Document Your Data, Models, and Prompts

Maintaining a clear audit trail for AI systems is essential for security and accountability. IT teams should document AI data sources, security measures, and any changes to prompts or configurations to track potential risks.

Principle 9: Conduct Appropriate Testing and Evaluation

Before rolling out an AI system, run it through rigorous security testing. Independent security reviews should be part of the process, and any vulnerabilities found should be addressed before launch. It must be ensured that AI outputs do not unintentionally expose non-public data or allow users to manipulate system behaviour.


Secure Deployment

Principle 10: Communication and Processes for End-Users and Affected Entities

IT teams need to ensure that end-users understand how AI systems work, what data they collect, and how to use them safely. Clear communication helps build trust and ensures security best practices are followed.


Secure Maintenance

Principle 11: Maintain Regular Security Updates, Patches, and Mitigations

AI security doesn’t end at deployment. Regular updates and security patches must be applied, and IT teams should have a plan in place for handling vulnerabilities in legacy systems. Any major changes to your AI system should trigger a new security assessment.

Principle 12: Monitor Your System’s Behaviour

Ongoing monitoring helps detect threats early and so IT teams should track system logs and AI model performance to catch potential security issues before they escalate.


Secure End of Life

Principle 13: Ensure Proper Data and Model Disposal

When an AI system is no longer needed, IT teams must ensure that sensitive data and models are properly disposed of. If the system is being handed over to another team or vendor, security risks must be addressed beforehand.


Final Thoughts

AI security isn’t just an IT issue, it’s a shared responsibility that involves security teams, business leaders, and end-users. The Code of Practice for the Cyber Security of AI offers a solid framework to help organisations stay secure while adopting AI. By taking a proactive approach, IT teams can ensure AI systems are safe, compliant, and trustworthy in the long run.

If you need any help when it comes to implementing AI within your business, please do not hesitate to reach out to us.

Talk to us about AI