

Publish Date: 30/09/2025
Categories: Blogs, Hot Topic
This article was last reviewed and updated for accuracy in October 2025.
AI tools like Microsoft Copilot, ChatGPT, and Gemini are becoming part of everyday workflows in almost every business. They help us write faster, analyse data, and brainstorm ideas, but they also introduce new risks, especially when sensitive data is involved.
That’s why Microsoft has created Data Security Posture Management (DSPM) for AI.
DSPM for AI is a feature in Microsoft Purview that helps organisations monitor and manage how AI tools interact with sensitive data. Importantly, it's not about blocking AI; it's about making sure AI is used responsibly.
Think of it as a set of guardrails: it watches how AI tools are used in your organisation, flags risky behaviour (such as uploading highly sensitive data), and helps your teams respond before something goes wrong.
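The guardrails themselves are built into Purview, but the underlying idea is easy to illustrate. The following is a hypothetical Python sketch, not Microsoft's implementation and not any Purview API: a pre-submission check that flags patterns resembling sensitive data in an AI prompt. The pattern names and regexes are illustrative assumptions only.

```python
import re

# Hypothetical patterns standing in for the sensitive information
# types that a real DSPM/DLP policy would detect.
SENSITIVE_PATTERNS = {
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "uk_nino": re.compile(r"\b[A-CEGHJ-PR-TW-Z]{2}\d{6}[A-D]\b", re.IGNORECASE),
    "confidential_label": re.compile(r"\bconfidential\b", re.IGNORECASE),
}

def flag_prompt(prompt: str) -> list[str]:
    """Return the names of sensitive-data patterns found in a prompt."""
    return [name for name, pattern in SENSITIVE_PATTERNS.items()
            if pattern.search(prompt)]

findings = flag_prompt(
    "Summarise this CONFIDENTIAL report for card 4111 1111 1111 1111"
)
# A policy engine would then decide whether to warn, block, or audit.
```

In Purview this kind of matching is handled by built-in sensitive information types and sensitivity labels; the sketch only shows the shape of the guardrail: inspect the interaction, flag what looks sensitive, then decide how to respond.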
AI tools are incredibly helpful, but they don't always know what's sensitive or confidential. That's where it's up to organisations to take back control: when people use AI without clear guidance or proper safeguards, things can easily slip through the cracks.
DSPM for AI helps spot these moments and gives security teams the tools to act.
Here's how it works at a high level: DSPM for AI discovers AI activity across your environment, assesses the risk to any sensitive data involved, and recommends policies and controls to close the gaps.
To use DSPM for AI, your organisation needs a few foundations in place, typically appropriate Microsoft Purview licensing, auditing enabled, and the right admin permissions.
Once these pieces are in place, you can begin applying policies, reviewing AI activity, and tailoring controls to fit your organisation’s needs.
That said, it’s not always that straightforward. Setting up DLP policies, sensitivity labels, or insider risk rules can get complex, especially if you’re new to Microsoft Purview or AI governance.
If this sounds like something you're exploring, our team has worked with a wide range of organisations on AI readiness, from initial scoping and consultancy to hands-on setup and long-term planning. Whether you're just starting out or refining an existing setup, our Microsoft experts are here to support you.
AI isn’t going away anytime soon. And while it might be tempting to ignore the risks (or even ban AI tools altogether), the reality is that employees will still find ways to use them – often on personal devices, outside of company oversight. The benefits of AI are clear, but like any powerful tool, it needs structure and control.
Someone in our business summed it up perfectly: it’s controlled AI vs uncontrolled AI. And if you’re not actively managing how AI is used, you’re leaving the door open for data to be exposed or mishandled.
