Understanding DSPM for AI: What it is and why it matters

This article was last reviewed and updated for accuracy in October 2025.

AI tools like Microsoft Copilot, ChatGPT, and Gemini are becoming part of everyday workflows in every business. They help us write faster, analyse data, and brainstorm ideas, but AI has also introduced new risks, especially when sensitive data is involved.

That’s why Microsoft has created Data Security Posture Management (DSPM) for AI.

What is DSPM for AI?

DSPM for AI is a feature in Microsoft Purview that helps organisations monitor and manage how AI tools interact with sensitive data. Importantly, it’s not about blocking AI; it’s about making sure AI is used responsibly.

Think of it like a set of guardrails: it watches how AI tools are used in your organisation, flags risky behaviour (such as uploading highly sensitive data), and helps your teams respond before something goes wrong.

Why DSPM for AI is needed

AI tools are incredibly helpful, but they don’t always know what’s sensitive or confidential, so it falls to organisations to take back control. When people use AI without clear guidance or proper safeguards, things can easily slip through the cracks:

  • A user pastes customer data into a chatbot, not realising it could be used to train the AI’s underlying model.
  • An employee uses AI to summarise a confidential report.
  • Sensitive files get uploaded to third-party AI sites.

DSPM for AI helps spot these moments and gives security teams the tools to act.

What DSPM for AI actually does

Here’s a breakdown of how it works:

  • Tracks AI usage: It can show which AI tools are being used across the organisation, whether it’s Copilot in Word or ChatGPT in a browser.
  • Flags risky interactions: If someone shares sensitive data with an AI tool, DSPM can alert your IT team or even block the action.
  • Supports insider risk detection: It helps identify patterns that might suggest someone is misusing AI intentionally or carelessly.
  • Works with existing policies: It integrates with data loss prevention (DLP) and compliance tools already in place.

Some real-world examples where DSPM for AI comes in:

  • A marketing team uses Copilot to draft emails. DSPM ensures no customer data is accidentally included.
  • A developer pastes code into ChatGPT. DSPM checks if that code contains credentials or proprietary logic.
  • A finance analyst uploads a spreadsheet to an AI summariser. DSPM can block the upload if the file contains sensitive financials.
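To make the examples above concrete, here is a toy sketch of the kind of pattern-based check a DLP engine might run on content before it reaches an AI tool. This is purely illustrative: the pattern names and regexes are our own assumptions, not Microsoft Purview’s actual detection logic, which uses far more sophisticated classifiers and sensitive information types.

```python
import re

# Illustrative patterns only -- a real DLP engine (e.g. Microsoft Purview)
# uses much richer sensitive information types and trainable classifiers.
SENSITIVE_PATTERNS = {
    "api_key": re.compile(r"(?i)\b(?:api[_-]?key|secret)\s*[:=]\s*\S+"),
    "uk_ni_number": re.compile(r"\b[A-CEGHJ-PR-TW-Z]{2}\d{6}[A-D]\b"),
    "card_number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def scan_for_sensitive_data(text: str) -> list[str]:
    """Return the names of any sensitive-data patterns found in text."""
    return [name for name, pattern in SENSITIVE_PATTERNS.items()
            if pattern.search(text)]

# A prompt like the developer example above would be flagged before upload:
prompt = "Summarise this config: api_key = sk-12345, customer notes attached."
hits = scan_for_sensitive_data(prompt)
if hits:
    print(f"Blocked: prompt matched {hits}")
```

In a real deployment this kind of check runs inside the DLP policy engine and the browser extension, not in your own scripts, but the principle is the same: inspect content at the point of use and block or alert when it matches a sensitive pattern.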

How do I get started with DSPM for AI?

To use DSPM for AI, your organisation needs:

  • Microsoft Purview set up with auditing and DLP (available with the Purview add-on for Business Premium, Microsoft 365 E5 Compliance, or E5 licensing)
  • Devices onboarded for endpoint protection
  • The Microsoft Purview browser extension (for tracking third-party AI usage)

Once these pieces are in place, you can begin applying policies, reviewing AI activity, and tailoring controls to fit your organisation’s needs.

That said, it’s not always that straightforward. Setting up DLP policies, sensitivity labels, or insider risk rules can get complex, especially if you’re new to Microsoft Purview or AI governance.

If this sounds like something you’re exploring, our team has worked with a wide range of organisations on AI readiness, from initial scoping and consultancy to hands-on setup and long-term planning. Whether you’re just starting out or refining an existing setup, our Microsoft experts are here to support you.

Final Thoughts

AI isn’t going away anytime soon. And while it might be tempting to ignore the risks (or even ban AI tools altogether), the reality is that employees will still find ways to use them – often on personal devices, outside of company oversight. The benefits of AI are clear, but like any powerful tool, it needs structure and control.

Someone in our business summed it up perfectly: it’s controlled AI vs uncontrolled AI. And if you’re not actively managing how AI is used, you’re leaving the door open for data to be exposed or mishandled.