Part 3 of the AI Governance Series
“Most AI ‘governance’ tools ask questions. Wrong move. They should be building context: Who uses AI. Why. On what data. For which decisions. With what blast radius.”
Here’s what happens with most AI governance initiatives: someone creates a spreadsheet. They list AI tools. They ask departments to rate risk from 1-5. They compile results into a report.
Then nothing changes. Because the spreadsheet captured information, not context.
Context is the missing layer in AI governance. Without it, risk scores are fiction.
What Context Actually Means
Context isn’t just “what tool are you using?” It’s the complete picture:
Layer 1: Who
Who’s using AI in your organization? What’s their role? What decisions do they make? What data do they access? This isn’t about permission—it’s about understanding exposure.
Layer 2: What
What data is flowing into AI systems? Customer PII? Financial projections? Proprietary code? Strategic plans? The same AI tool has radically different risk profiles depending on the data it touches.
Layer 3: Why
Why are they using AI for this task? What problem are they solving? What decision will they make based on the output? Understanding intent shapes risk assessment.
Layer 4: Consequences
What happens if the AI is wrong? A typo in a marketing email? A miscalculation in a customer quote? A compliance failure in a regulatory filing? Same tool, vastly different blast radius.
Layer 5: Accountability
Who owns the outcome? Who reviews AI outputs before they go external? Who’s accountable if something goes wrong? Without ownership, there’s no governance.
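To make those five layers concrete, here's a minimal sketch of what a single context record could look like as a data structure. The field names, categories, and enum values are illustrative assumptions, not a prescribed schema:

```python
from dataclasses import dataclass
from enum import Enum


class Sensitivity(Enum):
    PUBLIC = 1
    INTERNAL = 2
    CONFIDENTIAL = 3
    REGULATED = 4            # customer PII, financials, legally protected data


class BlastRadius(Enum):
    COSMETIC = 1             # a typo in a marketing email
    COMMERCIAL = 2           # a miscalculated customer quote
    REGULATORY = 3           # a compliance failure in a regulatory filing


@dataclass
class AIUseContext:
    """One record per AI use case: all five layers in one place."""
    user_role: str                 # Layer 1: who is using it, in what role
    decisions_made: list[str]      # Layer 1: the decisions that role makes
    data_touched: Sensitivity      # Layer 2: what data flows into the AI
    purpose: str                   # Layer 3: why, and what decision it informs
    blast_radius: BlastRadius      # Layer 4: what happens if the AI is wrong
    owner: str                     # Layer 5: the accountable human
    reviewer: str | None = None    # Layer 5: who reviews outputs before use
```

A spreadsheet can hold the same fields. The point is that every AI use case gets all five layers recorded, not just a tool name and a 1-5 rating.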
Why Static Assessments Lie
Point-in-time risk assessments miss the reality of how AI is actually used.
Consider this scenario:
- Day 1: Marketing uses ChatGPT to brainstorm taglines. Low risk.
- Week 3: Sales starts using it to draft customer proposals. Medium risk.
- Month 2: Finance starts using it to analyze quarterly projections. High risk.
- Month 3: Someone integrates it with the CRM via API. Critical risk.
Your Q1 assessment said "low risk." Three months later, the reality is critical exposure. Static assessments don't capture evolution.
A significant share of AI use cases evolve beyond their original scope within 6 months
Source: McKinsey AI Survey 2024
How Context Changes Risk Scoring
Without context, risk assessment becomes a guessing game. With context, it becomes actionable.
Example 1: Customer Service Bot
- Without context: “Uses AI for customer communications. Risk: Medium.”
- With context: “Handles returns processing. Can access order history but not payment data. Human review required before refunds over $100. Escalation path defined. Risk: Low-Medium with controls.”
Example 2: HR Resume Screening
- Without context: “Uses AI to assist with hiring. Risk: Medium.”
- With context: “Initial screening of 500+ applications monthly. No human review before rejection. Training data unknown. No bias testing conducted. Affects legally protected decisions. Risk: Critical.”
Same general category. Completely different risk profiles. Context reveals what surface-level assessments hide.
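As a rough illustration of how those attributes drive the score, here's a toy scoring function built on the context record sketched earlier. The weights and thresholds are invented for this example; a real model would be calibrated to your own risk appetite:

```python
# Reuses AIUseContext, Sensitivity, and BlastRadius from the sketch above.

def score_risk(ctx: AIUseContext) -> str:
    """Toy scoring: the same tool lands in different tiers depending on context."""
    score = ctx.data_touched.value + ctx.blast_radius.value
    if ctx.reviewer is None:
        score += 2                               # no human review before use
    if ctx.blast_radius is BlastRadius.REGULATORY and ctx.reviewer is None:
        return "Critical"                        # unreviewed, legally significant decisions
    if score <= 3:
        return "Low"
    if score <= 5:
        return "Medium"
    return "High"


# The HR screening example: regulated decisions, no human review before rejection.
hr_screening = AIUseContext(
    user_role="Recruiter",
    decisions_made=["reject applications"],
    data_touched=Sensitivity.REGULATED,
    purpose="Initial screening of 500+ applications per month",
    blast_radius=BlastRadius.REGULATORY,
    owner="Head of Talent Acquisition",
    reviewer=None,
)
print(score_risk(hr_screening))                  # -> Critical
```

Swap in a reviewer and a narrower data scope and the same function returns a lower tier, which is exactly the point: the score follows the context, not the tool.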
How Auditors Think
Auditors don’t ask technical questions. They ask responsibility questions.
“Who approved this AI use case? Who reviewed the output? Who owns failure?” That’s context. Not prompts. Not model parameters. Responsibility and accountability.
When an auditor examines your AI governance, they want to see:
- Clear ownership. Every AI use case has an accountable human.
- Defined scope. What the AI is approved to do—and what it isn’t.
- Review processes. How outputs are validated before use.
- Evidence trail. Proof that oversight is actually happening.
- Incident response. What happens when AI produces harmful outputs.
A governance framework without context can’t answer these questions. It’s paperwork, not governance.
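Of those five, the evidence trail is the one most organizations struggle to produce on demand. As a rough sketch (the field names are assumptions, not a standard), a single entry can be as simple as a timestamped record of one review:

```python
from datetime import datetime, timezone

# One evidence-trail entry: proof that a review actually happened,
# not just that a policy says it should.
review_event = {
    "use_case": "customer-service-bot / returns processing",
    "owner": "Director of Support",              # the accountable human
    "output_reviewed": "refund request over the $100 threshold",
    "reviewer": "Tier-2 support lead",
    "decision": "approved",
    "reviewed_at": datetime.now(timezone.utc).isoformat(),
}
print(review_event["reviewed_at"])               # e.g. 2025-01-15T14:02:11+00:00
```

Logged consistently, entries like this are what an auditor can sample to verify that oversight is happening, not just documented.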
Building a Context Engine
Effective AI governance requires a system that builds and maintains context continuously. Here’s what that looks like:
Context Engine Components
- Discovery: Continuous detection of AI tool usage across the organization
- Classification: Automatic categorization by data sensitivity, decision impact, and use case
- Mapping: Connection between AI usage and business processes, data flows, and accountability chains
- Monitoring: Ongoing tracking of how context evolves over time
- Alerting: Notification when context changes in risk-relevant ways
This isn’t a project you complete. It’s a capability you build and maintain.
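As a minimal sketch of the monitoring and alerting pieces, assume the engine keeps the last known context for each use case and compares it with what it observes next. The field names and the set of "risk-relevant" fields are assumptions for illustration:

```python
# Flag only the context changes that move risk, so review effort goes
# where the exposure actually changed.
RISK_FIELDS = {"data_touched", "blast_radius", "reviewer", "integrations"}


def risk_relevant_changes(previous: dict, current: dict) -> dict:
    """Return the context fields whose values changed in a risk-relevant way."""
    return {
        field: (previous.get(field), current.get(field))
        for field in RISK_FIELDS
        if previous.get(field) != current.get(field)
    }


# The drift from "Why Static Assessments Lie", as the engine would see it:
before = {"use_case": "ChatGPT", "data_touched": "brainstorming", "integrations": []}
after = {"use_case": "ChatGPT", "data_touched": "quarterly projections",
         "integrations": ["CRM API"]}

changes = risk_relevant_changes(before, after)
if changes:
    print(f"Re-assess: context changed in {sorted(changes)}")
    # -> Re-assess: context changed in ['data_touched', 'integrations']
```

That's the difference between discovering the CRM integration during an audit and being alerted to it the week it appears.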
Context vs. Control Lists
Many organizations approach AI governance with control lists:
- Approved tools: ChatGPT Enterprise, Microsoft Copilot
- Blocked tools: Claude, Gemini, everything else
- Required training: Annual AI awareness module
Control lists feel like governance. They’re not.
Control lists fail because:
- They assume tool = risk. The risk isn’t in the tool—it’s in how it’s used.
- They’re binary. Approved/blocked doesn’t capture nuance. ChatGPT for brainstorming is fine. ChatGPT for legal advice is not.
- They lag reality. New tools appear weekly. Your list is always outdated.
- They ignore behavior. Having an approved tool doesn’t mean it’s used appropriately.
Context-based governance asks: “How is this tool being used in this specific case?” not “Is this tool on our approved list?”
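The difference shows up clearly in code. A static allowlist can only answer one question; a context-aware check asks how the tool is being used in this specific case. The tool names and purpose rules below are illustrative assumptions:

```python
APPROVED_TOOLS = {"ChatGPT Enterprise", "Microsoft Copilot"}


def allowlist_decision(tool: str) -> bool:
    """Control-list governance: the only question is which tool it is."""
    return tool in APPROVED_TOOLS


def context_decision(tool: str, purpose: str, reviewed_by_human: bool) -> str:
    """Context-based governance: the question is how the tool is used here."""
    if not allowlist_decision(tool):
        return "escalate"                # unknown tool: build context before deciding
    if purpose in {"legal advice", "regulatory filing"} and not reviewed_by_human:
        return "block"                   # approved tool, unacceptable use
    return "allow"


print(allowlist_decision("ChatGPT Enterprise"))                              # True
print(context_decision("ChatGPT Enterprise", "brainstorm taglines", False))  # allow
print(context_decision("ChatGPT Enterprise", "legal advice", False))         # block
```

Same approved tool, different answers, because the decision hinges on usage rather than on the list.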
What Context Enables
When you have rich context, governance becomes dynamic:
- Risk scores that mean something. Instead of generic ratings, you get specific assessments based on actual usage patterns.
- Targeted interventions. Instead of blanket policies, you can address specific high-risk use cases.
- Audit readiness. When auditors ask questions, you have answers backed by evidence.
- Efficient oversight. Human review can focus on high-risk scenarios instead of reviewing everything.
- Compliance automation. Context feeds automated compliance checks that actually understand what’s happening.
The MSP Context Advantage
MSPs are uniquely positioned to provide context for their clients.
You already have:
- Visibility into network traffic and application usage
- Access to endpoint telemetry
- Knowledge of business processes and data flows
- Relationships across the organization
That visibility lets MSPs build context engines their clients can't build on their own. It's not just about blocking tools; it's about understanding usage in context and providing meaningful governance.
What Comes Next
Context without action is just surveillance. In Part 4, we’ll explore the AI Policy Pack—how to create documentation that actually drives behavior, generates evidence, and survives audits.
AI Governance Series
Part 3 of 9 | Previous: ← Why AI Governance Is Different | Next: The AI Policy Pack →