Part 5 of the AI Governance Series
“Everyone wants AI governance done in a week. That’s how it fails. You don’t rush visibility, alignment, adoption, assessment. You sequence it.”
Here’s the problem with AI governance initiatives: they either try to do everything at once and collapse under their own weight, or they start with policies nobody reads and wonder why nothing changes.
The AI 90-Day Playbook is different. It’s a phased approach that builds governance systematically—starting with what you can see, progressing to what you can control, and ending with what you can prove.
Why 90 Days?
Governance can’t be rushed. But it can’t take forever either. 90 days is the sweet spot:
- Long enough to build real capability, not just documentation
- Short enough to maintain executive attention and urgency
- Structured enough to show measurable progress
- Flexible enough to adapt to what you discover along the way
At the end of 90 days, you should have operational AI governance—not perfect, but functional and improvable.
Phase 1: Days 1-30 — Visibility
Goal: Understand what’s actually happening with AI in your organization.
You can’t govern what you can’t see. Phase 1 is about discovery—finding out where AI is being used, by whom, and for what.
Week 1-2: Discovery
- Deploy AI discovery tools or leverage existing CASB/endpoint visibility
- Survey departments on AI tool usage (with amnesty for honest disclosure)
- Identify known AI integrations in business applications
- Review recent software purchases and subscriptions for AI tools
- Check browser extension inventories for AI assistants
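As a rough sketch of the subscription-review step above, a short script can flag likely AI tools in a spend or SaaS export. The column names (`vendor`, `product`) and the keyword list are illustrative assumptions; adapt them to whatever your finance system or CASB actually exports.

```python
# Sketch: flag likely AI tools in a software subscription export.
# Column names and the keyword list are assumptions -- adjust to
# your real expense/SaaS-management export.
import csv
import io

AI_KEYWORDS = {"openai", "chatgpt", "claude", "anthropic", "copilot",
               "gemini", "midjourney", "jasper"}

def flag_ai_subscriptions(csv_text: str) -> list[dict]:
    """Return rows whose vendor or product mentions a known AI keyword."""
    flagged = []
    for row in csv.DictReader(io.StringIO(csv_text)):
        haystack = f"{row.get('vendor', '')} {row.get('product', '')}".lower()
        if any(kw in haystack for kw in AI_KEYWORDS):
            flagged.append(row)
    return flagged

sample = """vendor,product,owner
OpenAI,ChatGPT Team,Marketing
Atlassian,Jira,Engineering
Anthropic,Claude for Work,Sales
"""
print([r["vendor"] for r in flag_ai_subscriptions(sample)])  # ['OpenAI', 'Anthropic']
```

A keyword match is only a starting signal—follow up with the owning department rather than treating every hit as a violation.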
Week 3-4: Assessment
- Catalog discovered AI tools and use cases
- Classify initial risk levels (data sensitivity, decision impact)
- Identify high-risk use cases requiring immediate attention
- Document current state gaps in visibility
- Brief leadership on findings
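The initial risk classification can be as simple as scoring the two axes named above—data sensitivity and decision impact—and mapping the product to a tier. The tier names and thresholds below are illustrative assumptions, not a standard; calibrate them to your own risk appetite.

```python
# Sketch: coarse initial risk tiering for a discovered AI use case.
# Axes come from the assessment step; thresholds are assumptions.

SENSITIVITY = {"public": 1, "internal": 2, "confidential": 3, "regulated": 4}
IMPACT = {"informational": 1, "advisory": 2, "operational": 3, "consequential": 4}

def classify_risk(data_sensitivity: str, decision_impact: str) -> str:
    """Combine the two axes into a coarse risk tier."""
    score = SENSITIVITY[data_sensitivity] * IMPACT[decision_impact]
    if score >= 9:
        return "high"    # goes on the immediate-attention list
    if score >= 4:
        return "medium"
    return "low"

print(classify_risk("regulated", "consequential"))  # high
print(classify_risk("internal", "advisory"))        # medium
```

Even a crude tiering like this is enough to prioritize the high-risk use cases for Phase 2.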
Phase 1 Deliverables:
- AI tool inventory with risk classifications
- Use case catalog with data flow documentation
- Gap analysis report
- Executive briefing on current state
Why Phase 1 is critical: the discovery findings are what align leadership, and if leadership isn’t aligned in the first month, everything else is noise. No buy-in. No ownership. No enforcement. The visibility phase creates the burning platform.
Phase 2: Days 31-60 — Control
Goal: Implement foundational controls and policies.
Now that you know what’s happening, you can start shaping behavior. Phase 2 builds the governance framework.
Week 5-6: Policy Foundation
- Draft AI Acceptable Use Policy
- Define AI risk classification framework
- Establish AI tool approval process
- Create data handling requirements for AI use
- Get executive sign-off on policies
Week 7-8: Implementation
- Communicate policies to organization
- Deploy approved AI tools with proper configuration
- Implement monitoring for policy compliance
- Establish exception request process
- Launch AI awareness training
Phase 2 Deliverables:
- Approved AI policy set
- Sanctioned AI tool list with configurations
- Monitoring dashboards
- Training completion records
- Exception process documentation
> …% of governance programs fail in the first 60 days due to lack of stakeholder engagement (Source: Gartner Governance Research 2024)
Phase 3: Days 61-90 — Evidence
Goal: Establish evidence collection and continuous improvement.
Governance that can’t prove itself isn’t governance. Phase 3 builds the evidence trail and assessment capabilities.
Week 9-10: Evidence Systems
- Implement automated evidence collection
- Establish policy attestation workflows
- Configure audit trail logging
- Create compliance reporting dashboards
- Document evidence retention requirements
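To make the audit-trail idea concrete, here is a minimal sketch of a tamper-evident evidence log that hash-chains each record to the previous one. A real program would use a GRC platform or append-only log store; this only illustrates why chained records make silent edits detectable.

```python
# Sketch: hash-chained evidence records. Each entry commits to the
# previous entry's hash, so editing history breaks verification.
import hashlib
import json

def append_evidence(chain: list[dict], event: dict) -> list[dict]:
    """Append an event, linking it to the previous record's hash."""
    prev_hash = chain[-1]["hash"] if chain else "genesis"
    record = {"event": event, "prev": prev_hash}
    record["hash"] = hashlib.sha256(
        json.dumps(record, sort_keys=True).encode()
    ).hexdigest()
    chain.append(record)
    return chain

def verify(chain: list[dict]) -> bool:
    """Recompute every hash and link; False if anything was altered."""
    prev = "genesis"
    for rec in chain:
        body = {"event": rec["event"], "prev": rec["prev"]}
        expected = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()
        ).hexdigest()
        if rec["prev"] != prev or rec["hash"] != expected:
            return False
        prev = rec["hash"]
    return True

chain: list[dict] = []
append_evidence(chain, {"type": "policy_attestation", "user": "alice"})
append_evidence(chain, {"type": "tool_approval", "tool": "example-assistant"})
print(verify(chain))                    # True
chain[0]["event"]["user"] = "mallory"   # tampering breaks the chain
print(verify(chain))                    # False
```

The point isn’t the cryptography—it’s that evidence collected from Day 1 should be trustworthy when an auditor asks for it on Day 90.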
Week 11-12: Assessment & Improvement
- Conduct initial AI risk assessment
- Review policy effectiveness (violations, exceptions, feedback)
- Identify improvement opportunities
- Plan next quarter’s governance roadmap
- Present program status to leadership
Phase 3 Deliverables:
- Automated evidence collection system
- AI risk assessment report
- Compliance metrics dashboard
- Governance program roadmap
- Executive status report
Why Big-Bang Rollouts Fail
The temptation is to launch everything at once. New policies, new tools, new training, new monitoring—all on Day 1.
Here’s why that fails:
- Change fatigue. Organizations can only absorb so much change. Overwhelming people guarantees resistance.
- Discovery gets skipped. You implement policies for problems you assume exist, not problems you’ve actually identified.
- No feedback loops. Without phased rollout, you can’t learn and adjust.
- Evidence comes last. By the time you think about proof, you’ve missed months of evidence collection.
Phased governance builds capability incrementally. Each phase creates the foundation for the next.
The Trust-Building Element
AI governance can feel like surveillance if handled wrong. The 90-day approach builds trust:
- Phase 1 amnesty: Employees can disclose AI usage without fear of punishment
- Phase 2 enablement: You’re providing approved tools, not just blocking unapproved ones
- Phase 3 transparency: People see that governance is about evidence and improvement, not punishment
Make governance feel like surveillance and your employees will route around you. I’ve watched it happen. Someone announces “no AI tools” and three weeks later the entire sales team is using Claude on personal phones. You didn’t stop the risk – you just stopped seeing it.
Weekly Checkpoints
Each week should have a brief checkpoint:
Weekly Governance Checkpoint
- What did we discover this week?
- What blockers are we facing?
- Are we on track for phase deliverables?
- What decisions need escalation?
- What’s the plan for next week?
These don’t need to be long meetings—15-30 minutes keeps momentum without creating meeting fatigue.
Success Metrics by Phase
How do you know each phase is working?
Phase 1 Success:
- AI tool inventory is 80%+ complete
- Leadership understands current state and risks
- High-risk use cases identified and prioritized
Phase 2 Success:
- Core policies approved and communicated
- Training completion rate above 90%
- Approved AI tools deployed with proper configuration
- Monitoring producing actionable data
Phase 3 Success:
- Evidence collection is automated
- Initial risk assessment complete
- Compliance metrics are being tracked
- Improvement roadmap exists and is resourced
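The numeric gates above (80% inventory completeness, 90% training completion) can be turned into a simple automated check. The thresholds come from this section; the input shape is an assumption—feed it from your inventory and LMS exports.

```python
# Sketch: checkable phase gates using this section's thresholds.
# Input counts are assumptions; source them from real exports.

def phase_gates(inventoried: int, estimated_total: int,
                trained: int, headcount: int) -> dict:
    """Evaluate the quantitative Phase 1 and Phase 2 success gates."""
    completeness = inventoried / estimated_total
    completion = trained / headcount
    return {
        "phase1_inventory_ok": completeness >= 0.80,
        "phase2_training_ok": completion >= 0.90,
    }

print(phase_gates(inventoried=42, estimated_total=50, trained=460, headcount=500))
# {'phase1_inventory_ok': True, 'phase2_training_ok': True}
```

Report these alongside the qualitative gates (leadership alignment, actionable monitoring) rather than instead of them.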
Common Pitfalls to Avoid
- Skipping discovery. Policies without visibility are guesswork.
- Perfect is the enemy of good. Ship 80% policies and iterate rather than waiting for a flawless version.
- No executive sponsor. Without C-level backing, governance dies in committees.
- Blocking without enabling. If you ban unapproved tools, provide approved alternatives.
- Forgetting evidence. Start collecting from Day 1, not Day 90.
What Comes After 90 Days
Day 90 isn’t the end—it’s the end of the beginning. You’ve built foundational governance. Now you:
- Expand coverage to additional use cases
- Deepen controls in high-risk areas
- Mature your evidence collection
- Prepare for audits and assessments
- Continuously improve based on metrics
Governance is a program, not a project. The 90-day playbook gets you operational. What happens next determines whether governance becomes embedded or evaporates.
AI Governance Series
Part 5 of 9 | Previous: ← The AI Policy Pack | Next: MSP as AI Governance Partner →