The Assessment Blueprint: From Capability Gaps to Transformation Actions
Part 2 of 3 — Capability-First & Outcome-Driven Enterprise Roadmaps
- Part 1: Why Most Enterprise Roadmaps Fail Before They Start
- Part 2: The Assessment Blueprint: From Capability Gaps to Transformation Actions (you are here)
- Part 3: What This Looks Like in Practice: Banking and Insurance Simulations
From “Why” to “How”
Part 1 made the case for why most enterprise roadmaps fail — they start with technology and work backwards, instead of starting with business capabilities. This is the how.
That was the philosophy. Now for the mechanics.
This article lays out the assessment methodology that turns a capability map into a transformation roadmap. Not a 7-step framework designed for a consulting deck. A practical approach you can run in a discovery session with real stakeholders, producing real decisions by the end of the day.
I’ve built tooling around this methodology, and the core of it comes down to two things: how you assess capabilities and what you decide to do about them.
The Assessment Framework: Three Dimensions
Most maturity models fail in practice because they’re too abstract. A 5-level maturity scale (Initial, Repeatable, Defined, Managed, Optimizing) sounds rigorous on paper. In a room with a CTO and business leaders, it generates more debate about definitions than actual decisions.
What works instead are three concrete dimensions that anyone in the room can assess:
1/ Business Criticality
How important is this capability to the business? Not to IT. To the business.
| Rating | Meaning |
|---|---|
| High | Core to revenue, customer experience, or regulatory compliance. If this breaks, the business breaks. |
| Medium | Important but not existential. Degraded performance hurts but doesn’t stop the business. |
| Low | Supporting function. Needed but not a differentiator. |
Business criticality is set by business stakeholders, not technology teams. This is non-negotiable. If IT defines what’s critical, you end up with infrastructure components rated “High” and customer-facing capabilities rated “Medium.” That’s the IT-first trap again.
2/ Satisfaction
How satisfied are stakeholders with the current state of this capability? Rate it 1 to 5.
| Score | Signal |
|---|---|
| 1 | Broken. Manual workarounds everywhere. People actively complaining. |
| 2 | Functional but painful. Gets the job done with significant friction. |
| 3 | Adequate. Works, but not a source of competitive advantage. |
| 4 | Good. Minor improvements possible but not urgent. |
| 5 | Excellent. Best-in-class, stakeholders are happy. |
Satisfaction comes from the people who live with the capability daily, not from the team that built it. A development team might rate their own system a 4. The business users who deal with its limitations every day might rate it a 2. Trust the users.
3/ Modernization Priority
Given the business strategy and the capability’s current state, how urgently does it need to change?
| Rating | Meaning |
|---|---|
| High | Must be transformed in the next planning cycle. Business strategy depends on it. |
| Medium | Should be improved, but not blocking strategic initiatives today. |
| Low | Stable enough. Other capabilities need attention first. |
Modernization priority is a function of the first two dimensions plus strategic context. A capability that’s high criticality and low satisfaction is almost always high priority. But a medium-criticality capability with satisfaction of 2 might also be high priority if it’s blocking a key strategic initiative.
How These Three Dimensions Work Together
The power of this framework is in the combinations. Here’s how you read the assessment:
| Criticality | Satisfaction | Priority | What It Means |
|---|---|---|---|
| High | Low (1-2) | High | Red alert. Critical capability in poor shape. Top of the roadmap. |
| High | Mid (3) | High | Strategic gap. Works today but won’t scale. Transform before it becomes a crisis. |
| High | High (4-5) | Low | Protect. Working well, keep it that way. Don’t break what works. |
| Medium | Low (1-2) | Medium-High | Tactical fix. Not existential but causing real friction. Quick wins live here. |
| Low | Low (1-2) | Low | Accept or retire. Not critical enough to invest in fixing. |
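The way the matrix above gets read can be sketched as a small lookup. This is an illustrative Python sketch, not part of any tooling; the function name is mine, the Medium-criticality defaults for satisfaction 3+ are an assumption (the table doesn't cover them), and strategic context can always override the suggestion, as noted earlier.

```python
def suggest_priority(criticality: str, satisfaction: int) -> str:
    """Suggest a modernization priority from criticality and satisfaction,
    following the combinations table. Strategic context can override this."""
    if criticality == "High":
        # Critical capability: anything short of high satisfaction is urgent.
        return "Low" if satisfaction >= 4 else "High"
    if criticality == "Medium":
        # Tactical fixes: real friction, not existential. Satisfaction 3+
        # defaulting to "Medium" is an assumption, not from the table.
        return "Medium-High" if satisfaction <= 2 else "Medium"
    # Low criticality: accept or retire; other capabilities come first.
    return "Low"

print(suggest_priority("High", 2))    # Red alert
print(suggest_priority("High", 5))    # Protect
print(suggest_priority("Medium", 1))  # Quick wins live here
```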
This is what actually happens in a discovery session. You walk through the capability map, domain by domain, and assess each one across these three dimensions. In a well-run session with the right stakeholders — and with the capability map and pre-read distributed in advance — you can assess 15-20 capabilities in a half day and walk out with a clear picture of where the gaps are.
```mermaid
flowchart TD
A["Capability Map<br/>(L1 → L2 → L3)"] --> B["Assess Each Capability"]
B --> C["Business Criticality<br/>High / Medium / Low"]
B --> D["Satisfaction<br/>1-5 Scale"]
B --> E["Modernization Priority<br/>High / Medium / Low"]
C --> F["Assessment Matrix"]
D --> F
E --> F
F --> G["Prioritized Gaps →<br/>Transformation Actions"]
style A fill:#14532d,color:#e7e5e4
style G fill:#14532d,color:#e7e5e4
```
I’ve watched teams spend weeks building elaborate maturity assessments with dozens of sub-dimensions and weighted scoring models. They produce beautiful spreadsheets that no one acts on. Three dimensions. Three questions per capability. That’s what produces decisions.
What Sits Behind Each Capability: Applications and Pain Points
The assessment doesn’t stop at the three dimensions. Each capability is backed by actual systems, and understanding what’s running underneath is critical to deciding what to do next.
For every assessed capability, you document:
The application landscape. What systems currently support this capability? Is it a custom-built legacy system, a packaged vendor product, a SaaS platform, or does nothing exist at all? What’s the hosting model: on-premises, cloud, hybrid? What’s the maturity: legacy, adequate, modern?
The pain points. What’s actually broken? Not in technology terms, but in business terms. Customers abandon applications because identity verification takes 3 days. Claims processing takes 2-3 weeks because the FNOL system doesn’t integrate with the investigation workflow. Loan approval requires manual review even for standard products.
This layer of detail is what separates a useful assessment from an abstract one. You’re not just saying “Loan Origination has satisfaction of 2.” You’re saying “Loan Origination has satisfaction of 2 because the credit decisioning system is a 10-15 year-old rules-based engine running on-premises, manual underwriting is required for the majority of applications, and the average approval time is 5 days when digital lenders and fintechs offer near-instant decisions.”
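One way to capture this layer of detail per capability is a simple structured record. A minimal sketch, assuming hypothetical field names and the Loan Origination example above; this is illustrative, not a schema from the methodology's tooling.

```python
from dataclasses import dataclass, field

@dataclass
class Application:
    """One system backing a capability."""
    name: str
    sourcing: str   # "custom", "vendor package", "SaaS", or "none"
    hosting: str    # "on-premises", "cloud", "hybrid"
    maturity: str   # "legacy", "adequate", "modern"

@dataclass
class CapabilityAssessment:
    """A capability with its three-dimension assessment plus what sits behind it."""
    capability: str
    criticality: str                # High / Medium / Low, set by the business
    satisfaction: int               # 1-5, rated by the users, not the builders
    priority: str                   # High / Medium / Low
    applications: list[Application] = field(default_factory=list)
    pain_points: list[str] = field(default_factory=list)  # in business terms

# The Loan Origination example from the text, as a record:
loan_origination = CapabilityAssessment(
    capability="Loan Origination",
    criticality="High",
    satisfaction=2,
    priority="High",
    applications=[
        Application("Credit Decisioning", "custom", "on-premises", "legacy"),
    ],
    pain_points=[
        "Manual underwriting required for the majority of applications",
        "Average approval time of 5 days vs near-instant digital lenders",
    ],
)
```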
That specificity drives the next step: deciding what to do about it.
Transformation Actions: The Decision Vocabulary
Once capabilities are assessed, the next question is: what do you do about the gaps? This is where most roadmapping exercises fall apart. They jump straight from “this is broken” to “let’s build something new,” without considering the full range of options.
I use a structured vocabulary of transformation actions. Each one represents a distinct approach to closing a capability gap:
| Action | When to Use It | Example |
|---|---|---|
| Migrate | The capability works, but the platform doesn’t. Move to modern infrastructure without changing core business logic. | Core banking system from on-premises to cloud. Replatform onto managed services. |
| Modernize | The capability needs significant improvement. Re-architect for new requirements. | Legacy claims system re-built as cloud-native with API-first design. |
| Build New | No system supports this capability today. Greenfield development or SaaS acquisition. | AI-powered personal finance management, no existing system to evolve. |
| Optimize | The current solution works but could perform better. Incremental improvement, not replacement. | Existing fraud detection tuned for better accuracy and lower false positives. |
| Retire | The capability is no longer needed, or has been absorbed by another system. Decommission. | Legacy batch reporting replaced by real-time analytics dashboard. |
| Retain | Working well, no change needed. Protect the investment. | Mobile banking app rebuilt last year, high satisfaction, keep as-is. |
If you’ve encountered Gartner’s TIME model (Tolerate, Invest, Migrate, Eliminate) or AWS’s 6Rs/7Rs (Rehost, Replatform, Refactor, Repurchase, Retire, Retain, Relocate), this vocabulary will feel familiar. I’ve simplified it to match how decisions actually get made in transformation planning. The distinction between “Rehost” and “Replatform” matters at the implementation level. At the roadmap level, both are “Migrate,” and the transition strategy details come later.
Transition Strategies
Each action type maps to specific transition strategies — the bridge between the roadmap decision and the execution plan:
- Migrate → Replatform (managed services), Rehost (lift-and-shift), Containerize
- Modernize → Rearchitect (cloud-native rebuild), Replatform (modern stack, same integrations)
- Build New → Greenfield development, SaaS acquisition, Build-vs-buy evaluation
- Optimize → Optimize in place, Performance tuning, Process improvement
These tell the delivery team how to approach the transformation, not just what to transform. The detail comes later — at the roadmap level, the action type is what matters.
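The action-to-strategy mapping above can be encoded directly as a lookup. A minimal sketch mirroring the bullets; the names come from this article, not a standard taxonomy, and Retire/Retain are omitted because they carry no transition strategy.

```python
# Transformation action -> candidate transition strategies,
# as listed in the bullets above.
TRANSITION_STRATEGIES: dict[str, list[str]] = {
    "Migrate": [
        "Replatform (managed services)",
        "Rehost (lift-and-shift)",
        "Containerize",
    ],
    "Modernize": [
        "Rearchitect (cloud-native rebuild)",
        "Replatform (modern stack, same integrations)",
    ],
    "Build New": [
        "Greenfield development",
        "SaaS acquisition",
        "Build-vs-buy evaluation",
    ],
    "Optimize": [
        "Optimize in place",
        "Performance tuning",
        "Process improvement",
    ],
}

# At roadmap level only the action type matters; the delivery team
# picks among the candidate strategies later.
print(TRANSITION_STRATEGIES["Migrate"])
```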
From Assessment to Roadmap: Journey Planning
With capabilities assessed and transformation actions assigned, the next step is sequencing. This is where roadmaps become real.
Effort and Risk Estimation
Each transformation gets a rough-cut effort estimate and risk rating. Not a detailed project plan, but enough to sequence intelligently:
Effort is driven by the gap between current and target state. High modernization priority combined with low satisfaction typically means XL effort. High priority with moderate satisfaction is Large. Everything else scales down from there.
Risk is driven by business criticality and modernization priority. A high-criticality, high-priority capability carries high transformation risk, not because the technology is hard, but because getting it wrong impacts the business significantly.
| Priority | Satisfaction 1-2 | Satisfaction 3 | Satisfaction 4-5 |
|---|---|---|---|
| High | XL effort, High risk | L effort, High risk | M effort, Medium risk |
| Medium | L effort, Medium risk | M effort, Medium risk | S effort, Low risk |
| Low | M effort, Low risk | S effort, Low risk | S effort, Low risk |
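The grid above is a direct lookup, which a sketch makes explicit. Illustrative Python; the function name and satisfaction banding are mine, the values come from the table.

```python
# (priority, satisfaction band) -> (effort T-shirt size, transformation risk),
# a direct encoding of the effort/risk matrix above.
EFFORT_RISK = {
    ("High",   "1-2"): ("XL", "High"),
    ("High",   "3"):   ("L",  "High"),
    ("High",   "4-5"): ("M",  "Medium"),
    ("Medium", "1-2"): ("L",  "Medium"),
    ("Medium", "3"):   ("M",  "Medium"),
    ("Medium", "4-5"): ("S",  "Low"),
    ("Low",    "1-2"): ("M",  "Low"),
    ("Low",    "3"):   ("S",  "Low"),
    ("Low",    "4-5"): ("S",  "Low"),
}

def estimate(priority: str, satisfaction: int) -> tuple[str, str]:
    """Rough-cut effort and risk for sequencing, not a project plan."""
    band = "1-2" if satisfaction <= 2 else ("3" if satisfaction == 3 else "4-5")
    return EFFORT_RISK[(priority, band)]

print(estimate("High", 2))   # high priority, low satisfaction
print(estimate("Low", 5))    # stable, leave it alone
```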
Wave-Based Sequencing
The 3-year horizon stays, but the execution breaks into waves. Each wave has a clear scope, target dates, and business outcomes:
Wave 1: Foundation. Fix the capabilities that everything else depends on. Core systems, data platforms, integration layers. This is the “stop the bleeding” phase. Not glamorous, but nothing else works without it.
Wave 2: Core Transformation. With the foundation stable, transform the high-criticality, low-satisfaction capabilities that directly serve the business strategy. This is where the biggest business impact lives — and where the cross-functional plan matters most. Technology delivery alone won’t move the needle. Marketing needs to drive channel adoption, operations needs to redesign workflows, and change management needs to retrain teams. The capability transformation includes all of it.
Wave 3: Digital and Differentiation. Build the capabilities that create competitive advantage: AI-powered decisioning, ecosystem partnerships, embedded services, advanced analytics.
```mermaid
flowchart LR
A["Wave 1:<br/>Foundation<br/>Modernization"] --> B["Wave 2:<br/>Core Systems<br/>Transformation"]
B --> C["Wave 3:<br/>Digital &<br/>Differentiation"]
style A fill:#44403c,color:#e7e5e4
style B fill:#3f3f46,color:#e7e5e4
style C fill:#14532d,color:#e7e5e4
```
Each wave item links back to a specific capability, carries its transformation action and transition strategy, has an assigned owner, and ties to a measurable business outcome. No orphan technology projects. Every line item on the roadmap traces back to a capability gap identified in the assessment.
Dependencies Matter
Capabilities don’t transform in isolation. The integration platform must be modernized before you can build API-based distribution channels. The data platform must be in place before you can deploy AI-powered credit scoring. Core banking must be stable before you can re-architect digital lending.
Map these dependencies explicitly. They’re the constraints that determine what can run in parallel and what must be sequenced. In practice, Wave 1 items frequently block Wave 2 items, and the roadmap’s critical path runs through the foundation capabilities.
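Once dependencies are mapped explicitly, deriving what can run in parallel is a topological sort. A minimal sketch using Python's standard-library `graphlib`; the capability names are the illustrative ones from the examples above, not a real client's map.

```python
from graphlib import TopologicalSorter

# Each capability maps to the set of capabilities it depends on.
dependencies = {
    "API Distribution Channels": {"Integration Platform"},
    "AI Credit Scoring": {"Data Platform"},
    "Digital Lending": {"Core Banking"},
    "Integration Platform": set(),
    "Data Platform": set(),
    "Core Banking": set(),
}

# Group capabilities into waves: everything unblocked at a given point
# can run in parallel; blocked items wait for a later wave.
waves: list[list[str]] = []
ts = TopologicalSorter(dependencies)
ts.prepare()
while ts.is_active():
    ready = sorted(ts.get_ready())
    waves.append(ready)
    ts.done(*ready)

for i, wave in enumerate(waves, 1):
    print(f"Wave {i}: {', '.join(wave)}")
```

The foundation capabilities land in the first wave because everything else depends on them, which is exactly the critical-path behavior described above.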
The Incentive Problem
Here’s the part most frameworks miss.
You can have the best assessment, the clearest priorities, the most elegant architecture. None of it matters if incentives are misaligned.
If the technology team is measured on infrastructure metrics and the business team is measured on revenue, they will never truly collaborate. They’ll attend the same meetings, nod at the same slides, and go back to optimizing for their own scorecards.
Remember the bank from Part 1? The technology team hit every milestone. The business asked, "Where's the ROI?" That's what misaligned incentives look like.
The fix isn’t a governance framework or a collaboration workshop. It’s aligning incentives.
| Tech-Only Goal (Misaligned) | Business-Aligned Goal |
|---|---|
| Migrate 50% of workloads to the cloud | Reduce infrastructure costs by 20%, improve unit economics, and enable elastic scalability |
| Implement microservices architecture | Enable faster time-to-market for new products (6 months to 3 months) |
| Achieve 99.99% system uptime | Zero customer-facing downtime during business hours. Track journey KPIs (failed transactions, drop-off rates), not server metrics |
| Deploy a company-wide API strategy | Enable embedded finance partnerships (e.g., lending-as-a-service, insurance at point of sale) to open new revenue streams |
When business and IT share the same success metrics, collaboration happens naturally. You don’t need to force it. You need to fund it.
This applies at every level: shared KPIs between business and IT teams, cross-functional roadmap design where IT is part of business-led discussions from day one, and embedded technology leaders within business units who speak the language of business, not just architecture diagrams.
In my experience, business-IT collaboration has never worked well because of a process or a governance structure. It worked because the incentives were right. When incentives align, collaboration is the natural byproduct.
Keeping the Roadmap Alive
A roadmap that doesn’t evolve is a roadmap that dies. Five mechanisms keep it alive:
Quarterly strategy reviews as the communication mechanism, keeping stakeholders informed and engaged. Not annual planning cycles that produce a PDF no one opens.
Dynamic reprioritization when business priorities shift. The roadmap adapts, it doesn’t resist change. A new regulatory requirement or a competitor move can shift Wave 2 priorities in a week. The roadmap should accommodate that.
Incremental delivery to maintain agility, assuming the organization is set up to move fast. Each wave delivers measurable business outcomes, not just technology milestones.
Post-implementation reviews to learn and refine, not as blame exercises, but as genuine learning loops. What capability improvements actually moved the business metrics? What didn’t? Feed that back into the next wave.
Business capability owners assigned to ensure execution ties back to strategy. This is the single most important governance mechanism. Every capability on the roadmap has a business owner who is accountable for the outcome, not just the delivery.
Skip any of these and the roadmap becomes shelf-ware.
What’s Next
We’ve covered the philosophy (Part 1) and the methodology (Part 2). Now let’s see what this actually looks like when you apply it.
In Part 3, we bring this to life with two industry simulations: a mid-tier retail bank navigating post-merger integration and digital transformation, and an insurer tackling claims modernization and distribution digitization. Same methodology. Different industry contexts. Real capability hierarchies. Real assessment data. Real transformation decisions.
Not abstract examples. Concrete walkthroughs of the assessment and journey planning methodology in action.
/ Unni