MCF 2.2 – Documentation · Last updated: 2026-02-13

Chapter 2: Innovation Maturity Assessment

What this chapter does
  • Defines assessment as evidence visibility and decision readiness.
  • Clarifies how assessment improves threshold awareness.
  • Connects assessment outputs to governance alignment and optionality.
  • Frames assessment as an evidence-first baseline, not a score.
What this chapter does not do
  • Does not provide a scoring model, certification, or benchmark.
  • Does not provide a finished tool or automated survey.
  • Does not replace leadership judgment or strategy setting.
  • Does not guarantee maturity gains without execution.
  • Does not prescribe a single cadence for reassessment.
When you should read this
  • When establishing evidence visibility for innovation decisions.
  • When leadership needs decision readiness signals.
  • When governance thresholds are unclear or inconsistent.
  • Before investing in transformation or process changes.
Derived from Canon

This chapter is interpretive and explanatory. Its constraints and limits derive from the Canon pages listed under Cross-References at the end of this chapter.

Key terms (canonical)
  • Evidence
  • Evidence quality
  • Decision threshold
  • Optionality preservation
  • Strategic deferral
  • Reversibility
Minimal evidence expectations (non-prescriptive)

Evidence used in this chapter should allow you to:

  • state which evidence signals are visible or missing
  • explain how thresholds affect decision readiness
  • show which gaps limit reversibility or optionality
  • justify why specific investments are prioritized

Assessing innovation maturity is about making evidence visible, not assigning a score. This chapter reframes assessment as a way to surface decision readiness, threshold awareness, and the limits of current evidence.

In MCF 2.2, assessment is a discipline for clarifying what can be justified, what should be deferred, and what must remain reversible. It helps leadership and teams see where evidence is strong, where it is weak, and where it is not auditable.

Assessment does not measure "innovation" as a trait. It clarifies evidence sufficiency for specific decisions. A product expansion decision, for example, may depend on retention evidence crossing a defined threshold. Until that evidence exists, the decision should be deferred and kept reversible.
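
To make that concrete, here is a minimal sketch of such a threshold check. The function name, the retention figure, and the 40% bar are hypothetical illustrations, not values prescribed by MCF.

```ts
// Hypothetical illustration: a product-expansion decision gated on
// retention evidence crossing a provisional threshold.
type Readiness = "commit" | "defer";

function expansionReadiness(weeklyRetention: number, threshold: number): Readiness {
  // Below the threshold, the decision stays deferred and reversible.
  return weeklyRetention >= threshold ? "commit" : "defer";
}

// Example: retention evidence has not yet crossed the provisional 40% bar.
console.log(expansionReadiness(0.34, 0.4)); // "defer"
```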

Why This Matters In Phase 1

Phase 1 is about orientation. An assessment clarifies what evidence exists, where it is weak, and which decisions cannot yet be justified. This reduces false confidence and protects optionality before commitments become harder to reverse.

Assessment is not benchmarking. High activity without evidence discipline is a signal of lower maturity, because decision integrity degrades and thresholds become unclear.

Decision thresholds are not fixed. Reversible decisions can tolerate lower evidence thresholds, while irreversible commitments require stronger evidence. Optionality preservation keeps decisions deferrable until evidence is sufficient.

What "Good" Looks Like (Explanatory)

A good assessment produces decision readiness clarity, not rankings. It typically shows:

  • Clear visibility into where evidence is strong, weak, or missing.
  • Thresholds that define when a decision is defensible or should be deferred.
  • Explicit links between evidence gaps and governance attention.
  • Recognition of reversibility constraints in critical decisions.
  • Evidence that can be audited, reproduced, and challenged safely.

In practice, this creates a shared baseline for leadership alignment: what the organization currently knows, what it cannot yet claim, and what evidence would shift thresholds.

How To Run An Evidence-First Assessment (Explanatory)

This chapter does not provide a scoring system. It provides a lightweight approach for making evidence visible and linking it to decisions.

Step 1 — Identify The Decisions That Matter

List the innovation decisions the organization is currently making or planning to make. Examples include:

  • committing budget to a new initiative
  • selecting one solution direction over alternatives
  • scaling a pilot to production
  • integrating with external systems
  • expanding to a new segment or channel

For each decision, state whether it is reversible, partially reversible, or effectively irreversible within the current constraints.
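
One lightweight way to capture this step is a structured decision record. The sketch below is an illustrative assumption in TypeScript; MCF does not prescribe this format or these field names.

```ts
// Hypothetical record format for Step 1; field names are illustrative.
type Reversibility = "reversible" | "partially-reversible" | "irreversible";

interface Decision {
  id: string;
  description: string;          // the decision being made or planned
  reversibility: Reversibility; // within current constraints
}

const decisions: Decision[] = [
  { id: "D1", description: "commit budget to a new initiative", reversibility: "partially-reversible" },
  { id: "D2", description: "scale a pilot to production", reversibility: "irreversible" },
];
```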

Step 2 — State The Evidence Claims Behind Each Decision

For each decision, capture the claims that would justify it. Keep claims falsifiable and specific. Examples:

  • “Users will adopt this workflow weekly.”
  • “This reduces processing time by at least 30%.”
  • “This segment will pay at the proposed price point.”
  • “We can operate this safely within compliance constraints.”
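
Claims can be kept next to the decisions they would justify. The following sketch extends the hypothetical Step 1 record; the `Claim` shape and identifiers are illustrative assumptions.

```ts
// Hypothetical continuation of the Step 1 sketch: falsifiable claims
// that would justify each decision. Field names are illustrative.
interface Claim {
  id: string;
  decisionId: string; // links back to a decision record
  statement: string;  // falsifiable and specific
}

const claims: Claim[] = [
  { id: "C1", decisionId: "D2", statement: "Users will adopt this workflow weekly." },
  { id: "C2", decisionId: "D2", statement: "This reduces processing time by at least 30%." },
];
```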

Step 3 — Map Current Evidence And Its Quality

For each claim, document:

  • current evidence sources (data, interviews, experiments, operational logs)
  • what evidence is missing
  • whether the evidence is auditable and reproducible
  • the quality limits (sample size, bias, confounds, instrument validity)

Where evidence is weak, the correct output is not confidence. The correct output is explicit deferral or continued reversibility.
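
A minimal sketch of one possible evidence map follows, mirroring the four points above. The field names and example values are assumptions, not an MCF schema.

```ts
// Hypothetical evidence map for Step 3; fields mirror the list above.
interface EvidenceEntry {
  claimId: string;
  sources: string[];       // data, interviews, experiments, operational logs
  missing: string[];       // what evidence is absent
  auditable: boolean;      // can it be reproduced and challenged?
  qualityLimits: string[]; // sample size, bias, confounds, instrument validity
}

const entry: EvidenceEntry = {
  claimId: "C1",
  sources: ["pilot usage logs (6 weeks)"],
  missing: ["adoption data beyond the pilot cohort"],
  auditable: true,
  qualityLimits: ["small sample (12 teams)", "self-selected pilot group"],
};
```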

Step 4 — Define Provisional Thresholds And Deferral Rules

Define what evidence would move a decision from “defer” to “commit,” and under what constraints. Keep thresholds proportional to risk and reversibility.

  • lower thresholds for reversible experiments
  • higher thresholds for commitments that cannot be undone cheaply
  • explicit deferral rules when evidence remains insufficient

Thresholds should remain open to revision and must not be treated as milestones.
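
As one hedged illustration of proportional thresholds, the sketch below scales a minimum evidence bar to reversibility. The numeric strength scale and cutoffs are invented for illustration; MCF defines no such numbers.

```ts
// Hypothetical decision rule for Step 4: evidence thresholds scaled to
// reversibility. The strength scale and cutoffs are illustrative only.
type Reversibility = "reversible" | "partially-reversible" | "irreversible";

const minEvidenceStrength: Record<Reversibility, number> = {
  "reversible": 0.3,           // cheap to undo: lower bar
  "partially-reversible": 0.6,
  "irreversible": 0.9,         // cannot be undone cheaply: higher bar
};

function commitOrDefer(strength: number, r: Reversibility): "commit" | "defer" {
  // Insufficient evidence produces explicit deferral, not forced confidence.
  return strength >= minEvidenceStrength[r] ? "commit" : "defer";
}

console.log(commitOrDefer(0.5, "irreversible")); // "defer"
```

The point is structural, not numeric: the same evidence strength can justify a reversible experiment while leaving an irreversible commitment deferred.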

Step 5 — Assign Ownership And Governance Follow-Through

Evidence visibility is useful only if ownership is clear. For each decision and claim, make explicit:

  • who owns the evidence plan for each claim
  • who owns threshold decisions
  • who can authorize commitment or deferral
  • how often thresholds are revisited (cadence can vary)

Assessment outputs should translate into governance actions: decisions, deferral rules, and evidence plans.
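
Ownership can be recorded alongside each claim. The sketch below is illustrative; the roles and cadence shown are examples, not required titles or schedules.

```ts
// Hypothetical ownership map for Step 5; roles and cadence are examples.
interface Ownership {
  claimId: string;
  evidencePlanOwner: string; // owns the evidence plan for the claim
  thresholdOwner: string;    // owns threshold decisions
  commitAuthority: string;   // can authorize commitment or deferral
  reviewCadence: string;     // revisit schedule; cadence can vary
}

const owners: Ownership = {
  claimId: "C1",
  evidencePlanOwner: "product analytics lead",
  thresholdOwner: "portfolio governance group",
  commitAuthority: "sponsoring executive",
  reviewCadence: "revisited at each portfolio review",
};
```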

Typical Failure Modes

Assessment failures usually stem from treating it as evaluation rather than evidence visibility:

  • Scoring drift: converting evidence into rankings that hide uncertainty.
  • Benchmark fixation: comparing to external targets instead of local thresholds.
  • Tool substitution: adopting surveys or dashboards without decision impact.
  • Governance bypass: documenting gaps without ownership or follow-through.
  • Narrative certainty: writing confident summaries without disconfirming tests.

Use /docs/book/failure-modes to interpret whether the issue is epistemic, executional, or governance-related.

Evidence You Should Expect To See

Assessment evidence should clarify decision readiness:

  • Documented evidence sources tied to specific decisions.
  • Thresholds that show when evidence is sufficient or insufficient.
  • Explicit notes on reversibility and optionality for key commitments.
  • Clear ownership for evidence gaps and follow-up decisions.
  • An evidence plan that specifies what would change a threshold decision.

Common Misuse And Boundary Notes

Boundary violations appear when assessment becomes a proxy for certainty:

  • Treating assessment results as certification or compliance.
  • Using scores to justify irreversible commitments.
  • Treating evidence gaps as minor issues instead of decision constraints.
  • Presenting tool completion as learning.
  • Declaring maturity improvements without auditable evidence shifts.

Use /docs/book/boundaries-and-misuse to keep assessment aligned with Canon.

Assessment output is not a score and should not be marketed as certification or compliance. Canon boundaries treat evidence visibility as a decision input, not a badge or external claim.

Diagram Audit Note

No diagram is introduced in this pass. This chapter relies on text-only framing to avoid ladder or scoring interpretations. Any future visual must avoid linear ladders, reinforce non-linearity and regression, and preserve decision optionality. If a diagram is added later, it must follow the global figure contract and be indexed in docs/meta/figures.mdx.

Cross-References

  • Book: /docs/book/decision-logic, /docs/book/failure-modes, /docs/book/boundaries-and-misuse
  • Canon: /docs/canon/definitions, /docs/canon/evidence-logic, /docs/canon/decision-theory, /docs/canon/epistemic-model