The canonical reference for the methodology taught at Innorve Academy. Distilled from years of building production AI for regulated enterprises. Published openly under Apache 2.0. Versioned and cited.
Without architectural decomposition, AI work accretes as one large prompt that nobody can test, debug, or trust. Skill architecture makes the work modular, reviewable, and composable — the same way object-oriented decomposition made software engineering tractable in the 1980s.
L1 · Recognizes when a prompt is doing too many things at once.
L3 · Routinely decomposes new workflows into typed skills before writing code; produces a skill graph for any project of meaningful size.
L5 · Has authored production skill catalogs that other teams adopt as reference patterns.
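As a minimal sketch of what decomposition produces (the `Skill` type, field names, and example skills here are hypothetical illustrations, not part of the Method's spec), a monolithic prompt can be broken into typed skills whose input/output fields induce a skill graph:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Skill:
    """One narrowly scoped unit of AI work with typed inputs and outputs."""
    name: str
    inputs: tuple[str, ...]   # names of the typed fields this skill consumes
    outputs: tuple[str, ...]  # names of the typed fields this skill produces

def skill_graph(skills: list[Skill]) -> dict[str, list[str]]:
    """Adjacency list: an edge A -> B when B consumes an output of A."""
    producers = {out: s.name for s in skills for out in s.outputs}
    graph: dict[str, list[str]] = {s.name: [] for s in skills}
    for s in skills:
        for inp in s.inputs:
            if inp in producers and producers[inp] != s.name:
                graph[producers[inp]].append(s.name)
    return graph

# A single "summarize and triage support tickets" prompt, decomposed:
skills = [
    Skill("extract_fields", inputs=("raw_ticket",), outputs=("structured_ticket",)),
    Skill("classify_severity", inputs=("structured_ticket",), outputs=("severity",)),
    Skill("draft_response", inputs=("structured_ticket", "severity"), outputs=("draft",)),
]
print(skill_graph(skills))
# → {'extract_fields': ['classify_severity', 'draft_response'],
#    'classify_severity': ['draft_response'], 'draft_response': []}
```

Each node is now individually testable and reviewable; the graph itself is the artifact a reviewer reads instead of one large prompt.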
Governance written in Notion docs is ignored. Governance written in code is enforced. Policy-as-code is the only path to AI systems that can be audited months after deployment, that survive personnel changes, and that respond to regulatory updates without a six-month rewrite.
L1 · Can identify which decisions in an AI workflow need a policy.
L3 · Routinely authors policy specs before building the skills they govern; integrates policy enforcement into the runtime.
L5 · Has shipped policy frameworks adopted by compliance teams as the system of record.
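A sketch of the difference between governance in a doc and governance in code (the policy IDs, thresholds, and `Decision` fields are illustrative assumptions, not rules from the Method): each rule is an executable check the runtime must call before acting.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Decision:
    """One action an AI workflow wants to take."""
    action: str          # e.g. "send_email", "update_record"
    confidence: float    # model confidence, 0.0 to 1.0
    pii_present: bool

def policy_allows(d: Decision) -> tuple[bool, str]:
    """Executable policy: every rule returns (allowed, reason) and is
    enforced at runtime, not described in a doc nobody reads."""
    if d.pii_present and d.action == "send_email":
        return False, "POL-12: PII may not leave the system via email"
    if d.confidence < 0.8:
        return False, "POL-03: low-confidence actions require human review"
    return True, "allowed"

print(policy_allows(Decision("send_email", 0.95, pii_present=True)))
# → (False, 'POL-12: PII may not leave the system via email')
```

Because the rule carries its own ID and reason string, an auditor months later can trace every blocked action back to a specific, versioned policy.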
Most AI systems produce evidence as an afterthought, retrofitted weeks before an audit. By that point the data is incomplete, the model has changed, and the answers are guesses. Evidence-by-Design makes the artifact a first-class output of the system, present from day one.
L1 · Knows what a Governance Binder is and what it should contain.
L3 · Maintains a Governance Binder for every shipped AI system; can produce audit-ready evidence on demand.
L5 · Has authored governance frameworks that map across regulatory regimes (SOC 2 + HIPAA + EU AI Act) and that other organizations adopt.
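One way to make evidence a first-class output rather than a retrofit (a sketch under assumptions: the record fields and in-memory binder are illustrative, and a real binder would be durable, append-only storage): every skill invocation emits its audit record at the moment it runs.

```python
import datetime
import hashlib
import json

binder: list[dict] = []  # stand-in for durable, append-only evidence storage

def record_evidence(skill: str, model: str, inputs: dict, output: str) -> dict:
    """Emit an audit record as part of the call itself, from day one."""
    entry = {
        "skill": skill,
        "model": model,  # captured now, so later model changes can't erase history
        "input_hash": hashlib.sha256(
            json.dumps(inputs, sort_keys=True).encode()).hexdigest(),
        "output": output,
        "at": datetime.datetime.now(datetime.timezone.utc).isoformat(),
    }
    binder.append(entry)
    return entry

record_evidence("classify_severity", "model-v3", {"ticket": "printer on fire"}, "high")
print(len(binder), binder[0]["skill"])
```

Hashing the inputs rather than storing them raw is one common compromise when the inputs themselves contain sensitive data; the hash still proves which input produced which output.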
Without a typed contract, an AI skill is a black box. With one, the skill becomes reviewable, testable, and integrable. Type contracts are the most basic discipline of every other engineering field; they are still rare in AI work, and that's the gap.
L1 · Can read a Skill Contract and explain what the skill does and doesn't do.
L3 · Authors a Skill Contract before implementing any new skill; uses contracts as the basis for review and testing.
L5 · Has contributed extensions to the Skill Contract Schema spec that get adopted in v0.2 and beyond.
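As a sketch of the idea (the field names below are hypothetical and do not reproduce the actual Skill Contract Schema from IM-04), a contract declares what a skill consumes, produces, and explicitly does not do, and the runtime can check calls against it:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class SkillContract:
    """Declares what a skill consumes, produces, and refuses to do."""
    name: str
    input_fields: frozenset[str]
    output_fields: frozenset[str]
    out_of_scope: tuple[str, ...]  # the explicit "does not do" list

def check_io(contract: SkillContract, inputs: dict, outputs: dict) -> list[str]:
    """Return contract violations; an empty list means the call honored the contract."""
    errors = []
    missing = contract.input_fields - inputs.keys()
    if missing:
        errors.append(f"missing inputs: {sorted(missing)}")
    extra = outputs.keys() - contract.output_fields
    if extra:
        errors.append(f"undeclared outputs: {sorted(extra)}")
    return errors

c = SkillContract("classify_severity", frozenset({"ticket"}), frozenset({"severity"}),
                  out_of_scope=("drafting replies",))
print(check_io(c, {"ticket": "vpn down"}, {"severity": "high"}))       # → []
print(check_io(c, {}, {"severity": "high", "reply": "hi"}))
```

The same contract that gates runtime calls doubles as the review checklist and the test fixture, which is why it is written before the implementation.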
AI skills shipped without explicit maturity gates accumulate as technical debt. Teams cannot retire them, depend on them, or audit them. The Maturity Gate Model is what turns 'production-ready' from a feeling into a checklist.
L1 · Understands the four lifecycle stages and what they mean.
L3 · Runs maturity gate reviews on every shipped skill before promotion; documents pass/fail with evidence.
L5 · Has contributed gate criteria refinements adopted in spec updates.
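A sketch of a gate review as a checklist rather than a feeling (the stage names and criteria here are placeholder assumptions; the Method's own four lifecycle stages and gate criteria apply in practice): promotion moves one stage at a time, and every check needs both a pass and evidence.

```python
from dataclasses import dataclass

# Placeholder stage names standing in for the Method's four lifecycle stages.
STAGES = ("experimental", "pilot", "production", "retired")

@dataclass
class GateCheck:
    criterion: str
    passed: bool
    evidence: str  # artifact ID or link backing the pass/fail call

def review_promotion(current: str, target: str,
                     checks: list[GateCheck]) -> tuple[bool, list[str]]:
    """A skill advances one stage at a time, and only when every
    check passes with evidence attached."""
    if STAGES.index(target) != STAGES.index(current) + 1:
        return False, [f"cannot jump from {current} to {target}"]
    failures = [c.criterion for c in checks if not (c.passed and c.evidence)]
    return (not failures), failures

ok, failures = review_promotion("pilot", "production", [
    GateCheck("eval suite passes", True, "run-4411"),
    GateCheck("governance binder current", False, ""),
])
print(ok, failures)   # → False ['governance binder current']
```

Requiring the evidence field to be non-empty is the point: a pass without an artifact is indistinguishable from a feeling.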
Most organizations have AI sprinkled across teams with no shared map. Without one, gaps stay hidden, duplication accumulates, and accountability is ambiguous. The Capability Graph is the artifact that makes the gaps visible — and what makes investment decisions defensible.
L1 · Can name the seven SDLC phases and place a capability in one.
L3 · Maintains a current Capability Graph for the team; uses it to drive quarterly investment decisions.
L5 · Has contributed Capability Graph patterns that other teams reference as reusable models.
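As a sketch of what the map makes visible (the phase names below are illustrative placeholders for the Method's seven SDLC phases, and the capabilities are invented), placing each capability in a phase immediately surfaces gaps and duplication:

```python
from collections import defaultdict

# Illustrative placeholders for the Method's seven SDLC phases.
PHASES = ("plan", "design", "build", "test", "release", "operate", "monitor")

def capability_graph(capabilities: list[tuple[str, str, str]]):
    """capabilities: (capability, phase, owning_team) triples.
    Returns the uncovered phases and the phases with duplicated effort."""
    by_phase = defaultdict(list)
    for cap, phase, team in capabilities:
        by_phase[phase].append((cap, team))
    gaps = [p for p in PHASES if not by_phase[p]]
    dupes = {p: entries for p, entries in by_phase.items() if len(entries) > 1}
    return gaps, dupes

gaps, dupes = capability_graph([
    ("ticket triage bot", "operate", "support"),
    ("incident summarizer", "operate", "sre"),
    ("test generator", "test", "qa"),
])
print("gaps:", gaps)
print("duplication:", dupes)
```

Two teams independently building "operate"-phase capabilities while five phases have nothing is exactly the kind of finding that makes the next quarter's investment decision defensible.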
Building an AI system for a regulated bank with the same posture you'd use for a side project produces a system that fails compliance review on day one. Building a startup MVP with the posture of a regulated bank produces a system that's still in 'preparing for review' six months in. Tenant-Aware Design is the discipline of right-sizing architecture to context.
L1 · Recognizes that the same AI system needs different controls in different environments.
L3 · Routinely chooses and documents the appropriate posture for each new system; produces a Tenant Posture Card per project.
L5 · Has shipped vertical-specific tenant patterns (banking, healthcare, government) that become reference models.
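A minimal sketch of a Tenant Posture Card as data (the fields and the two example postures are assumptions for illustration, not the Method's actual card format): the same system, with its controls right-sized and documented per deployment context.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class PostureCard:
    """Documented control set for one deployment context."""
    tenant: str
    data_residency: str            # where data may live
    human_review: bool             # is a human in the loop before actions land?
    audit_log_retention_days: int

# One system, two contexts, two deliberately different postures:
POSTURES = {
    "regulated_bank": PostureCard("regulated_bank", "in-region", True, 2555),
    "startup_mvp": PostureCard("startup_mvp", "any", False, 90),
}

def posture_for(tenant: str) -> PostureCard:
    return POSTURES[tenant]

print(posture_for("regulated_bank").human_review,
      posture_for("startup_mvp").human_review)   # → True False
```

Making the posture an explicit, reviewable artifact is what prevents both failure modes in the paragraph above: the card for the bank would fail review if it looked like the MVP's, and vice versa.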
The AI tool layer churns faster than any layer of software in a generation. Vendors deprecate models, change pricing, get acquired, change terms. Systems built without a multi-tool strategy are technical debt the day they ship. The discipline of declared fallbacks and migration triggers is what makes architecture survive.
L1 · Knows the names of the major model providers and can list trade-offs.
L3 · Documents fallback models and migration triggers for every skill; treats vendor-lock as a risk to actively manage.
L5 · Has shipped abstractions adopted across teams that decoupled them from specific vendors entirely.
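A sketch of declared fallbacks and migration triggers (the model names and trigger strings are invented for illustration): the routing chain and the conditions that force a move are data the architecture carries, not tribal knowledge.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass(frozen=True)
class ModelRoute:
    """Declared routing for one skill: primary model, ordered fallbacks,
    and the conditions that trigger a planned migration."""
    primary: str
    fallbacks: tuple[str, ...]
    migration_triggers: tuple[str, ...]  # e.g. "deprecation notice", "price +25%"

def next_model(route: ModelRoute, unavailable: set[str]) -> Optional[str]:
    """Walk the declared chain, skipping providers that are down or deprecated."""
    for model in (route.primary, *route.fallbacks):
        if model not in unavailable:
            return model
    return None  # chain exhausted: page a human rather than improvise

route = ModelRoute(
    primary="vendor-a/large",
    fallbacks=("vendor-b/large", "self-hosted/medium"),
    migration_triggers=("deprecation notice", "p95 latency > 2s for 7 days"),
)
print(next_model(route, unavailable={"vendor-a/large"}))   # → vendor-b/large
```

Note that the triggers are written down next to the route: when a vendor announces a deprecation, the migration decision was made at design time, not in a crisis.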
The Method gives you eight teachable frameworks. The Mode tells you the order in which to apply them — the sequence an architect carries from one project to the next. Architects who internalize the Mode produce AI systems that ship, survive, audit, and scale. Architects who skip a tenet produce systems that hold up in demos and collapse under pressure.
Architect before automating. Evaluate before trusting. Govern before scaling. Evidence before claims. Portability before tool lock-in. Human accountability before agent autonomy.
The Mode is most easily abandoned during the moments when it matters most: under deadline, when everyone agrees, when a vendor offers a tempting all-in-one, when the agent is impressive. An architect who can hold the Mode during these moments is the architect organizations actually need.
The Method is taught at Innorve Academy through a six-level credential progression. Each level is earned by demonstrable evidence — systems shipped, evaluations passed, governance binders produced, capstones reviewed. Not by attendance, participation, or payment.
https://innorve.academy/method
IM-04 (Skill Contract Schema), v0.1. https://innorve.academy/method#im-04
Versioning policy. The Method follows semantic versioning. v0.1 is the first public release. Major versions ship rarely (next: v1.0 in late 2027). Minor versions add or refine frameworks based on real-world feedback (next: Q3 2026). Anyone who studies v0.1 today will still be relevant in 2030 — versioning is for additions, not replacements. Frameworks that get deprecated are flagged for at least 12 months before removal.