COHORT 1 · STARTS JUNE 14, 2026 · 6 WEEKS
Cohort 1

Bring one workflow.
Leave with one trusted AI system.

Six weeks of guided build. One real workflow from your work. A Proof Pack at the end you can show your boss, your client, or an interviewer the same week.

The promise

What you walk out with.

In six weeks, you turn one of your real workflows into an AI system that passes your own evals, leaves a clean audit trail, and comes with everything you need to show it to a serious person.

No toy projects. No certificates that nobody reads. You bring a workflow that actually exists in your job or your studies — vendor risk reviews, support triage, policy Q&A, recruiting screens, whatever it is — and we walk you through turning it into a working AI system. By the final week, you have a Proof Pack: the system itself, the evals behind it, the governance binder a risk officer would ask for, an ROI memo, a five-minute demo, a launch post, and a portfolio page. You leave able to defend the system in any room.

Who it's for

Three people we built this for.

Student

The student who wants a real portfolio.

Last year or two of school. Used ChatGPT, done a couple of class projects. Wants to walk into the first real interview with a working system to demo, not a screenshot of a Kaggle leaderboard.

Working professional

The professional whose team is waiting on AI.

Ops, compliance, support, finance, or product. Manager keeps asking what AI can do for the team. Wants to show up with a working answer that runs on the team's real workflow, not a vendor demo.

Team lead

The team lead picking the next platform bet.

Has to recommend an AI direction this quarter. Wants to spend six weeks actually building one, end to end, on a workflow the team owns. So the recommendation is grounded in a system shipped, not a deck skimmed.

Honest exclusions

Three people who should not apply.

Anyone looking for a job-placement guarantee.

We do not place people. We help you build proof. The proof helps you place yourself.

Anyone who cannot bring a real workflow.

"I want to learn AI in general" is not enough. We need a specific task from your specific work. If you do not have one, the Quickstart is a better starting point.

Anyone who cannot ship in public.

Shipping publicly is the deal. If your work is so confidential that you cannot share even a sanitized demo, this cohort will frustrate you.

What you bring

One workflow. From your real work.

On day one, you arrive with a single workflow. Not a research topic. Not a category. A specific repeating task: “I review 20 vendor SOC 2 reports a month and pull out five risk flags.” “I screen 100 inbound support tickets a day and route them to the right team.” “I read 30 internal policies and answer questions from new hires.”

It does not have to be glamorous. It has to be real. The boring, repetitive workflows are the best ones. They have clean inputs, clean outputs, and a clear way to measure whether the system did the job.

What you leave with

Your Proof Pack.

Ten artifacts. One folder in your repo. One link you can paste anywhere.

01

Working AI system

Code in your repo. Runs on your data. Solves the workflow you brought in.

02

Eval suite

A test suite for the system. Happy paths, edge cases, failure modes. Pass / fail you can defend.

03

Audit trail

Every input, every decision, every refusal, logged so anyone can replay what happened.

04

Governance binder

Model card, prompt register, risk register, eval reports. The folder a risk officer asks for.

05

ROI memo

One page. What the workflow used to cost. What it costs now. The math behind the number.

06

Five-minute demo

Recorded walkthrough. The problem, the system, the evals, the result.

07

LinkedIn launch post

Drafted with you. Not a brag. A clean explanation of what you built and why it works.

08

Portfolio page

Public link. Hosted. Yours forever. Pull it up in any interview or sales call.

09

Capability Graph profile

Where your skills sit on the seven-phase AI SDLC, mapped to evidence you can point at.

10

Boss-ready explanation

A one-paragraph version, a one-sentence version, and a one-line version. So you never fumble the answer.
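The ROI memo above is one page of arithmetic: what the workflow cost before, what it costs now, and the difference. A minimal sketch of that math (all numbers and field names hypothetical, not from any real cohort project):

```python
def roi_memo(cases_per_month: int, minutes_before: float, minutes_after: float,
             hourly_cost: float, monthly_ai_spend: float) -> dict:
    """Compare the old manual cost of a workflow with its cost after automation."""
    before = cases_per_month * (minutes_before / 60) * hourly_cost
    after = cases_per_month * (minutes_after / 60) * hourly_cost + monthly_ai_spend
    return {
        "monthly_cost_before": round(before, 2),
        "monthly_cost_after": round(after, 2),
        "monthly_saving": round(before - after, 2),
    }

# Hypothetical: 20 vendor reviews/month, 90 min -> 15 min each,
# $60/hour loaded cost, $40/month in model spend.
memo = roi_memo(20, 90, 15, 60.0, 40.0)
```

The point is not the spreadsheet; it is that every number in the memo traces back to an input your boss can check.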

The 6 weeks

Week by week.

WEEK 1

Pick the workflow.

You arrive with a candidate workflow. We help you scope it down to something we can actually finish in six weeks. You learn the Skill Contract pattern and write the first draft for your workflow.

Deliverable: Scoped workflow + draft Skill Contract

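A Skill Contract pins down what one skill accepts, what it must produce, and when it must refuse. A minimal sketch of the shape, assuming a simple dataclass form (field names are illustrative, not the published schema):

```python
from dataclasses import dataclass, field

@dataclass
class SkillContract:
    """One skill: what goes in, what comes out, and when it must refuse."""
    name: str
    inputs: list                                   # named inputs the skill accepts
    outputs: list                                  # named outputs it must produce
    refusals: list = field(default_factory=list)   # conditions that force a refusal

# Hypothetical first draft for a vendor-risk workflow.
contract = SkillContract(
    name="vendor_risk_flags",
    inputs=["soc2_report_text"],
    outputs=["risk_flags", "summary"],
    refusals=["report older than 12 months", "missing auditor opinion"],
)
```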
WEEK 2

Design the system.

You decompose the workflow into skills with clear inputs and outputs. You write the policy: what the system can and cannot do, what it must escalate, what it must refuse.

Deliverable: Skill graph + Policy-as-Code file

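Policy-as-Code just means the allow / escalate / refuse rules live in a file the system consults at runtime, instead of in someone's head. A minimal sketch with a default-deny posture (action names invented for illustration):

```python
POLICY = {
    "allow":    {"classify_ticket", "draft_response"},
    "escalate": {"refund_over_limit", "legal_threat"},
    "refuse":   {"share_customer_pii"},
}

def decide(action: str) -> str:
    """Return the policy verdict for a proposed action."""
    for verdict, actions in POLICY.items():
        if action in actions:
            return verdict
    return "escalate"  # default-deny: anything unlisted goes to a human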
WEEK 3

Build it on your data.

You implement the skills against real (or sanitized) data from your work. Code reviews from instructors who have shipped AI in regulated production.

Deliverable: Working end-to-end pipeline on at least one real case

WEEK 4

Write the evals.

You build the test suite. Happy path. Edge cases. The failures you want the system to catch and refuse. You wire it to run on every change.

Deliverable: Eval suite with at least 15 cases, green on the current build

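An eval case can be as small as an input, an expected label, and a pass/fail check. A minimal harness sketch, where `triage` is a trivial keyword stand-in for your real system (case names and rules are illustrative):

```python
def triage(ticket: str) -> str:
    """Stand-in for the real system; here, a trivial keyword rule."""
    return "urgent" if "outage" in ticket.lower() else "routine"

CASES = [
    {"input": "Total outage in EU region",     "expect": "urgent"},   # happy path
    {"input": "How do I reset my password?",   "expect": "routine"},  # happy path
    {"input": "Intermittent OUTAGE on login",  "expect": "urgent"},   # edge case: casing
]

def run_evals():
    """Run every case against the system; return (passed, total)."""
    results = [triage(c["input"]) == c["expect"] for c in CASES]
    return sum(results), len(results)
```

Wired to run on every change, a suite like this is the "pass / fail you can defend" from the Proof Pack.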
WEEK 5

Governance and evidence.

You generate the governance binder, the audit trail, and the Capability Graph profile. You write the ROI memo. You record the five-minute demo.

Deliverable: Governance binder + audit trail + ROI memo + demo video

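An audit trail can start as one append-only JSON line per decision, which is enough for anyone to replay what happened. A minimal sketch (record fields are illustrative, not a prescribed format):

```python
import datetime
import io
import json

def log_decision(stream, request: str, decision: str, reason: str) -> None:
    """Append one replayable JSON record per decision to an append-only stream."""
    record = {
        "ts": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "request": request,
        "decision": decision,  # e.g. "answered", "escalated", "refused"
        "reason": reason,
    }
    stream.write(json.dumps(record) + "\n")

# In production this would be a file or log sink; StringIO keeps the sketch self-contained.
buf = io.StringIO()
log_decision(buf, "Summarize vendor SOC 2", "answered", "within policy")
log_decision(buf, "Share customer PII", "refused", "blocked by policy")
```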
WEEK 6

Ship and defend.

You publish the portfolio page, draft the launch post, and present your system to the cohort plus invited reviewers. You leave with an Architect-level credential reflecting what you actually shipped.

Deliverable: Public Proof Pack + cohort presentation + Architect credential entry

Workflow ideas

Workflows other people have brought.

Bring your own. If you do not have one yet, here are workflows that work well in this format.

Customer support triage

Inbound tickets in. Category, urgency, suggested response, routing decision out.

Vendor risk review

SOC 2, DPA, questionnaire in. Draft assessment, risk flags, compensating controls out.

Sales research

Company name in. Funding history, decision-makers, recent launches, tailored outreach angle out.

Policy Q&A

Internal policies in. Plain-English answers with citation back to the source paragraph.

Loan document intake

Borrower documents in. Extracted fields, completeness check, missing-document checklist, risk pre-screen out.

Marketing research

A market or competitor in. Sourced summary, positioning gaps, three campaign hypotheses out.

Recruiting screen

Resume and JD in. Skill match, gaps, structured interview questions, pass/hold/no recommendation out.

Compliance checklist

A regulation or policy in. Mapped requirements, current evidence, missing-evidence list out.

Product feedback analysis

Raw user feedback in. Themes, severity, suggested fixes, prioritized list out.

Internal knowledge assistant

Your team's docs in. Trustworthy answers with citations, refusals when unclear, feedback loop out.

Time

8–10 hours/week.

Two live sessions a week (90 minutes each). One office hours block. The rest is you, building. Most people land in the eight-to-ten-hour range. If you can give it twelve, you will go deeper. If you can only give six, you will fall behind — we would rather you wait for a later cohort than start and stall.

We design the calendar around three timezones (Americas, EMEA, India) so the live sessions land at a sane hour wherever you are.

Pricing

Two tiers. Visible. No surprises.

Launch
₹9,999 / ~$249

Best for students and first-time builders.

  • Same six weeks. Same Proof Pack.
  • Smaller cohort circle.
  • One office hours slot per week.
  • Group code reviews.
Apply to Launch →
Assure
₹1,49,999 / ~$3,499

Best for senior practitioners and team leads with high-stakes workflows.

  • Six weeks plus a private review of governance posture and tenant model.
  • Reviewers from regulated-industry backgrounds.
  • Eligible for the Certified Architect track after public peer review.
  • Same Proof Pack, deeper review.
Apply to Assure →
All tiers leave with the same Proof Pack. The differences are in review depth and instructor time, not in what you build. Need to bring a whole team? See partner options →
Why this, not that

AI tool course vs Innorve cohort.

What you build
  • A typical AI tool course: A toy chatbot or a notebook demo
  • The Innorve cohort: A working AI system on your real workflow

Whose data
  • A typical AI tool course: Sample data the course gave you
  • The Innorve cohort: Your data from your real work

How you prove it works
  • A typical AI tool course: You watched the videos
  • The Innorve cohort: The system passes your own evals on real cases

What you leave with
  • A typical AI tool course: A PDF certificate
  • The Innorve cohort: A Proof Pack: working system, evals, audit trail, binder, ROI memo, demo, launch post, portfolio page

Who it convinces
  • A typical AI tool course: A recruiter scanning your resume
  • The Innorve cohort: Your boss, your client, your interviewer, your auditor

What happens after
  • A typical AI tool course: You forget half of it
  • The Innorve cohort: The Proof Pack stays public. You can defend it five years from now.

Time to value
  • A typical AI tool course: Weeks of videos to watch
  • The Innorve cohort: A workflow shipped in six weeks
FAQ

Honest answers to real questions.

Will you guarantee me a job?

No. We help you build proof. Proof helps you get the job. Anyone promising guarantees is selling you something else.

Do I need to be a developer?

You need to be comfortable reading code and willing to write some. If you have ever scripted a spreadsheet, automated something with Zapier, or written a simple Python notebook, you are in range. We do not start from "what is a variable."

What if I don't have a workflow at work?

Then start with the free Quickstart. If you genuinely cannot bring a workflow, this cohort is not the right next step. Wait until you can.

What if my workflow is confidential?

We work with sanitized versions all the time. The system you build runs on real data; the public Proof Pack uses an anonymized version. If your work is so confidential that even a sanitized demo is impossible, this cohort will frustrate you.

How much time per week, honestly?

Eight to ten hours for most people. Twelve if you want to go deeper. If you can only give six, please wait for a later cohort.

What do I actually own at the end?

Everything. The code, the evals, the binder, the demo, the post, the portfolio page. We host the portfolio page under your Architect profile, but you own the contents and can take them anywhere.

What model and tools will I use?

Whatever fits your workflow. We teach the Multi-Tool Strategy framework so you make the choice for the right reasons and document the fallback. The system is portable across providers by design.

Why "Proof Pack" instead of "certificate"?

Because a certificate is something you wave around. A Proof Pack is something you can pull up in any room and let it speak for itself. We think one is worth a lot more than the other.

What happens if I don't get accepted?

We tell you why within 48 hours and recommend the right next step — usually the Quickstart or the next cohort. We would rather under-fill a cohort than dilute it.

What if I need to miss a live session?

Sessions are recorded. Pods are mixed-timezone. The work happens async between sessions. Missing one or two over six weeks is normal; missing four or more is when we ask you to defer.

Why trust us on this

The proof is open.

We do not ask you to take this on faith. The Method is published in full under Apache 2.0 — read it before you apply. The Exemplar is a complete, working version of the Proof Pack on a fictional vendor-risk scenario, with every artifact in a public GitHub repo. The Skill Contract Schema, the Maturity Gate Model, and the Capability Graph spec are all readable today, no signup, no email gate.

If our published work does not look like the work of people who know what they are doing, do not apply. If it does, you already have a sense of what your own Proof Pack will look like in six weeks.

Ready? Apply for Cohort 1.

Applications are read within 48 hours. Honest answer either way. June 14, 2026, 6 weeks, 8–10 hours/week.