PACE: Plan, Agentic Context, Execute

The Missing Layer in Enterprise AI Adoption

Anshul Shanker

April 2026

Preface

I wrote this for peers and leaders in enterprise engineering and architecture — whether you are enabling teams with AI for the first time or already in deep adoption and working out how to do it well. AI agents are fast, capable, and fundamentally unsupervised: they optimise for the instruction in front of them, not for the system around it.

Without structure, that nature produces predictable consequences — pattern drift, diverging conventions, and intent that never reaches a reviewer. Not because anyone lacks prompting skill, but because session-level speed outruns shared intent. This paper proposes a structure for that problem: design stays explicit and reviewed before implementation, while agents keep doing what they do best — producing output at speed.

It presents PACE (Plan, Agentic Context, Execute) as a framework for preserving agents’ execution power while human intuition and ownership sit at the review gate. The approach is vendor-neutral, composable with existing cadences, and designed around crawl, walk, run — start simple, prove the model, expand as context and confidence grow. The Abstract states the thesis; what follows develops it.

If anything here is useful — to borrow, adapt, or argue against — I will be glad of it.

— Anshul Shanker, Software Architect, OneMain Financial

Table of Contents

  1. Abstract
  2. Introduction
  3. The Software Development Lifecycle
  4. The PACE Lifecycle
  5. The Context Layer
  6. The Blueprint
  7. Solutioning and Implementation
  8. The Feedback Loop
  9. PACE in Practice
  10. Enterprise Adoption
  11. Future Directions
  12. Conclusion
  13. Glossary
  14. References

1. Abstract

AI coding assistants have accelerated individual developer productivity [1][2][11], but delivery quality — at the team and organisation level — has not improved at the same rate [3][12]. The gap is structural: agents operate without shared context, design decisions live in ephemeral chat sessions, and no formal gate exists between a product requirement and an AI-generated branch. Left ungoverned, AI-assisted development can reshape codebases in ways that are difficult to reverse — altering established patterns, introducing inconsistent styles, and accumulating technical debt faster than teams can identify it [10][13].

This paper introduces PACE (Plan, Agentic Context, Execute), a framework that governs AI-augmented software delivery through two version-controlled artifacts: a context layer of persistent, machine-readable agent instructions that define how agents behave, and a solution blueprint — a versioned design document reviewed by discipline owners before any implementation begins. Critically, the agent does the heavy lifting — producing context files, drafting blueprints, and structuring task assignments — while humans review and gate.

Together, these artifacts make design explicit, gate change at its lowest cost, and produce a knowledge base that improves agent behaviour over time. PACE is not about improving prompts — it is about providing a structured way to leverage AI so that differences in prompting skill between team members do not produce divergent outputs.

PACE is vendor-agnostic, applicable to organisations of any shape — from a single repository to multi-repo, multi-domain enterprises — and composable with any existing methodology or sprint cadence. The framework’s thesis is simple: give agents structured context, review the design before the code, and let every approved blueprint make the next one better.

This is not an end state. The AI tooling landscape is evolving rapidly. PACE’s goal is to enable agentic AI adoption while ensuring organisations learn to crawl and walk before they run — building institutional muscle for AI-augmented delivery without accumulating the massive technical debt that ungoverned adoption produces.

2. Introduction

Every engineering organisation adopting AI coding tools faces the same question: how do we move faster without losing control?

The tools themselves are impressive. Studies report 26–55% reductions in task completion time for well-scoped implementation work [1][2][11]. But “well-scoped” is doing a lot of heavy lifting in that sentence. In practice, most features are not well-scoped at the agent level — they span multiple services, touch shared state, and carry implicit architectural constraints that exist in engineers’ heads but nowhere an agent can read them.

The result is predictable: individual developers ship faster; teams and organisations ship differently. Integration failures increase. PR reviews become design arguments. Decisions made by one engineer’s AI session contradict decisions made by another’s — and none of it is recorded anywhere a future engineer or agent can find it. Without guardrails, a frontend engineer might unknowingly approve backend decisions that a backend specialist would reject. Codebase patterns and styles drift quickly as each agent session introduces its own interpretation of how things should be built [10][13].

The root of the problem is not the quality of any individual’s prompts — there is no silver bullet in prompt engineering any more than there was in any prior generation of tooling [8]. It is the absence of structure around how AI is used. Two engineers with different prompting habits, working on adjacent features, will produce architecturally divergent output — not because either prompt was wrong, but because there was no shared specification for the agent to follow.

PACE is a response to this pattern. It is not a development methodology. It is a governance and enablement layer that structures how AI agents plan and what context they receive. The agent produces the artifacts — blueprints, task structures, even the initial context layer itself — while discipline owners review and gate. The mechanism is deliberately low-tech: Markdown files, version control, and a consistent structure. The effect is high-leverage: every feature goes through a reviewed design before agents touch application code, and every approved design makes the system smarter for the next feature.

The name encodes the sequence: Plan (produce the blueprint), Agentic Context (make it available as structured input), Execute (implement against it).

3. The Software Development Lifecycle

The SDLC is the repeating sequence of phases that takes a product idea from concept to running software. Terminology varies across methodologies; the core phases are universal [14]:

flowchart LR
  A[Requirements] --> B[Design] --> C[Implement] --> D[Test] --> E[Deploy]
| Phase | Who | Outcome |
|---|---|---|
| Requirements | Product | What to build and why |
| Design | Engineering | How to build it |
| Implement | Engineering | Working code |
| Test | Engineering / QA | Verified behaviour |
| Deploy | Engineering / Ops | Running in production |

In well-run teams, design has always happened before implementation — through architecture discussions, ADRs, whiteboard sessions, or design documents. That is not new.

What is new is what happens when AI enters this lifecycle. ADRs and design documents capture architectural decisions — high-level, infrequent choices about technology and structure. They do not capture feature-level design: for a given ticket, which repositories change, which patterns apply, which contracts shift, and who owns each piece. Feeding these documents to an AI agent as context is a natural first step — and a useful one — but it does not govern the dozens of feature-level decisions the agent makes implicitly during implementation. The only gate that remains in most organisations is the pull request review — after implementation, before merge:

flowchart LR
  A[Requirements] --> B[Design] --> C[Implement]
  C --> D{PR Review}
  D --> E[Test] --> F[Deploy]

This gate was designed for human-paced development. In AI-augmented delivery, the PR review carries the full burden of catching both implementation errors and design errors — at a moment when the cost of correcting either is highest [7][15]. The next section introduces how PACE addresses this — not by replacing ADRs or existing design practices, but by adding a feature-level design artifact that is reviewed before implementation begins.

4. The PACE Lifecycle

At the centre of PACE is the solution blueprint — a design document that an AI agent drafts and discipline owners review before any implementation begins. The blueprint captures what will change, why, and how — in a structured format that agents and engineers can implement from. Alongside it, a context layer of persistent rules and prior decisions ensures agents produce consistent output. Section 5 covers the context layer and Section 6 the blueprint in detail; what matters here is where they sit in the lifecycle.

Building on the SDLC introduced in Section 3, PACE keeps every phase intact. It adds the blueprint and one review gate:

flowchart LR
  A[Requirements] --> B[Blueprint]
  B --> C{SME Review}
  C --> D[Implement] --> E{PR Review}
  E --> F[Test] --> G[Deploy]
| What PACE adds | Where | Why |
|---|---|---|
| Solution blueprint | Between Requirements and Implement | Makes Design explicit, versioned, and reviewable |
| SME review gate | After blueprint, before any branch | Catches design problems at lowest cost |
| Context layer | Alongside all phases | Gives agents persistent, structured guardrails |

The comparison that matters is not PACE vs. traditional SDLC — it is PACE vs. AI-augmented delivery without PACE. Both use AI. The difference is structure.

flowchart TB
  subgraph UNGOVERNED["AI without PACE"]
    direction LR
    X1[Requirements] --> X2[AI implements] --> X3{Design + code reviewed at PR}
  end
  subgraph GOVERNED["AI with PACE"]
    direction LR
    Y1[Requirements] --> Y2{Design reviewed at blueprint} --> Y3[AI implements] --> Y4{Code reviewed at PR}
  end
| | AI without PACE | AI with PACE |
|---|---|---|
| Design decisions | Implicit; made by each agent session independently | Explicit; captured in a reviewed blueprint before any code |
| Prompt quality variance | Different engineers get different outputs; no shared baseline | Agents read the same context and the same blueprint; outputs converge |
| Codebase drift | Patterns and styles shift with each AI session; debt accumulates over time | Context layer defines patterns; blueprint locks the approach; drift is surfaced at blueprint review and caught again at PR review |
| PR review burden | Must catch design errors and implementation errors in AI-generated volume | Design already approved; PR review focuses on implementation fidelity |
| Knowledge retention | Decisions live in chat histories; lost between sessions | Decisions archived in versioned blueprints; available to future agents and engineers |
| Agent onboarding | Each session starts blank; behaviour depends on the engineer’s prompt | Agent reads the context layer and inherits the full PACE workflow — modes, gates, conventions — before any work begins |

The design review moves left — from PR time to blueprint time. The PR review still exists, but its job narrows. Two lighter gates instead of one overloaded gate. That is the core of PACE’s value.

What stays the same: Product owns requirements. Engineers write code. PRs get reviewed. QA tests. Ops deploys. PACE changes when design decisions become explicit and reviewed — not who does the work.

5. The Context Layer

The PACE lifecycle relies on agents having access to shared rules and prior decisions at session start. The context layer is that infrastructure. It has two parts: a shared context repository that carries cross-repo rules and the blueprint archive (§5.1–5.2), and a local context layer inside each application repository that carries repo-specific patterns and conventions (§5.3).

Agents read both layers — shared context first, then local context — so they operate with the full picture before producing a blueprint or writing code. The subsections below describe how each layer is structured and why.

5.1 The context repository

A context repository is a version-controlled repository that contains no application code. Its sole purpose is to carry persistent documentation that agents read before beginning any work. The context files themselves are typically agent-produced — bootstrapped using agentic templates (§9.4) and refined by engineers over time.

It is organised around an entry point — a single file (conventionally INDEX.md) that every agent reads at session start. This file links to everything else: workflow rules, routing constraints, planning questions, and the blueprint archive. Application repositories point to it with a single line in their root config (the format varies by AI vendor — e.g. .cursorrules, AGENTS.md, or equivalent):

Read agentic-context/INDEX.md before making changes.

That is the entire root config. All substance lives in the context layer.

This means the context files do not just describe the codebase — they teach PACE to the agent. The entry point and linked rules are, in effect, a set of prompt-level instructions that explain the workflow, the modes, the gates, and the expected behaviour. Any agent that reads INDEX.md understands how to operate within PACE from the first session — no vendor integration, no training, no special tooling. Switch assistants tomorrow; as long as the new one reads the same files, it inherits the same workflow.

For example, the entry point in this paper’s own context repository includes rules like:

Always-ask checklist (blocking):
  For planning or solution design, the first assistant action must be
  to ask clarifying questions (Step 1). Do not skip because the user
  pasted only a ticket link. Do not search, read files, or draft
  until those answers exist.

Solutioning session boundary:
  In Solutioning, the agent does not edit application code.
  Allowed writes are only what archives the accepted blueprint.

These are prompt-level instructions in plain Markdown — readable by any agent, any vendor. The same file links to workflow rules, clarifying questions, blueprint structure, and the archive. Together they define how the agent behaves within PACE before a single line of application code is considered.

5.2 Context repository structure

The specific file structure, naming, and content of the context layer is entirely flexible. Teams decide their own Markdown file structure and content based on their domain, architecture, and working style. PACE is not prescriptive about filenames or directory layout — it prescribes the pattern (entry point → linked documents → blueprint archive), not the implementation. The examples below illustrate one way to organise each layer. Simpler teams may use fewer files; complex organisations may add domain-specific sections. What matters is consistency within a team or domain, not conformity to a universal template.

context-repo/
└── agentic-context/
    ├── INDEX.md                 ← Entry point: links to all docs below;
    │                               the only file agents must read first
    ├── workspace-rules.md       ← Domain routing, API ownership,
    │                               enhancement policies, constraints;
    │                               for multi-repo setups, maps which
    │                               repo serves which purpose
    ├── plan-questions.md        ← Clarifying questions agents must ask
    │                               before planning (scope, mode, focus)
    ├── git-workflow.md          ← Branch naming, commit conventions,
    │                               PR templates, merge policies
    └── blueprint/
        ├── overview.md          ← How blueprints work; links to rules
        ├── rules/
        │   ├── structure.md     ← Required blueprint sections and order
        │   ├── task-format.md   ← Per-discipline task block format
        │   └── archive-rules.md ← Naming conventions, catalog updates
        └── archive/
            ├── catalog.md       ← Searchable index of all blueprints
            ├── 2026-04-01_feature-a.md
            └── 2026-04-15_feature-b.md

Key principle: The context repository defines shared rules — cross-repo routing, blueprint format, planning workflow. It contains no application code. For organisations with multiple products or domains, each domain can have its own context repository, with templates (§9.4) ensuring structural consistency across domains.
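
To make the entry-point pattern concrete, a minimal INDEX.md for the example structure above might read as follows. This is a sketch, not a prescribed format; the file names are the illustrative ones from this section, and teams substitute their own.

INDEX.md (illustrative):
  Read this file first in every session, then follow the links below.
  1. workspace-rules.md: domain routing, API ownership, enhancement policies, constraints
  2. plan-questions.md: clarifying questions to ask before any planning begins
  3. git-workflow.md: branch naming, commit conventions, PR and merge policies
  4. blueprint/overview.md: how blueprints are structured, reviewed, and archived
  5. blueprint/archive/catalog.md: index of accepted blueprints; search here before designing
  Modes: a planning or design request enters Solutioning; a reference to an approved
  blueprint enters Implementation. Do not edit application code in Solutioning.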

5.3 Application repository structure

Each application repository carries its own local context layer — repo-specific patterns, architecture decisions, and testing conventions that complement the shared rules in the context repository. The connection is lightweight: a single line in the repo’s root config (the format varies by AI vendor — e.g. .cursorrules, AGENTS.md, or equivalent) points agents to the shared context. All substance lives in the context layer itself.

app-repo/
├── .cursorrules / AGENTS.md     ← One line: read INDEX.md
│                                   (format varies by AI vendor)
└── agentic-context/
    ├── INDEX.md                  ← Entry point: repo-specific rules +
    │                                link to context repo for shared rules
    ├── architecture-and-patterns.md  ← Tech stack, directory structure,
    │                                    code patterns, naming conventions
    ├── testing-overview.md       ← Test strategy, frameworks, file
    │                                locations, coverage expectations
    ├── feature-planning.md       ← Repo-specific planning steps that
    │                                extend the shared planning workflow
    └── response-engineering.md   ← Tone, conciseness, and style rules
                                     for agent-generated text and code

Agents read both layers — shared context first, then local context — so they operate with the full picture before producing a blueprint or writing code.
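
As an illustration of that layering, the repo-local entry point can be little more than a pointer plus the local file list. The sketch below reuses the example file names from this section; nothing about the names or ordering is prescribed.

App-repo INDEX.md (illustrative):
  1. Read the shared rules first: agentic-context/INDEX.md in the context repository
  2. architecture-and-patterns.md: stack, directory layout, code patterns, naming conventions
  3. testing-overview.md: frameworks, test locations, how to run tests, coverage expectations
  4. feature-planning.md: repo-specific steps that extend the shared planning workflow
  5. response-engineering.md: tone and style rules for generated text and code
  Local rules refine the shared rules; they do not override routing or blueprint format.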

Single-repo teams can place the entire context layer inside the repo itself (e.g. a context/ directory). The value of a separate context repository increases with scale — it becomes the single source of truth that no individual application repo can override. It also positions the organisation for a future where the context repository is exposed as a queryable MCP service (see §11.1), making governance artifacts available to CI pipelines, review bots, and planning tools beyond the IDE.

5.4 Why Markdown

The context layer is deliberately low-tech: Markdown files in version control. No proprietary platform, no database, no SaaS dependency. The reasons are practical: any agent from any vendor can read plain text without integration work; changes are diffable, reviewable, and attributable through ordinary pull requests; the files live alongside the code they govern and travel with it; and the layer survives a change of AI tooling, because nothing in it depends on a specific product.

6. The Blueprint

With the context layer (Section 5) providing the foundation of shared rules and prior knowledge, the solution blueprint is the artifact that puts PACE into action for each feature. It is where the design is captured, reviewed, and approved — the document that bridges the gap between a product requirement and disciplined AI-assisted implementation.

6.1 What it is

A solution blueprint is a version-controlled document that records the complete shared design for a feature before implementation begins. It is the primary artifact PACE introduces. Everything else in the framework exists to produce, review, store, and consume it.

The blueprint serves three roles simultaneously: it is a design specification that agents and engineers implement from, a review surface that discipline SMEs approve, and a knowledge record that future agents and engineers can consult. In this last role, it functions similarly to an Architecture Decision Record [9] — but scoped to feature-level design and produced as a natural part of the delivery workflow rather than as a separate documentation exercise.

6.2 What it contains

The sections below represent one example of how a blueprint can be structured. Organisations should define their own blueprint format based on their domain, team structure, and level of detail needed. What matters is that the foundation is consistent — teams share a common baseline so reviewers and agents know what to expect — while each team adapts the blueprint to their own structure, domain, and level of detail.

The following structure is designed for teams with distinct discipline ownership (e.g. Frontend, Backend, QA). Teams organised differently — by feature area, platform, or layer — can adapt the sections and task blocks to match their ownership model.

| Section | Purpose |
|---|---|
| Title | Matches the story or ticket |
| Current state | Baseline before this change — what exists today |
| Decisions | Key design choices with rationale; the “why” behind the approach |
| Feature-specific | Analytics, feature flags, persistence — when applicable |
| Flow | At least one diagram showing the change in context |
| Repos and changes | Which repositories change and how (including explicit “no change” entries) |
| API / Contracts | Endpoints and contract file updates (or explicit N/A) |
| Implementation steps | Numbered, file-level steps; the handoff spec for Implementation mode |
| Test updates | Per-change test file and type |
| Context layer update | New facts surfaced during research that should propagate back into context files — or N/A if nothing new was learned. When a user revision corrects a blueprint assumption, the agent checks whether that correction reveals a gap in the context layer and records it here. Shared-context updates are applied at archive time; product-repo context updates are carried into the discipline’s Implementation task. |
| Discipline tasks | Structured task blocks per SME area (see §6.4) |
| Complexity | Score 0–100 with one-line rationale |

Sections can be marked N/A when not applicable — but they are never silently omitted, because documenting what is not changing is as valuable as documenting what is. A repository listed as “no changes” prevents agents and engineers from assuming it was simply overlooked.
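
As a sketch of how those sections read in practice, a small blueprint might open like this. The feature, ticket number, repository names, and endpoint are hypothetical; what matters is that every section is present, including the N/A ones.

Blueprint (illustrative):
  Title: Export statements (TICKET-123)
  Current state: statements are generated on demand; no export or download path exists.
  Decisions: reuse the existing document service rather than adding a new renderer (known patterns, lower risk).
  Feature-specific: N/A (no analytics or feature-flag changes).
  Flow: one sequence diagram (UI request → API → document service → storage).
  Repos and changes: web-app (new export action), document-service (new endpoint), billing-service (no changes).
  API / Contracts: POST /statements/export added to the document-service contract file.
  Implementation steps: numbered, file-level steps per repository.
  Test updates: unit tests for the new endpoint; one UI test for the export action.
  Context layer update: N/A (nothing new learned).
  Discipline tasks: one block each for Frontend, Backend, and QA (§6.4).
  Complexity: 34 (Small); one new endpoint, two changed repositories, existing patterns.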

Complexity rubric (illustrative):

| Score | Level | Signal |
|---|---|---|
| 0–20 | Trivial | Config change, single-file fix |
| 21–40 | Small | One component, few files |
| 41–60 | Medium | Multiple components, new feature |
| 61–80 | Large | Cross-cutting, new flows |
| 81–100 | Major | Architecture change, migration |

The agent produces a complexity score for every blueprint. The rubric is defined in the context layer’s Markdown files, so each team can tailor the scale, thresholds, and dimensions to their domain — the agent reads whatever rubric the team has committed and applies it consistently.

The score serves two purposes. First, it gives the team immediate signal about review depth — a trivial blueprint needs a lighter touch than a major one. Second, over time it becomes a measurement axis: teams can track agentic success rates by complexity tier (e.g. “blueprints scored 0–40 proceed to implementation with minimal revision; those above 60 require two review cycles on average”). This data informs where to invest in richer context and where to trust the agent to move faster. The rubric above is deliberately simple — teams can keep it high-level or expand it with weighted dimensions (number of repos, new vs. existing patterns, cross-team dependencies). Because the rubric lives in version-controlled Markdown, changes to it are reviewed like any other context update — and every agent session uses the same version.
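
For teams that want the weighted-dimension version, the rubric can stay just as plain. The dimensions and weights below are purely illustrative; with these values the arithmetic tops out at exactly 100.

Weighted complexity rubric (illustrative):
  repos touched      x10   (1 repo = 1, 2–3 repos = 2, 4+ = 3)
  pattern novelty    x15   (existing pattern = 0, variation = 1, new pattern = 2)
  contract changes   x10   (none = 0, additive = 1, breaking = 2)
  cross-team deps    x10   (none = 0, one team = 1, several = 2)
  Score = sum of (weight × dimension value); maximum 100.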

6.3 The human gate

The blueprint is reviewed by subject-matter experts from each in-scope discipline. Each SME approves only their slice. This is a critical property: it ensures that the person with the deepest expertise in an area is the one approving changes in that area — regardless of who authored the blueprint or which AI tool generated the initial draft.

For a team with Frontend, Backend, and QA disciplines: the Frontend SME approves the UI repositories and client-side changes, the Backend SME approves the service and contract changes, and the QA SME approves the test plan and coverage expectations.

Other organisations may use different discipline boundaries — by platform, by service, by domain. The principle is the same: the owner of the area reviews and approves the work in that area. No SME approves outside their domain. This is a deliberate application of Conway’s Law [5]: the review structure mirrors the intended architecture.

The gate rule: No application-repo branch is created and no work is assigned until the blueprint is reviewed and accepted. In version-controlled workflows, this means the blueprint PR is merged to the context repository before implementation begins.
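
Because the context layer teaches the workflow to the agent (Section 5.1), the gate itself can be stated as a blocking rule in the same plain-Markdown style as the examples in that section; a sketch:

Blueprint gate (blocking):
  Do not create application-repo branches, assign discipline tasks, or enter
  Implementation mode until the blueprint PR is merged to the context repository.
  If asked to implement from an unapproved blueprint, stop and say why.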

6.4 Discipline tasks and work assignment

A key function of the blueprint is to produce structured task blocks that can be assigned directly to individual engineers or discipline teams. The AI agent that produces the blueprint knows (from the context layer) who owns what — which discipline is responsible for which repositories, which SME covers which area. The blueprint’s task section translates the overall design into actionable, assignable work.

For example, in a three-SME model (Frontend, Backend, QA), the blueprint produces three task blocks — each containing a title, repositories, scope, key files, and test specifications. For a team organised differently (e.g. two platform engineers and a QA lead), the task blocks adapt accordingly.

The format of task blocks is defined in the context layer’s blueprint rules (e.g. task-format.md), so the AI agent produces tasks in the team’s chosen format every time — not in a different shape with each session or each engineer’s prompt. This eliminates the variance that occurs when different engineers ask an AI to “break this into tasks” without a shared template.
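
A single task block produced under a hypothetical task-format.md might look like the sketch below; the fields mirror those listed above, and the repository, path, and ticket names are illustrative.

Task block (illustrative):
  ## Backend task: export endpoint
  Repositories: document-service
  Scope: add POST /statements/export and wire it to the existing PDF generation path
  Key files: src/routes/statements, src/services/export
  Tests: unit tests for the new route; contract test against the updated spec
  Blueprint: 2026-04-15_TICKET-123_export-statements.md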

6.5 Filing

Accepted blueprints are archived with a naming convention:

archive/YYYY-MM-DD_short-slug.md
archive/YYYY-MM-DD_TICKET-NNN_short-slug.md

A catalog file indexes all blueprints by date, title, optional ticket, and filename. This catalog is readable by both humans and agents — making the archive searchable for future Solutioning sessions.
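
A minimal catalog is simply a table that both humans and agents can scan; the entries below are illustrative and reuse the archive names from §5.2.

| Date | Title | Ticket | File |
|---|---|---|---|
| 2026-04-01 | Feature A | | 2026-04-01_feature-a.md |
| 2026-04-15 | Feature B | | 2026-04-15_feature-b.md |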

7. Solutioning and Implementation

With the blueprint structure defined (Section 6) and the context layer in place (Section 5), PACE distinguishes two agent session types, each with a clear boundary. The mode is determined at the start of each session:

| Mode | Purpose | Boundary |
|---|---|---|
| Solutioning | Produce the blueprint | Agent does not edit application code; the session ends with a blueprint ready for peer review |
| Implementation | Execute from the blueprint | Agent reads the approved blueprint and executes only the steps for the in-scope discipline |

This separation is the mechanism that makes PACE’s design gate work. The Solutioning session’s only output is a blueprint document ready for peer review. The Implementation session’s only input is the approved blueprint. By enforcing this boundary, organisations ensure the design is complete and reviewed before any implementation pressure exists — and before any agent writes application code.

The context layer enforces the boundary: its rules instruct the agent which mode to operate in. When an engineer opens a session with a planning or design intent, the agent enters Solutioning mode automatically — it will not touch application code. When an engineer opens a session referencing an approved blueprint, the agent enters Implementation mode and scopes its work to the steps defined in the blueprint. Figure 1 (Section 9.1) shows this in practice — the agent presents a structured mode selection before any research begins.

This is what prevents the common failure mode where design and implementation blur together in a single AI session — producing code that was never reviewed at the design level.

8. The Feedback Loop

The blueprint archive described in Section 6 does more than store past decisions. It feeds directly back into the context layer, creating a compounding cycle that distinguishes PACE from static governance frameworks. Most process frameworks are defined once and age. PACE is designed to self-improve.

flowchart TB
  A[Agent reads context + prior blueprints] --> B[Agent produces blueprint]
  B --> C[SMEs review + approve]
  C --> D[Blueprint archived in context repo]
  D --> E[Context layer is now richer]
  E --> A

8.1 Blueprints improve context

Every accepted blueprint adds to the archive. The next Solutioning session has access to prior decisions, prior API shapes, and prior complexity assessments. An agent designing a feature adjacent to a past one can read that blueprint and avoid contradicting established decisions.

Over six months of active delivery, the archive becomes one of the richest knowledge bases in the organisation — richer than commit history, more structured than chat logs, more honest than post-hoc documentation. This is especially powerful against the codebase drift problem introduced in Section 2: agents working from a rich archive produce output that is consistent with prior decisions rather than inventing new patterns each session.

8.2 Agents surface gaps

During Solutioning, agents read the context layer’s rules and constraints. When a rule is missing — a routing constraint that should exist but doesn’t, a testing convention that isn’t documented — the blueprint is the mechanism for surfacing it. The reviewing SME sees the gap in the blueprint and either confirms the rule the agent has proposed so it can be added to the context layer, or corrects the assumption and documents the accurate rule in its place.

Either way, the context layer improves. The next agent session starts from a better baseline.

The blueprint makes this explicit with a dedicated Context layer update section (Section 6.2). When the agent’s research reveals facts that are missing from or contradict the existing context files, it records the learning, the affected file, and the proposed wording — directly in the blueprint. Crucially, user revisions act as a trigger: when an engineer corrects a blueprint assumption, the agent treats the correction as a signal that context may be incomplete and checks whether the fix should propagate back into a context file. Context updates that belong to the shared context repository are applied when the blueprint is archived; updates that belong to a product repo’s local context are carried into the discipline’s Implementation task — so the context fix ships in the same PR as the code it describes. The gap is closed where it was found, by the person doing the work, at the moment they know the answer.
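
A filled-in Context layer update section might look like the sketch below (the facts and file names are hypothetical); it records what was learned, which file it belongs in, the proposed wording, and when the change is applied.

Context layer update (illustrative):
  Learned: billing exports must go through document-service; services must not
           write export files to storage directly.
  Affected file: workspace-rules.md in the context repository (shared context).
  Proposed wording: "All statement and billing exports are owned by
           document-service; other services call its API rather than writing
           to storage."
  Applied: at archive time (shared context); no product-repo context changes.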

8.3 The compounding effect

| Cycle | Context quality | Blueprint quality | Agent output quality |
|---|---|---|---|
| First blueprint | Minimal rules | Sparse; agents guess | Requires heavy human correction |
| Fifth blueprint | Core rules established; prior blueprints available | Consistent format; references prior work | Fewer structural errors |
| Twentieth blueprint | Rich constraint set; deep archive | Agents cite prior decisions; complexity calibrated | Human review focuses on edge cases, not basics |

The system does not require perfection on day one. Capturing every architecture pattern in one pass is not realistic — nor should it be the goal. Context is built incrementally: with each solution, a pattern is re-informed, a convention is documented, and the context layer grows. This is the “crawl, walk, run” progression: teams start with lightweight rules and sparse blueprints, and the framework becomes more capable with each cycle.

9. PACE in Practice

The previous sections described what PACE is. This section shows how it works — the concrete sequence an engineer follows from receiving a ticket to producing a reviewed blueprint. The workflow below is illustrative; specific tools and integrations vary across organisations, but the sequence and gates are consistent. The screenshots in this section were captured using the Cursor IDE, but the workflow applies to any AI-capable development environment.

9.1 From ticket to blueprint: the workflow

Step 1 — Link the work item. An engineer receives a ticket from their issue tracker (Jira, Linear, Azure DevOps, or equivalent). They open an AI agent session — in their IDE, a CLI tool, or any AI-capable environment — and provide the ticket. This can be as simple as pasting a link. If the agent has an integration with the issue tracker (e.g. via MCP or API), it can pull the ticket’s title, description, acceptance criteria, and linked context automatically. If not, the engineer pastes the relevant content.

The key point: the agent does not start coding. This is not because the engineer wrote a careful prompt — it is because the context layer’s rules (Section 5) already instruct the agent that any planning or design request triggers Solutioning mode, not implementation. The same rules prescribe the clarifying questions in Step 2, the blueprint structure in Step 4, and the boundary between modes described in Section 7. The agent behaves this way because the context files taught it PACE before the session began. The agent’s first action is to ask what mode the engineer needs (see also Section 7 for the Solutioning / Implementation boundary):

Figure 1: The agent presents a structured mode selection — Solutioning or Implementation — before any research begins.

Step 2 — Clarifying questions. Before the agent researches the codebase, reads files, or queries any external system, it asks a structured set of clarifying questions defined in the context layer. These are not free-form — the context file prescribes the exact questions and options:

Figure 2: Follow-up questions — layers, focus, diagrams, deliverable format — presented as structured options the engineer selects from.

This step is critical. It is what prevents prompt quality variance (Section 2) from producing divergent outputs. Every engineer — regardless of their prompting skill — goes through the same structured intake. The agent cannot skip these questions; the context layer enforces the sequence.

These questions are not improvised by the agent — they are prescribed in a context file (plan-mode-questions.md) that the entry point links to. For example:

Step 1 — Mode: Use structured options so the user picks Solutioning or Implementation. No other tools first.

If Solutioning → ask: Layers (Fullstack / FE / BE / QA), Focus, Diagram (Yes / No), Deliverable (Chat-only / Blueprint + catalog).

Forbidden before answers: Codebase search, MCP, reading project files, or drafting a plan.

Because these rules are version-controlled Markdown, changing the intake sequence is a reviewed PR — not a prompt tweak that one engineer discovers and another never sees.

Figure 3: The completed scope — all answers captured in a structured summary before the agent begins research.

Step 3 — Agent researches. Only after the scope is established does the agent begin reading code, querying APIs, checking existing blueprints in the archive, and pulling data from connected systems (issue trackers, documentation platforms, design tools). The agent works within the boundaries the engineer defined and the context layer constrains.

Step 4 — Blueprint draft. The agent produces a blueprint following the structure defined in the context layer (Section 6.2). It fills in each required section: current state, decisions, repos and changes, implementation steps, test updates, and discipline-specific task blocks. The output format is consistent because the agent is reading the same structure rules every time — not interpreting a different prompt from each engineer.

Step 5 — Human review. The blueprint is submitted for review (typically as a PR to the context repository). Each SME reviews their discipline’s slice (Section 6.3). Feedback is incorporated, and the blueprint is revised until approved.

Step 6 — Archive and hand off. The approved blueprint is archived with a standard naming convention and added to the catalog. Implementation sessions can now reference it. The blueprint becomes part of the context layer, available to future agents and engineers.

9.2 What the engineer actually experiences

From the engineer’s perspective, the workflow feels like a guided conversation:

  1. Paste a ticket link or describe the feature
  2. Answer structured questions — chips and buttons, not free-form prompts
  3. Wait while the agent researches and drafts
  4. Review a complete blueprint with decisions, steps, and task assignments
  5. Iterate with the agent — challenge decisions, add missing detail, refine scope — until the blueprint reflects the intended design
  6. Submit for SME review — each discipline owner approves their slice
  7. Switch to Implementation mode — once approved, execute from the blueprint

Step 5 is where much of the value concentrates. The engineer and the agent go back and forth — the engineer brings domain judgement; the agent brings speed and structure. A blueprint rarely needs to be perfect on the first pass; what matters is that the iteration happens before SME review, not after code is written.

The blueprint’s Context layer update section (Section 6.2) makes this concrete. When research or a user revision reveals a fact that is missing from the context files, the agent records it in the blueprint — what was learned, which file to update, and the proposed wording. Shared-context updates are applied when the blueprint is archived; product-repo context updates become part of each discipline’s Implementation task, so the context fix ships in the same PR as the code it describes.

Every implementation session leaves the context layer better than it found it. This turns implementation into a feedback mechanism, not just an execution step — and it is what makes PACE a compounding system rather than a static process.

The entire Solutioning session typically produces no application code — only the blueprint document. This separation (Section 7) is what makes the design gate lightweight: the agent writes the design doc; the engineer validates it; the code comes later.

9.3 Why this works at scale

The practical power of this workflow becomes clear when multiple engineers across multiple teams follow it: every session starts from the same context and the same intake questions, so outputs converge instead of drifting; every design is reviewed by its discipline owner before any code exists; and every approved blueprint lands in an archive that the next session can search.

9.4 Bootstrapping context with agentic templates

Getting started with PACE does not require engineers to manually author Markdown files from scratch. The context layer itself is produced by an agent — using a reusable agentic template that instructs the agent to learn from the repository’s codebase and structure its findings in a prescribed format.

A template is a structured prompt document that tells the agent: read this repository, identify its architecture, testing conventions, patterns, and directory structure, and produce context files in the following shape. The agent does the research and drafting; the engineer reviews and refines. The same template that bootstrapped one repository’s context can be reused across every other repository and domain — each run produces context tailored to the specific codebase, but consistent in structure.
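
An excerpt from a hypothetical repository-level template shows the shape: instructions about what the agent should learn and which files it should produce, not the content itself.

Repository context template (excerpt, illustrative):
  1. Read the repository: build files, directory layout, test configuration, and
     the most recently changed modules.
  2. Produce agentic-context/INDEX.md linking to the files below.
  3. Produce architecture-and-patterns.md: stack, layering, naming conventions, and
     three representative code patterns with file references.
  4. Produce testing-overview.md: frameworks, test locations, how to run them, and
     coverage expectations.
  5. Flag anything you could not determine as an open question for the reviewing
     engineer. Do not guess.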

There are two template levels: a domain-level template that produces the shared context repository (workspace rules, planning workflow, blueprint format, archive structure), and a repository-level template that produces each application repo’s local context files (architecture, patterns, testing conventions).

The rollout pattern:

  1. Create templates once — a platform or architecture team authors the templates.
  2. Run the domain-level template — an agent produces a context repository with the prescribed structure, populated with domain-specific content the agent discovered.
  3. Run the repository-level template — an agent produces repo-specific context files, tailored to each repo’s stack and architecture.
  4. Engineers review and refine — the first feature goes through PACE end to end, validating the context layer against real delivery.
  5. Version templates centrally — subsequent domains and repos onboard from the same starting point, each producing context fitted to their codebase.

The result: Organisations do not manually copy Markdown files to adopt PACE. They run a template that drives the agent to produce both context-repo and app-repo layers — consistently structured, locally relevant. The workflow described in §9.1 works identically in every domain because the structure is shared, even though the content is unique. Enterprise rollout — pilot footprint, domain autonomy, and crawl-phase adoption — is discussed in Section 10.

10. Enterprise Adoption

This section addresses enterprise rollout explicitly — not because PACE is only for large organisations (it is not), but because how you onboard determines whether adoption sticks. The intent is twofold: templates make the first steps repeatable (Section 9.4); domains and teams retain control over how context and blueprints fit their reality.

10.1 Templates, bootstrap, and team autonomy

Adoption does not depend on manually authoring every Markdown file before value appears. Agentic templates (§9.4) let an organisation bootstrap context repositories and repo-specific instructions by running agents against existing codebases — central teams often author templates once, then each domain or squad runs them locally so structure stays aligned while content stays theirs.

That combination matters at enterprise scale: central authorship keeps the structure consistent enough to audit and to train agents against, while local runs keep the content accurate for each domain’s stack, constraints, and conventions.

Together, template-led bootstrap and local ownership mean enterprises can standardise the practice without cloning every detail — the pattern scales; the implementation stays fit for purpose.

10.2 Additional pressures at scale

The dynamics summarised in Section 2 — drift, prompt variance, design intent stuck outside the repository — do not disappear when headcount and footprint grow. They tend to show up under scrutiny — audits, cross-team coordination, long-lived codebases — alongside pressures that a single squad may feel less acutely: auditability of who decided what and when, vendor posture and the cost of switching tools, the sheer surface area of hundreds of repositories, and the depth of legacy systems whose constraints live mostly in long-tenured engineers’ heads.

10.3 How PACE maps to those pressures

| Pressure | How PACE responds |
|---|---|
| Auditability | Every blueprint is a versioned document with full history. Every decision has a rationale. Every approval is recorded. |
| Vendor posture | The governance layer is Markdown in version control. Switching AI tools requires changing one config line per repo. |
| Surface area | Agents produce context documents using agentic templates (§9.4). Context repositories are domain-scoped. The pattern is the same at 5 repos or 500. |
| Legacy depth | Context files convert tacit knowledge to explicit, machine-readable rules. The blueprint archive accumulates design decisions over time. |

10.4 Governance without friction

PACE does not introduce a review board, a committee, or an approval workflow outside the existing engineering toolchain. The blueprint is a PR. The review is a code review. The approval is a merge. Engineers use the same tools they already use — version control, their IDE, their issue tracker. The governance is in the process, not on top of it.

11. Future Directions

PACE as described in this paper is a starting point — a governance layer designed for the current moment in AI-augmented delivery. The directions below represent a natural progression as organisations build confidence, accumulate blueprints, and the AI tooling ecosystem matures.

11.1 Context as a queryable service

The Model Context Protocol (MCP) [6] defines a standard interface for supplying context to AI agents. Today, the context layer is a set of files that agents read at session start. The next step is to expose the context repository as an MCP server — a queryable service that any system in the organisation can call.

This changes the context layer from a local resource to an organisational knowledge API. CI pipelines could query blueprint history before generating deployment configurations. PR review bots could check whether a change contradicts a prior design decision. Planning tools could surface relevant blueprints when a new ticket is created in an adjacent area. The context repository becomes infrastructure, not just documentation.

11.2 Organisation-wide architecture patterns as shared context

PACE treats the context layer mainly per domain or product — each team owns conventions, blueprint rules, and its archive (Section 5). That leaves a gap many enterprises already feel elsewhere: reference architectures, platform standards, integration norms, and security baselines often live in slide decks, wikis, or review minutes. Humans can align in a forum; agents need those expectations in the same supply path as squad-specific context, or every session risks optimising locally against rules nobody remembered to paste into the prompt.

A natural extension is a programme- or enterprise-scoped context slice — still plain text in version control, still reviewed — owned by technology architecture or platform engineering at pattern altitude: approved integration shapes, edge and API posture, data-handling expectations, observability baselines — not feature-level detail (that stays with the domain), but constraints every workspace should inherit. Each workspace would compose context at session start: enterprise patterns as a baseline, then domain-specific instructions on top, so truly shared architecture remains durable when teams, tools, or vendors change.

Delivery could take several shapes: a dedicated enterprise context repository referenced from each domain entry point; generated bundles or submodules that aggregate layers at bootstrap; or a resolver exposed through MCP (§11.1) that merges multiple roots by policy. The principle is the same as elsewhere in PACE: what must survive belongs in versioned, attributable artifacts — not only in domain archives, but where cross-cutting architecture is decided.

11.3 Cross-domain retrieval and indexing

As organisations accumulate blueprint archives across multiple domains, the natural extension is a retrieval layer that indexes blueprints across all context repositories. An agent working in Domain A could discover that Domain B solved a similar problem six months ago — and reference that blueprint’s decisions, API shapes, and complexity assessment.

This is the RAG pattern [4] applied to organisational design knowledge. Instead of retrieving from generic documentation or code comments, agents retrieve from reviewed, structured, human-approved design artifacts. The quality of retrieval improves because blueprints are consistent in structure (Section 6.2) and rich in context — they are the highest-signal documents an organisation produces about why something was built the way it was.

Combined with the MCP server (§11.1) and, where present, organisation-wide pattern context (§11.2), this completes a federated knowledge picture: each domain owns its repository, enterprise-wide constraints stay loadable alongside domain rules, and a cross-domain index makes prior decisions discoverable across boundaries. An agent does not need to know which domain to search — it queries the index and gets relevant prior decisions regardless of where they originated.

11.4 From blueprint to code: the automation trajectory

Today the blueprint gate exists because AI-generated designs still need human validation (Section 6.3). As the feedback loop runs (Section 8), context and the blueprint archive compound: for low-complexity, well-precedented work, agent output increasingly matches what SMEs would have approved — so the organisation can calibrate how much review each change needs.

| Trust level | Gate | When |
|---|---|---|
| Low (early adoption) | Full SME review of every blueprint | Default |
| Medium | SME review for medium+ complexity; trivial blueprints that match precedent may auto-approve | Archive deep enough |
| High | Low-complexity paths automate blueprint → implementation; humans focus on novel or cross-cutting work | Evidence that auto-approved blueprints match SME expectations |

The gate stays — it becomes risk-proportional. The blueprint complexity score (Section 6.2) can route work to the right level of review instead of one-size-fits-all ceremony.
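
Because both the rubric and the routing policy live in the context layer, the risk-proportional gate can itself be written as a reviewed Markdown rule. One possible sketch, with thresholds each organisation would set for itself:

Review routing (illustrative):
  0–20, with a matching precedent in the archive: auto-approve; notify the owning SME.
  21–60: review by the owning SME of each in-scope discipline.
  61+, or any new cross-repo contract: review by all in-scope SMEs before merge.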

Trajectory: ticket → agent blueprint (with MCP-fed context where applicable) → policy/rubric routes low-risk designs → implementation and PR — with humans concentrated where risk warrants, not on every trivial repetition. That is not autonomous coding or gate-free agents; it is governed delivery whose leverage points move as institutional knowledge grows.

12. Conclusion

PACE introduces a governance and enablement layer for AI-augmented software delivery built on the simplest possible mechanism: structured context files and versioned blueprints in version control.

The framework’s contribution is not a new methodology. It is the observation that feature-level design — what each change implies for repositories, contracts, and ownership — often stays implicit in AI-assisted workflows (in tickets and chat), even when architectural decisions are recorded elsewhere. Making that layer explicit at the right leverage point requires only three things: a context layer that agents read, a blueprint that humans review before code, and a feedback loop that improves both over time.

The SDLC does not change. Requirements, implementation, testing, and deployment remain as they were. What changes is when design becomes visible and who approves it: before the code, by the discipline owners, in a versioned artifact that the next engineer and the next agent can read.

The AI tooling landscape is evolving rapidly. PACE is not an end state — it is a starting framework designed for the current moment, when organisations must learn to leverage AI agentic capabilities without accumulating the technical debt that ungoverned adoption produces. As tools mature, the context layer and blueprint archive provide the institutional muscle and knowledge base that will adapt to whatever comes next.

PACE is deliberately low-tech because governance that depends on a platform will be abandoned when the platform changes. Markdown in version control will outlast any AI vendor. The rules, the blueprints, and the decisions they record will still be readable — by humans and agents alike — long after today’s tools are replaced.

Plan first. Give agents context. Then execute.

13. Glossary

| Term | Definition |
|---|---|
| Agent | An AI system that takes actions in a software environment in response to instructions |
| Agentic template | A structured document prescribing the shape of agent instruction files at repo or domain level |
| Blueprint | A versioned Markdown document recording the complete shared design for a feature before implementation |
| Blueprint archive | A directory storing all accepted blueprints as versioned Markdown files |
| Catalog | An index file listing all accepted blueprints |
| Complexity score | A 0–100 estimate of a change’s breadth and risk |
| Context layer | Persistent, machine-readable documentation supplied to agents at session start |
| Context layer update | A blueprint section that records new facts to propagate back into context files; user revisions trigger gap detection |
| Context repository | A version-controlled repo containing no application code; stores agent instructions and the blueprint archive |
| Discipline | A functional area of engineering ownership (e.g. Frontend, Backend, QA) |
| Enterprise architecture context | Organisation-wide technology patterns and constraints, supplied to agents alongside domain-specific context |
| Human gate | A required human-approval step in the delivery lifecycle |
| Implementation mode | Agent session executing from an approved blueprint |
| MCP | Model Context Protocol — a standard interface for supplying context to AI agents |
| PACE | Plan, Agentic Context, Execute |
| RAG | Retrieval-Augmented Generation — using external knowledge sources to improve LLM outputs |
| SME | Subject-Matter Expert; approves their discipline’s slice of a blueprint |
| Solutioning mode | Agent session that produces a blueprint ready for peer review; does not edit application code |

14. References

[1] Ziegler, A., Kalliamvakou, E., Li, X. A., et al. (2022). Productivity assessment of neural code completion. MAPS ’22. ACM.

[2] Peng, S., Kalliamvakou, E., Cihon, P., & Demirer, M. (2023). The impact of AI on developer productivity: Evidence from GitHub Copilot. arXiv:2302.06590.

[3] Forsgren, N., Storey, M.-A., Maddila, C., et al. (2025). DORA State of AI-Assisted Software Development 2025. Google Cloud. https://dora.dev/research/2025/dora-report

[4] Lewis, P., Perez, E., Piktus, A., et al. (2020). Retrieval-augmented generation for knowledge-intensive NLP tasks. NeurIPS 2020, 33, 9459–9474.

[5] Conway, M. E. (1968). How do committees invent? Datamation, 14(4), 28–31.

[6] Anthropic. (2024, November). Model Context Protocol specification. https://modelcontextprotocol.io

[7] Boehm, B. W. (1981). Software Engineering Economics. Prentice-Hall.

[8] Brooks, F. P. (1987). No silver bullet: Essence and accident in software engineering. Computer, 20(4), 10–19.

[9] Nygard, M. (2011). Documenting architecture decisions. Cognitect Blog.

[10] Harding, B. (2024). Coding on Copilot: 2023 data suggests downward pressure on code quality. GitClear Research. https://gitclear.com/coding_on_copilot_data_shows_downward_pressure_on_code_quality

[11] Demirer, M., Kalliamvakou, E., Peng, S., et al. (2025). The productivity effects of generative AI coding tools: Causal evidence from randomized controlled trials across Microsoft, Accenture, and a Fortune 100 company. Working paper. https://demirermert.github.io/Papers/Demirer_AI_productivity.pdf

[12] Uplevel Data Labs. (2024). A controlled experiment on the impact of GenAI coding tools on developer productivity and code quality. Uplevel Research. https://uplevelteam.com/blog/genai-developers

[13] Harding, B. (2025). AI Copilot code quality: 2025 data suggests 4x growth in code clones. GitClear Research. https://gitclear.com/ai_assistant_code_quality_2025_research

[14] Forsgren, N., Humble, J., & Kim, G. (2018). Accelerate: The Science of Lean Software and DevOps. IT Revolution Press.

[15] Shull, F., Basili, V., Boehm, B., et al. (2002). What we have learned about fighting defects. Proceedings of the 8th International Software Metrics Symposium (METRICS 2002). IEEE.


© OneMain Financial. Authored by Anshul Shanker.