Change Intent Records: The Missing Artifact in AI-Assisted Development

The Gap in Our Documentation

Git tells us what changed and when. ADRs tell us why we made architectural choices. But AI-assisted development has created a new gap that neither addresses.

When I instruct an AI agent to build a feature, the resulting code captures what I built. The commit history shows when. But neither captures why I instructed the agent that way. What constraints did I give it? What did I reject along the way? What was I actually trying to accomplish?

The answer lives in the conversation. But conversations are unbounded, unstructured, and unmaintainable. They’re the raw material, not the artifact.

Without a durable artifact, you repeat past mistakes. You revisit decisions you already made. You lose the reasoning that shaped the code, and six months later you’re guessing at your own intent.

We solved a similar problem before with Architecture Decision Records. It’s time to solve it again.


Code as Output, Intent as Source

When I can regenerate code from a good specification and a conversation with an agent, which is the source of truth? The code, or the intent that produced it?

Traditional development treated code as the primary artifact. We documented it, versioned it, reviewed it. The thought process that produced it was ephemeral: we discussed it in meetings, scattered it across Slack threads, and eventually forgot it.

AI-assisted development inverts this. The code becomes derivative of intent. If you can delete the code and regenerate something materially equivalent from the same intent, then the code is no longer the primary artifact. The durable artifact isn’t the implementation. It’s the intent.

This means our documentation practices are pointed at the wrong thing. We’re meticulously tracking the output while letting the input evaporate.


What ADRs Got Right

Architecture Decision Records succeeded where other documentation efforts failed. They’re simple, they’re structured, and they require no tooling.

Michael Nygard’s original proposal was just a markdown template:

  • Title: A short noun phrase
  • Context: The forces at play
  • Decision: What we decided
  • Consequences: What happens as a result

That’s it. No framework to install. No special syntax. No workflow to adopt. Just a file in your repository that answers “why did we choose this?”

The tooling came later (adr-tools and similar utilities), but it was always optional. The practice stood on its own. You could adopt ADRs with nothing but a text editor.

This matters because documentation practices that require setup don’t get adopted. If you need to npm install something before you can write down why you built a feature, most developers won’t bother.


What’s Emerging (And What’s Missing)

The need for this kind of documentation is already surfacing. Developers are independently creating specs/ folders to hold markdown files that capture their intent. Projects like OpenSpec are formalizing the pattern.

But these solutions are coupling two things that should be separate: the format and the tooling.

OpenSpec, for example, requires specific runtime dependencies, prescribed workflows, and generates multiple artifact types. It’s solving real problems, but it’s solving them with machinery.

Compare this to ADRs. You can adopt them in five minutes: create a folder, copy a template, and start writing. The simplicity is the point.

We need the same approach for capturing intent in AI-assisted development. A format, not a framework.


Change Intent Records

A Change Intent Record (CIR) captures why a change was made the way it was. It’s the structured distillation of a conversation, not the conversation itself.

CIRs work for traditional development too, but they’re particularly valuable when working with AI agents. The agent doesn’t know what you rejected, what constraints you had in mind, or why you steered it in a particular direction. The CIR captures that.

Here’s the template:

# CIR-001: Add rate limiting to API endpoints

## Intent
Prevent abuse of public API endpoints by implementing per-user rate limiting.

## Behavior
- GIVEN a user within their request quota
- WHEN they make an API request
- THEN the request succeeds

- GIVEN a user who has exceeded their quota
- WHEN they make an API request
- THEN they receive a 429 response with Retry-After header

## Constraints
- Use existing Redis infrastructure
- Follow the auth middleware pattern
- Limits configurable per endpoint

## Decisions
- Sliding window over fixed window (smoother limiting)
- Rejected token bucket (the agent's initial proposal was too complex for our traffic)

## Date
2026-01-31

Five sections, each capturing something git doesn’t:

Intent answers “what were you trying to accomplish?” Not the implementation details, but the goal. This is what you’d tell a colleague if they asked why this code exists.

Behavior answers “what should happen?” Using the familiar given/when/then pattern from BDD, this section makes the expected behavior concrete and testable. It bridges intent and implementation.

Constraints answers “what boundaries did you set?” These are the guardrails you gave the agent: patterns to follow, technologies to use or avoid, requirements to satisfy. Constraints often explain why the code looks the way it does.

Decisions answers “where did you steer?” AI agents propose things. You accept some and reject others. The decisions you made along the way are the human judgment that shaped the final result.

Date anchors the record in time. Context matters. What made sense in January may look odd by June. The date helps future readers understand the constraints you were working under.

One optional addition: RFC 2119 keywords (MUST, SHOULD, MAY) can sharpen your Constraints section. “MUST use Redis” is clearer than “use Redis.” Not required, but helpful when precision matters.
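
As an illustration, here’s the Constraints section from CIR-001 rewritten with RFC 2119 keywords. The keyword assignments are assumptions about how firm each constraint was meant to be:

## Constraints
- MUST use existing Redis infrastructure
- SHOULD follow the auth middleware pattern
- Limits MUST be configurable per endpoint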


CIRs and ADRs

CIRs don’t replace ADRs. They complement them. And CIRs are not chat transcripts, prompt logs, or execution traces. They’re the distilled reasoning, not the raw session.

CIRs also differ from design documents. Design docs are prospective: you write them before implementation to describe what you’ll build and how. CIRs are retrospective: you write them after implementation to capture why you made the change the way you did. A design doc might say “here’s how we’ll build the caching layer.” A CIR says “when we built the session cache, here’s why we chose sliding window over LRU.”

|                   | ADR                                        | CIR                                     |
| ----------------- | ------------------------------------------ | --------------------------------------- |
| Scope             | Architecture-level choices                 | Feature/change-level choices            |
| Question answered | Why did we choose this technology/pattern? | Why did we build this feature this way? |
| Longevity         | Long-lived, rarely superseded              | Superseded when features are reworked   |
| Trigger           | Significant architectural decision         | AI-assisted implementation              |

Here’s how they work together in practice.

Six months ago, your team wrote ADR-012:

# ADR-012: Use Redis for all caching layers

## Context
We need a caching strategy for the platform. Options considered:
- Redis (existing infrastructure, team familiarity)
- Memcached (simpler, but no persistence)
- Application-level caching (no additional infrastructure)

We already run Redis for job queues. The team has operational experience with it.

## Decision
Use Redis for all caching layers.

## Consequences
- Operational simplicity (single caching technology)
- Can leverage existing monitoring and alerting
- Trade-off: less flexibility than a mixed approach

The ADR captured the context and the consequences of the choice.

Now you’re building a user session cache. You fire up an AI agent and start working. You tell it to use Redis (because of ADR-012). You constrain it to the sliding window pattern your team prefers. You specify a 15-minute TTL to match your auth token expiry. The agent builds it.

Three months from now, someone asks: “Why sliding window? Why 15 minutes?” The code doesn’t say. ADR-012 doesn’t say. Git doesn’t say.

The CIR says:

# CIR-047: User session cache

## Intent
Cache active user sessions to reduce database load on the auth service.

## Behavior
- GIVEN a user with an active session
- WHEN their session is requested
- THEN it returns from cache within 5ms

- GIVEN a user whose session expired
- WHEN their session is requested
- THEN the cache misses and auth service is queried

## Constraints
- Use Redis per ADR-012
- Sliding window expiration (team standard)
- 15-minute TTL to match auth token expiry

## Decisions
- Rejected LRU eviction (expiry-based is simpler for sessions)
- Chose hash storage over string (easier to extend later)

## Date
2026-07-15

The ADR set the architectural context. The CIR captured the implementation intent within that context. Together, they answer the full question: why Redis (ADR), and why this particular Redis implementation (CIR).


When to Write a CIR

Not every AI interaction needs a CIR. Use judgment.

Write a CIR when the constraints aren’t obvious from the code, when you rejected reasonable alternatives, or when someone might later wonder “why was it built this way?”

Skip the CIR when the change is trivial, the intent is obvious, or you’re just exploring.

A simple heuristic: if you’d explain the reasoning to a teammate in Slack, it probably deserves a CIR.

You write CIRs for future readers, not for approval. They’re explanatory artifacts, not gates. They sit between commits and ADRs: closer to the work than architecture, but more durable than conversation.


Objections

“This is just more documentation that won’t get written.”

Maybe. But CIRs have the same advantage ADRs have: they’re small, structured, and written at the moment of highest context. You’re not reconstructing your reasoning weeks later. You’re capturing it while it’s fresh. And if you’re working with AI agents, the agent can draft the CIR as part of the workflow.

“Why not just let the AI generate it automatically?”

The AI can draft it. But the value of a CIR is in the human judgment it captures: what you rejected, what constraints you imposed, why you steered the agent in a particular direction. An AI-generated CIR without human review is just a summary of what happened, not a record of intent.

“Git commit messages should capture this.”

Commit messages capture what changed. CIRs capture why you made the choices you made. A good commit message might say “add rate limiting middleware.” A CIR explains why you chose sliding window over token bucket, why you set the limit at 100 requests, and why you rejected the agent’s first approach. Different artifacts, different purposes.
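
One lightweight convention, not part of the format itself, is to reference the CIR from the commit message so the two artifacts stay linked. The reference line below is illustrative:

add rate limiting middleware

Sliding window per-user limits on public API endpoints.
Reasoning captured in docs/cir/CIR-001-rate-limiting.md.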

“The Behavior section is just tests. Why not just write tests?”

Tests verify behavior. The Behavior section documents expected behavior for humans reading the CIR. Tests can be dense, scattered across files, and focused on edge cases. The Behavior section gives a quick, readable summary of what the change is supposed to do. Write both.


Getting Started

  1. Create a docs/cir/ folder in your repository (or use whatever location works for your project)
  2. Copy the template above into a file like CIR-001-rate-limiting.md
  3. Fill it out after completing an AI-assisted implementation
  4. Commit it alongside your code changes

Number CIRs sequentially. The number is just an identifier, not a priority or ordering. Use a short descriptive suffix in the filename to make browsing easier.
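
Using the two examples from this article, the folder might look like this:

docs/cir/
├── CIR-001-rate-limiting.md
└── CIR-047-user-session-cache.md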

That’s it. No tooling required.

If you’re working with AI agents, add instructions to your AGENTS.md file:

## Change Intent Records

After completing a non-trivial feature or change, create a CIR in `docs/cir/`.

Use this template:
- **Intent**: What were we trying to accomplish?
- **Behavior**: Given/when/then scenarios for expected behavior
- **Constraints**: What boundaries shaped the implementation?
- **Decisions**: What alternatives were considered and rejected?
- **Date**: When was this written?

When reworking a feature, mark the old CIR as superseded and reference the new one.

The agent that helped build the feature can help maintain its CIR. This keeps the practice lightweight while ensuring CIRs stay current as code evolves.
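
The template doesn’t prescribe a supersession marker. One minimal convention, assuming you’re willing to add a Status section to the old record, is to annotate it and point forward (CIR-063 here is a hypothetical successor):

# CIR-001: Add rate limiting to API endpoints

## Status
Superseded by CIR-063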

If your team finds CIRs valuable, you can add conventions later: numbering schemes, review processes, templates for different kinds of changes. But start simple. The practice matters more than the process.


The Conversation Is Raw Material

The shift to AI-assisted development is real. Conversations with agents are generating more and more code. The developers who thrive in this environment won’t be the ones who write the most code. They’ll be the ones who give the clearest intent.

That intent deserves to be captured. Not as a transcript of every message, but as a structured record of what you were trying to build, what constraints you set, and what decisions you made along the way.

ADRs taught us that documenting the “why” behind architectural decisions was worth the effort. CIRs extend that lesson to the age of AI-assisted development.

The conversation is raw material. The CIR is what survives.