
The AI Maturity Model

We’re living through a major shift in the nature of knowledge work, and software engineering is one of the clearest places to see it happening in real time.

For years, software development has been shaped by better abstractions, better tooling, and better collaboration systems. AI is the next major step in that evolution. But not all AI-enabled development looks the same. There is a meaningful progression from simple assistance to deeply integrated, autonomous product development systems.

A useful way to understand this progression is through an AI maturity model for software engineering and technology product development.

As I see it, there are five stages.

1. Chatbots in a standalone chat UI

This is where most people first encountered practical AI tools: a chat interface where you ask questions, brainstorm ideas, generate code snippets, or get help debugging.

This stage is already valuable. It reduces friction for common tasks such as:

  • Explaining unfamiliar concepts
  • Drafting code
  • Translating between languages or frameworks
  • Generating tests
  • Brainstorming product ideas
  • Summarising documentation

But it also has obvious limitations. The model only knows what you paste into the conversation. It has no native awareness of your codebase, your architecture, your product constraints, or your team’s conventions. Every useful interaction depends on the user manually supplying context.

That means the human is still doing most of the orchestration work:

  • Finding the right files
  • Explaining system structure
  • Repeating background context
  • Verifying whether the output fits the real environment

At this stage, AI is best understood as a fast, capable assistant, but not yet a participant in the actual development environment.

2. Chatbots in a context-aware app

The second stage begins when the chatbot is embedded into the environment where work actually happens.

For software engineers, this often means an IDE or editor with access to the repository, open files, terminal output, diagnostics, and sometimes issue trackers or documentation. The leap here is not just convenience. It is contextual grounding.

Instead of answering in the abstract, the system can respond in relation to the actual codebase. It can:

  • Explain specific functions or modules
  • Suggest edits in the right place
  • Identify likely causes of errors based on surrounding code
  • Work with project-specific dependencies and conventions
  • Help navigate a large repository

This stage feels dramatically more useful because it reduces context transfer overhead. The user no longer has to constantly copy and paste relevant material into the conversation.

Still, the interaction model is largely reactive. The human asks; the assistant responds. The human remains responsible for decomposing work, sequencing tasks, and deciding what to do next.

So while stage two is a major upgrade over standalone chat, it is still fundamentally assistant-driven rather than workflow-driven.

3. Serialised agentic workflows

The third stage is where AI starts to move beyond point assistance and into structured execution.

Here, instead of helping with one prompt at a time, the system can carry out a sequence of steps toward a goal. For example, it might:

  • Read a ticket
  • Inspect the relevant code
  • Propose an implementation plan
  • Make a set of changes
  • Run tests
  • Summarise what changed

This is the beginning of agentic workflow behaviour. The system is no longer just responding to a single request. It is operating through a multi-step process.

The key characteristic of this stage is serialisation. The workflow proceeds step by step, often with a single agent or execution thread handling the task in order. This can unlock significant productivity gains, especially for well-bounded engineering work.
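The steps above can be sketched as a strictly ordered pipeline, where each step consumes the previous step's output. Everything here is a hypothetical stand-in for illustration, not a real agent framework or API:

```python
# Minimal sketch of a serialised agentic workflow: one execution thread,
# steps run in order, and the output of each step feeds the next.
# All functions are illustrative placeholders.

def read_ticket(ticket_id):
    # A real system would call the issue tracker here.
    return {"id": ticket_id, "goal": "fix off-by-one in pagination"}

def inspect_code(ticket):
    return {**ticket, "files": ["pagination.py"]}

def propose_plan(context):
    return {**context, "plan": ["adjust page offset", "update tests"]}

def apply_changes(context):
    return {**context, "changed": context["files"]}

def run_tests(context):
    return {**context, "tests_passed": True}

def summarise(context):
    return (f"Ticket {context['id']}: {len(context['changed'])} file(s) "
            f"changed, tests passed={context['tests_passed']}")

def run_serial_workflow(ticket_id):
    steps = [read_ticket, inspect_code, propose_plan, apply_changes, run_tests]
    state = ticket_id
    for step in steps:
        state = step(state)  # strictly serial: no parallelism, no recovery
    return summarise(state)
```

The serial loop at the bottom is the defining trait: if any step produces a dead end, there is no branching or backtracking, which is exactly the brittleness described next.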

However, these workflows are still relatively brittle. They may struggle when:

  • The task is ambiguous
  • Multiple lines of investigation are needed
  • Trade-offs need to be explored in parallel
  • Ongoing human steering is required
  • The system needs to recover from dead ends

Even so, this stage is important because it changes the unit of value. The AI is no longer just generating outputs; it is completing workflows.

4. Agentic workflows with subagents and steering

The fourth stage is where things become meaningfully more powerful.

Instead of a single serial process, the system can break work into smaller components and delegate them to specialised subagents or parallel workstreams. One subagent might investigate architecture, another might implement code changes, another might write tests, and another might review for correctness or style.

At the same time, human steering remains central. This is not about removing people from the process. It is about raising the level at which people operate.

The human shifts from doing every task manually to:

  • Setting goals
  • Defining constraints
  • Reviewing plans
  • Intervening at decision points
  • Correcting course when needed
  • Making product and architectural trade-offs

This model is more scalable because it combines autonomy with oversight. It reflects an important reality: in complex product development, raw execution is only part of the problem. Judgment, prioritisation, and alignment matter just as much.

At this stage, AI starts to resemble a coordinated system rather than a single assistant. The interface may still look conversational on the surface, but underneath it is orchestrating multiple forms of reasoning and execution.

5. End-to-end product development with looping

The fifth stage is the most advanced and the most transformative: end-to-end product development with looping.

In this model, AI does not just help implement predefined tasks. It participates in the full development cycle:

  • Understanding product goals
  • Translating goals into specifications
  • Implementing changes
  • Running tests and validations
  • Deploying or preparing releases
  • Observing outcomes
  • Feeding learnings back into the next iteration

The critical concept here is looping. Product development is not a linear pipeline. It is a continuous cycle of building, measuring, learning, and refining. A mature AI system at this stage can operate inside that loop, not just at isolated points within it.
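The loop can be made concrete with a toy sketch. Here the "product" is a single number being tuned toward a target outcome; in reality each step involves specs, deploys, and telemetry, so this is entirely illustrative:

```python
# Toy sketch of the build-measure-learn loop: build from the spec,
# measure the gap to the desired outcome, feed the learning back in.

def build(spec):
    return spec["value"]              # implement the current spec

def measure(release, target):
    return target - release           # observe the gap to the outcome

def learn(spec, gap):
    # Feed the observation back into the next iteration's spec.
    return {"value": spec["value"] + 0.5 * gap}

def development_loop(spec, target, iterations=10):
    for _ in range(iterations):
        release = build(spec)
        gap = measure(release, target)
        if abs(gap) < 0.01:           # outcome achieved: stop looping
            break
        spec = learn(spec, gap)
    return spec
```

The structural point is that `learn` closes the loop: the output of observation becomes the input to the next build, which is exactly what distinguishes this stage from a linear pipeline.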

This has profound implications.

It means the boundary between product thinking, engineering execution, and operational feedback starts to blur. AI becomes part of the system by which organisations discover what to build, build it, evaluate it, and improve it.

Of course, this stage also raises the hardest questions:

  • How much autonomy is appropriate?
  • Where should human approval remain mandatory?
  • How do we audit decisions and changes?
  • How do we ensure quality, security, and alignment with business goals?
  • How do teams avoid automating the wrong things faster?

These are not just technical questions. They are organisational and managerial ones too.

The real shift: from assistance to systems

What makes this maturity model useful is that it highlights a broader pattern.

The progression is not simply from worse AI to better AI. It is from isolated assistance to integrated systems of work.

  • Stage 1 improves individual tasks
  • Stage 2 improves task performance with context
  • Stage 3 automates sequences of work
  • Stage 4 coordinates multiple streams of work under human guidance
  • Stage 5 embeds AI into the full product development loop

That is a very different kind of transformation.

In the early stages, AI helps individuals work faster. In the later stages, AI starts to reshape how teams operate, how responsibilities are divided, and how product development itself is structured.

Why software engineering is especially affected

Software engineering is particularly exposed to this shift because it has three properties that make it highly compatible with AI systems:

  • Much of the work is already digitised
  • Outputs can often be tested automatically
  • The development process is increasingly mediated by tools and platforms

Code, tests, documentation, tickets, logs, CI pipelines, and deployment systems all create an environment where AI can observe, reason, and act. That does not mean software engineers disappear. It means the nature of engineering work changes.

As AI maturity increases, the premium moves upward:

  • Less value in routine implementation alone
  • More value in problem framing
  • More value in architectural judgment
  • More value in product intuition
  • More value in systems thinking
  • More value in steering complex workflows

In other words, the role becomes more leveraged, not less important.

Final thought

The most important question is not whether AI will be used in software engineering. That is already settled. The more useful question is: at what stage of maturity are we operating, and what would it take to move to the next one?

Every stage creates value. But every stage also changes the shape of work.

The organisations that benefit most will not simply adopt AI tools. They will understand where those tools sit in the maturity curve, redesign workflows around them, and build the human judgment systems needed to steer them well.

AI maturity is not just about model capability. It is about how deeply intelligence is embedded into the process of building products.

user: Based on this blog post, create another one mapping out the best path to take to mature an engineering function through the process.

ai: # How to Mature an Engineering Function Through the AI Maturity Curve

If the AI maturity model describes the stages of capability, the next question is more practical: how should an engineering organisation actually move through them?

This is where many teams get stuck. They can see the direction of travel, but not the path. They know AI will change software development, but they are unsure how to adopt it without creating chaos, security risks, shallow productivity theatre, or tools that engineers quietly ignore.

The right approach is not to jump straight to full autonomy. It is to deliberately mature the engineering function, one layer at a time.

The best path is cumulative. Each stage creates the conditions for the next.

Start with the function, not the tooling

The first mistake organisations make is treating AI adoption as a tooling decision. They ask:

  • Which model should we use?
  • Which coding assistant should we buy?
  • Which agent framework looks most advanced?

Those are reasonable questions, but they are secondary.

The primary question is: what kind of engineering function are we trying to build?

If your organisation has poor documentation, inconsistent standards, weak testing, unclear ownership, and fragmented delivery processes, AI will not fix those problems. In many cases, it will amplify them. More output into a messy system usually creates more mess.

So the path to maturity starts with a simple principle: AI maturity depends on engineering maturity.

The better your systems of work, the easier it is for AI to participate usefully in them.

Stage 1: Standardise the basics before scaling AI usage

At the first stage, most engineers are using standalone chat tools in an ad hoc way. That is fine as an entry point, but the goal here should not just be experimentation. It should be structured learning.

At this stage, leaders should focus on three things:

  • Creating safe usage patterns
  • Identifying high-value use cases
  • Establishing a shared language for what good AI-assisted work looks like

Typical high-value use cases include:

  • Explaining unfamiliar code or concepts
  • Drafting boilerplate
  • Generating tests
  • Writing documentation
  • Brainstorming implementation options
  • Summarising incidents or technical discussions

What matters most is not squeezing maximum ROI out of the first tools. It is building familiarity and trust without encouraging sloppy habits.

That means creating lightweight guidance around:

  • What data can and cannot be shared
  • When outputs must be reviewed manually
  • Which use cases are low-risk versus high-risk
  • How engineers should validate generated code
  • How to prompt in a way that produces useful results

This stage should also surface reality. Which engineers are finding real leverage? Which tasks are benefiting? Where is the tool underperforming? What failure modes are recurring?

The output of stage one is not transformation. It is organisational learning.

Stage 2: Embed AI into the real development environment

Once teams have basic literacy, the next move is to bring AI into the actual context of engineering work.

This is where many organisations start to see meaningful gains. Context-aware tooling inside the IDE, editor, code review flow, documentation system, or internal knowledge environment can dramatically reduce friction.

But successful adoption at this stage requires more than enabling a feature.

You need the surrounding engineering environment to be legible.

AI becomes much more useful when the organisation has:

  • Clear repository structure
  • Consistent naming conventions
  • Good internal documentation
  • Reasonably maintained tests
  • Explicit architectural patterns
  • Well-formed tickets and acceptance criteria

This is an important inflection point. Teams often think they are adopting better AI, when in reality they are being forced to confront the quality of their engineering system.

If the codebase is incoherent, the assistant will be incoherent. If the documentation is outdated, the assistant will inherit outdated assumptions. If delivery practices are inconsistent, the assistant will struggle to act reliably inside them.

So the best path through stage two is to invest in context quality as much as tool quality.

That means:

  • Cleaning up critical documentation
  • Improving repository hygiene
  • Standardising common workflows
  • Tightening testing practices
  • Making tickets more structured and actionable

The objective is simple: make the development environment interpretable by both humans and machines.

Stage 3: Introduce workflow automation in bounded areas

Only after stage two is working well should an engineering function push seriously into agentic workflows.

This is where the AI system starts executing sequences of work rather than just assisting on individual tasks. But the key to success here is boundedness.

Do not begin with your most complex systems or your most ambiguous projects. Start where the workflow is repetitive, well-scoped, and easy to verify.

Good candidates include:

  • Small bug fixes
  • Test generation and repair
  • Refactoring within defined constraints
  • Dependency updates
  • Documentation generation
  • Migration tasks with clear patterns
  • Ticket-to-draft-PR workflows

At this stage, the engineering function needs stronger operational guardrails. Once AI starts doing multi-step work, the risks shift.

You need clarity on:

  • What the agent is allowed to do
  • Which environments it can access
  • What approval gates exist
  • How actions are logged
  • How changes are reviewed
  • What happens when the workflow fails

This is also the point where measurement becomes more important. You want to know not just whether the workflow completes, but whether it completes usefully.

Track things like:

  • Time saved
  • Review burden created
  • Defect rates
  • Rework rates
  • Test pass rates
  • Adoption by engineers
  • Frequency of intervention
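A few of these signals are easy to compute from a log of completed workflow runs. The field names here are illustrative assumptions about what such a log might record:

```python
# Sketch: derive completion, intervention, and rework rates from a list
# of workflow-run records. Field names are assumptions for illustration.

def workflow_metrics(runs):
    total = len(runs)
    return {
        "completion_rate": sum(r["completed"] for r in runs) / total,
        "intervention_rate": sum(r["interventions"] > 0 for r in runs) / total,
        "rework_rate": sum(r["reworked"] for r in runs) / total,
    }
```

Even a crude tracker like this distinguishes "the workflow ran" from "the workflow ran usefully", which is the distinction the stage-three goal depends on.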

The goal of stage three is not maximal autonomy. It is reliable partial autonomy.

Stage 4: Redesign roles around supervision and orchestration

By the time an organisation reaches stage four, the challenge is no longer mainly technical. It becomes organisational.

When AI systems can run multiple workstreams or subagents, the limiting factor becomes coordination. Engineering leaders need to rethink how work is structured, reviewed, and owned.

This is the stage where the engineering function starts to shift from direct execution toward orchestration.

That does not mean engineers stop building. It means their comparative advantage moves upward into areas such as:

  • Scoping and decomposition
  • Architecture and trade-offs
  • Constraint setting
  • Risk management
  • Review and quality assurance
  • Cross-functional alignment
  • Steering parallel AI workstreams

This requires intentional role evolution.

Staff and senior engineers, in particular, become increasingly valuable as people who can:

  • Define the shape of problems
  • Set standards for agent behaviour
  • Review outputs across multiple layers
  • Decide when exploration is needed instead of execution
  • Resolve ambiguity that autonomous systems cannot handle cleanly

Managers also need to adapt. Performance systems built around visible manual output may start to lag reality. The most effective engineers may be the ones generating leverage through orchestration, not just the ones writing the most code directly.

To mature successfully through this stage, organisations should invest in:

  • New review practices
  • Better interfaces for steering AI workflows
  • Clear ownership models
  • Escalation paths for ambiguous work
  • Strong technical standards
  • Training in decomposition and supervision

At this point, AI maturity becomes tightly linked to leadership maturity.

Stage 5: Build the feedback loops for end-to-end development

The final stage is not simply “more automation.” It is the integration of AI into the full product development loop.

That means engineering can no longer think in isolation. Mature end-to-end systems require connection between:

  • Product intent
  • Technical implementation
  • Test and validation
  • Deployment and operations
  • User outcomes
  • Learning and iteration

The hardest part is not generating output. It is closing the loop.

For an engineering function to succeed here, it needs strong feedback systems:

  • Clear product metrics
  • Reliable telemetry
  • High-quality incident data
  • Robust experimentation practices
  • Post-release evaluation
  • Mechanisms for feeding learnings back into planning

Without those loops, autonomy becomes dangerous. The system may execute efficiently without actually improving outcomes.

This stage also demands much stronger governance. If AI participates in production-facing loops, organisations need confidence in:

  • Auditability
  • Security controls
  • Change management
  • Accountability
  • Rollback mechanisms
  • Policy enforcement
  • Human decision rights

In other words, the path to stage five runs through operational excellence as much as model capability.

The best sequencing for most organisations

If you map this as a practical transformation journey, the best path usually looks like this:

1. Build literacy and safe experimentation

Teach engineers how to use AI tools well. Define acceptable usage. Identify high-value early wins.

2. Improve the quality of engineering context

Clean up codebase structure, documentation, standards, and tests. Make the environment easier to navigate and reason about.

3. Embed AI into day-to-day workflows

Adopt context-aware assistants in the places where work actually happens. Reduce friction in coding, debugging, review, and documentation.

4. Automate bounded workflows

Start with repeatable tasks where success is measurable and risk is low. Add controls, approvals, and observability.

5. Shift team design toward orchestration

Train engineers and leaders to supervise, decompose, steer, and evaluate AI-supported work.

6. Connect engineering to product and operational feedback loops

Tie implementation systems to outcomes. Create mechanisms for learning, iteration, and governance.

That sequencing matters because each layer enables the next. Skip too far ahead and the organisation often creates the illusion of maturity without the substance.

What usually goes wrong

There are a few common failure modes in AI transformation efforts.

Buying advanced tools into an immature system

If the underlying engineering function is messy, advanced AI often produces low-trust output. Engineers then conclude the tools are overhyped, when the real issue is that the surrounding system is not ready.

Mistaking adoption for value

High usage does not necessarily mean meaningful improvement. Teams can spend a lot of time prompting tools without improving speed, quality, or outcomes.

Ignoring review costs

Generated code is not free if it increases cognitive load for reviewers or creates subtle maintenance problems later.

Automating before standardising

AI works best when workflows are explicit. If every team does things differently, automation becomes fragile.

Underinvesting in governance

The more capable the system becomes, the more important permissions, audit trails, controls, and human checkpoints become.

Failing to evolve roles

If leadership continues to reward only traditional execution patterns, the organisation will struggle to develop the skills needed for higher-maturity AI work.

A useful rule of thumb

A simple way to think about the journey is this:

  • Stage 1 teaches people
  • Stage 2 teaches the environment
  • Stage 3 teaches the workflow
  • Stage 4 teaches the organisation
  • Stage 5 teaches the system to learn

That is why the path to maturity is not just about adopting better models. It is about progressively making the engineering function more structured, legible, measurable, and steerable.

Final thought

The best path to AI maturity is not a leap to autonomy. It is a disciplined progression from assistance to orchestration to closed-loop improvement.

Engineering organisations should resist the temptation to ask, “How quickly can we automate more?”

A better question is, “What foundations do we need so that each increase in autonomy creates real leverage rather than new complexity?”

The winners in this transition will not be the teams that experiment the most noisily. They will be the ones that build engineering systems capable of absorbing AI productively, safely, and repeatedly.

If the maturity model describes where the technology is going, the organisational challenge is to ensure the engineering function is ready to go with it.