Blog

How to Mature an Engineering Function Through the AI Maturity Curve

In [[The AI Maturity Model]] I described the stages of AI maturity and the value each stage unlocks.

If the AI maturity model describes the stages of capability, the next question is more practical: how should an engineering organisation actually move through them?

This is where many teams get stuck. They can see the direction of travel, but not the path. They know AI will change software development, but they are unsure how to adopt it without creating chaos, security risks, shallow productivity theatre, or tools that engineers quietly ignore.

The right approach is not to jump straight to full autonomy. It is to deliberately mature the engineering function, one layer at a time.

The best path is cumulative. Each stage creates the conditions for the next.

Start with the function, not the tooling

The first mistake organisations make is treating AI adoption as a tooling decision. They ask:

  • Which model should we use?
  • Which coding assistant should we buy?
  • Which agent framework looks most advanced?

Those are reasonable questions, but they are secondary.

The primary question is: what kind of engineering function are we trying to build?

If your organisation has poor documentation, inconsistent standards, weak testing, unclear ownership, and fragmented delivery processes, AI will not fix those problems. In many cases, it will amplify them. More output into a messy system usually creates more mess.

So the path to maturity starts with a simple principle:

  • AI maturity depends on engineering maturity

The better your systems of work, the easier it is for AI to participate usefully in them.

Stage 1: Standardise the basics before scaling AI usage

At the first stage, most engineers are using standalone chat tools in an ad hoc way. That is fine as an entry point, but the goal here should not just be experimentation. It should be structured learning.

At this stage, leaders should focus on three things:

  • Creating safe usage patterns
  • Identifying high-value use cases
  • Establishing a shared language for what good AI-assisted work looks like

Typical high-value use cases include:

  • Explaining unfamiliar code or concepts
  • Drafting boilerplate
  • Generating tests
  • Writing documentation
  • Brainstorming implementation options
  • Summarising incidents or technical discussions

What matters most is not squeezing maximum ROI out of the first tools. It is building familiarity and trust without encouraging sloppy habits.

That means creating lightweight guidance around:

  • What data can and cannot be shared
  • When outputs must be reviewed manually
  • Which use cases are low-risk versus high-risk
  • How engineers should validate generated code
  • How to prompt in a way that produces useful results
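One lightweight way to make this guidance concrete is to encode it as a small, reviewable policy rather than a wiki page nobody reads. The sketch below is purely illustrative: the risk tiers, use-case names, and rules are invented assumptions, not a standard.

```python
# Hypothetical usage policy -- tiers, use cases, and rules are illustrative.
RISK_TIERS = {
    "low": {"explain_code", "draft_boilerplate", "summarise_discussion"},
    "medium": {"generate_tests", "write_documentation"},
    "high": {"generate_production_code", "modify_infrastructure"},
}

REVIEW_REQUIRED = {"medium", "high"}   # outputs must be reviewed manually
APPROVAL_REQUIRED = {"high"}           # needs an explicit second approver


def tier_for(use_case: str) -> str:
    """Return the risk tier for a use case, defaulting to high if unknown."""
    for tier, cases in RISK_TIERS.items():
        if use_case in cases:
            return tier
    return "high"  # unknown use cases are treated as high-risk by default


def checks_for(use_case: str) -> list:
    """List the checks an engineer must apply before using the output."""
    tier = tier_for(use_case)
    checks = []
    if tier in REVIEW_REQUIRED:
        checks.append("manual review")
    if tier in APPROVAL_REQUIRED:
        checks.append("second approver")
    return checks
```

The point is not the code itself but the property it gives you: the guidance is versioned, debatable in review, and defaults to caution when a use case was never classified.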

This stage should also surface reality. Which engineers are finding real leverage? Which tasks are benefiting? Where is the tool underperforming? What failure modes are recurring?

The output of stage one is not transformation. It is organisational learning.

Stage 2: Embed AI into the real development environment

Once teams have basic literacy, the next move is to bring AI into the actual context of engineering work.

This is where many organisations start to see meaningful gains. Context-aware tooling inside the IDE, editor, code review flow, documentation system, or internal knowledge environment can dramatically reduce friction.

But successful adoption at this stage requires more than enabling a feature.

You need the surrounding engineering environment to be legible.

AI becomes much more useful when the organisation has:

  • Clear repository structure
  • Consistent naming conventions
  • Good internal documentation
  • Reasonably maintained tests
  • Explicit architectural patterns
  • Well-formed tickets and acceptance criteria

This is an important inflection point. Teams often think they are adopting better AI, when in reality they are being forced to confront the quality of their engineering system.

If the codebase is incoherent, the assistant will be incoherent. If the documentation is outdated, the assistant will inherit outdated assumptions. If delivery practices are inconsistent, the assistant will struggle to act reliably inside them.

So the best path through stage two is to invest in context quality as much as tool quality.

That means:

  • Cleaning up critical documentation
  • Improving repository hygiene
  • Standardising common workflows
  • Tightening testing practices
  • Making tickets more structured and actionable

The objective is simple: make the development environment interpretable by both humans and machines.
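As one small illustration of what "interpretable by both humans and machines" can mean in practice, a ticket shape can be linted mechanically. This is a hypothetical sketch; the field names and checks are assumptions, not a prescribed schema.

```python
from dataclasses import dataclass, field

@dataclass
class Ticket:
    # Hypothetical ticket shape -- field names are illustrative.
    title: str
    context: str                      # why the change is needed
    acceptance_criteria: list = field(default_factory=list)
    affected_components: list = field(default_factory=list)


def lint_ticket(ticket: Ticket) -> list:
    """Return the gaps that make a ticket hard for humans or agents to act on."""
    problems = []
    if not ticket.context.strip():
        problems.append("missing context")
    if not ticket.acceptance_criteria:
        problems.append("no acceptance criteria")
    if not ticket.affected_components:
        problems.append("no affected components listed")
    return problems
```

A check like this costs almost nothing, and the same structure that helps an assistant act on a ticket also helps the engineer picking it up cold.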

Stage 3: Introduce workflow automation in bounded areas

Only after stage two is working well should an engineering function push seriously into agentic workflows.

This is where the AI system starts executing sequences of work rather than just assisting on individual tasks. But the key to success here is boundedness.

Do not begin with your most complex systems or your most ambiguous projects. Start where the workflow is repetitive, well-scoped, and easy to verify.

Good candidates include:

  • Small bug fixes
  • Test generation and repair
  • Refactoring within defined constraints
  • Dependency updates
  • Documentation generation
  • Migration tasks with clear patterns
  • Ticket-to-draft-PR workflows

At this stage, the engineering function needs stronger operational guardrails. Once AI starts doing multi-step work, the risks shift.

You need clarity on:

  • What the agent is allowed to do
  • Which environments it can access
  • What approval gates exist
  • How actions are logged
  • How changes are reviewed
  • What happens when the workflow fails

This is also the point where measurement becomes more important. You want to know not just whether the workflow completes, but whether it completes usefully.

Track things like:

  • Time saved
  • Review burden created
  • Defect rates
  • Rework rates
  • Test pass rates
  • Adoption by engineers
  • Frequency of intervention
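Several of these signals fall directly out of workflow records. A hypothetical sketch, assuming each run records whether it completed, whether a human intervened, and whether the change was later reworked:

```python
# Hypothetical workflow records -- the field names are illustrative.
runs = [
    {"completed": True, "intervened": False, "reworked": False},
    {"completed": True, "intervened": True, "reworked": False},
    {"completed": False, "intervened": True, "reworked": False},
    {"completed": True, "intervened": False, "reworked": True},
]


def rate(records, key):
    """Fraction of records where the given flag is set."""
    return sum(r[key] for r in records) / len(records) if records else 0.0


completion_rate = rate(runs, "completed")      # did the workflow finish?
intervention_rate = rate(runs, "intervened")   # how often did a human step in?
# Rework only makes sense over runs that actually completed.
rework_rate = rate([r for r in runs if r["completed"]], "reworked")
```

Watching how these rates trend is what distinguishes "the workflow completes" from "the workflow completes usefully".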

The goal of stage three is not maximal autonomy. It is reliable partial autonomy.

Stage 4: Redesign roles around supervision and orchestration

By the time an organisation reaches stage four, the challenge is no longer mainly technical. It becomes organisational.

When AI systems can run multiple workstreams or subagents, the limiting factor becomes coordination. Engineering leaders need to rethink how work is structured, reviewed, and owned.

This is the stage where the engineering function starts to shift from direct execution toward orchestration.

That does not mean engineers stop building. It means their comparative advantage moves upward into areas such as:

  • Scoping and decomposition
  • Architecture and trade-offs
  • Constraint setting
  • Risk management
  • Review and quality assurance
  • Cross-functional alignment
  • Steering parallel AI workstreams

This requires intentional role evolution.

Staff and senior engineers, in particular, become increasingly valuable as people who can:

  • Define the shape of problems
  • Set standards for agent behaviour
  • Review outputs across multiple layers
  • Decide when exploration is needed instead of execution
  • Resolve ambiguity that autonomous systems cannot handle cleanly

Managers also need to adapt. Performance systems built around visible manual output may start to lag reality. The most effective engineers may be the ones generating leverage through orchestration, not just the ones writing the most code directly.

To mature successfully through this stage, organisations should invest in:

  • New review practices
  • Better interfaces for steering AI workflows
  • Clear ownership models
  • Escalation paths for ambiguous work
  • Strong technical standards
  • Training in decomposition and supervision

At this point, AI maturity becomes tightly linked to leadership maturity.

Stage 5: Build the feedback loops for end-to-end development

The final stage is not simply “more automation.” It is the integration of AI into the full product development loop.

That means engineering can no longer think in isolation. Mature end-to-end systems require connection between:

  • Product intent
  • Technical implementation
  • Test and validation
  • Deployment and operations
  • User outcomes
  • Learning and iteration

The hardest part is not generating output. It is closing the loop.

For an engineering function to succeed here, it needs strong feedback systems:

  • Clear product metrics
  • Reliable telemetry
  • High-quality incident data
  • Robust experimentation practices
  • Post-release evaluation
  • Mechanisms for feeding learnings back into planning

Without those loops, autonomy becomes dangerous. The system may execute efficiently without actually improving outcomes.

This stage also demands much stronger governance. If AI participates in production-facing loops, organisations need confidence in:

  • Auditability
  • Security controls
  • Change management
  • Accountability
  • Rollback mechanisms
  • Policy enforcement
  • Human decision rights

In other words, the path to stage five runs through operational excellence as much as model capability.

The best sequencing for most organisations

If you map this as a practical transformation journey, the best path usually looks like this:

1. Build literacy and safe experimentation

Teach engineers how to use AI tools well. Define acceptable usage. Identify high-value early wins.

2. Improve the quality of engineering context

Clean up codebase structure, documentation, standards, and tests. Make the environment easier to navigate and reason about.

3. Embed AI into day-to-day workflows

Adopt context-aware assistants in the places where work actually happens. Reduce friction in coding, debugging, review, and documentation.

4. Automate bounded workflows

Start with repeatable tasks where success is measurable and risk is low. Add controls, approvals, and observability.

5. Shift team design toward orchestration

Train engineers and leaders to supervise, decompose, steer, and evaluate AI-supported work.

6. Connect engineering to product and operational feedback loops

Tie implementation systems to outcomes. Create mechanisms for learning, iteration, and governance.

That sequencing matters because each layer enables the next. Skip too far ahead and the organisation often creates the illusion of maturity without the substance.

What usually goes wrong

There are a few common failure modes in AI transformation efforts.

Buying advanced tools into an immature system

If the underlying engineering function is messy, advanced AI often produces low-trust output. Engineers then conclude the tools are overhyped, when the real issue is that the surrounding system is not ready.

Mistaking adoption for value

High usage does not necessarily mean meaningful improvement. Teams can spend a lot of time prompting tools without improving speed, quality, or outcomes.

Ignoring review costs

Generated code is not free if it increases cognitive load for reviewers or creates subtle maintenance problems later.

Automating before standardising

AI works best when workflows are explicit. If every team does things differently, automation becomes fragile.

Underinvesting in governance

The more capable the system becomes, the more important permissions, audit trails, controls, and human checkpoints become.

Failing to evolve roles

If leadership continues to reward only traditional execution patterns, the organisation will struggle to develop the skills needed for higher-maturity AI work.

A useful rule of thumb

A simple way to think about the journey is this:

  • Stage 1 teaches people
  • Stage 2 teaches the environment
  • Stage 3 teaches the workflow
  • Stage 4 teaches the organisation
  • Stage 5 teaches the system to learn

That is why the path to maturity is not just about adopting better models. It is about progressively making the engineering function more structured, legible, measurable, and steerable.

Final thought

The best path to AI maturity is not a leap to autonomy. It is a disciplined progression from assistance to orchestration to closed-loop improvement.

Engineering organisations should resist the temptation to ask, “How quickly can we automate more?”

A better question is, “What foundations do we need so that each increase in autonomy creates real leverage rather than new complexity?”

The winners in this transition will not be the teams that experiment the most noisily. They will be the ones that build engineering systems capable of absorbing AI productively, safely, and repeatedly.

If the maturity model describes where the technology is going, the organisational challenge is to ensure the engineering function is ready to go with it.