Navigating the AI landscape in software engineering

Software engineering has moved past the initial wave of AI disruption and entered a practical phase of real opportunity. Foundational models are advancing so quickly that the tools and frameworks development teams rely on are evolving month by month, changing the way we build digital products. ClearPoint CTO Rob Cleghorn and David Corbett recently sat down to unpack what this new landscape looks like in practice. Their conversation covers a range of distinct but interconnected topics, from the reality of multi-agent workflows to rethinking legacy modernisation.



Adapting to the multi-agent reality


Engineers are moving through distinct stages of AI adoption as they adapt to this new environment. We have progressed quickly from manually accepting basic code suggestions to working alongside intelligent agents within our IDEs. Now, however, we are seeing a significant shift toward command-line interfaces where engineers manage multiple AI agents operating in parallel.

This multi-agent approach significantly increases throughput, and the required engineer skillset is evolving just as quickly. Managing several agents simultaneously demands constant context switching, and engineers are experiencing real ‘AI fatigue’ as they attempt to review the massive volume of code these agents produce. As David points out, the technology is moving so fast that resting on recent knowledge is risky. “If you’re still using January models, you’re kind of slipping behind,” he notes, reinforcing how quickly foundational models are paving the way for more autonomous workflows.

 

The return of the detailed specification


To overcome the challenge of reviewing significant volumes of AI-generated code, the focus has to shift from writing code to defining exactly what we want to build. This means getting back to writing clear, detailed specifications. Over the past decade, agile methodologies often prioritised conversations over comprehensive specifications, but AI agents require a more structured approach to succeed.

If you want an autonomous agent to build a feature from start to finish without constant human intervention, you need a highly detailed Product Requirements Document and a clearly defined architectural pattern. As Rob explains, “If you think it’s easy to write a spec for AI, then you haven’t written one that can fully get what you want all the way through to build, test it and have the outputs exactly as you wanted it without any human intervention.” Taking the time to craft precise inputs directly leads to much higher quality code. We are already at a point where specialised AI models can write and carefully review these specifications long before anyone starts coding.

 


Rethinking legacy modernisation


The impact of these tools extends far beyond building new features; they are completely changing how we approach legacy modernisation. Upgrading an outdated system used to be a high-risk project known for long delays and the danger of simply copying over old architectural flaws. Today, we can use AI to read the business logic of existing systems and create a clean, modern specification for the future.

This means teams finally have the space to step back and rethink the architecture entirely, rather than just carrying forward decades of old logic. As David explains, “Modernisation becomes a lot more possible now because you could reverse engineer the spec of an existing system with the help of AI.” Beyond major rewrites, these tools are also transforming everyday maintenance. They can assess the risks of upgrading packages or shifting APIs in real time, reducing friction and keeping systems secure without forcing teams to rely on slow quarterly patch cycles.

 

Building a strong foundation


The real-world impact of these advancements is clear, but it relies heavily on strong engineering fundamentals. A recent internal survey at ClearPoint revealed complete adoption of AI tooling among our engineers, but velocity gains varied with the operational environment. We found that the true value of AI is unlocked when robust engineering practices are already in place.

Engineers working in fully enabled AI environments accelerated their output significantly compared to those working within restrictive frameworks. “Those engineers working with clients who are fully enabled with AI, were able to accelerate 48 percent more than those working in environments where AI was allowed, but it was restricted,” David shares. CI/CD pipelines, decoupled architectures, and strong observability tooling are essential for AI to truly thrive. Good engineering practices remain the foundation of good software, regardless of whether a human or an artificial intelligence is writing the code.

 

Moving forward with AI


Transitioning into this new era of software engineering requires a shift in mindset above all else. To successfully navigate this evolving landscape, engineering teams should focus on five core areas:

  • Adapt to multi-agent workflows: Prepare your engineers to manage multiple AI agents and support them through the cognitive load of context switching.
  • Prioritise detailed specifications: Recognise that precise, comprehensive requirements are the essential foundation for high-quality, autonomous AI generation.
  • Rethink legacy modernisation: Use AI to safely reverse-engineer existing business logic, creating the space to build clean, modern systems.
  • Double down on the fundamentals: Ensure your CI/CD pipelines, decoupled architectures, and observability tools are robust enough to support AI-driven velocity.
  • Foster safe experimentation: Give your development teams the time and environment to be curious, test new tools, and continually rethink what is possible for your customers.

 

Empower your digital journey