Agentic Coding Compresses Cognitive Effort

Wasn't AI supposed to make software development easier?

Yes: with the right guardrails and practices, routine programming tasks can now be handed off to an LLM.

And no: something else is happening at the same time.

I've heard from a few developers now that the work has not become easier. On the contrary: with the actual programming taken care of, all that's left are the harder decisions.

  • What architecture do we want to implement so things scale reliably?
  • What edge cases are we not covering?
  • Does this actually solve the customer problem?

Previously, you'd see these questions sprinkled throughout a developer's day. You think it through, maybe try a few things, and then decide on a direction.

And then — you write code for a while.

Ideally, you get into flow. It's an enjoyable state, and at the end of it you feel proud of the progress you've made. Your brain performed at an optimal challenge level and then got rewarded at the end. You took a quick break, and then it was time for the next hard decision.

Only Hard Decisions

Both my personal experience and the reports I'm hearing from other engineers show that the nice relaxing middle part is now: gone.

You make a decision, maybe assisted by some research the LLM has done for you, and then you prompt it to do the thing. In the meantime you context-switch to a different task or even a different project. There, again you think through a question and make a difficult decision. You send off the prompt. You context-switch again. Rinse, repeat.

This is a new, very different kind of work.

We get more done in the same time, and it often is exciting and rewarding. Exhilarating, even. But it's also cognitively very, very taxing.

And this is not just a personal anecdote – there's in-progress research that is finding exactly this effect. A huge part of the effect might be due to our existing practices not being ready yet for the new tools we're rolling out.

This might also explain why we've been seeing vastly different outcomes in studies on the effects of AI adoption. By now, many engineers use LLMs daily, but few have a clear workflow for using them effectively. We're adopting tools without adopting the practices that make them sustainably useful: because we're still busy discovering those.

The New Trap: Cognitive Debt

So only hard decisions are left, and while the agents figure things out for us we context-switch from task to task and project to project.

We get things done — or feel like we do — and are incredibly exhausted. We're constantly wired, enjoying the rush of momentum, always moving forward.

We lose sight of the details. But we're moving forward, so ... this might be fine?

Margaret-Anne Storey has called this out: we're building up cognitive debt. She reminds us of Peter Naur's 1985 article Programming as Theory Building.

[...] a program is more than its source code. Rather a program is a theory that lives in the minds of the developer(s) capturing what the program does, how developer intentions are implemented, and how the program can be changed over time.

We take on cognitive debt — losing track of the gap between what we believe the program does and what it actually does — to move faster. As with technical debt, it's a trade-off you can make for a while, but if you never pay it back, it can grind you to a halt and leave you unable to move either forward or backward.

From what I've experienced in personal projects and at work, we'll need to learn — or re-learn — an oscillating motion around the essence of what our software is supposed to be doing.

We let the agents do their thing, implementing what we prompted them to do. We move fast and lose sight of what it actually is that they've built. Tests, linters, types — all these guardrails help us not to stray too far technically, but understanding and capturing the actual product intent, with all its implications on both architecture and user experience, has always been a different beast.

We have to learn to notice this drift in our mental model and correct for it, like a float valve always seeking equilibrium but never quite hitting it precisely.

It's a tough balancing act — one that will ask for a new kind of tolerance for ambiguity from engineers. This is where I think generalist product engineers will shine in the next few years.

Old Practices and New Tools

Many of these discussions remind me of an Extreme Programming course I took in university many moons ago. The agile label didn't have the connotation of excessive corporate process yet. Things had a genuinely pragmatic and also very unstructured flair.

Going in, we thought that this project would be a super relaxing week.

However, after a few days of working in a simulated XP project — with a daily standup (as in, actually standing during the meeting), a quick round of planning poker, lots of pair programming, talking to a customer proxy, and so on — it became pretty clear: this requires a huge amount of discipline to do well.

This surprised everyone. Despite the unstructured vibe, engineering still required a huge amount of structure. But instead of following a static process, we had to create the structure ourselves: invent it — or improvise it, really — in the moment. That meant the structure was a far better match for the problem of the hour, but it also took much more effort to keep iterating on it.

I see a parallel to how we work with LLMs today, or, better: how we learn to work with LLMs. The external structure is minimal: there are no proven practices yet. We are co-discovering and co-inventing them as an industry as we go.

Here, too, structure has to come from within. From the workflows that we build, from the terminology that we invent, from building an intuition and heuristics about what works and what doesn't when steering agents.

The agile manifesto and XP created a set of relatively generic values, constraints, and practices that expert practitioners would translate into project- and even task-specific structures. This enabled them to ship high-quality software at high speeds.

What will the values, constraints, and practices be that come out at the end of this Cambrian explosion of working with LLM-based agents? I have a suspicion that we'll be hearing echoes of agile development in its original conception.

Let's find out — and shape them — together.
