There's been no shortage of commentary recently about the impact of AI on software companies, knowledge work, even entire industries. The concern that we are struggling to fully comprehend AI’s implications for a wide range of work is legitimate. The pace of capability improvement is exponential and the implications are significant even if they are hard to predict. Consequently, much of the commentary is expressed in terms of fear or anxiety.
I think we can direct these conversations more constructively. Rather than focusing on what's under threat, the more useful question is: what actually becomes more important in a world where AI handles more of the execution work?
That reframe opens up a much more productive conversation.
I've worked across a lot of different environments over my career. National security. Big tech. Now a startup. One thing has been remarkably consistent: people are people, and we face the same challenges in relationships and communication regardless of where we are or what we're doing.
AI is remarkably good at tasks – drafting, summarising, analysing, building. The recent commentary about its acceleration captures the impressive scope of these capabilities, and both the excitement and the anxiety it's generating are understandable. Tasks that once required significant expertise and time can now be executed rapidly by AI systems. This is genuinely transformative. The efficiency gains are real and will reshape how work gets done.
But organisations don't just run on tasks. They run on shared understanding – the ability to reconstruct why a decision was made, what was agreed, and what context shaped the thinking. They run on communication and coordination across teams, across time, across the messy reality of humans working together.
That coordination layer – the “infrastructure” through which knowledge persists and stays accessible – is where our thinking about AI should also be directed, not just at features or interfaces.
This is where communication systems become central to the conversation.
Most communication tools are optimised for transmitting and receiving messages, not for preserving organisational knowledge over time or capturing the complex networks of relationships within which work actually happens. The result is familiar to every professional: context fragments, decisions become impossible to reconstruct, and enormous effort goes into re-explaining what different people understand about the same thing.
AI doesn't necessarily fix this. It operates within communication environments – often based in legacy systems or legacy assumptions – and depends on them for context. If, for example, organisational knowledge was never structured to persist, AI risks simply executing faster on incomplete understanding. Indeed, a very real scenario is that AI “papers over” the fundamental flaws in our communications ecosystems, and hides the real problems beneath a facade of efficiency.
There's another dimension that deserves attention. As AI takes on more execution, humans need to remain genuinely embedded in the thinking and decision-making. Legal accountability requires identifiable human decision-makers. Regulatory compliance demands human oversight. Effective decisions under ambiguity require human judgement about trade-offs – judgement we cannot abdicate to AI.
Our communication tools aren't peripheral to the AI story; they're a crucial part of it.
Finally, we should be asking a harder question: is AI making us better at communicating and thinking – or quietly creating dependency without building capability?
I started my career doing genuinely unglamorous work. Writing draft briefs for others to speak to. Taking the notes that others used to track decisions. Sitting in meetings feeling completely out of my depth and only getting more confident and competent with practice. Having my draft policy work rewritten and learning from why. Being completely convinced I had the right answer in a brief, only to have it changed because I hadn't considered the broader context or nuance.
That work felt junior at the time. Looking back, it was how I developed judgement. There's no shortcut to that. Experience builds perspective, and perspective builds the kind of judgement that actually matters when the stakes are high.
If AI does that entry level work instead, where does the judgement come from? How do we develop the next generation of people who can make genuinely good decisions under ambiguity and pressure if they never have the experience of doing the thing, making the mistakes, and learning what actually matters?
And judgement doesn't develop in isolation. It develops through real human communication experiences. Through the discomfort of being in the room and not knowing enough. Through getting things wrong and understanding why. Through building the kind of relationships where honest feedback is possible. If we lose those experiences, we don't just lose the judgement. We lose the human communication layer that was developing it.
Even if AI ends up communicating on our behalf, that doesn't solve the problem. It doesn't address why we communicate as human beings in the first place. Communication isn't just information transfer. It's how we build relationships, earn trust, and make sense of the world together. An AI doing that for us isn't a solution. It's a loss.
The organisations that navigate this well won't necessarily be those adopting AI fastest (though I strongly believe we all need to use and understand it, and as leaders we need to create the space for this in our teams). They'll be the ones investing in infrastructure that makes coordination, memory, and human judgement more resilient.

