AI & Future

    What Remains When AI Takes Over.

    On the human contribution in a world where machines think faster, cheaper and often more precisely.

    There is a question I have heard in nearly every executive meeting in recent months — sometimes asked openly, sometimes only audible as an undertone: if AI takes over more and more of what we used to call "knowledge work," what is left for humans?

    The question sounds rhetorical. It isn't.

    It has a very real backdrop: tools like GPT-5, Claude or Gemini today complete tasks in seconds that consultancies used to need weeks for. Market analyses, pitch structures, code, reports, translations, legal research, model validation. Much of it not just faster, but qualitatively comparable — sometimes better.

    The honest answer that's rarely spoken out loud

    A lot of what we have called "knowledge work" was, in reality, not thinking. It was processing. Pattern recognition. Reproduction. Synthesis from known sources. Exactly what large language models excel at.

    A substantial part of white-collar bureaucracy — including in consulting, banking, insurance — consisted of tasks labelled as "intellectual work" but rarely involving real judgement. These tasks aren't losing value because AI is "so good." They are losing value because they were never as demanding as we treated them.

    That's uncomfortable. But if you take it seriously, the interesting conversation only starts there.

    What machines structurally cannot do

    There are three areas where machines — for the foreseeable future and probably fundamentally — are not competition:

    First: carrying responsibility. An AI can give a recommendation. It cannot, in any legal, ethical or human sense, be accountable for what follows from that recommendation. Responsibility is bound to subjectivity. To a being that experiences consequences. Machines experience nothing.

    Second: holding relationships. Trust forms between people. It doesn't arise from the correct answer but from a relationship that is held — from the sense that someone perceives me as the person I am, in the situation I'm in. That is precisely not algorithmic.

    Third: making meaning. An AI can summarise values. It cannot generate meaning. Meaning emerges in the question of what people get up for in the morning, what binds them, what they believe in. That question must be answered by humans — and in organisations, it's usually decided by leadership, not by strategy decks.

    What this means concretely for leadership

    If these three areas hold — and I believe they do — the meaning of leadership shifts dramatically.

    In a world where knowledge work is done en masse by machines, leadership is not less important. It becomes more important. But differently.

    What loses importance: mastery of detail knowledge. Information gathering. Steering standard processes. Machines do all that better.

    What gains importance: clarity in ambiguity. Responsibility under uncertainty. Relationship at speed. Judgement when the data contradict each other. Meaning when the market goes wild.

    In other words: leadership returns to what it actually is. A deeply human activity. Not a second, worse machine.

    The reflex that's currently wrong

    In many organisations I currently see a reflex pointing in exactly the wrong direction: trying to make leaders "AI-fit" by sending them into tool training. Prompt engineering. Co-pilot workshops. Use-case inventories.

    That isn't wrong. But it's the smaller half.

    The bigger half would be: making leaders fit for what machines cannot do. More time for reflection, not less. More space for real conversations, not less. More training in perception, conflict resolution, self-regulation. More investment in what is harder to scale.

    Otherwise a paradoxical situation emerges: an organisation has thousands of AI licences, but no leaders left who can take responsibility in a difficult meeting.

    What leaders should take from this phase

    I'm often asked what the most important recommendation for leaders is in this phase. My answer has solidified over the past twelve months:

    Invest in exactly the capabilities that won't be replaced. Relationship. Judgement. Willingness to take responsibility. Self-regulation. Meaning-making.

    These capabilities have always mattered. They were just not scarce enough to create competitive advantage. That is changing.

    In a world where everyone has access to intelligent machines, what connects humans becomes the differentiator. Not the knowledge. The relationship to it.

    An unfamiliar perspective on competitive advantage

    Whoever bets on AI today doesn't gain a competitive advantage. Whoever doesn't, loses one.

    The actual competitive advantage emerges elsewhere: in the question of whether your organisation builds a culture in which people can deliver their best contributions — alongside the machines, not in competition with them.

    That demands leadership that is more than steering. It demands clarity, presence and the willingness to confront one's own impact.

    What remains when AI takes over? Not the tasks. The question of what we're doing all of this for. And the people who embody that.

    That's more than enough work. Probably for an entire generation of leaders.
