
Delivery intelligence: The missing link between AI agents and strategic alignment

The way that work is done is changing. People are beginning to rely on AI-based agents to do a lot of the heavy lifting in their work. Jobs are becoming more about directing those agents than about doing the details of the work. Teams of implementers are giving way to teams of designers who manage entire products or initiatives. Collaboration between people is still crucial, but the lowest level – the purely technical collaboration – is disappearing.

AI agents are greatly accelerating the speed of work, immensely raising the stakes of misalignment. A Gallup study found that only 41% of employees know what their organisation stands for, which probably explains why Kaplan and Norton’s research found that 90% of organisations fail to execute their strategies successfully. This isn’t a niche problem — it is universal — and it presents a huge risk when employees’ actions are amplified by AI.

A blizzard of agent tools has arrived to provide “agentified” capabilities. But as they say, “a fool with a tool is still a fool”. Alignment still matters. Transparency still matters. Good decisions still matter – perhaps more than before because the speed of work has accelerated. More capability without better alignment doesn’t solve the alignment problem — it amplifies it.


We need a model for agentic and human collaborative work. I propose the term delivery intelligence.


Delivery intelligence has these traits:

  • Objectives, strategies, and execution plans and actions are all linked.
  • Fully transparent, complete line-of-sight: anyone (with visibility controls for sensitive plans) can peruse the network of linked objectives, strategies, and execution plans and actions. That visibility enables people to self-align.
  • Agent-based tools can also peruse the network of plans and actions. They can spot problems, make suggestions, and execute where they are given permission to do so. They can act intelligently.
  • Agents detect misalignment, find critical paths, and suggest ways to optimise – ways that are aligned with the values and strategies of the organisation (including the leadership styles that it desires).
  • Agents complete work that is agent-doable (including software development, analysis, and planning), when you give them permission to do so.
  • Agents are fully transparent in what they do, and you can rely on them.
  • Agents collaborate with each other and with people.
  • Employees feel responsible and autonomous, because work is goal-oriented, not task-oriented, and they are still in charge.
  • Decisions are holistic: the ability to detect misalignment makes it possible to define outcome-oriented incentives.
  • Rapid pivots are possible – instead of an interlocking mesh of tasks, people have goals, which they thoughtfully and responsibly delegate to agents.
  • People can ask “what if?” questions, and agents give informed answers, often consulting other agents before answering.
  • People become vastly more productive: it will be like everyone having a team of informed and connected geniuses working for them, available on demand.
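The linked network of objectives, strategies, plans, and actions described above can be sketched as a simple graph that both people and agents traverse. This is a minimal illustration, not any particular product's data model; the node kinds, field names, and example titles are all hypothetical.

```python
# Minimal sketch of a linked intent network. Node kinds, field names,
# and example content are hypothetical, for illustration only; a real
# platform would add ownership, visibility controls, and versioning.
from dataclasses import dataclass, field

@dataclass
class Node:
    kind: str                                     # "objective", "strategy", "plan", or "action"
    title: str
    supports: list = field(default_factory=list)  # links to the higher-level nodes this serves

def trace_to_objectives(node):
    """Follow 'supports' links upward and return the objectives this node serves."""
    if node.kind == "objective":
        return [node]
    found = []
    for parent in node.supports:
        found.extend(trace_to_objectives(parent))
    return found

def is_aligned(node):
    """A node is aligned if it traces to at least one objective."""
    return len(trace_to_objectives(node)) > 0

# An action linked through a plan and a strategy to an objective,
# plus an orphaned action an agent could flag as misaligned.
obj = Node("objective", "Expand into the SEA market")
strat = Node("strategy", "Partner-led distribution", supports=[obj])
plan = Node("plan", "Sign three regional partners", supports=[strat])
action = Node("action", "Draft partner agreement", supports=[plan])
orphan = Node("action", "Build unrelated feature")

print(is_aligned(action))  # True
print(is_aligned(orphan))  # False
```

Because every action carries its own line of sight back to an objective, an agent (or a person) can answer "why am I doing this?" by traversal rather than by guessing.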

Unfortunately, most agent-based tools are missing a key thing: the why. They do not have access to an authoritative network of objectives, strategies, and plans. The risk is that people across the organisation unleash armies of agents that are unaligned with strategic intent. That is why agent-based systems need to directly incorporate awareness of strategic intent.

Today, most do not. The agent platform must also provide governance that lets the organisation define policies constraining agent behaviour, just as policies govern human behaviour.


Awareness of intent is critical because those who execute make decisions, too. Execution is a process of myriad low-level decisions intended to turn the higher-level intentions into reality. If agents are executing, then without a backbone of authoritative intention, they are guessing – they have to sort through myriad sources of information and opinions, many of which contradict each other or represent earlier stages of thought. That’s chaos, and that leads to misalignment – potentially more rapidly than before, since agents act so quickly.

The solution

The solution must have these components:

  • An agent-based platform that enables people to collaboratively state objectives, strategies, goals, and plans – enabling both people and agents to access all of that context.
  • Governance: a system for making sure that the agents do not do things that they should not do.

Together, these make delivery intelligence possible.
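The governance component can be pictured as a gate that a proposed agent action must pass before execution. The sketch below is a toy illustration under assumed names: the policy names, action fields, and thresholds are invented here, and a real governance system would be far richer (roles, approvals, audit logs).

```python
# Hypothetical governance gate: each policy is a predicate that a
# proposed agent action must satisfy. All names and thresholds here
# are illustrative assumptions, not a real platform's API.
def make_policy_gate(policies):
    """Return a checker that reports which policies an action violates."""
    def check(action):
        violations = [name for name, rule in policies.items() if not rule(action)]
        return (len(violations) == 0, violations)
    return check

policies = {
    "no_prod_deploys": lambda a: a.get("target") != "production",
    "budget_cap": lambda a: a.get("cost", 0) <= 1000,
}
gate = make_policy_gate(policies)

ok, why = gate({"target": "staging", "cost": 200})
print(ok)       # True
ok, why = gate({"target": "production", "cost": 5000})
print(ok, why)  # False ['no_prod_deploys', 'budget_cap']
```

The point of the design is that policies are declared once, centrally, and applied uniformly, so agents are constrained by the same rules regardless of who unleashed them.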

Be wary of AI agent platforms that present a free-for-all, where agents operate without an understanding of what you are trying to accomplish, how, and why.

Editor’s note: e27 aims to foster thought leadership by publishing views from the community. You can also share your perspective by submitting an article, video, podcast, or infographic.

The views expressed in this article are those of the author and do not necessarily reflect the official policy or position of e27.


