From automation to Agentic AI: Trust starts at the data layer

When I first started my career in the early 2000s, I was given what felt like a painfully mundane task: enter postcodes into an internal business website, extract data on telephone exchanges, and type it into a spreadsheet to design metropolitan networks.

Being the bright-eyed graduate I was, I decided there had to be a better way. Over one weekend, I wrote a small Visual Basic app that took postcode inputs from the spreadsheet, screen-scraped results from the website, and parsed the data back. What once took two weeks could now be done in five minutes. That small hack freed me to spend more time with customers, and it helped win millions of dollars in deals. It also earned me a stern lecture from the IT team, who saw my tool as a denial-of-service risk.

That was my first taste of automation. Eliminating repetitive work did not just save time. It shifted where I could add value.

The rise of no-code automation

Fast forward to today, and we have tools like Zapier and n8n. Instead of writing code, anyone can drag and drop “if this, then that” rules to automate workflows. APIs that once needed developers are now accessible through simple interfaces. Internal processes that used to take hours can be chained together in minutes.

Once you have tasted automation, the next question comes quickly: Can we do more? Can we make these flows less brittle, more adaptive, even self-healing when something changes? Can we move beyond scripts toward systems that understand goals and figure out the steps themselves?

That is where I see agentic AI entering the conversation. It feels like my childhood dream of a Star Trek computer coming to life.

What agentic AI really means

Agentic AI refers to tools that do not just follow rules but act autonomously to achieve objectives. The leap is from executing tasks to making decisions. At its core, agentic AI is:

  • Goal-oriented: An agent decides what actions to take to achieve an outcome. Example: “Reduce overcrowding by 15 percent this year” might lead it to adjust escalator directions, modify turnstile access, or recommend alternative exits.
  • Adaptive and flexible: If one method fails, it explores alternatives instead of stopping with an error code.
  • Reasoning-capable: It weighs trade-offs, infers missing information, and runs simulations on historical data to propose better steps.
  • Autonomous: It runs continuously and can initiate workflows without explicit human triggers. A train delay, for example, could prompt an agent to adjust signage and entry gates to prevent bottlenecks.
  • Multi-tool orchestration: It selects and connects tools across systems, even ones you did not hard-code to work together.
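The contrast between rule-following and goal-seeking can be made concrete. Below is a minimal sketch, in Python, of a goal-oriented decision step for the overcrowding example above. All names, numbers, and the toy forward model are illustrative assumptions, not any real station-management system.

```python
from dataclasses import dataclass

# Hypothetical station state; field names are illustrative, not from a real system.
@dataclass
class StationState:
    crowd_density: float   # people per square metre on the concourse
    inbound_rate: float    # passengers entering per minute

# Candidate interventions and their assumed effect on density (illustrative).
ACTIONS = {
    "do_nothing": 0.0,
    "open_alternate_exit": -0.3,
    "reverse_escalator": -0.4,
    "restrict_turnstiles": -0.6,
}

def predicted_density(state: StationState, action: str) -> float:
    """Toy forward model: estimate crowding after applying an action."""
    return max(0.0, state.crowd_density + 0.1 * state.inbound_rate / 10 + ACTIONS[action])

def choose_action(state: StationState, target_density: float) -> str:
    """Goal-oriented step: pick the mildest action predicted to meet the goal."""
    # Prefer less disruptive interventions first; escalate only if the goal is missed.
    for action in ("do_nothing", "open_alternate_exit",
                   "reverse_escalator", "restrict_turnstiles"):
        if predicted_density(state, action) <= target_density:
            return action
    return "restrict_turnstiles"  # fall back to the strongest lever

state = StationState(crowd_density=2.1, inbound_rate=80)
print(choose_action(state, target_density=1.8))
```

The point of the sketch is the inversion of control: a rule engine would encode "if density > X, close gates", whereas the agent is handed a target and searches over actions itself.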

If automation is about rules, agentic AI is about goals.

The data bottleneck

For all the excitement, one truth remains: an agent is only as good as the data it receives.

Take smart cities. Train stations are busy, complex hubs that are prone to overcrowding and often need manual intervention when something goes wrong. CCTV is everywhere, but cameras have blind spots, struggle in low light, and cannot reliably capture depth. The result is incomplete, sometimes misleading data.

At Curium we deployed LiDAR sensors across European stations to provide accurate insights into footfall, crowding, and hazards. An investor once asked me, "So what? Is this just a nice-to-have?" The question stuck with me. Not everyone sees why LiDAR adds value, or how agentic systems depend on high-fidelity baseline data.

When you add LiDAR to CCTV, the picture changes. You get full coverage, depth perception, and real-time accuracy. That coherent data unlocks predictive analysis: what is normal, what is not, and how conditions are evolving. With agentic AI, you can then test interventions dynamically: shift escalator directions, close entry gates, reroute passengers, and watch outcomes in real time.
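To illustrate why coherent data unlocks prediction, here is a minimal sketch of fusing two sensor counts and flagging a departure from a learned baseline. The weighting, zone counts, and threshold are assumptions for illustration, not Curium's actual pipeline.

```python
import statistics

def fused_count(lidar_count: int, cctv_count: int, lidar_weight: float = 0.8) -> float:
    """Blend the two sources, weighting LiDAR more heavily since it is depth-accurate."""
    return lidar_weight * lidar_count + (1 - lidar_weight) * cctv_count

def is_anomalous(history: list[float], current: float, z_threshold: float = 3.0) -> bool:
    """Flag a reading that deviates sharply from the learned baseline."""
    mean = statistics.mean(history)
    std = statistics.stdev(history)
    return abs(current - mean) > z_threshold * std

# Baseline footfall counts for a zone at this time of day (illustrative).
baseline = [118.0, 122.0, 119.0, 121.0, 120.0]
now = fused_count(lidar_count=180, cctv_count=150)
print(is_anomalous(baseline, now))
```

With a trustworthy fused count, "what is normal" becomes a statistical baseline rather than an operator's gut feel, and an agent can act on deviations from it.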

What changes in practice

Forward-thinking operations managers are starting to see the opportunity. With richer data, agentic AI moves from buzzword to practical decision support. It is not only about efficiency. It is also about safety and resilience.

Imagine:

  • Highway systems that adjust traffic lights and lane allocations dynamically.
  • Public transport hubs that respond to surges before they spill into dangerous overcrowding.
  • Driverless systems that adapt in real time to disruptions.

These are not distant hypotheticals; they are early signals of how work, infrastructure, and customer experience will be reshaped.

What leaders should pay attention to

The implications go beyond technology. Teams will shift from doing tasks to overseeing autonomous systems. Culture will need to balance trust in agents with accountability when mistakes happen. Regulators will ask the hard question: if an AI agent makes the wrong call, who is responsible?

Practical steps to take now:

  • Start small: Pilot in low-risk domains with clear success criteria.
  • Design for oversight: Keep a human in the loop, define escalation rules, and log decisions for review.
  • Harden the data layer: Invest in sensor quality, data validation, and observability before you attempt autonomy.
  • Focus on customer trust: Efficiency gains mean little if people do not feel safe relying on the system.
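The "design for oversight" step above can be sketched as an approval gate: the agent proposes, policy decides whether a human must approve, and every decision is logged for review. The risk categories, thresholds, and action names here are illustrative assumptions.

```python
import time

# Actions safe enough to auto-approve when the agent is confident (assumption).
LOW_RISK = {"adjust_signage", "update_display"}
AUDIT_LOG = []  # decision log kept for later review

def gate(action: str, confidence: float, approve_fn) -> bool:
    """Auto-approve only low-risk, high-confidence actions; otherwise escalate."""
    if action in LOW_RISK and confidence >= 0.9:
        approved, route = True, "auto"
    else:
        # Escalation rule: anything risky or uncertain goes to a human operator.
        approved, route = approve_fn(action), "human"
    AUDIT_LOG.append({
        "ts": time.time(), "action": action,
        "confidence": confidence, "route": route, "approved": approved,
    })
    return approved

# Usage: closing an entry gate always escalates to the operator callback.
executed = gate("close_entry_gate", confidence=0.95, approve_fn=lambda a: False)
print(AUDIT_LOG[-1]["route"])
```

The design choice worth noting is that the log records escalation route and confidence alongside the outcome, which is exactly what a regulator, or an incident review, will ask for.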

We are already seeing first steps in Singapore with fixed-route autonomous shuttles in Punggol. The lesson is simple: the path to autonomy starts with reliable data and measured rollouts.

Trust starts from the baseline

I often show a simple pyramid when pitching autonomous systems. At the base is sensor data, above it sits automation and agentic AI for perception and decision making, and at the top is public trust.

If the base is weak, the whole pyramid crumbles. Faulty data leads to flawed decisions, which lead to failures that erode trust. Get the basics right, namely high-fidelity data acquisition and validation, and the rest of the pyramid can stand.

Public perception ultimately rests on outcomes: did the car crash, was the station overcrowded? By building robust data foundations before handing more authority to AI agents, we stand a better chance of not only achieving success but also earning lasting trust in the systems we build.

Editor’s note: e27 aims to foster thought leadership by publishing views from the community. Share your opinion by submitting an article, video, podcast, or infographic.

Image courtesy: Canva Pro