
AI is everywhere, finding applications in unexpected places. It also sparks conflicting arguments about how it should be applied and what safety implications follow.
Take this thoughtful post I found on LinkedIn, penned by Heman Gorgi. He reflects on how Elon Musk has justified using a single sensor type by claiming sensor fusion poses safety risks. To me, that position feels self-serving given Tesla’s decision to drop additional sensors in favour of camera-only solutions.
Gorgi contrasts this by explaining how other operators are deploying multi-modal sensor suites and tailoring them to specific environments. It’s worth a read.
Why fusion matters
Different sensors bring different strengths. Cameras capture detail, but they are essentially 2D. LiDAR, radar, and IMUs add depth, velocity, and geometry. Together, they create a fuller picture of the world.
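To give a concrete taste of what "a fuller picture" means in code, here is a minimal sketch (Python with NumPy; the function and variable names are illustrative, not from any production stack) of one common fusion step: giving a flat camera detection real depth by projecting LiDAR points into the image.

```python
import numpy as np

def depth_for_detection(bbox, lidar_points, K):
    """Attach a depth estimate to a 2D camera detection using LiDAR.

    bbox: (x_min, y_min, x_max, y_max) in pixels.
    lidar_points: (N, 3) points already transformed into the camera frame.
    K: 3x3 camera intrinsic matrix.
    """
    pts = lidar_points[lidar_points[:, 2] > 0]   # keep points in front of the camera

    # Pinhole projection: u = fx*X/Z + cx, v = fy*Y/Z + cy.
    uv = (K @ pts.T).T
    uv = uv[:, :2] / uv[:, 2:3]

    x_min, y_min, x_max, y_max = bbox
    inside = ((uv[:, 0] >= x_min) & (uv[:, 0] <= x_max) &
              (uv[:, 1] >= y_min) & (uv[:, 1] <= y_max))
    if not inside.any():
        return None                              # no LiDAR support for this box

    # Median range is robust to stray background points.
    return float(np.median(pts[inside, 2]))
```

The camera still supplies the semantics (what the object is); the LiDAR supplies the geometry (where it is). Neither modality can do both alone.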
Ignoring this is not just a technical choice. It has real-world consequences. A recent lawsuit shows how dismissing sensor fusion can damage a company’s share price and erode public trust. Even Tesla’s own engineers have highlighted flaws in relying on cameras alone, as seen in this WSJ video at the 6m15s mark.
Disagreements between sensors should not be viewed as liabilities. They are often early-warning signals. When one modality is wrong and another is right, catching that conflict is what makes the system resilient. AI can arbitrate those disagreements, correct the faulty sensor, or initiate safety measures to bring the system to a graceful stop.
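A toy sketch of that arbitration logic might look like the following (Python; sensor names, tolerances, and return values are invented for illustration, not taken from any real autonomy stack):

```python
import statistics

def arbitrate(radar_m, lidar_m, camera_m, tol_m=2.0):
    """Cross-check three independent range estimates for one object.

    Small disagreement is normal noise; a large outlier is an early
    warning about that sensor, not a reason to distrust fusion.
    """
    estimates = {"radar": radar_m, "lidar": lidar_m, "camera": camera_m}
    consensus = statistics.median(estimates.values())
    outliers = [name for name, v in estimates.items()
                if abs(v - consensus) > tol_m]
    if not outliers:
        return consensus, None, "proceed"
    if len(outliers) == 1:
        # Two sensors agree: keep driving, flag the outlier for recalibration.
        return consensus, outliers[0], "flag_for_recalibration"
    # No reliable majority: degrade gracefully instead of guessing.
    return consensus, outliers, "graceful_stop"
```

The point is that the disagreement itself is the signal: two healthy modalities expose the third. A single-sensor system has no way to perform this check at all.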
A short history of the debate
This argument is not new. In the early days of autonomous driving, Waymo championed LiDAR as essential. Tesla pushed for a camera-first approach. Mobileye staked out a middle ground, building perception models and sensors that could adapt to both.
The divergence reflected two philosophies: design for cost and scalability, or design for safety and redundancy. Back then, LiDAR units cost around $30,000, about the price of an entire car, so resistance from manufacturers was understandable. Prices have since fallen and continue to fall, but entry-level LiDAR units still remain more expensive than cameras.
Musk’s argument is that multiple perception models built from different sensors can lead to conflicting “realities” for hazard perception and object detection. That concern, in my opinion, is exactly why sensor fusion matters: it resolves those conflicts into a single, coherent view of the world, effectively an AI-enabled virtual super-sensor. This is also where AI at the edge shows its value. Fusing and calibrating data in real time reduces hardware complexity and simplifies decision-making for higher-level AI modules.
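The statistical core of that super-sensor idea is simple. Here is a minimal sketch using inverse-variance weighting, the same principle a Kalman filter applies (the numbers are illustrative, not real sensor specifications):

```python
def fuse(z1, var1, z2, var2):
    """Inverse-variance fusion of two noisy estimates of the same quantity.

    The fused estimate has lower variance than either input, which is
    the statistical core of the "virtual super-sensor" idea.
    """
    w1, w2 = 1.0 / var1, 1.0 / var2
    fused = (w1 * z1 + w2 * z2) / (w1 + w2)
    return fused, 1.0 / (w1 + w2)

# Radar is noisy but robust in rain; assume 42.0 m with variance 1.0.
# LiDAR is precise in clear air; assume 41.2 m with variance 0.04.
est, var = fuse(42.0, 1.0, 41.2, 0.04)
# est ≈ 41.23 m, var ≈ 0.038: the coherent view leans on the more
# trustworthy sensor while still using both.
```

Fusion done this way does not produce conflicting realities; it produces one estimate that is strictly more certain than any single sensor's.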
AI at the edge in practice
At my own company, Curium, we use AI at the edge not to create flashy features but to enable real-time sensor fusion and calibration.
This capability could, in future, help companies such as Aurora, Kodiak Robotics, Zoox, and Waymo keep their fleets safely on the road even when their sensors are affected by debris, vibration, or heat over the course of a typical day. When a sensor drifts, our AI algorithms detect the issue and bring it back within safe operating parameters instantly.
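To give a flavour of what such a loop can look like, here is a toy sketch of online drift detection and correction (a generic illustration only, not Curium's algorithm; the smoothing factor and tolerance are invented):

```python
class DriftMonitor:
    """Toy online drift detector and corrector for a single sensor bias.

    Track the running residual between a sensor and a trusted reference,
    correct for it, and raise a flag when the bias leaves tolerance.
    """

    def __init__(self, alpha=0.05, tolerance=0.5):
        self.alpha = alpha          # smoothing factor for the running residual
        self.tolerance = tolerance  # allowed bias before we flag the sensor
        self.bias = 0.0             # current estimate of the sensor's drift

    def update(self, sensor_value, reference_value):
        residual = sensor_value - reference_value
        # Exponentially weighted moving average of the residual.
        self.bias = (1 - self.alpha) * self.bias + self.alpha * residual
        corrected = sensor_value - self.bias
        return corrected, abs(self.bias) > self.tolerance
```

Fed one reading per frame, the corrected value stays usable even while the flag calls the sensor in for maintenance.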
This is the hidden side of AI. It is not about chatbots or voice assistants. It is the routine work of checking cameras, LiDAR, radar, and IMUs frame after frame. It ensures that what is in view is where it should be and corrects when it is not. This is the deepest kind of deep tech. It does not make for flashy videos on YouTube. It rarely registers in public perception, but it does create the clean data environment that all other systems depend on.
Beyond autonomous vehicles
The power of AI at the edge extends well beyond cars and trucks.
- Smart cities: Crowd analytics systems use edge AI to track flows of people in real time. Instead of sending every frame to the cloud, AI interprets the scene locally. This preserves privacy while still enabling insights like congestion alerts or evacuation planning.
- Healthcare: Portable imaging devices and bedside monitors now embed AI directly on the device. Critical alerts, such as a patient’s oxygen level dropping or a fall being detected, are raised immediately without waiting for cloud connectivity.
- Manufacturing: Edge AI keeps factories running safely. By fusing data from vibration sensors, cameras, and temperature gauges, it can detect when a machine drifts out of alignment and trigger corrections before defective products are produced or systems fail (a minimal sketch follows this list).
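As a concrete illustration of the manufacturing case, here is a minimal sketch (Python; the channel names and thresholds are invented for the example) of an on-device detector that fuses a few sensor channels and decides locally, with no cloud round trip, whether to raise an alert:

```python
import math

class EdgeAnomalyDetector:
    """Runs on the machine itself: no reading ever leaves the device.

    Maintains a running mean/variance per channel (Welford's method,
    assuming every reading contains all channels) and alerts when the
    combined deviation across channels is large.
    """

    def __init__(self, threshold=4.0):
        self.threshold = threshold
        self.n = 0
        self.mean = {}
        self.m2 = {}

    def update(self, reading):   # e.g. {"vibration_g": 0.12, "temp_c": 61.5}
        self.n += 1
        score = 0.0
        for channel, value in reading.items():
            mean = self.mean.get(channel, value)
            m2 = self.m2.get(channel, 0.0)
            delta = value - mean
            mean += delta / self.n
            m2 += delta * (value - mean)
            self.mean[channel], self.m2[channel] = mean, m2
            std = math.sqrt(m2 / self.n) if self.n > 1 and m2 > 0 else 1.0
            score += (abs(value - mean) / std) ** 2
        # Alert only after enough samples to trust the baseline.
        return self.n > 30 and math.sqrt(score) > self.threshold
```

The design choice is the same one the vehicles make: learn what normal looks like on the device, and act the moment the signals stop making sense together.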
In all these domains, the theme is consistent. Edge AI adds resilience. It checks that things are where they should be, validates that signals make sense, and makes the adjustments needed when they do not.
Raising the benchmarks
The benchmark is clear. Autonomous vehicles must be safer than the average human driver. Not perfect, but measurably better. The same standard applies in other industries: AI at the edge needs to consistently outperform what humans alone can achieve. We also know that public expectations are a little unforgiving. A single autonomous vehicle accident makes a major splash across media outlets, denting public perception of the safety and reliability of such systems.
The power of complementary senses
In automotive use cases, cameras, radar, and LiDAR working together provide scale and robustness. The result is resilient systems that can operate in real-world conditions.
In safety-critical applications, the question is not which sensor “wins.” The measure is how well the vehicle orchestrates all sensors. Success comes from leveraging redundancy and complementary sensors to meet the benchmark of safety.
The hidden value of AI
This, to me, is the real story of AI at the edge. It is not the big, flashy demos that make headlines. It is the quiet, practical work of keeping things safe, resilient, and reliable.
AI at the edge does not need to talk back like a large language model. It does not need to generate images or text. It needs to sustain the heavy lifting that humans cannot: constant calibration, continuous anomaly detection, and intervention before failure.
This is the kind of AI that scales silently in the background. It builds trust. It enables services that touch millions of people without them ever noticing.
