
At Nas Summit Singapore, during a panel moderated by Nuseir Yassin (Nas Daily), two debates emerged that reflect the broader tension founders everywhere are trying to navigate: Do people prefer talking to AI or to humans? And should founders openly reveal that they use AI at all?
Both questions sound philosophical on the surface, but they carry real implications for how businesses scale, build trust, and communicate in a world where AI is no longer a novelty — it’s infrastructure.
What I shared on that stage, and what I’ll expand on here, comes from operating two AI-driven companies, training more than a thousand founders, and integrating a personalised AI assistant into nearly every part of my daily workflow.
These debates aren’t separate. They’re deeply connected. And together, they point toward a new model of communication that prioritises outcomes, transparency, and empathy, even when delivered by AI.
People don’t prefer humans, they prefer problems solved
The first debate was framed as a choice: Do consumers want to talk to humans or AI?
On the surface, most people instinctively say “human”. But this response has less to do with emotional loyalty and more to do with the current state of AI systems.
When AI interactions fail, they fail because:
- The model wasn’t trained deeply enough.
- The system doesn’t retain context.
- Responses feel robotic.
- The intent is misunderstood.
- The conversation lacks nuance.
In contrast, a human can pick up emotional cues, adjust tone, and interpret complexity, even on a bad day.
But let’s zoom out.
We didn’t prefer ATMs over bank tellers — they were simply faster. We didn’t prefer chat to voice calls — it was simply more convenient. We didn’t prefer telemedicine to clinics — it was simply more accessible.
People didn’t switch from phone calls to WhatsApp because they wanted less human contact. They wanted speed, clarity, and convenience.
Across every wave of technological transition, human preference follows the same logic: Utility first. Emotion second.
So the real question isn’t “Will AI replace human communication?” It’s: “When AI becomes fast, context-aware, and natural — will people care if it’s human at all?”
Most people won’t, because they care far more about the outcome than the origin.
Empathy is not emotional, it’s functional
A common argument against AI communication is that “AI has no empathy.”
Correct. AI cannot feel empathy. But most empathy expressed in customer service, coaching, support, and instruction isn’t emotional. It’s cognitive empathy: Understanding a situation and responding in a supportive, solution-oriented way.
Humans bring warmth and emotional resonance, but they also bring:
- Fatigue.
- Frustration.
- Inconsistency.
- Ego.
- Miscommunication.
- Impatience.
- Emotional bias.
AI brings none of this.
When trained well, an AI agent:
- Remains consistent.
- Applies feedback instantly.
- Follows protocol reliably.
- Keeps full conversational history.
- Never misfires due to mood.
This doesn’t make AI “more human”. But it does make AI more stable.
And stability is a form of empathy — one that users increasingly appreciate in high-volume, high-stress communication contexts.
AI isn’t here to outperform human emotional intelligence. It’s here to perform cognitive empathy at a level of consistency humans cannot match.
Voice AI isn’t there yet, and that’s why humans still feel better
The one domain where humans still consistently outperform AI is voice.
Today’s voice models are improving fast, but still lag in:
- Emotional modulation.
- Breath patterns.
- Warmth.
- Pacing.
- Micro-pauses.
- Stress detection.
- Tonal nuance.
We underestimate how much of communication depends on sound, not words.
This is why talking to AI still feels unfamiliar. It’s not the intelligence. It’s the lack of emotional believability in the delivery.
But the gap is closing quickly, and when voice AI begins to feel natural — human enough, conversational enough, warm enough — people will prioritise the same thing they always have: “Did this solve my problem?”
And if the answer is yes, the interface won’t matter anymore.
The second debate: Should founders reveal they use AI?
The next question at the panel was far more personal: Should founders disclose that they use AI to reply to messages, create content, or manage their operations?
Some leaders still hesitate, fearing that disclosure implies:
- Lack of authenticity.
- Lack of authority.
- Lack of personal involvement.
But here’s the reality founders don’t say out loud: Nobody running a scalable organisation is manually writing every message, replying to every email, or producing every piece of content.
Whether a founder hands that work to:
- A marketing assistant.
- A content team.
- A virtual assistant.
- Or an AI agent.
It is still delegation. And delegation is not deception; it's an operational necessity. The only difference today is that AI makes the delegation visible, and that visibility makes some people uncomfortable.
But choosing not to disclose doesn’t make a founder more authentic.
It makes them performative.
Authenticity isn’t manual labour; it’s ownership of ideas
I openly tell people I use Seraphina, my AI assistant, because she doesn’t write for me. She writes with me.
And she writes based on:
- 20 years of documented work.
- Thousands of pages of content.
- Speeches and workshops.
- Strategy decks.
- Training materials.
- Personal philosophy.
- Creative concepts.
- Lived experiences.
Seraphina isn’t producing ideas I’ve never had. She’s expressing the ones I already formed — more efficiently, more consistently, and with more clarity than I could during peak workload periods.
That’s not a loss of authenticity. That’s an amplification of it.
Transparency doesn’t reduce trust. It enhances it.
Especially in an era where consumers and teams can instantly tell when a founder is pretending to be everywhere at once.
The future isn’t AI or humans, it’s the balance between speed and humanity
When you combine both debates from the Nas Summit panel, a larger conclusion emerges:
- People care about speed, clarity, and outcomes.
- They care about trust, transparency, and leadership.
- And AI, when used well, supports all of these.
AI will not replace human connection. But it will increasingly handle the layers of communication that humans shouldn’t have to bear:
- Repetitive queries.
- Administrative responses.
- Predictable workflows.
- High-volume customer engagement.
- Operational messaging.
This frees humans to focus on:
- High-level thinking.
- Creativity.
- Strategy.
- Relationship-building.
- Empathy.
- Connection.
- Vision.
AI doesn’t diminish humanity. It creates space for it.
The founders who thrive in the next decade won’t be the ones who avoid AI, nor the ones who blindly automate everything.
They will be the leaders who strike the right balance: Human where it matters. AI where it scales. And transparency woven throughout.
