
AI Pulse Exclusive: How CoBALT is designing AI that teams can actually trust

An interview with Stella Seohyeon Kim, COO and Co-Founder of CoBALT, on building AI as operational infrastructure, earning user trust, and applying AI in real workflows. Part of e27’s AI Pulse coverage.

In this interview, e27 speaks with Stella Seohyeon Kim, COO and Co-Founder of CoBALT, a company building AI-native systems that help organisations turn everyday interactions into tangible business opportunities. Through its flagship product REALIZER.ai, CoBALT operates at the intersection of sales, business development, and operations, offering a grounded perspective on how AI is being embedded into real workflows as trusted operational infrastructure rather than surface-level features.

This conversation sits within e27’s broader AI coverage, which examines how organisations across the region are building, deploying, and governing AI in practice.

Turning first meetings into real business opportunities

e27: Briefly describe what your organisation does, and where AI plays a meaningful role in your work or offering.

Stella: CoBALT operates REALIZER.ai, an AI-native assistant that turns the people you meet at work into real business opportunities.

Business developers and sales teams meet dozens, sometimes hundreds, of potential customers, partners, and investors through meetings, conferences, and industry events. REALIZER elevates those first encounters from simple contact exchanges into qualified opportunities.

After a meeting, a user can scan a business card, enter an email address, or leave a short voice note about the interaction. From there, REALIZER quietly organises the contact, researches the person and company, evaluates the opportunity, and drafts the first follow-up message. The user simply reviews and sends it.

There is a golden window after meeting someone, roughly 48 hours. When meaningful touchpoints are created within that time, the chance of converting the relationship increases dramatically. REALIZER is designed to help teams act within that window.

Making individual interactions organisational assets

e27: What is one concrete way AI is currently creating value within your organisation or for your users or customers?

Stella: The greatest value REALIZER delivers is turning every individual interaction into a reliable organisational asset.

Instead of relying on personal intuition or fragmented experience, REALIZER enriches and verifies information about prospects, partners, and investors using consistent criteria. It applies a shared logic for evaluating opportunities and recommending next actions.

As a result, teams view opportunities through a common lens, improve pipeline predictability, and move faster without missing critical moments. On an individual level, AI supports not only labour-intensive tasks but also work that requires higher-level reasoning, helping people achieve real outcomes, not just efficiency.


Defining how humans and AI collaborate

e27: What was a key decision or trade-off you had to make when adopting, building, or scaling AI?

Stella: The most difficult, and most important, challenge was defining how humans and AI collaborate.

For effective collaboration, people need to feel confident that they remain in control while still trusting AI-driven decisions. That requires redesigning processes and delivering an experience where AI works almost invisibly, flowing naturally, without users constantly noticing or managing it.

This is the first time in human history that we are working alongside non-human intelligence. There has been trial and error, but our guiding principle is clear: AI should not diminish human value; it should amplify it. Just as electricity became seamlessly embedded into daily life, AI should quietly integrate into workflows and elevate them.

Building trust while managing AI imperfections

e27: Looking back, what has worked better than expected, and what proved more challenging than anticipated?

Stella: Imagine hiring a new employee who executes tasks flawlessly without supervision. That would be ideal. But if you constantly need to double-check their work and clean up mistakes, they quickly become a liability.

AI, especially large language models, is a new kind of junior hire. Depending on how you instruct it, the output can range from excellent to disastrous. It never complains and can repeat tasks endlessly, but it can also hallucinate with complete confidence.

Designing instructions and systems that consistently lead to high-quality outcomes was far more delicate than expected. We believe trust is the foundation of human-AI collaboration, so we built REALIZER to earn that trust. It evaluates information across more than 50 sources, applies dozens of validation criteria, and presents not only insights but also confidence levels.

What proved harder was keeping this disciplined AI mostly out of sight, allowing humans to feel effective without constantly confronting AI’s imperfections. AI makes mistakes, just like people do. Managing those failures without burdening users requires a careful balance. It’s challenging, but we believe this balance is what ultimately leads to long-term adoption and genuine affection for the product.


AI requires new ways of working

e27: What is one lesson about applying AI in real-world settings that leaders or founders often underestimate?

Stella: AI is not a magic wand.

Leaders must recognise that adopting AI is not merely a technical upgrade; it is the introduction of a new way of working. No matter how advanced the model is, poorly designed instructions and workflows can make AI worse than useless.

If an organisation fails to adapt how it collaborates with AI, performance may actually decline rather than improve.

Starting small to earn trust

e27: Based on your experience, what is one practical recommendation you would give to organisations that are just starting to explore or scale AI?

Stella: Start small, at a single high-friction decision point.

Rather than pursuing large-scale digital transformation, apply AI to one area where people struggle most or repeatedly waste time. Prove real impact there first, then expand. When there is a clear owner and measurable outcome, AI earns trust and becomes embedded naturally within the organisation.

From AI features to operational infrastructure

e27: Over the next 12 months, how do you expect your organisation’s use of AI, or the role of AI in your industry, to evolve?

Stella: Over the next year, AI will move beyond task-level assistance and become core operational infrastructure.

Within REALIZER, AI will increasingly reassess opportunities continuously, monitor signals across channels, and recommend next actions at the team level. Across industries, the competitive edge will shift from having AI features to building trusted, governable AI systems that organisations are willing to rely on in real operations.


Why alignment matters more than speed

e27: Anything else you want to share with the audience?

Stella: The true value of AI is not in making individuals faster; it lies in making organisations more aligned and more decisive.

Working with startups as well as publicly listed Korean companies has made one thing clear. The winners are not the teams with the flashiest models, but those that design AI around trust, clarity, and execution. As AI becomes invisible infrastructure, what matters most is not how impressive it looks, but how deeply and thoughtfully it is integrated.

Stay ahead of how AI is actually being used

This conversation highlights a recurring theme in how AI is moving from experimentation to everyday use. Rather than chasing novelty, CoBALT’s approach centres on trust, alignment, and designing AI that fits naturally into how teams already work. From capturing fleeting first meetings to building shared organisational judgment, Stella Seohyeon Kim’s perspective underscores that the real challenge of AI adoption lies less in models and more in systems, workflows, and human confidence. As AI becomes quieter and more embedded, the organisations that succeed will be those that treat it as operational infrastructure, not a showcase feature.

For more interviews, analysis, and real-world perspectives on how organisations across the region are applying AI in practice, subscribe to our newsletter. You can also explore more AI stories here.

Enjoyed this read? Don’t miss out on the next insight. Join our WhatsApp channel for real-time drops.

This article was produced by the e27 team.

We can share your story at e27 too! Engage the Southeast Asian tech ecosystem by bringing your story to the world. You can reach out to us here to get started.

Featured Image Credit: CoBALT

The post AI Pulse Exclusive: How CoBALT is designing AI that teams can actually trust appeared first on e27.


The AI-energy paradox: Will AI spark a green energy revolution or deepen the global energy crisis? — Part 2

As AI’s energy consumption surges, concerns over its environmental impact grow. However, AI also offers solutions — optimising data centre cooling, managing smart grids, and reducing industrial energy waste. This article explores how AI-driven efficiency can help counterbalance its own power demands, creating a path toward more sustainable energy use.

AI-driven efficiency: Mitigating the carbon toll

While AI’s energy consumption is undeniably large, AI technologies also offer powerful tools to cut energy waste and emissions across many industries. From cooling data centres to optimising factory lines and smart grids, AI-driven efficiency gains can act as a counterweight to AI’s own power use. In essence, there is an opportunity for a positive feedback loop: using AI to save energy even as we use energy to run AI.

Some notable examples of AI-enabled efficiency breakthroughs:

  • Data centre cooling optimisation: Google’s DeepMind cut data centre cooling energy by 40 per cent by predicting server loads and adjusting cooling in real time.
  • Next-gen cooling technologies: Advanced cooling solutions, such as direct-to-chip liquid cooling, have been shown to reduce server energy use by ~30 per cent, with liquid cooling now used in up to 45 per cent of new European facilities.
  • AI-managed micro-grids: In regions like Ohio and Texas, experimental micro-grids leverage AI to balance renewable energy with data centre power draw, cutting renewable curtailment by about 22 per cent.
  • Industrial energy management: AI applications have helped Toyota reduce energy consumption by 29 per cent on certain manufacturing processes.
  • Building energy management: In commercial buildings, AI has shown impressive results in cutting power usage without sacrificing comfort. A notable case is 45 Broadway in Manhattan, where an AI HVAC optimisation system learned the building’s patterns, adjusted heating and cooling more intelligently, and cut HVAC energy use by 15.8 per cent. Similarly, AI-based controls for lighting and appliances can yield up to 30 per cent energy savings in buildings. Multiply these gains across millions of buildings and homes, and the potential energy savings are enormous.

These examples illustrate a hopeful counterpoint to AI’s energy appetite: the energy savings AI enables in other areas could, in theory, offset a significant portion of the energy AI consumes. Smarter grids, smarter buildings, smarter transportation (AI-optimised logistics, etc.) all contribute to lower overall demand.

A Shell analysis suggests AI applications could halve the carbon intensity of global energy by 2050 through such measures — coordinating renewables, improving efficiency, and innovating in materials (for example, using AI-driven design to create wind turbine blades that generate 40 per cent more power).

However, a critical question remains: Can AI’s energy-saving contributions catch up with its own growing consumption? This is the crux of the AI-energy paradox.

The AI-energy paradox: Do savings and consumption converge?

Right now, the net impact of AI on global energy is still an increase in demand. AI’s usage is growing so rapidly that efficiency gains, as valuable as they are, haven’t yet kept pace.

For instance, even as Google’s AI cut 40 per cent of cooling energy, the expansion of Google’s AI computing meant total energy use still rose. The near-term trend is divergence — AI driving more power use overall, despite localised savings.

Current figures bear this out. The US Department of Energy found that data centres (thanks largely to AI growth) consumed about 4.4 per cent of US electricity in 2023, and are on track to reach between 6.7 per cent and 12 per cent by 2028.

In other words, efficiency improvements are not projected to stop a doubling (or more) of data centres’ energy draw in the next five years.

A recent Electric Power Research Institute analysis likewise forecasts US data centres could hit nine per cent of national electricity use by 2030, up from about four per cent today. Clearly, in the short run, AI’s footprint is outpacing the savings it enables elsewhere.

Also Read: A step-by-step guide to protecting your time and energy: The art of pre-qualification

Over the longer term, there is a possibility (not a guarantee) that the curves could converge. As AI matures, there’s intense research focus on efficiency: more efficient algorithms, specialised AI chips that deliver more performance per watt, better cooling, and so on. If each new generation of AI hardware is significantly more efficient, the growth in AI’s energy use could level off.

For example, tech firms are now prioritising energy efficiency over pure performance gains — a shift from the early “move fast” approach. Future AI models might be designed to be smaller or use smart techniques (like model sparsity or on-demand activation) that save energy.

Policymakers are also starting to push for convergence. The EU’s proposed AI Act will require large AI models to demonstrate 15 per cent energy efficiency improvements over previous generations — effectively slowing deployment of ultra-large models until they are more efficient (one reason rumours suggest GPT-5 might be delayed until such standards can be met). Governments may introduce carbon taxes or energy caps that make it economically unattractive to run wasteful AI systems, forcing innovation towards frugality.

So, will spending and savings converge? Optimistically, yes — but likely not until late this decade or beyond.

In a scenario where AI’s growth moderates and efficiency tech accelerates, we could see AI’s net impact plateau or even turn net-negative on emissions (especially if AI helps integrate huge amounts of renewables, as Shell’s scenario imagines).

But for the next 5-10 years, business leaders should plan for a world where AI means higher energy consumption and carbon output, and manage that reality accordingly.

The implication for corporates is twofold:

  • Invest aggressively in AI-driven efficiency projects within your own operations (to capture savings that can offset your AI usage).
  • Anticipate energy costs and capacity needs rising with AI, and incorporate that into everything from site selection (do your data centre/cloud regions have spare power capacity?) to vendor selection (choose partners with greener energy and efficient infrastructure).

In short, don’t assume the problem will solve itself. Proactive action is needed to bend the curve.

Accelerating the renewable transition to power AI

If AI is to spark a green energy revolution instead of exacerbating the crisis, a massive scale-up of clean energy is required. Renewables (solar, wind, hydro) need to grow in tandem with AI compute demand, and AI can be a catalyst to accelerate that growth. But it won’t happen automatically; it requires strategic investments and innovation.

On the plus side, AI is already helping get more out of renewables. We saw how AI can optimise wind and solar output (e.g., smarter inverters yielding 18 per cent more solar farm efficiency). AI can forecast weather and adjust operations to maximise renewable energy capture and reduce downtime.

For instance, autonomous AI-driven networks of electric vehicle (EV) chargers can collectively act as a 450 GWh battery for the grid, smoothing out renewable fluctuations by intelligently timing charging. AI is also being applied to breakthrough research — like using quantum computing and AI to design advanced materials for solar panels or wind turbines, potentially boosting their efficiency dramatically.

However, even optimistic efficiency gains won’t fully bridge the gap. The scale of new clean power needed is enormous.

A McKinsey study estimates that in Europe alone, an additional US$250-300 billion in grid infrastructure upgrades will be required by 2030 to handle 150 TWh of new AI-related electricity demand and connect enough renewables to supply it.

This includes new transmission lines, grid storage, and smarter distribution — essentially building a bigger, smarter grid to feed AI. Without such investment, renewable deployment could lag and AI would end up being powered by whatever is available (often coal or gas).

To put numbers on it: The world added about 300 GW of renewable capacity in 2022. If AI demand is rising by hundreds of TWh, we likely need to add hundreds more GW of renewables per year on top of current plans just to keep AI from increasing fossil fuel use.
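As a rough sanity check on that claim, annual generation from installed capacity can be estimated as capacity × hours per year × capacity factor. The sketch below assumes a blended capacity factor of about 25 per cent, an illustrative figure only; real-world factors vary widely by technology and region.

```python
HOURS_PER_YEAR = 8760  # 24 hours x 365 days

def annual_twh(capacity_gw: float, capacity_factor: float = 0.25) -> float:
    """Estimate annual generation (TWh) from installed capacity (GW)."""
    # GW x hours = GWh; divide by 1000 to convert GWh to TWh
    return capacity_gw * HOURS_PER_YEAR * capacity_factor / 1000

# The ~300 GW of renewables added worldwide in 2022 would generate
# on the order of 650 TWh per year under this assumed capacity factor,
# comparable in scale to the "hundreds of TWh" of projected AI demand.
print(round(annual_twh(300)))  # ~657
```

Under these assumptions, one year of 2022-level renewable additions roughly covers the projected AI-related demand growth, which is why the article argues additions must rise well beyond current plans to also displace existing fossil generation.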

Policymakers are starting to respond — the US Inflation Reduction Act, Europe’s Green Deal, China’s massive renewables build-out — all boost clean energy, which indirectly supports AI’s growth sustainably. But targeted actions may be needed, such as incentives for energy-intensive tech firms to directly finance renewable projects (as Microsoft is doing).

Also Read: Why the future of space and energy storage might be growing in a Thai hemp farm

One promising idea is direct clean power procurement for AI infrastructure. Instead of buying offsets or generic renewable credits, companies can invest in additional renewable generation that is tied to their data centres. Google has been a leader here, aiming for “24/7 carbon-free” energy by sourcing clean power in every hour and region that its servers operate. Other firms are now looking at similar models, which could drive significant new solar/wind development.

In summary, AI can accelerate the renewable transition — by necessity and by capability. It provides a strong business motive (big tech needs clean power, so they’ll fund it) and new tools (AI to optimise renewable performance). But it also raises the stakes: if renewables don’t scale fast enough, AI will end up entrenching fossil fuel use at exactly the wrong time for the climate.

For corporate leaders, this means aligning AI strategy with energy strategy. Embrace AI projects that further sustainability (smart grid, energy optimisation) and be cautious of AI expansions that outpace your access to green power. Seek partnerships in the energy sector — for example, co-develop a solar farm or wind park that can power your AI workloads. Those who proactively secure clean energy for AI will not only mitigate environmental impact but also hedge against future carbon regulations or fossil price volatility.

Geopolitical and economic crossroads

AI’s energy demands are now a factor on the geopolitical chessboard. Nations are racing to support their tech industries with reliable power (often in competition with climate goals), and energy dependencies are influencing tech policies. Three major theatres highlight this dynamic: the US-China tech competition, Europe’s regulatory balancing act, and emerging markets vying for data centre investments.

The US-China tech war’s energy dimension

China and the United States are both pouring billions into AI, and with that comes a hunger for energy. China has launched an “East Data, West Computing” initiative, investing an estimated US$75 billion to build huge data centre hubs in its inland provinces. Why inland? Because electricity is cheaper there — for example, coal-rich Inner Mongolia offers industrial power rates around US$0.03 per kWh, among the lowest in the world.

By situating AI data centres next to coal plants in the interior, China can fuel its AI growth at low cost (albeit with high emissions). This strategy effectively leverages China’s vast coal infrastructure to gain an edge in computing capacity.

Meanwhile, the US is responding with investments to support AI hotbeds at home. The Department of Energy recently announced US$2 billion for grid upgrades focused on “AI corridors” like Northern Virginia and Ohio. This includes improving transmission and reliability to ensure these regions (where many US cloud data centres cluster) can handle the increased load without blackouts or slowdowns. It’s essentially an infrastructure subsidy to keep US AI development on track and independent of energy bottlenecks.

There’s also a security aspect: both nations view leadership in AI as strategic, so ensuring the energy security of AI facilities is crucial. This could lead to more efforts like backup gas peaker plants for key data centres, or even dedicated small nuclear reactors, to immunise critical AI infrastructure from grid disruptions or fuel supply risks. In a hypothetical future standoff, a country that cannot power its AI systems reliably would be at a serious disadvantage.

Europe’s cautious approach

Europe, in contrast, is trying to chart a path that prioritises sustainability — but at the risk of dampening its AI momentum. The EU’s proposed regulations (like the AI Act) not only address ethics but also efficiency. As noted, the AI Act could effectively delay deployment of power-hungry models (e.g., next-gen GPT) until efficiency targets are met.

Also Read: How we generated 100+ leads on zero budget

Additionally, some European countries have taken hard stances on data centre growth due to energy concerns. Ireland’s moratorium on new Dublin-area data centres, for instance, was driven by fears that the national grid couldn’t meet both climate targets and a surge in data centre demand. That moratorium led companies to shift investments to places like Poland and Norway where power is more available.

The consequence is that Europe risks falling behind in AI infrastructure. While the US and China race ahead with massive builds (regardless of carbon cost), Europe’s combination of slower cloud growth and higher energy prices could make it less attractive for AI development.

Some experts warn of a potential “digital drift” where European AI innovation migrates to more energy-abundant shores. On the other hand, Europe’s emphasis on efficiency and green power could pay off in the long run, yielding more sustainable operations that align with global climate imperatives (and avoid future regulatory penalties).

Global energy markets and AI investment

It’s not just the big three (US, China, EU). Around the world, countries are jockeying to attract data centre and AI investments — and energy is the key bargaining chip. For example, countries like Norway, Sweden, and Canada promote their abundant renewable energy (hydropower, wind) and cold climates (natural cooling) as ideal for sustainable AI data centres. Norway has lured several major projects by offering 100 per cent renewable power and low cooling costs, appealing to companies with net-zero commitments.

In Asia, Singapore imposed a temporary freeze on new data centres due to energy and land constraints, then lifted it in favour of a selective policy that prioritises the most efficient, green designs. India and Indonesia are pitching themselves as emerging data centre hubs, but they’ll need to rapidly expand grid capacity (and ideally renewables) to deliver on those ambitions.

The energy crisis of 2022 (with spiking fuel prices) was a wake-up call for many: any country that wants to be an AI/cloud hub must ensure cheap, reliable power. This has geopolitical implications: nations rich in clean energy (like Iceland or Quebec with hydro, or Middle Eastern countries with solar + land for data centres) could play a bigger role in the digital economy by hosting energy-intensive AI computation. It’s a new twist on the resource competition of the past — instead of oil or minerals, it’s about attracting “computational industry” with the promise of low-cost electrons.

In summary, leaders need to be aware that AI isn’t happening in a vacuum — it’s intertwined with global energy and policy currents. Decisions about where to site AI operations, which markets to enter, or even which governments to partner with may hinge on energy availability and regulations.

Businesses at the cutting edge of AI should engage in policy discussions: for example, advocating for incentives for clean power or workable regulations that encourage efficiency without stifling innovation.

This is part two of a three-part series exploring AI’s energy impact. Read part one here

Part three of this series looks at the emerging solutions — tech and policy — that could put AI on a more sustainable path, and how companies can harness them.

This article was originally published here and co-authored by Xavier Greco, Founder and CEO of ENSSO.

Editor’s note: e27 aims to foster thought leadership by publishing views from the community. Share your opinion by submitting an article, video, podcast, or infographic.

Join us on Instagram, Facebook, X, and LinkedIn to stay connected.

Image courtesy: DALL-E



Singapore’s next payments chapter will be written by AI and tokenised money

Singapore is doubling down on its ambitions to become Asia’s undisputed payments capital, as a new industry report paints the city-state as one of the world’s most advanced digital and cross-border payments hubs.

The Singapore FinTech Association (SFA), together with PwC Singapore, has launched “Payments’ State of Play 2026”, a sweeping review of how the island nation’s payments ecosystem has evolved over the past decade, and where it is headed next.

Also Read: Fintech rebound: Singapore bags US$1.04B, outpaces global peers

The report argues that Singapore’s rise has been driven by a rare combination of progressive regulation, strong foundational infrastructure, high consumer demand for seamless digital experiences, and close public-private collaboration. What began as basic payment rails has now matured into one of the most sophisticated payment markets globally.

Digital payments dominance and record funding momentum

One of the most striking findings is Singapore’s scale of digital adoption. More than 98 per cent of adults are banked, while real-time payments and digital wallets increasingly dominate everyday transactions.

Digital wallets alone are projected to process US$66 billion in online and point-of-sale transactions by 2027, underscoring how cashless behaviour has become deeply embedded in the country’s economy.

Investor confidence has also remained resilient. The report notes that the city-state’s payments sector raised over US$319 million in funding in the first nine months of 2025 — surpassing the combined fintech funding totals of Indonesia, Malaysia, the Philippines, Thailand, and Vietnam.

Real-time rails powering the ecosystem

Singapore’s domestic payments infrastructure continues to scale rapidly, led by systems such as PayNow and FAST.

FAST transaction volumes hit 500 million in 2024, representing a 31 per cent year-on-year increase, as real-time transfers become the default for consumers and businesses alike.

Card payments also grew strongly, with total value rising at a compound annual growth rate (CAGR) of 12.9 per cent from 2020 to 2024. E-money value expanded at a CAGR of 7.3 per cent over the same period, despite a slight decline in transaction volume.

E-money growth and the global wallet boom

Singapore’s digital payments market is expected to accelerate further. Total transaction value reached US$39.37 billion in 2023 and is forecast to climb to US$113.65 billion by 2030.

Also Read: Singapore’s SME fintechs face growth hurdles amid restricted API access

E-money transactions are projected to rise steadily to US$4.28 billion by 2028, supported by AI adoption, embedded finance innovation, stronger stablecoin regulation, and expanding cross-border payment networks.

This trajectory mirrors a wider global shift, with mobile wallet transactions forecast to surge to an estimated US$17 trillion by 2029.

Cross-border connectivity as a regional differentiator

Singapore is also positioning itself as a key settlement and connectivity hub for Asia. Initiatives such as Project Nexus, alongside PayNow linkages with Thailand and Malaysia, are strengthening the city-state’s leadership in cross-border real-time payments.

Total remittance volume reached US$8.05 billion in 2022 and is expected to grow to US$13.34 billion by 2032, representing a CAGR of 5.2 per cent.
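The CAGR figures quoted throughout this report follow the standard formula, rate = (end / start)^(1 / years) - 1. A quick check of the remittance projection above, using the report’s own figures (the helper function is just an illustration):

```python
def cagr(start: float, end: float, years: int) -> float:
    """Compound annual growth rate between a start and end value."""
    return (end / start) ** (1 / years) - 1

# Remittance volume: US$8.05B in 2022 growing to US$13.34B by 2032
rate = cagr(8.05, 13.34, 10)
print(f"{rate:.1%}")  # ~5.2%, matching the report's stated CAGR
```

The same function reproduces the report’s other growth rates, e.g. card payment value growing 12.9 per cent annually from 2020 to 2024.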

Stablecoins, digital assets, and Singapore’s FX strength

The report highlights Singapore’s rising influence in digital assets, particularly stablecoins. The city-state now accounts for over 70 per cent of Southeast Asia’s non-USD stablecoin market pegged to the Singapore dollar, supported by the Monetary Authority of Singapore’s globally recognised regulatory framework.

Singapore is also reinforcing its status as a major foreign exchange hub. The country is now the world’s third-largest FX trading centre, with average daily trading volumes climbing to US$1.485 trillion in April 2025 — a 60 per cent increase from April 2022.

Holly Fang, President of the Singapore FinTech Association, said, “Over the past decade, Singapore has developed one of the most advanced, resilient, and trusted payments ecosystems in the world.”

She added that progressive regulation and industry collaboration have positioned Singapore as a leader in real-time and cross-border payments, while also confronting fraud and scams head-on.

PwC Singapore Partner Wong Wanyi echoed this view, noting, “Payments are evolving rapidly, led by technology and emerging realities, while also presenting new risks.”

Also Read: Singapore’s regulatory vision is shaping cross-border payments in Asia: Report

She emphasised that sustaining Singapore’s leadership will require strong risk management frameworks and regulatory clarity that encourage innovation while building trust.

The next wave: AI, embedded finance, and consumer protection

Looking ahead, the report identifies several trends shaping the next phase of payments innovation:

  • Embedded finance and super apps, integrating lending, investment, and payments into everyday platforms
  • AI-powered payments, enhancing fraud detection and optimising processing
  • Tokenised deposits and regulated stablecoins, expanding use cases in domestic and cross-border payments
  • Greater interoperability, driven by regional initiatives like Project Nexus
  • Stronger consumer protection, amid escalating scam risks

Fraud remains a pressing challenge. As of November 2025, scam-related losses in Singapore had reached US$620 million, approaching the US$812 million recorded across the whole of 2024 — underscoring the urgency for coordinated action across the ecosystem.

The post Singapore’s next payments chapter will be written by AI and tokenised money appeared first on e27.

Posted on

How research and startup partnerships are unlocking new opportunities for growth

Strategic collaborations between research institutions and startups are reshaping the innovation landscape, unlocking new opportunities for growth and delivering meaningful societal impact. These partnerships allow scientific and academic entities to access commercialisation channels and adopt more agile development approaches, while startups benefit from resources and industry expertise needed to scale their innovations effectively.

Many early-stage startups look for their first business partners among corporate players. Yet challenges remain: according to a Boston Consulting Group survey, 45 per cent of corporations and 55 per cent of startups express dissatisfaction with their partnership experiences. This is a gap that science organisations are uniquely positioned to bridge, connecting groundbreaking research with viable business models.

To appreciate the scale of innovation in Southeast Asia, consider this: the region is home to 63 unicorns—companies valued at US$1 billion or more—with over 124,450 startups in total based there as of May 2025.

Around the world, innovation ecosystems are expanding rapidly, with millions of new startups launching annually across North America, Europe, and Asia. Despite this growth, the disconnect between startups and research organisations remains a common obstacle, and the tangible benefits to businesses remain modest. All stakeholders within the innovation ecosystem stand to gain by strengthening these partnerships to better fulfil their promise for society and the economy.

Below are three key benefits to explore.

Enhancing research and development (R&D)

For startups looking to strengthen their R&D efforts by partnering with scientific institutions, there are three key areas to focus on: aligning innovation goals at the project level, establishing clear and open communication channels, and setting precise collaboration expectations within agreements.

Getting everyone aligned on innovation goals at the project level is absolutely crucial. In my experience mentoring startups, many partnerships start with broad, high-level objectives but don’t drill down into specific outcomes for each project. The most successful collaborations are those that sync goals not just strategically, but also at the day-to-day operational level. Using digital tools and collaborative platforms can make this much easier, helping teams coordinate in real time and maintain shared visibility.

Also Read: New research report: The nexus between elite university education and startup funding

Effective communication forms the backbone of any successful partnership, yet transparency often falls short. Issues such as siloed information systems and conflicting priorities can quickly lead to misaligned expectations and wasted resources.

To prevent this, partners should prioritise full visibility into project progress, ensuring that everyone involved has access to accurate, detailed updates—whether by project phase, team, or milestone. Centralising collaboration workflows and clearly understanding associated costs further build trust and accountability.

Equally important is tailoring incentives specifically to joint efforts. Too frequently, research institutions and startups focus on broad research milestones instead of concrete, shared deliverables. This misalignment can cause partners to pursue individual goals rather than common objectives, resulting in resource imbalances where some areas are overstretched while others remain underutilised. Clear, outcome-focused incentives help maintain commitment to the partnership’s overall success.

The Natural Resources Institute Finland (Luke) offers an example of a European research organisation focused on sustainable development through renewable natural resources. Luke conducts extensive research and development across forestry and bioeconomy, supporting both national and international projects.

It provides access to advanced research infrastructures such as greenhouses, research fields, and laboratories, enabling high-quality experimental work. Luke also coordinates the European research infrastructure AnaEE (Analysis and Experimentation on Ecosystems), fostering collaboration and knowledge sharing across countries. Through its involvement in numerous partnerships, Luke plays a key role in turning scientific insights into practical solutions that promote sustainability and well-being.

Fast-tracking commercialisation

Accelerating commercialisation is often the missing piece when startups and research institutions join forces. While both sides excel at innovation, the actual process of getting new ideas to market can get lost in the shuffle. By working together more closely—sharing resources, knowledge, and a unified vision—the journey from discovery to product becomes more efficient and streamlined. This collaboration helps prevent common setbacks such as conflicting priorities, wasted efforts, and delays that can hinder promising technologies.

A concrete example of such effective collaboration is Turion Labs, which recently opened in Singapore as the region’s first comprehensive biotech innovation platform. This joint venture, supported by Korea’s S&S LAB and Indonesia’s Future Lestari, offers modular lab spaces, contract research services, and regulatory assistance within a unified framework.

Turion Labs aims to connect promising scientific research with practical paths to commercialisation. It supports startups and biomedical companies by providing access to advanced laboratory facilities alongside Korean research expertise and Southeast Asian markets. This initiative reflects the growing trend in Southeast Asia to develop collaborative innovation centres that bring together research and industry to help advance biotech development in the region.

Also Read: Nagoya University: Asia’s extensive network of innovation, research, and education

What makes these partnerships work is flexibility. The most successful collaborations aren’t rigid—they adapt to the needs of each project and each team. Startups and research institutions that prioritise both innovation and business efficiency find ways to share risk and align goals, while keeping lines of communication open. This approach is especially important as startups play an ever-larger role in commercialising high-impact innovations.

Uniting diverse talents

Navigating partnerships between science organisations and startups isn’t just about having the latest tech at your fingertips—it’s about bringing together the right people and perspectives. Technology can certainly make collaboration easier, but it’s not a cure-all. The real magic happens when the deep technical know-how of researchers meets the entrepreneurial drive of startup founders, creating space for meaningful innovation.

Still, even with all the collaboration tools available today, many partnerships fall short of their potential. Two issues tend to crop up again and again. First, organisations often jump into new systems without rethinking how they actually work together—like installing state-of-the-art software but sticking to old, inefficient habits. Second, when project goals aren’t clear and data isn’t aligned, teams can end up working at cross purposes, slowing down the move from idea to market.

Good management can make all the difference here. The most effective collaborations bring together cross-functional teams—researchers, entrepreneurs, and other key players—who regularly check in on progress and keep everyone focused on shared milestones. Setting clear, measurable targets keeps things on track and helps spot issues early.

Compelling examples of collaboration between research labs and startups can be seen at the University of Eastern Finland, where joint efforts have led to innovative photonics applications for consumer electronics.

Similarly, the National University of Singapore has partnered with startups through a dedicated program focused on flexible electronics and hybrid systems, driving the development of advanced consumer electronics technologies. These partnerships highlight how academic institutions and startups are working together to push the boundaries of innovation in the consumer electronics sector.

Also Read: Bridging the digital divide: Addressing Malaysia’s skills gap

By combining academic expertise with startup agility, these collaborations have rapidly advanced from lab prototypes to market-ready products.

Starting point

One effective way to kick off collaborations between startups and research institutions is by gaining a thorough, project-level understanding of the partnership landscape. Once that foundation is in place, partners can use a collaboration health map to spot inefficiencies and opportunities at various stages—whether it’s during prototype testing or preparing for market launch.

This kind of tool helps leaders identify the root causes behind common challenges such as misaligned goals or wasted resources. With those insights, they can roll out targeted actions that address the real problems, rather than treating surface symptoms. Moreover, this approach helps ensure that improvements are sustainable and don’t fade over time.

By adopting these strategies, startups and science organisations can work more smoothly together and unlock greater value for everyone involved. Of course, the exact approach will vary depending on each partnership’s goals and setup. But no matter the details, taking a proactive stance on managing collaboration can lead to smarter decisions and stronger, more rewarding partnerships.

Editor’s note: e27 aims to foster thought leadership by publishing views from the community. Share your opinion by submitting an article, video, podcast, or infographic.

Join us on Instagram, Facebook, X, LinkedIn, and our WA community to stay connected.

Image credit: Canva Pro

The post How research and startup partnerships are unlocking new opportunities for growth appeared first on e27.

Posted on

Why visibility in the AI era is a design problem, not a discipline one

Consistency has long been framed as a discipline problem. If you want to stay visible, the advice goes, you simply need to post more, work harder, and show up daily — even when you don’t feel like it.

That framing no longer holds in the AI era.

What we are seeing instead is a shift: Consistency is becoming a systems and design problem, not a willpower one. And the founders who understand this early are the ones building leverage without burning out.

From “working harder” to “designing fewer steps”

I wrote the book The Lazy Person’s Guide to Success, built around a simple idea: If a task takes 100 steps, the real work is figuring out how to reduce it to 10.

That logic still applies today — only now, AI accelerates it dramatically.

Before Seraphina AI, efficiency came from project management tools, SOPs, and documentation. I still use Asana extensively for this reason: It structures information, preserves institutional memory, and makes work retrievable.

What AI changes is not organisation, but execution velocity.

Instead of asking people to remember complex workflows, AI can now guide them through processes in real time. The system doesn’t just store instructions; it actively assists. That distinction matters.

Also Read: Singapore’s AI ambitions face crucial test amid economic and talent pressures

AI as teammate, not replacement

The most effective founders are not using AI as a shortcut for thinking. They are using it as a thinking partner.

I see Seraphina AI as a digital twin or personal assistant — a teammate that is always available, never fatigued, and able to respond on demand. It augments human judgment rather than replaces it.

This is especially evident in content creation.

The bottleneck is no longer production capacity. It is attention and articulation.

Why most people believe they “don’t have content”

When people say they don’t know what to write, they are usually confusing content with output.

In reality:

  • conversations,
  • reactions,
  • opinions formed while reading,
  • reflections shared with peers,

are already content — just undocumented.

AI closes this gap by lowering the cost of capture.

Voice notes, short reflections, or informal messages can be transcribed, structured, and adapted into written formats without losing the original voice. The authenticity remains because the source material is human. AI simply handles transformation and distribution.

Micro habits outperform motivation

The most sustainable form of consistency comes from micro habits, not grand commitments.

A simple example:
When an idea arises, record it immediately — without editing, formatting, or judging its value.

That single habit:

  • reduces friction,
  • bypasses perfectionism,
  • and creates a reliable input stream for AI-assisted processing.

Over time, journaling becomes blogging. Blogging becomes dialogue. Dialogue becomes visibility.

The system compounds quietly.

Also Read: Is AI making it harder for tech startups to survive?

Visibility as choice — and requirement

Visibility today is optional only in theory.

You can choose to remain invisible and still be competent. But if leverage, reach, or influence matter, visibility becomes a functional requirement.

Importantly, visibility does not demand virality. It demands continuity.

Not every idea will resonate with everyone. But resonance does not scale linearly — it clusters. And clusters form communities.

The real blocker is perfection, not fear

In practice, the primary inhibitors of consistency are:

  • overthinking value,
  • waiting for “better” ideas,
  • and mistaking polish for usefulness.

In reality, something does not need to be universally valuable to matter. It only needs to be relevant to someone.

Consistency builds familiarity. Familiarity builds trust.

The skill that matters most as AI does more

As AI expands its capabilities, the differentiator is no longer speed or output.

It is communication.

The ability to articulate thinking, share perspective, and remain present in public discourse is becoming the defining human advantage.

In that sense, consistency is not about effort. It is about design.

And “lazy” consistency — done correctly — is not a lack of ambition, but a strategic choice to let systems do what systems do best, so humans can focus on what only humans can do.



Image courtesy: Canva

The post Why visibility in the AI era is a design problem, not a discipline one appeared first on e27.