
AI in action: How governments are using technology to predict, prevent, and personalise

For centuries, government has often been seen as a slow, reactive bureaucracy. Citizens fill out forms, wait in lines, and hope for a response. Artificial Intelligence (AI) is beginning to change this in a fundamental way, enabling a shift from a government that reacts to problems to one that anticipates needs.

Think of it like managing a city bridge. The old way was to wait for cracks to appear or, worse, for the bridge to fail, and then scramble to make repairs. The new, AI-driven approach is to use sensors and predictive models to understand the bridge’s structural stress in real-time, allowing engineers to prevent the failure before it ever happens.

This shift is more than just a technological upgrade; it’s a redefinition of the social contract. As decisions about benefits, health, and safety move from human clerks to algorithms, the relationship between the citizen and the state is fundamentally changing.

This is the promise of AI in government: to build a more proactive, personalised, and efficient state that can forecast health crises, disburse benefits to those in need without lengthy applications, and optimise city traffic dynamically. This article explores what “AI” really is, how it is being used to remake key public services, and the critical challenges we must address, drawing on insights from the “Tools to build an AI state” report.

What exactly is the ‘AI’ in government? A simple toolkit

“Artificial Intelligence” isn’t a single technology; it’s a collection of tools. Just as a mechanic has different tools for different jobs, governments use various types of AI to solve specific problems. The list below introduces three of the most common AI technologies used in public services.

  • Natural Language Processing (NLP): Understands, interprets, and generates human language, both spoken and written. Example: AI-powered chatbots answer citizen questions in multiple languages 24/7, and can even help summarise complex legislation into plain language.

  • Machine learning and predictive analytics: Analyses historical data to find patterns and forecast future events or risks. Example: Governments use predictive models to forecast disease outbreaks or identify patterns that suggest potential tax fraud.

  • Computer vision: “Sees” and analyses information from images and videos to identify objects or patterns. Example: AI systems can read medical scans like X-rays to detect cancer earlier, analyse camera footage to spot potholes on city roads, or analyse satellite imagery to monitor deforestation and other environmental changes.
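To make the predictive-analytics idea concrete, here is a minimal, illustrative sketch of the core pattern behind many such deployments: flagging values that deviate sharply from a historical baseline. Real systems use far richer models; all figures below are invented for the example.

```python
# Toy anomaly detector: flag a week whose count rises well above the
# recent baseline, the basic idea behind simple outbreak- or
# fraud-detection analytics. All data here is made up.

def flag_anomaly(history, current, threshold=2.0):
    """Return True if `current` exceeds the mean of `history` by more
    than `threshold` standard deviations."""
    n = len(history)
    mean = sum(history) / n
    std = (sum((x - mean) ** 2 for x in history) / n) ** 0.5
    if std == 0:                       # flat baseline: any rise is a flag
        return current > mean
    return (current - mean) / std > threshold

# Eight weeks of case counts for a hypothetical district.
baseline = [12, 15, 11, 14, 13, 12, 16, 14]

print(flag_anomaly(baseline, 15))  # an ordinary week: False
print(flag_anomaly(baseline, 40))  # a sharp spike worth investigating: True
```

In practice a flagged value would trigger human review, not an automatic decision; the model only prioritises attention.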

Now that we understand the basic tools in the government’s AI toolkit, let’s explore how they are being applied to improve the services that impact our daily lives.

Also Read: A new ocean order: What startups and investors need to know about the High Seas Treaty

How AI is remaking public services: Three key examples

This transformation of the social contract is not abstract; it’s happening now in the public services that define our daily lives. From the classroom to the hospital to the daily commute, AI is being applied to fulfil the state’s core promises more effectively.

Here are three key examples.

  • Education: From standardised lessons to personalised learning

The traditional challenge in education has always been the “one-size-fits-all” model, where a single teacher must try to meet the diverse needs of a large classroom. AI’s primary promise is to make learning adaptive and personalised for every student.

  • AI-driven tutoring: Platforms like Squirrel AI in China provide millions of students with tutoring that adjusts the difficulty of lessons in real-time based on their performance, acting like a personal tutor for each child.
  • Smarter teacher tools: AI can automate routine tasks like grading assignments and generating lesson materials aligned with national curricula, providing teachers with detailed analytics on student progress. This frees up teachers’ time to focus on what matters most: mentoring and providing personal support to their students.
  • Building economic pathways: AI is not just for children. Platforms like Singapore’s SkillsFuture use AI to analyse labour market trends and guide adult workers toward in-demand skills and jobs, strengthening the promise of lifelong economic opportunity.
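The adaptive-difficulty loop behind such tutoring platforms can be sketched with a simple staircase rule: step the difficulty up after consecutive correct answers and down after a miss. This is a generic illustration only; the actual algorithms used by platforms like Squirrel AI are proprietary and far more sophisticated.

```python
# Staircase-style difficulty adjustment: a toy stand-in for the
# adaptive logic of AI tutoring systems.

def next_difficulty(level, correct, streak, min_level=1, max_level=10):
    """Return (new_level, new_streak) after one answered question."""
    if correct:
        streak += 1
        if streak >= 2:                   # two in a row: make it harder
            return min(level + 1, max_level), 0
        return level, streak
    # a miss: ease off immediately and reset the streak
    return max(level - 1, min_level), 0

# Simulate a short session: four correct answers, then a miss.
level, streak = 3, 0
for answer in [True, True, True, True, False]:
    level, streak = next_difficulty(level, answer, streak)
print(level)  # difficulty the student ends the session on
```

The design goal is the same as the classroom promise above: each student spends most of their time at the edge of their current ability rather than at a fixed class-wide level.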

Just as AI can tailor a student’s education, it is also beginning to personalise healthcare from the moment a person seeks care.

  • Healthcare: From treating sickness to predicting it

Healthcare systems worldwide are strained by rising costs and a focus on treating people only after they get sick. AI is playing a central role in shifting this focus from treatment to anticipation, making public health more predictive and preventive.

  • Faster, more accurate diagnosis: Computer vision algorithms can analyse medical images like X-rays and MRIs with incredible speed and accuracy. These systems can identify anomalies in seconds, flagging risks that allow for intervention before a crisis occurs and leading to better patient outcomes.
  • Predicting health crises: During the COVID-19 pandemic, AI-driven epidemiological models helped governments predict where outbreaks would occur, allowing them to allocate resources more effectively. Beyond pandemics, these models can analyse health records to flag patients at high risk of conditions like sepsis, allowing hospitals to intervene preventatively.

While AI’s impact on personal health is profound, its ability to analyse and optimise large, complex systems is also reshaping the public infrastructure we all share, starting with our transport networks.

  • Transport: From traffic jams to smart traffic flow

Every city dweller is familiar with the frustration of traffic congestion, transit delays, and infrastructure failures. By analysing vast amounts of real-time data, AI is helping make transport systems adaptive and predictive, smoothing out the flow of people and goods.

  • Dubai’s smart traffic signals: In Dubai, AI-powered traffic lights respond dynamically to real-time traffic conditions. Instead of following a fixed schedule, they adjust their timing to reduce congestion and cut down on waiting times for drivers.
  • China’s city brain: This massive platform, developed by Alibaba, analyses city-wide data from cameras, GPS, and public transit. It orchestrates traffic flow across entire districts, cutting emergency-vehicle response times by minutes, a margin that can save lives.
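The adaptive-signal idea can be sketched in a few lines: split a fixed cycle's green time across approaches in proportion to observed queue lengths, with a minimum green so no approach is starved. This is a toy model, not how Dubai's deployed system actually works; real controllers use far richer inputs and coordination across junctions.

```python
# Proportional green-time allocation: a toy version of demand-responsive
# signal timing. Rounding may drift the total by a second or two.

def allocate_green(queues, cycle=90, min_green=10):
    """Return green seconds per approach for one signal cycle."""
    n = len(queues)
    spare = cycle - n * min_green          # time left after minimum greens
    total = sum(queues)
    if total == 0:                         # no demand: split evenly
        return [cycle // n] * n
    return [min_green + round(spare * q / total) for q in queues]

# Four approaches; the north-south road is congested.
print(allocate_green([30, 5, 25, 4]))
```

Contrast this with a fixed schedule, which would give the empty approaches the same green time as the congested ones regardless of demand.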

These examples show a future of exciting possibilities, but this progress also comes with significant challenges and questions that society must carefully address.

Also Read: Asia rises in the AI chip race: China to outgrow US by 30 per cent by 2030

The big questions: Balancing progress with people

Deploying this technology responsibly requires confronting the profound governance challenges it creates. While the benefits are clear, AI’s use in the public sector forces us to ask critical questions about fairness, accountability, and our fundamental rights.

  • Is it fair? The challenge of bias

AI systems learn from the data they are given. If that data reflects historical human biases, the AI can learn and even amplify those same prejudices. For example, a predictive policing model trained on biased arrest records could unfairly target a community that was already over-policed, creating a vicious cycle of discrimination.
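The feedback loop described above can be made vivid with a stylised simulation. Everything here is invented for illustration: two districts with identical true crime rates, a patrol allocation that follows past recorded arrests, and an assumed exponent `alpha > 1` standing in for the tendency of heavier patrolling to generate disproportionately more recorded arrests.

```python
# Toy model of a predictive-policing feedback loop. With alpha > 1,
# an initial disparity in the record compounds round after round even
# though both districts have the same underlying crime rate.

def patrol_share(initial_share, rounds=5, alpha=1.2):
    """Track district A's share of patrols over time, when patrols
    follow recorded arrests and recording scales superlinearly
    (exponent alpha) with patrol presence."""
    shares = [initial_share]
    s = initial_share
    for _ in range(rounds):
        a, b = s ** alpha, (1 - s) ** alpha
        s = a / (a + b)                    # next round's patrol split
        shares.append(round(s, 3))
    return shares

# District A starts with 60% of recorded arrests due to past over-policing.
print(patrol_share(0.6))   # the share drifts upward every round
```

With an unbiased starting record (a 50/50 split) the loop stays balanced, which is exactly why the quality of the training data, not just the algorithm, determines whether the system is fair.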

  • Who’s in charge? The accountability problem

Many advanced AI systems are a “black box,” meaning it can be difficult, even for their creators, to understand exactly why they made a specific decision. This raises a critical question: if an algorithm wrongfully denies a person welfare benefits or flags them as a risk, who is accountable for the mistake?

  • Are we being watched? The privacy puzzle

To work effectively, AI often requires vast amounts of data about citizens, from their health records to their daily travel patterns. This creates a fundamental trade-off, raising serious concerns about the potential for government surveillance and the protection of personal privacy.

Conclusion: Governing wiser, not just faster

Artificial Intelligence is clearly more than just a new technology; it is a powerful force that is reshaping the relationship between citizens and the state. It offers the tools to build a government that is not only faster and more efficient but also more proactive and personalised.

However, the true measure of success for AI in government will not be speed or cost savings alone. It will be whether these tools are used to strengthen the social contract by making governance more transparent, inclusive, and trustworthy.

The goal is not simply to adopt AI the fastest, but to integrate it wisely, ensuring that this powerful wave of technological innovation is carefully aligned with our democratic values and the public’s trust.

Watch this space for a follow-up article taking a deeper dive into AI applications in government, and where opportunities lie for startups and investors.

A comprehensive analysis, “Tools to Deliver The AI State – a Technology Watch and Horizon Scan”, is available here.

You can also find me on my podcast and newsletter, where I share regular insights on geopolitics and leadership.

Editor’s note: e27 aims to foster thought leadership by publishing views from the community. Share your opinion by submitting an article, video, podcast, or infographic.

Enjoyed this read? Don’t miss out on the next insight. Join our WhatsApp channel for real-time drops.

Image generated using AI.

The post AI in action: How governments are using technology to predict, prevent, and personalise appeared first on e27.


When privacy becomes a privilege: Balancing user protection with fair access for innovators

Over the past few years, I’ve come to genuinely admire how far Apple and Google have pushed the world toward stronger privacy and security.

Their efforts have not only strengthened protections on their own platforms but also forced the entire tech industry to rethink how data is handled, stored, and protected. Their frameworks — from Apple’s App Tracking Transparency to Google’s Privacy Sandbox — have raised the bar for what users expect in terms of trust and control.

These frameworks didn’t just appear overnight; they were the result of years of sustained effort and a growing recognition that privacy is not a luxury but a necessity in the digital age.

But as someone working in privacy-preserving AI, I’ve also seen the other side of this progress: access. This is where the narrative gets complicated. While these safeguards are undeniably beneficial for users, they also create an unintended consequence: they can shut out the very innovation that aims to enhance privacy further.

The paradox of privacy

Every new safeguard limits who can access sensitive device signals — including notifications, app usage, and network patterns. That’s good for users. After all, no one wants their personal data to be exploited or mishandled. These protections ensure that users have more control over their digital footprints, which is a significant step forward in an era where data breaches and misuse are all too common.

Yet, in practice, these restrictions mean the same companies that set the rules also keep privileged access for themselves. This creates a dynamic where the platform owners, those with the resources and influence to shape these frameworks, can fully leverage the data they collect. Smaller players, even those with innovative solutions, are often left on the sidelines just when they need to prove their concepts.

Also Read: How to build customer trust with improved data privacy

Independent innovators — the ones building privacy-enhancing technologies that never move or expose data — often can’t even demonstrate their models because the APIs are closed. This is particularly frustrating because these innovators are often the ones pushing the boundaries of what’s possible in privacy-preserving tech. Without access to the necessary tools and data, their potential contributions remain untapped.

It’s a strange paradox: we protect privacy by preventing the very people designing privacy-safe systems from proving their value. In essence, we’re creating a system where privacy is protected, but only for those who already have power. The innovators who could help are left struggling to gain a foothold.

The bigger picture

Regulators have started to notice this imbalance. This is a positive sign, as it indicates that the conversation around privacy is evolving beyond protection alone to include fairness and accessibility.

  • The EU Digital Markets Act (DMA) now classifies large platform owners as “gatekeepers” who must support interoperability and fair access to the data business users generate.
  • Singapore’s PDPA and AI Governance Framework name Federated Learning, Multi-Party Computation, and Differential Privacy as key enablers of responsible data use.
  • Global standards bodies such as OECD and NIST are defining what trustworthy privacy-preserving collaboration looks like.
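Of the techniques named above, differential privacy is the easiest to sketch. Below is a minimal, illustrative example of the classic Laplace mechanism for a counting query (sensitivity 1, since one person changes a count by at most 1): adding noise of scale sensitivity/epsilon bounds what any single record can reveal. This is a textbook sketch, not any specific framework's implementation.

```python
import math
import random

def private_count(true_count, epsilon, rng):
    """Release a count with epsilon-differential privacy via the
    Laplace mechanism (query sensitivity assumed to be 1)."""
    scale = 1.0 / epsilon
    u = rng.random() - 0.5                       # uniform in [-0.5, 0.5)
    # inverse-CDF sampling of Laplace(0, scale)
    noise = -scale * math.copysign(1, u) * math.log(1 - 2 * abs(u))
    return true_count + noise

rng = random.Random(42)          # seeded only so the demo is repeatable
released = private_count(1000, epsilon=0.5, rng=rng)
print(round(released, 1))        # close to 1000, but never exact
```

Smaller epsilon means more noise and stronger privacy; the point of the regulatory frameworks above is that techniques like this let data be useful without exposing individuals.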

These developments aren’t about punishing Big Tech. Rather, they’re about creating a level playing field where innovation isn’t stifled by monopolistic practices. They’re about ensuring that privacy doesn’t become a monopoly, reserved only for those who own the operating system. The goal is to foster an environment where privacy is a shared responsibility, not a privilege reserved for a select few.

Also Read: How to unlock possibilities through data privacy enhancing technologies

A personal reflection

I don’t write this to criticise Apple or Google; their leadership in privacy has influenced how users perceive digital trust. In fact, their contributions have been instrumental in shifting the industry toward a more privacy-conscious standard. Without their efforts, we might still be in a world where user data is treated as a commodity rather than a right.

However, progress in technology should be inclusive, not exclusive. Inclusivity in this context means ensuring that the tools and frameworks designed to protect privacy are accessible to everyone building responsibly, not just those who already have a seat at the table. If we truly believe that privacy is a universal right, then access—guided by transparency and compliance, not control—must be part of that vision.

Because privacy shouldn’t be a privilege; it should be a right, available to everyone, regardless of their size or resources. It should be the foundation on which fair innovation is built.


Image courtesy: Canva



BioArk’s growth strategy plants seeds for a greener agricultural future

Jeremy Chua, Chief Technical Officer & Co-founder, BioArk

Farming practices across Asia face mounting pressure to increase output while reducing environmental damage. For BioArk, a Singapore-based agritech company, this challenge is a starting point for rethinking how fertilisers are made, applied, and integrated into existing systems without demanding costly changes from farmers.

Rather than focusing on history or legacy methods, BioArk’s team develops bio-based fertilisers that compete directly with conventional chemical inputs.

“Our goal is to provide a like-for-like substitute,” says Jeremy Chua, BioArk’s CTO and co-founder, in an email to e27. “One that performs as well, costs comparably, and doesn’t require farmers to rework their operations.”

Its flagship product, Arktivate, is positioned as an interchangeable input that delivers immediate results while improving soil conditions over time. The company frames this as part of a broader “symbiotic ecosystem” approach, blending ecological processes with applied science to produce measurable outcomes in crop yields, soil health and environmental impact.

Key to BioArk’s development philosophy is the view that plant health cannot be separated from environmental health.

“Nature manages nutrient cycling and biodiversity without external inputs,” says Chua. “We try to understand how that works, identify the underlying scientific principles, and build those into our product designs.”

Also Read: You are what you eat: Opportunities in Southeast Asia’s agri-food sector

This involves using biotechnology processes to incorporate sustainably sourced organic inputs. The aim is to enhance the availability and uptake of nutrients while supporting the surrounding soil microbiome. According to the company, field tests show that these fertilisers can match or outperform traditional inputs while reducing reliance on fossil fuel–based products like urea or mined resources such as phosphate and potash.

The company also points to early evidence suggesting that every tonne of its fertiliser used may help store about 0.5 tonnes of CO₂ equivalent annually through improved soil biology. While this data is still being validated, it speaks to a wider goal: to enable farming methods that are economically viable while contributing to climate mitigation and ecosystem regeneration.

Growth strategy

BioArk is currently focusing on expansion in Indonesia and is exploring similar opportunities across key Southeast Asian agricultural markets. Countries such as Vietnam, Thailand and the Philippines are particularly interesting, given their high food production levels and vulnerability to environmental degradation.

Matthew Edward Loh, Chief Executive Officer & Co-founder, BioArk

The company’s strategy involves close collaboration with local farming communities to adapt its products to specific soil conditions and crop types. In practice, this includes on-the-ground demonstrations, training sessions and ongoing agronomic support. This approach is intended to reduce barriers to adoption and ensure compatibility with existing agricultural practices.

Also Read: Singapore anchors inaugural ClimAccelerator for agritech startups in APAC

The decision to avoid requiring major behavioural shifts reflects one of the company’s core assumptions: that new tools for sustainable agriculture must be easy to use, or risk being ignored altogether. Many of today’s alternatives—such as organic farming or precision agriculture—offer environmental benefits but often require significant capital investment or operational changes.

“Inertia is a real issue,” Chua says. “If we want widespread change, solutions must fit into current systems, not expect systems to change first.”

BioArk’s approach also reflects broader shifts in how agricultural innovation is pursued, particularly in urban hubs such as Singapore. As a regional centre for agri-food research, the city-state has provided BioArk access to government-backed R&D facilities, startup support networks and policy frameworks that prioritise sustainability.

Partnerships with local agencies, including Enterprise Singapore (ESG), have supported BioArk’s product development and helped position its technology for international deployment. Chua says this environment has allowed the team to quickly iterate and validate its fertilisers before scaling into wider markets.

Looking forward, BioArk aims to expand its manufacturing capacity, extend field trials across Asia and forge new partnerships to accelerate adoption. Its long-term objective is to reduce the agricultural sector’s reliance on synthetic fertilisers while contributing to improved soil resilience and carbon storage.

“Our focus is on scaling what works—environmentally, scientifically and economically,” Chua says. “Not in isolation, but in partnership with the growers who work the land every day.”


Image Credit: BioArk



Indonesia’s Elevarm runs a data-driven farming model, targeting national expansion by 2026

In a powerful demonstration of purpose-driven innovation, Indonesian agritech company Elevarm has unveiled its 2024 Impact Report, shedding light on its transformative contributions to the nation’s horticulture sector. The report outlines how Elevarm’s integrated ecosystem model is revolutionising farming practices, improving farmer livelihoods, and advancing sustainable agriculture across the archipelago.

“Indonesia’s food security depends on empowering farmers with the right tools, knowledge, and support,” says Bayu Syerli Rachmat, co-founder and CEO of Elevarm, in a press statement. “By directly addressing sustainability at the grassroots level, Elevarm is proud to help close the productivity gap while protecting the environment.”

In 2024, Elevarm supported more than 16,000 smallholder farmers. These farmers experienced a remarkable transformation, with 36.5 per cent reporting increased yields and average incomes rising from IDR12.1 million (US$735) to IDR14.1 million (US$857) per crop cycle.

Beyond financial gains, farmers saw a 14 per cent reduction in chemical usage, thanks to Elevarm’s organic solutions such as vermicompost, produced and distributed in-house.

At the core of Elevarm’s achievements lies its integrated ecosystem service model, designed to create “triple wins”: boosting livelihoods, enhancing food security, and strengthening environmental resilience. Through a blend of advanced technology, tailored financing, market infrastructure, and advisory services, Elevarm addresses systemic challenges in Indonesian agriculture.

Also Read: Unlocking agritech’s potential: Can Southeast Asia rise to the challenge?

Tech-driven cultivation support

Elevarm leverages cutting-edge tech to deliver tailored cultivation practices. Farmers are equipped with high-quality inputs, including seeds, organic fertilisers, and biostimulants. The company also employs tech-driven solutions such as soil testing, monitoring dashboards, IoT-based field devices, and a dedicated farmer app that offers real-time insights and personalised guidance.

Addressing one of smallholder farmers’ most significant barriers, Elevarm provides affordable loans tied to harvest repayment. This financial support covers an average of 62.5 per cent of farmers’ working capital needs, reducing dependence on informal lending channels.

Moreover, crop and life insurance options protect farmers from risks associated with climate events, pests, and unforeseen personal tragedies. By shifting reliance away from informal lending, the company intends to help farmers gain financial stability and peace of mind.

To ensure farmers have reliable markets for their produce, Elevarm has established a comprehensive market infrastructure. Under its Farmer Partnership Model, farmers commit to selling their entire harvest to Elevarm at mutually agreed-upon fair prices. This approach guarantees off-take certainty and strengthens market trust, reflected in the growing volume of produce sold directly to the company.

Sustainable and professional farming practices

Sustainability remains central to Elevarm’s vision. The company promotes Good Agricultural Practices (GAP), polyculture (adopted by 42.1 per cent of its farmers), organic fertilisation, and reduced chemical and water usage. Their flagship vermicompost and NextBio products are pivotal in improving soil health, enhancing plant resilience, and driving long-term environmental benefits.

A rigorous data-driven approach underpins Elevarm’s operations. The company employs stratified sampling, historical yield analysis, field surveys, and third-party datasets to measure impact accurately. This meticulous data collection informs strategic decisions and ensures transparency in reporting outcomes.

Also Read: Automation, AI, and agritech power Vietnam’s VC momentum

A robust governance framework incorporating Standard Operating Procedures (SOPs), Service Level Agreements (SLAs), and comprehensive risk management supports Elevarm’s model. This structure ensures consistent service delivery, timely input distribution, efficient payment processing, and reliable claim verification.

A vision for nationwide transformation

Looking ahead, Elevarm is poised to scale its model nationally. Following its focus on impact in 2024, the company plans significant expansion in 2025, targeting new high-value commodities such as shallots, tomatoes, and beans. It will also venture into agroforestry, revitalising underutilised lands in Purwakarta, West Java.

By 2026, Elevarm intends to extend operations to Sumatra and Sulawesi, with an eye on institutionalising its model through policy advocacy and government collaboration.

The company is also developing a predictive AI-powered digital platform that aims to become an indispensable tool for farmers, offering even more precise and timely insights.

As part of its growth strategy, Elevarm plans to introduce third-party audits and Social Return on Investment (SROI) analyses to further validate its SDG-linked outcomes. These efforts will expand the scope of measurable indicators, including gender impact, environmental footprint, and income stability.


Image Credit: Elevarm



Generative AI fatigue: Are we over‑automating creativity?

In less than two years, generative AI has gone from novelty to necessity. It writes our emails, designs our slides, drafts our articles, generates our images, scripts our videos, and even suggests what we should think next. For many organisations, the question is no longer whether to adopt generative AI, but how fast they can integrate it into every workflow.

Yet quietly, beneath the enthusiasm, a new sentiment is emerging across creative, professional, and knowledge‑based industries: fatigue.

Not burnout from overwork—but a subtler exhaustion. A sense that creativity is becoming automated, flattened, and strangely hollow.

This is generative AI fatigue. And it forces us to ask an uncomfortable question: are we over‑automating creativity itself?

The promise: Efficiency, scale, and democratisation

Let’s be clear: generative AI works.

It lowers barriers to entry. A solo founder can produce what once required an agency. A junior employee can draft with confidence. A non‑designer can create visuals. A non‑writer can publish.

From a business perspective, this is revolutionary. Generative AI compresses time, reduces cost, and scales output. In an economy obsessed with speed and efficiency, this feels like progress.

It also democratises access. For many people who previously lacked language fluency, technical skill, or formal training, AI tools provide a starting point—a scaffold.

But scale and speed come with trade‑offs. And those trade‑offs are now becoming visible.

The symptom: Everything starts to sound the same

Scroll LinkedIn. Read Medium. Browse Substack. Watch short‑form videos.

You’ll notice a pattern.

Polished. Structured. Clean.

And eerily interchangeable.

Thought leadership posts follow identical rhythms. Articles echo the same metaphors. Marketing copy repeats familiar frameworks. Even “personal” stories feel optimised rather than lived.

Also Read: Creativity at the heart of business growth

This is not because people have suddenly lost originality. It’s because generative AI systems are trained on what already exists—and rewarded for producing what statistically resembles success.

AI doesn’t invent culture. It averages it.

When creativity becomes prompt‑based and output‑driven, uniqueness is no longer the goal. Predictability is.

The result? Content abundance—and meaning scarcity.

The deeper problem: Creativity without friction

Creativity has always been inefficient.

It requires boredom, false starts, uncertainty, and discomfort. It often involves writing badly before writing well. Thinking slowly. Sitting with ideas that don’t immediately resolve.

Generative AI removes much of this friction.

At first, this feels liberating. But over time, it creates a subtle dependency: we stop wrestling with ideas and start selecting from options.

When AI does the first draft, the hard part disappears. And with it, something else quietly vanishes—the depth that comes from struggle.

This matters because creativity is not just output. It is a process.

Without process, creativity becomes aesthetic production rather than thinking.

The workplace impact: Faster, but shallower

In corporate environments, generative AI is often positioned as a productivity multiplier. Employees are encouraged—sometimes pressured—to use it to work faster, respond quicker, and produce more.

But speed has consequences.

When everyone uses similar tools trained on similar data, differentiation erodes. Strategy documents converge. Campaign ideas blur. Internal thinking becomes less exploratory and more formulaic.

Ironically, the very tool meant to enhance creativity may be making organisations more risk‑averse. AI optimises for what has worked before, not what might work next.

Innovation, however, lives in deviation—not repetition.

The psychological toll: Creative disengagement

There is also a human cost.

Many creatives report a loss of ownership over their work. When ideas are co‑generated, authorship becomes ambiguous. Pride diminishes. Motivation fades.

Others feel a constant pressure to “keep up”—not with other people, but with machines. If AI can produce ten variations in seconds, why should your one carefully considered idea matter?

This leads to a quiet disengagement. People stop investing emotionally in their output. Work becomes transactional. Creativity becomes mechanical.

Fatigue sets in—not from effort, but from meaninglessness.

Also Read: After failure, rekindling our creativity and finding balance

Are we confusing productivity with value?

At the heart of generative AI fatigue is a fundamental misalignment: we are measuring the wrong thing.

We celebrate output volume, not insight. Speed, not originality. Optimisation, not depth.

But creativity has never been about efficiency. The most influential ideas in art, technology, and culture did not emerge because they were fast or scalable. They emerged because someone saw the world differently—and took the time to articulate that difference.

When everything is optimised, nothing feels essential.

A reframe: AI as assistant, not author

The solution is not rejection. Generative AI is not going away, nor should it.

But we need a cultural reset.

Also Read: Can generative AI usher us into the gilded age of ad creativity?

AI should support creativity, not replace the thinking behind it. It should help with execution, not identity. Drafting, not deciding. Formatting, not forming opinions.

The most valuable creative work going forward will not be the most polished—it will be the most human.

Messy ideas. Strong points of view. Lived experience. Moral judgment. Context.

These are things AI cannot automate.

The future: Scarcity of thought, not tools

In a world flooded with generative content, originality will become rarer—and therefore more valuable.

The competitive advantage will not be who uses AI best, but who knows when not to use it.

Those who can still think slowly, write imperfectly, and sit with uncertainty will stand out.

Generative AI fatigue is not a rejection of technology. It is a signal.

A reminder that creativity was never meant to be frictionless—and that meaning cannot be automated.

The question is no longer whether AI can create.

It’s whether we still remember why we do.


Featured image generated using AI.
