What is your ecosystem strategy and why is it critical in 2024?

Recent conversations with VCs, founders, and growth-focused leaders make one thing clear: the ‘ecosystem’, and the strategy around it, is among the hottest topics of discussion.

That said, there is confusion about what this term, and the strategy, truly mean. In significant part, the confusion is driven by a lack of clarity around what a ‘network’, a ‘platform’, or an ‘ecosystem’ actually is.

Start with the oldest, most established model: the business network. This takes a centralised company’s product or service and scales market penetration through a set of distribution, reseller, and service/fulfilment partners. It’s a standard value chain.

Additional synergies may (or may not) arise around servicing a shared customer or persona, and can include joint marketing, account or regional planning, or CRM data sharing. Little open IP and few significant new innovations or products are created through this interaction. SAP is a great example of a massive network and of its impact on the company’s scaling and growth.

Platforms, by contrast, grow and win adoption through their inherent ability to reduce the friction of connecting buyers and sellers. Scaling transactions lowers costs and attracts further transactors, and the platform aggregates more products and/or services, enabling a network effect.

The central coordinator or “owner” of the platform benefits the most from data and insights across transactions and engagement. PayPal is a solid example of this, and arguably, so is Airbnb. The evolution and innovation of the platform are still owned and controlled by the platform orchestrator.

‘Ecosystem strategy’ is eclipsing ‘platform as a strategy’

An ecosystem is different. It means the sum is greater than the parts. It goes beyond an existing company network or managed platform. It is much more decentralised, as well as open, with shared IP and innovation.

This creates new value, makes the pie bigger, and enables coordination and collaboration on an open, unfettered basis with these drivers:

  • Information and knowledge sharing
    • Verified information and insights from trusted/certified sources in the ecosystem
    • Novel research, leveraged to build unique IP
    • IP is open to the ecosystem for continued innovation
  • Transaction and data sharing
    • Data and insights fuel standardisation, efficiencies, automation, scaling
    • The network effect is accelerated
  • Business and technology innovation via the ecosystem
    • Often clustered around key technologies
    • Business model evolution and differentiation
    • New value propositions for the customer
    • Defining or re-defining the Category

This agreed collaboration is catalysed by the ecosystem orchestrator but becomes organic as well.

Consider ARM as an example; it has recently been in the news with its IPO.  This ‘Processor IP’ ecosystem (also the Category name) has catalysed a group of silicon, system and software companies to ship more than 250 billion ARM-based chips to date.

These players are often ruthless competitors (Apple and Samsung, for example), but participation in the ecosystem means “the sum is greater than the parts”, and specific benefits arise.  Joint research into chip design benefits all players involved, even though they also compete with each other.

Remarkably, ARM itself has never actually produced a chip. It pursued an ecosystem strategy from the outset and continues to run one of the biggest and most successful ones to date. At least 90 per cent of smartphones produced globally today contain ARM designs (and thus pay royalties to ARM). For “higher-end smartphones”, the market penetration and share are an astounding 99 per cent.

Alibaba also took a very conscious and methodical ecosystem approach and quickly dislodged the market leader in China (eBay). Core to this was allowing buyers and sellers to connect directly with each other, as well as encouraging innovation within the ecosystem and across players. At many junctures, you can identify where Alibaba passed up revenue opportunities because they conflicted with its ecosystem strategy.

The global ecosystem shift – are you ready?

This fundamental understanding and rethink by savvy founders and new start-ups is leading to an ecosystem focus and strategy. It is also being driven by an understanding that 2024 will be a very different environment. Consistently, we hear these factors driving the thinking around the ecosystem route:

Cost of capital

With the cost of capital radically changed and unlikely to return to “zero rates” or “free money” in the foreseeable future, founders and innovators are shifting from “building it ourselves” to “leveraging an ecosystem”. How can I catalyse and orchestrate mutually beneficial research, design and build-out?

Platform disruption (or ‘I fought the law, and the law won’)

The major platforms look a lot like the former railroad monopolies. They will continue to draw heavy attention from government and regulatory bodies. The recent courtroom win by Epic Games (maker of Fortnite) over Google shows how disruptive this will be.

It also shows how nimble players will be able to seize new opportunities, especially when positioned as legitimate, open ecosystem cooperation. Look for more disruption of platforms and new Categories defined with ecosystems at their heart.

Global trade and business flows

Unless you have been living under a rock, it is impossible to miss the massive changes underway in trade, sourcing, trade finance, shipping, and documentation, whether through “on-shoring”, “friend-shoring”, or diversification (much of it away from China). These seismic shifts open a multitude of new angles. It is up to Category thinkers and designers to identify these new opportunities and redefine the Category and its supporting ecosystem.

Climate and sustainability

Stating the obvious, the impact of climate and sustainability will remain relentless.  It will disrupt, redefine, and catalyse new thinking around the measurability and interoperability of Greentech. It will create ‘strange bedfellows’ and new combinations of dependencies and partnerships. New Categories and supporting ecosystems can and must emerge. Within this, the opportunity for ecosystem strategy and deep thinking is immense.

Don’t just find your tribe, build it

How can we seize this opportunity in 2024?

First, start with the problem. Don’t start with your product or your company. What is the unique problem that needs to be solved? How can I describe it from a compelling Point of View (one that leads to the problem)?

What is this new Category that is being redefined or created? And given that a Category cannot just be one company (you), how can you visualise the supporting ecosystem? What are the different markets, regulations, integrations, data-flows, delivery partners, R&D, analysts, and media (the list goes on) that are relevant and key to this new category and ecosystem?

Experience shows that visualising the ecosystem (the segments, players and value flows) is critical. But the additional key question is how you will track and engage with the different players. How do I prioritise with a structure for ecosystem tracking and optimisation? Who on my team will be responsible for different stakeholders?

We need you to lead us

Your ecosystem strategy and its optimisation are about thinking and playing bigger. It enables you to punch above the weight of a single company. It tells a story and shows potential participants how they and their organisation fit into the Category ecosystem and are part of solving the problem at its heart. It shows them that they can be part of something bigger. It builds a tribe. It builds a movement!

So lead, don’t follow, and demonstrate your clarity of strategy, POV, and the Category’s ecosystem: you will, in turn, be rewarded with growth and leadership!

Editor’s note: e27 aims to foster thought leadership by publishing views from the community. Share your opinion by submitting an article, video, podcast, or infographic

Image credit: Canva

The post What is your ecosystem strategy and why is it critical in 2024? appeared first on e27.

Izwan Zakaria: Navigating legal frontiers in the startup landscape

e27 has been dedicated to nurturing a supportive ecosystem for entrepreneurs since its inception. Our Contributor Programme offers a platform for sharing unique insights.

As part of our newly introduced ‘Contributor Spotlight’, we shine a weekly spotlight on an outstanding contributor and dive into the vastness of their knowledge and expertise.

In this episode, we feature Izwan Zakaria, Founder of Izwan & Partners, a KL-based law firm aiding startups and tech companies. Joining our community in 2020, he’s contributed 14 articles with over 20,000 views.

Izwan shares his personal and professional journey in this episode of Contributor Spotlight.

The driving force

As a startup lawyer, Izwan often encounters recurring questions from founders, such as when to form a company or how to raise seed rounds. Recognising the challenges, he uses straightforward language and adopts a first-person perspective in his writing to simplify legal complexities. Izwan aims to help founders navigate legal issues and ease their journey through the startup landscape.

“As an introvert, writing allows me to contribute and share my perspectives on legal matters. It is an invaluable and rewarding experience to meet founders at events who tell me how much they’ve benefited from the articles I’ve written. The published articles also serve as a repository that other founders facing the same legal challenges can find and access.

Also, the news website e27 has been my go-to resource for technology and startup news, especially in Southeast Asia. When I learned about the Contributor Programme, I knew I had to be part of it. The e27 team has been so supportive in allowing me to share thoughts on legal matters and regulatory trends,” he said candidly. 

Thoughts, goals, and journey

Izwan’s fascination with technology began at seven, when his mother bought him his first computer. Despite her likely dismay when he dismantled it to explore the hardware, the experience sparked his lasting interest.

With over a decade of corporate law practice, Izwan witnessed the transformative impact of platforms like Facebook and Twitter. Recognising the potential in the emerging digital economy, he founded Izwan & Partners six years ago in Kuala Lumpur. The law firm focuses on advising technology companies, startups, and investors.

“As a technology and startup lawyer who frequently deals with new emerging technologies, it is common to see how quickly these technologies advance, often surpassing the pace at which laws and regulations can catch up. For instance, there is so much news about artificial intelligence every day (someone told me you might need to hire a full-time person just to keep track of the daily updates!). The regulator in a jurisdiction may need to decide what the best approach to regulating the new emerging technology is,” he expressed.

In embedded finance, companies can integrate platform payments or specialised finance services, using APIs to plug in pre-built solutions from financial partners. Izwan notes the challenge of distinguishing legal liabilities among parties and the varied regulations service providers may encounter based on end users’ locations. Despite entrepreneurs’ hopes for regulatory clarity, he recognises the slower pace of regulatory change, driven by the need to balance public protection and the promotion of innovation.

“In Malaysia, we’re witnessing a slow but notable emergence of new boutique law firms. These firms are usually led by middle-aged practitioners who left established firms to set up their own. They may likely focus on more niche practices. It is an encouraging trend as clients may have a wider range of options when it comes to choosing a lawyer that best serves their needs. The emergence of these young and tech-savvy lawyers also means they are more receptive to new tools, such as the use of generative artificial intelligence (Generative AI) and embracing remote work,” he said.

Advice for budding thought leaders

Izwan expressed, “In 2021, I was engaged by the World Bank as a short-term consultant after they found an article I wrote on the web. I ended up co-authoring a study together with other economists where we assessed Malaysia’s startup financing ecosystem. I’m glad that several recommendations we’ve made were adopted by the government to boost the funding ecosystem. I would not think that I’d have gotten the opportunity to contribute to the World Bank if I had not been writing at the time.”

According to him, writer Anne Lamott said it best: “Almost all good writing begins with terrible first efforts. You need to start somewhere”. 

Juggling too many things?

Izwan maintains a physical diary for meeting notes and uses Google Calendar to organise weekly schedules. He allocates specific times throughout the day for personal activities, ranging from walks to reading.

“We all know achieving a work-life balance is tough. You may also have heard that “it isn’t about finding the time, but making the time”. So, the key to fulfilling life balance is about priorities, which may also include saying no more often to things sometimes,” he said.

Staying in the loop

Izwan shares, “An avid debater during my law school days, the reading habit is something that I continued since my university years. The usual reading sources include The Economist, The New York Times, Financial Times and The Edge Malaysia. I enjoy reading (the whole look, feel and smell of a physical book, of course!). I read about pretty much anything that interests me (usually entrepreneurship and technology).”

In scaling startups, Izwan notes the potential legal challenges arising from diverse regulatory frameworks, given the territorial nature of laws. Staying updated on regulatory positions, including neighbouring jurisdictions, is essential for practitioners dealing with the complex landscape of cross-border compliance. He recommends Lexology as a resource for timely regulatory updates authored by major legal providers. He suggests following legal thinkers like Lawrence Lessig and Richard Susskind for insights into emerging legal issues.

“Companies are now larger than people and countries in terms of deploying capital (Apple has more cash than the US and the UK governments) and influence. We are seeing the shift from shareholder capitalism to stakeholder capitalism.

If you just began your entrepreneurial journey, you’re in a unique position to embed your startup’s DNA to address environmental, social, and governance (ESG) issues, unlike large companies that may be tied to legacy issues and have to retrofit their entire culture to solve ESG related issues. If you are building a mobile app, you may also want to think about accessibility issues, like someone who is visually impaired, when designing a mobile app that may also benefit from your product or service,” he concluded.

Are you ready to join a vibrant community of entrepreneurs and industry experts? Do you have insights, experiences, and knowledge to share?

Join the e27 Contributor Programme and become a valuable voice in our ecosystem. 


Experts advocate thoughtful regulation for the rapid rise of Generative AI

In the dynamic landscape of Southeast Asia, where technological innovation intertwines with diverse cultures and economies, the rapid ascent of Generative AI sparks a crucial dialogue among 20 leading experts. While everyone, from industry stalwarts to visionary entrepreneurs, is bullish about Generative AI’s transformative potential and impact, they are equally concerned about its potential misuse and ethical and privacy risks.

Peng T Ong of Monk’s Hill Ventures cautions against premature integration into mission-critical applications, emphasising the imperative of understanding what the “talking dog” of AI truly “thinks.” Across the spectrum, concerns voiced by Steve Brotman of Alpha Partners echo a call for thoughtful regulation driven by ethical considerations and a need to prevent potential misuse.

This feature encapsulates the nuanced insights of Southeast Asia’s AI luminaries, shedding light on the region’s stance towards shaping the responsible future of Generative AI amidst an evolving technological landscape.

Peng T Ong, Managing Partner, Monk’s Hill Ventures

The emergence of Generative AI has broken through a significant barrier in creating valuable and diverse content, and it does so at a relatively affordable cost. While it is currently unsuitable for many mission-critical applications, the prospect of getting there may be closer than we think.

I look at it this way––the ‘dog’ is ‘talking’. But just because the dog is talking doesn’t mean we should put it behind the wheel of a truck or at the trading desk of a billion-dollar hedge fund. We don’t know what the dog is ‘thinking’. A straitjacket must be put around the AI networks before we let them touch anything near mission-critical.

My concern is that folks behind this AI boom aren’t thinking sufficiently or thoughtfully about fundamental world engineering requirements before connecting these things to the physical world.

One possibility is for straitjackets to be implemented through computationally tractable algorithms — software for which we can deterministically predict behaviour. Rule-based expert systems will return to vogue, or perhaps using knowledge graphs (data representing knowledge) will become more pervasive.
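The deterministic “straitjacket” Ong describes can be sketched in a few lines. This is a minimal illustration, not a real product: the rule names, limits, and action fields below are all hypothetical assumptions chosen for the example.

```python
# Hypothetical sketch: a deterministic, rule-based guardrail that vets an AI
# system's proposed actions before they reach anything mission-critical.
# Rule names, limits, and action fields are illustrative assumptions.

GUARDRAIL_RULES = [
    # (rule name, predicate that must hold for the action to pass)
    ("max_trade_size", lambda a: a.get("amount_usd", 0) <= 10_000),
    ("allowed_action", lambda a: a.get("type") in {"quote", "report", "flag"}),
    ("no_external_io", lambda a: not a.get("writes_external", False)),
]

def vet_action(action):
    """Return (allowed, violated_rules). The verdict is fully deterministic:
    the same action always yields the same result, which is what makes the
    wrapper computationally tractable to audit."""
    violations = [name for name, ok in GUARDRAIL_RULES if not ok(action)]
    return (not violations, violations)

# An oversized, disallowed trade is rejected with the rules it broke.
print(vet_action({"type": "trade", "amount_usd": 50_000}))
# → (False, ['max_trade_size', 'allowed_action'])
```

The point of the design is that the guardrail's behaviour can be predicted and audited line by line, regardless of what the model behind it is “thinking”.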

Steve Brotman, Founder and Managing Partner, Alpha Partners

There is a need for thoughtful regulation in this space. The primary reasons for this stem from the potential risks and ethical considerations associated with deploying robust AI systems.

Generative AI can create content and make decisions with a level of sophistication rapidly approaching human-like capabilities. While this presents incredible opportunities for innovation and efficiency, it also raises significant concerns regarding misinformation, privacy, intellectual property rights, and the potential for misuse.

Regulation in this context is not about stifling innovation or curtailing the development of AI technologies. Instead, it’s about ensuring their development and deployment are done responsibly and ethically. Effective regulation can help establish guidelines for the safe use of AI, protect individuals’ rights, and prevent harm that may arise from these technologies’ misuse or unintended consequences.

Rudi Hidayat, Founder and CEO, V2 Indonesia and CIX Summit 2024

Implementing regulations could provide a necessary framework to address various concerns surrounding Generative AI, including ethical use, accountability, and potential misuse. Policymakers aim to establish guidelines that ensure these tools positively contribute to society without causing unintentional harm. It’s essential to strike the right balance in regulation to encourage the responsible development and application of Generative AI. The focus should be on mitigating specific risks rather than imposing overly restrictive measures that impede progress.

It’s crucial to have a platform that brings together professionals, business leaders and AI practitioners. This space would enable education and facilitate conversations, especially on Generative AI and its regulations, to contribute to the broader discourse on the role of AI in shaping the future.

Bharat Buxani, Senior VP – Marketing, 99 Group Indonesia (Rumah123.com and 99.co)

The question of whether Generative AI should be regulated is multifaceted. We acknowledge the pivotal role Generative AI plays in shaping the digital realm, especially within the real estate sector. While recognising the importance of potential regulations, our focus remains on using AI to elevate consumer experiences and address their evolving needs.

The responsible utilisation of Generative AI can catalyse innovation, providing personalised, efficient, and groundbreaking solutions for our users. In navigating this landscape, it is essential to strike a delicate balance, promoting innovation while proactively addressing ethical considerations. By doing so, we aim to maximise the positive impact of Generative AI within the real estate industry and beyond.

Minhyun Kim, CEO of AI Network

Generative AI should be regulated to address the growing ethical concerns, privacy issues, and misinformation risks. Without some form of regulation, we run the risk of generative AI developing in a manner that contradicts the best interests of society. This includes clickworker exploitation, copyright infringement, and training data manipulation. There is also the ever-present threat of sudden mass unemployment. Regulation would help smooth the transition and give societies time to adapt.

The challenge, of course, is how to regulate it. Traditional regulatory processes may be too slow to keep pace with AI’s rapid advancements. Alternative governance mechanisms, such as decentralised autonomous organisations (DAOs), have potential because they are designed to govern open-source software with better agility and responsiveness. They can also bring together more diverse stakeholders with acute knowledge of the technology and its potential impacts.

Hayk Hakobyan, CEO and Co-Founder of Bizbaz

It is best to regulate generative AI to mitigate potential misuse. There are four key areas where regulation is needed, covering the use of generative AI to:

  1. Create or promote harmful content, such as deepfakes or other forms of misinformation. There are several examples of this involving celebrities today.
  2. Create or promote discriminatory or biased content. For example, a generative AI model trained on a dataset of biased news articles could generate text that is also biased, hurting people who are already marginalised or discriminated against.
  3. Commit privacy violations. For example, a generative AI model could be used to create fake images or videos of people without their consent, or could inadvertently share or use individuals’ private or confidential information.
  4. Initiate complex cyber-attacks, for example by generating realistic phishing emails or creating fake identities for malicious purposes. Regulation could address these security risks and define standards for deploying generative AI systems.

To address this, generative AI models should be tested for bias and discrimination, built to respect people’s privacy, and labelled as such so that people know they are interacting with an AI system rather than a human; a certification scheme for generative AI models should also be established.

Dusan Stojanovic, Founder of True Global Ventures

The AI industry needs to have a more open playfield. Regulation is required as soon as possible and is way more critical than regulating the blockchain industry. AI will have a much more crucial impact on our future than blockchain.

Khos-Erdene Baatarkhuu, Group CEO of AND Global

Generative AI, exemplified by the remarkable rise of ChatGPT, offers a world of promise and complexity. The need for regulation emerges from the duality of AI, where immense potential meets significant challenges.

A comprehensive approach includes establishing AI governance bodies, nurturing public-private collaborations, and investing in AI literacy. Dealing with the technical and ethical aspects requires the collective wisdom of stakeholders from various backgrounds.

Tal Barmeir, Co-Founder and CEO, Blinq.IO

I firmly believe in the necessity of regulating generative AI, with a primary focus on transparency.

As a CEO, I understand the ethical obligations associated with AI deployment. Regulations are a cornerstone for establishing ethical standards, ensuring accountability, and fostering public trust.

Looking at financial services, for example, where AI is integral to risk assessment and decision-making, regulations are crucial. Individuals deserve to know when AI algorithms influence economic outcomes, promoting fairness and ethical use.

Moreover, in legal and judicial systems, ensuring transparency is paramount. Regulations can set protocols for disclosing AI involvement in evidence or documents, preserving the integrity of legal proceedings and bolstering trust in the justice system.

Finally, regulations can mandate clear disclosure when AI is part of content creation in news and media, where misinformation is a growing concern. This is essential to combat the spread of misinformation and enable the public to distinguish between human-created and AI-generated content.

Kenneth Tan, Co-Founder and CEO of BeLive Technology

I argue against the popular notion that regulation stifles innovation.

For one, there are massively beneficial second-order effects of enforcing safeguards: trust is fostered, and organisations are more likely to accelerate the adoption of AI tools.

This, in turn, encourages more innovation as the ecosystem builds itself around early success stories. Leaders emerge, entrench themselves, and in some cases, establish monopolies. You would then witness the “disruptors” who break the spell of dominance and democratise the use of that technology — with consumers being the ultimate beneficiaries. This simply cannot happen without the trust of organisations and a regulatory body to turn to.

However, regulation cannot be mandated with a broad-stroke, shotgun approach. We should systemically discuss specific AI technologies, understand what problems they are causing, and, most importantly, who these problems affect.

Alvin Toh, CMO of Straits Interactive

On a global level, regulation is already underway. China’s regulations, for instance, address AI-related risks and introduce compliance obligations on entities involved in AI-related business. Three specific laws to regulate AI exist in China — the Algorithm Recommendation Regulation, Deep Synthesis Regulation, and the Generative AI Regulation.

Recently, representatives from 30 countries gathered to discuss the significant opportunities and risks posed by AI as signatories of the Bletchley Park Declaration. Emphasising the need for AI development to prioritise human-centric, trustworthy, and responsible approaches, the declaration acknowledges the transformative potential of AI across various sectors but also highlights the associated risks, particularly in domains like cybersecurity and biotechnology. The declaration stresses the urgency of addressing these risks, especially concerning powerful and potentially harmful AI models at the technology frontier.

The signatories commit to international cooperation, inclusive dialogue, and collaborative research to ensure AI’s safe and responsible deployment, recognising the importance of engagement from diverse stakeholders, including governments, companies, civil society, and academia. The declaration also outlines a comprehensive agenda focusing on risk identification, policy development, transparency, evaluation metrics, and establishing a global scientific research network on frontier AI safety, and the nations pledge to reconvene in 2024 to advance these objectives further.

Thus, it is imperative for firms intending to deploy Generative AI in their products to have a good AI governance structure in place in anticipation of AI regulations currently being considered by governments worldwide. You don’t want to invest huge dollars and be on the wrong side of the international standard guidelines and regulations for this rapidly developing technology.

Mauro Sauco, CTO and Co-Founder, Transparently.AI

On the ethical side of things, there are biases and discrimination in content generation that we want to make sure that we avoid, so regulation could help us prevent content that can hurt other people based on race, gender, or religion.

There are issues with copyright, especially with generative AI. We know this is one big problem in generative AI: who owns the material that comes out, given that it is a derivative of all the material that went in? So that’s just some of the considerations.

We can prevent many of these issues by putting some regulations around this. Policing the output is one thing, but having these regulations will also impact the machine-learning process of how people are training their AI models to make the work more nuanced. Regulation, in this sense, addresses content generation from a systemic standpoint.

We are for regulations that do not stop or hurt innovation. We are for flexible regulations that can pivot depending on the circumstance.

Apichai Sakulsureeyadej, Founder and CEO, Radiant1

Generative AI needs to be regulated by ethical standards. We need to adhere to the guidelines of the Personal Data Protection Act. It is a good start while combining it with a more comprehensive approach.

Sourabh Chatterjee, Group CTO, Oona Insurance

Generative AI is the world’s fastest-developing technology and offers unparalleled opportunity. Given the beast’s nature, Gen AI will inevitably be regulated, with many policymakers seeking to introduce AI-specific legislation. However, the approach must remain pragmatic as this technology undergoes continuous evolution from where it is today.

Ultimately, striking a delicate balance involves establishing fundamental guardrails without compromising openness, innovation, and the myriad benefits of Gen AI.

Sanjay Uppal, Founder and CEO, finbots.ai

Generative AI’s meteoric rise and its potential impact on our way of living calls for a governance compass that combines Responsible AI practices by developers of AI and regulatory measures that would set a prudent path forward. Transparency isn’t optional; it’s the cornerstone of AI reliance, demanding globally endorsed standards illuminating AI’s inner workings.

However, AI is in its infancy and evolving rapidly. Therein lies the challenge. Just like you would put guardrails for a human child’s safe growth, you equally want to allow space for innovation and self-expression. The development of AI would be no different.

As regulators and governments seek to contain the negative fallout of reckless use of AI, the challenge will remain in identifying how tight or lax these regulations should be.

Lim Sun Sun, Professor of Communication & Technology at SMU

From the perspective of commercial enterprises, from bootstrapped startups to Fortune 500 companies, generative AI offers abundant, powerful tools for streamlining and expediting business processes. Since businesses are always motivated to improve their bottom lines, the efficiencies of digitalisation and AI adoption have tremendous appeal.

However, not all companies will pay equal attention to the risks of leveraging AI, such as algorithmic biases, privacy protections or faulty automated decisions. Regulatory oversight is thus vital for ensuring that businesses adopt AI with the necessary safeguards to ensure ethical and responsible use.

Jerrold Soh, Assistant Professor of Law at SMU

Regulation is typically needed when consumers lack the ability and producers lack incentives to take precautions against a product’s risks. Like mass-produced food, today’s Generative AI systems are made through complex processes involving multiple corporate stakeholders, data inputs, and computational techniques that consumers have little visibility into. Producers have incentives to overstate their products’ nutritional benefits while avoiding questions about ingredient quality.

The speed and scale at which such systems are being developed, deployed, and used, including to generate harmful content like pornographic fakes, commercial rip-offs, and political lies, suggest that if left unchecked, generative AI could poison society’s vital information streams.

Max Del Vita, CPO, MoneySmart Group

Generative AI is still relatively nascent and evolving, making the regulation question nuanced.

On one hand, excessive regulation at this early stage could stifle innovation and slow down the pace of discovery and development. On the other, the technology poses risks, especially regarding the potential for misuse and impersonation.

A middle-ground approach is prudent. Instead of heavy-handed regulation, establishing guiding ethical principles can be the initial framework to ensure generative AI’s responsible use and development. Technologies like blockchain can also play a complementary role by enhancing the trustworthiness of AI-generated content. By providing a tamper-proof, decentralised data record, blockchain can verify and trace the source of AI-generated material, adding a layer of security and reliability that contributes to responsible use and mitigates potential risks.

Francis Lui, CEO of NexMind

As a founder in the AI space, I do not believe it is necessary to regulate Generative AI at this stage; the technology is too complex to regulate entirely, and regulation this early will likely hinder innovation and growth in the space.

Given the current pace, AI technology will likely change rapidly, making it challenging to create adaptable rules.

Ensuring that Generative AI is developed and used responsibly and ethically is essential. Within the development process, self-regulatory measures can go a long way towards ensuring the entire onus of comprehension doesn’t lie with governmental regulators, which would inhibit growth.

Simon Quirk, Co-Founder, Gracenote.ai

AI will have many positive impacts that will benefit businesses, governments and humanity, especially with the right guiding principles. But equally, when used or allowed to act in ways that are not for the greater good, the potential for adverse consequences is great. Therefore, like anything that can be used for good or bad, implementing rules is key to making sure AI usage aligns with societal norms.

Marianne Winslett, Venture Partner at R3i Capital

It’s not too early to ensure that AI software development follows the best practices of software development anywhere, including proper collection and documentation of training data and ample testing using both standard and adversarial scenarios. Along with respect for privacy and data ownership, these practices are also in developers’ best interests if they wish to minimise future litigation. Beyond these points, it’s a bit early for regulations specific to generative AI, which is still in its infancy.

Nelaka Haturusinha, Director at Striders Global Investment

Regulating generative AI is essential to harness its transformative power responsibly, safeguarding against unintended consequences and ensuring ethical boundaries guide innovation.

Michelle Duval, CEO and Founder of Fingerprint for Success

Generative AI groups must unify and work together to be a force for good. The only way this can be achieved is through active collaboration and governance by the world’s most influential people who support both optimistic and pessimistic views on the significant impact AI can and will have on humanity. Through this group, we must work towards regulation.

Kelly Forbes, Exec Director at AI Asia Pacific Institute

In preparation for regulation, governance mechanisms need to be in place. Sectors such as education and healthcare are priorities, where the risks could potentially outweigh the benefits.

Warren Leow, Group CEO at Inmagine

Generative AI should be self-regulated, with responsible platforms playing a role to ensure creators can be paid or empowered accordingly. It would be difficult for governments to regulate across geographies.

At Inmagine, we believe generative AI is a game changer that would upend many business models. Ultimately, the dust will settle based on what the consumers want.

Hence, our responsibility lies in balancing user expectations and ensuring we empower our community and contributors to earn amidst a changing market.

Gullnaz Baig, Executive Director, Angsana Council

Multidisciplinary collaboration between those with technical prowess and those who understand society is required to build AI that is safe and equitable by design. While this need should be obvious, we should not expect it to come naturally without considerable pressure.

Given the current race to advance in AI development, a multidisciplinary approach, which could slow down the process, is considered cumbersome. Product development sprints do not lend themselves well to the postulations of policy teams. This is as true for the big tech companies racing to get ahead with their own foundation models as it is for startups integrating AI into their offerings.

Also Read: AI revolution: Balancing human empathy and robotic efficiency in customer service

So, we are either left with relying on tech leaders to do the right thing, if they can figure it out, or on states to develop punitive regulations to keep AI development in check.

Yet, regulations, even those as robust as the EU AI Act, are only useful as accountability frameworks. While they are the state’s most vital tool to wield against tech companies, they are also weak. They are often reactive and may struggle to keep pace with the rapid advancements in AI technology. In some cases, regulation kicks in only when the harm has been done.

There is a third option, one that enables the state to engage with technologists at a more meaningful level: AI can be developed to check other AIs, ensuring the ecosystem is safe overall. An example is DetectGPT, an AI that helps verify whether a text is AI-generated. States should view AI development as an ecosystem. Even as they develop regulations to check risks and harms, they should incentivise the development of AI for safety. National AI strategies should include specific provisions to co-invest in safe AI technology development, to seed-fund research into AIs that check for discrimination, IP-rights violations, and the like, and even to provide visa and tax incentives for companies that concentrate on building AI for safety, so that on balance the ecosystem is a safe one for all.

The post Experts advocate thoughtful regulation for the rapid rise of Generative AI appeared first on e27.


Muuse wants to eliminate single-use containers in Singapore’s thriving F&B scene

In November 2022, Singapore-based Muuse launched a partnership with Hawker Centre @ Our Tampines Hub, a facility managed by FairPrice Group’s Kopitiam, to provide a closed-loop reusable container system to reduce single-use waste from landfills.

The container rental system has enabled the eight hawker stalls working with the startup to shift away from disposable takeaway packaging. According to Muuse, over the course of the pilot project, 608 unique users borrowed 9,608 reusable containers, diverting single-use packaging that would otherwise have ended up in landfill.

“Over the past five years, our commitment to promoting reuse in Singapore has always included the vision of extending this service to hawker centres. These centres hold immense cultural significance, woven into the daily lives of many Singaporeans. If we are serious about addressing the problem of single-use waste, it is imperative to offer reusable packaging solutions in these essential contexts,” says Muuse CEO Jonathan Tostevin in an email interview with e27.

The system works by allowing consumers to borrow reusable containers for free. Each reusable container is tagged with a unique serialised QR code, which customers scan to borrow using the Muuse app.

Customers have 30 days to return their reusable containers to return points in the hawker centre for cleaning and sanitisation. The system also includes specialised return points for halal reusable containers.

Also Read: What is left behind in our conversation on climate change

According to the company, its tracking system shows that 99 per cent of containers were returned over the course of the pilot programme. Muuse offers a range of product solutions, including consumer-facing web and mobile apps and integrations with vending machines, third-party apps, and POS systems, enhancing convenience and engaging users in the reuse experience.

In running this project, Muuse received support from The SG Eco Fund. It expects to launch a full commercial partnership at another hawker centre in 2024, helping Singapore hit its Green Plan target of reducing the amount of waste sent to landfill per capita per day by 30 per cent by 2030.

Encouraging reusability

Tostevin says that the practice of reusable packaging in the F&B industry is still in its early phases, with ongoing efforts to enhance the overall service experience, especially regarding the convenience of purchase and return.

The team finds that a lean and agile approach is crucial in the product development process of solutions such as Muuse, allowing them to experiment rapidly in real-world business scenarios.

“At Muuse, our approach to product development is deeply rooted in customer centricity. Our team is dedicated to identifying users’ needs, expectations, and pain points in the reuse service experience. Regular user feedback and proactive engagement on the ground play a pivotal role in continually refining and enhancing our service offerings. For instance, at the hawker centre project in Our Tampines Hub, Muuse conducted multiple rounds of surveys and customer interviews with end-users and hawkers to understand their expectations, contributing to the identification of a more convenient and scalable model for reuse at hawker centres in Singapore,” he says.

Also Read: Demystifying the financial impacts of climate change with Intensel

“The Muuse platform focuses on addressing two primary requirements in the reuse space – achieving traceability of packaging in circulation and retaining/growing consumer participation.”

When asked about the platform’s user acquisition strategy, Tostevin says that numerous F&B businesses have approached them to seek sustainable alternatives.

“Many have already begun transitioning from single-use plastics to more expensive disposable options and are now subsequently exploring the possibility of incorporating reuse into their operations. Many of our clients have internal or externally set waste reduction targets, with packaging constituting a significant portion of their waste stream,” he says.

“However, we understand that the shift from disposables to reusable packaging entails more than a straightforward replacement—it requires adjustments to operational processes, staff training, and effective consumer communication. To address this, we aim to simplify the transition by optimising the user and vendor experience, ensuring it is straightforward and convenient. Our pricing is competitive with single-use packaging, and we strive to provide reusable solutions that benefit both consumers and vendors, such as stackable containers.”

Eradicating single-use waste

Muuse began as an idea from a group of eco-conscious surfers who, while visiting Bali, saw everyday trash in the seas and questioned how those items had ended up there.

“We have been privately funded thus far and have sufficient resources to sustain our operations until the end of 2024. Throughout 2024, we are focused on hitting our targets of US$1 million in revenue (3x in 2023) and to break even by the end of the year,” Tostevin says.

“We aim to seek external funding in 2024 to help us scale and hit our targets across the Americas and Asia regions.”

In its mission to eradicate single-use waste, Muuse wants to extend its smart IoT packaging throughout the F&B sector and beyond while developing both its digital and physical infrastructure on a city-wide scale.

“Our ongoing initiative includes introducing Muuse containers to another hawker centre next year, with plans for further expansion. We are dedicated to broadening partnerships, collaborating with brands such as Starbucks and PepsiCo, increasing revenue and clientele across all our markets, and continually enhancing the convenience and effectiveness of our service for both consumers and vendors. By the end of next year, we aim to have diverted one million single-use items from landfills.”

Image Credit: Muuse

The post Muuse wants to eliminate single-use containers in Singapore’s thriving F&B scene appeared first on e27.


AI companies raised record US$50B in 2023 globally: data shows

Companies in the artificial intelligence vertical raised nearly US$50 billion year-to-date globally, according to data compiled by data analytics company AltIndex.com.

This is the second-highest figure in the global market’s history.

The global AI industry has more than doubled in just three years, reaching a US$240 billion value and a quarter of a billion users worldwide. This impressive growth has drawn much attention from VC investors, who have poured billions of dollars into AI companies and startups. Despite the broader VC funding slowdown, strong fundraising activity continued this year, turning 2023 into the second-best year for fundraising in the AI market’s history after 2021.

Also Read: Experts advocate thoughtful regulation for the rapid rise of Generative AI

According to Crunchbase data, AI companies and startups raised US$78.5 billion in 2021 alone, more than double the amount seen in 2020. After a fantastic 2021, the fundraising activity dropped by 42 per cent year-over-year, but AI companies still raised a massive US$45.2 billion in 2022.

However, statistics show 2023 was even more successful, with companies and startups in this space raising US$4.5 billion more than last year, or US$49.8 billion in total.

Nearly 60 per cent of that value, or US$28.8 billion, was raised in the first half of the year, a slight 4 per cent drop compared to the same period last year. But fundraising activity soared in the second half, with AI companies raising another US$20.9 billion in funding rounds, almost 40 per cent more than in H2 2022.

While the total funding amount increased by US$4.5 billion year-over-year, the number of investments fell below 2022 figures, meaning AI companies raised more fresh capital in fewer funding rounds. Statistics show the AI industry saw 842 VC investments this year, down from 1,101 in 2022, implying the average round size rose from roughly US$41 million to about US$59 million.

With nearly US$50 billion raised in funding rounds this year, the cumulative funding amount in the AI space has climbed to an impressive US$333 billion.

Also Read: ‘Bringing world-class AI talent into Singapore can substantially enrich the industry’

Around 55 per cent of that value, or US$189 billion, went to companies from the US, with California as the leading hub. Asian AI companies raised the second-highest value in funding rounds, or over US$96 billion, and European companies followed with US$35.3 billion in total funding.

The post AI companies raised record US$50B in 2023 globally: data shows appeared first on e27.