Why we’re saying “no” to DeepSeek for now

DeepSeek, a notable player in the artificial intelligence field, has generated significant interest since the launch of its flagship model, DeepSeek-R1, in January 2025. The model reportedly matches leading systems on reasoning tasks while being far cheaper to train, and the industry is watching closely as DeepSeek challenges conventional assumptions about how frontier AI must be built.

As stakeholders and companies explore what DeepSeek offers, its potential impact on large language models (LLMs), which are the backbone of much of today’s “AI”, is becoming increasingly apparent.
However, amid all the excitement, it is important to keep a balanced perspective. Although DeepSeek’s approach is impressive, its long-term benefits, especially for startups, remain uncertain.

This article examines the various factors that have led my company to postpone adopting DeepSeek. It highlights the complexities of AI adoption and the careful choices that organisations like ours must make in our efforts to drive sustainable innovation.

What is DeepSeek and how is it different?

DeepSeek was founded in 2023 and is backed by the Chinese hedge fund High-Flyer. It gained prominence with the release of its flagship model, DeepSeek-R1, in January 2025. Matching the reasoning and mathematical abilities of leading competitors, DeepSeek has set itself apart, at least in popular opinion, with its unique development approach.

  • Open-source ethos

Many view DeepSeek as following an open-source philosophy, which contrasts with the more closed approaches seen in models such as OpenAI’s ChatGPT and Anthropic’s Claude. Although DeepSeek provides its code, technical details, and even its model weights for public use, modification, and implementation, questions about its overall transparency remain, including reports of censorship during the application and training phases.

  • Cost-effectiveness

Advocates point out that DeepSeek is reportedly trained at only 10% of the cost incurred by Meta’s Llama. This economical approach challenges the common belief that AI development must be prohibitively expensive, prompting a reconsideration of funding requirements in the field.

  • Environmental considerations

Proponents also argue that a less resource-intensive training process makes DeepSeek more environmentally friendly, addressing concerns over high carbon emissions from energy-intensive AI operations. This sustainable model not only lowers operational costs but also offers an environmentally conscious pathway for scaling AI.

Given these features, the enthusiasm surrounding DeepSeek suggests that its growth could democratise AI, enabling smaller companies and individual developers to access technology comparable to the most advanced LLMs. Reflecting its popularity, DeepSeek surpassed ChatGPT as the most downloaded free app in the US Apple app store by the end of January 2025.


The DeepSeek dilemma: Innovation vs practicality

We are not convinced.

DeepSeek’s progress has undoubtedly pushed the boundaries of LLM development, and while mainstream media may herald it as a breakthrough, practical realities present a more complex picture. Although DeepSeek will bring benefits to various sectors, those gains do not necessarily reach every kind of organisation.

For many startups, the direct gains from DeepSeek appear limited. There might be indirect benefits, such as cost reductions and performance improvements from established providers reacting to the competitive challenge posed by DeepSeek, but for startups like ours, these advantages do not seem substantial enough to warrant switching.

For context, my team at Transparently.ai currently uses Google’s Vertex AI for our generative AI needs. After conducting extensive preliminary tests on DeepSeek, we decided to stick with Vertex AI. Here is why:

  • Infrastructure demands: Running DeepSeek in a self-hosted environment requires considerable memory and GPU resources, whether on physical hardware or cloud services. Either way, maintaining acceptable latency requires significant spend on compute.
  • Direct API usage concerns: While using APIs directly from the DeepSeek service might seem like an economical alternative, the associated privacy concerns are too significant to ignore, ultimately forcing companies to either self-host or rely on hosted endpoints provided by cloud vendors.
  • Production complexity: Setting up a production-grade system involves addressing redundancy, availability, and global distribution, which can be daunting for startups with limited resources.
  • Operational costs: The overall expenses and operational overhead can quickly add up. The cost of GPU resources, combined with the complexities of managing multiple regional instances for high availability, networking, and load balancing, significantly increases the financial burden.
  • Scalability constraints: Self-hosted setups have no elasticity, that is, no ability to scale capacity up and down on demand. You must plan and provision for peak load, further driving up overall expenses.
  • Cloud provider offerings: Although some cloud providers offer DeepSeek through hosted endpoints, the advantages appear minimal. DeepSeek-R1 is comparable to or only slightly more advanced than current reasoning models. With major providers expected to incorporate similar innovations soon, the effort and cost to modify existing software for DeepSeek do not seem justified.
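The elasticity point above can be made concrete with a rough back-of-envelope sketch. Every figure below is a made-up placeholder for illustration, not a quote from any vendor: the contrast it shows is structural, namely that a self-hosted deployment pays for peak-provisioned GPUs around the clock, while a hosted API bills only for tokens actually consumed.

```python
# Illustrative cost comparison: self-hosted GPU serving vs pay-per-token API.
# All prices and volumes are HYPOTHETICAL placeholders, not vendor quotes.

HOURS_PER_MONTH = 730  # average hours in a month


def self_hosted_monthly_cost(gpu_hourly_rate: float, gpus_for_peak: int) -> float:
    """Self-hosting has no elasticity: you provision for peak load
    and pay for every hour, whether or not traffic arrives."""
    return gpu_hourly_rate * gpus_for_peak * HOURS_PER_MONTH


def api_monthly_cost(tokens_per_month: float, price_per_million: float) -> float:
    """Hosted APIs bill per token, so cost tracks actual usage."""
    return tokens_per_month / 1_000_000 * price_per_million


# Hypothetical inputs: 4 GPUs at $2.50/hour to cover peak traffic,
# versus 50M tokens/month at $5 per million tokens.
hosted = self_hosted_monthly_cost(gpu_hourly_rate=2.50, gpus_for_peak=4)
api = api_monthly_cost(tokens_per_month=50_000_000, price_per_million=5.0)

print(f"self-hosted (peak-provisioned): ${hosted:,.2f}/month")
print(f"hosted API (usage-based):       ${api:,.2f}/month")
```

Under these invented numbers the always-on deployment costs an order of magnitude more than usage-based billing, and the gap widens further once redundancy and multi-region instances are added, which is the dynamic the list above describes.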


The bottom line: Stick with established frontier models

One major advantage of established frontier model providers is their commitment to continuous improvement. These companies invest heavily in research and development, ensuring that their models are consistently upgraded.

It is also important to note that major AI companies are likely to quickly adopt DeepSeek’s innovations, which could reduce costs and further improve their own models. This rapid uptake might diminish DeepSeek’s competitive edge over time. As users, we benefit from these ongoing enhancements without additional effort. This is an area where DeepSeek may struggle to compete with industry leaders.

While DeepSeek marks an important development in the LLM ecosystem, its overall impact remains uncertain. For startups, the practical and cost-effective choice is to continue using established API services from major providers, which offer continuous improvements, robust infrastructure, and the financial strength to support ongoing AI innovation.

In our case, we have chosen to remain with Google’s Vertex AI while keeping an eye on how DeepSeek evolves in the future. Although DeepSeek holds promise, the current environment favours established providers for their practicality, cost efficiency, and steady progress.

The post Why we’re saying “no” to DeepSeek for now appeared first on e27.
