
DeepSeek-V3 model: A cost-effective and open challenge to AI giants

Gong Zhe, Updated 19:44, 28-Dec-2024
A screenshot of DeepSeek's chatbot service, based on the company's DeepSeek-V3 model, December 28, 2024. /DeepSeek

A Chinese AI firm on Thursday unveiled DeepSeek-V3, a powerful new language model making waves with its claimed cost-effectiveness and open availability. The release directly challenges the dominance of closed models like OpenAI's GPT series and raises important questions about the future of AI accessibility and affordability.

The company's internal testing shows the model excels in English, Chinese, coding and mathematics, rivaling leading commercial models such as OpenAI's GPT-4o. At 671 billion parameters, the model dwarfs Meta's Llama 2 (70 billion) and even surpasses Llama 3.1 (405 billion), which may contribute to its claimed performance.

Individual users can easily explore its potential through a free chatbot on DeepSeek's website. This interactive tool not only searches the web but also provides valuable insights into the model's decision-making process by displaying its reasoning steps.

Powerful, while cost-effective

While providing performance comparable to what the community calls "frontier models," DeepSeek-V3 also stands out for its lower development and operational costs. DeepSeek claims it spent a mere $5.5 million training the model, a fraction of the more than $100 million OpenAI is estimated to have invested in GPT-4.

DeepSeek also charges significantly less for its online service: 1 million tokens are priced at just $1.1, currently discounted to a promotional $0.28, a dramatic contrast to GPT-4o's $10 for the same volume.
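The pricing gap compounds quickly at scale. A back-of-the-envelope sketch using the per-million-token figures cited above (illustrative only; real bills depend on the provider's exact pricing tiers and how input and output tokens are counted):

```python
# Per-million-token prices cited in the article, in U.S. dollars.
# These are the article's figures, not a live price list.
PRICES_PER_MILLION_TOKENS = {
    "DeepSeek-V3 (list)": 1.1,
    "DeepSeek-V3 (promo)": 0.28,
    "GPT-4o": 10.0,
}

def cost(model: str, tokens: int) -> float:
    """Dollar cost of processing `tokens` tokens at the cited rate."""
    return PRICES_PER_MILLION_TOKENS[model] * tokens / 1_000_000

# Example workload: 50 million tokens in a month
for model in PRICES_PER_MILLION_TOKENS:
    print(f"{model}: ${cost(model, 50_000_000):.2f}")
# DeepSeek-V3 (list): $55.00
# DeepSeek-V3 (promo): $14.00
# GPT-4o: $500.00
```

At the promotional rate, the same 50-million-token workload costs roughly 35 times less than GPT-4o's cited price.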

Adding to its disruptive potential, DeepSeek-V3 is available for free download and local execution, which offers significant advantages for users prioritizing data privacy, working in areas with limited internet access, or seeking greater control over their AI tools. This contrasts sharply with models like Microsoft's Copilot, Google's Gemini and OpenAI's GPT series, which require a constant internet connection.

For businesses prioritizing data security, deploying a local copy of DeepSeek-V3 offers a powerful solution, enabling them to harness cutting-edge AI without compromising sensitive information.

Still too large for local use

However, the sheer size of DeepSeek-V3 presents a significant hurdle for home users: running it requires substantial hardware, well beyond the capabilities of typical PCs and smartphones. Most individual users will likely prefer its free chatbot.

As of now, verifiable real-world examples of successful local execution remain limited, and independent verification of the company's performance claims is still needed. One blogger claimed the model can run on a cluster of eight Apple Mac Mini Pros, each with a powerful M4 chip and 64 gigabytes of memory. The entire rig costs over $10,000.
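A rough calculation shows why the model is out of reach for ordinary machines: just holding 671 billion parameters in memory requires hundreds of gigabytes, even at aggressive quantization levels. The figures below are an illustrative lower bound only (weights alone, ignoring activation and cache overhead) and are not from DeepSeek:

```python
# Memory needed merely to store the weights of a 671-billion-parameter
# model at several quantization levels. Illustrative estimate only;
# real deployments also need memory for activations and caches.
PARAMS = 671e9  # parameter count cited in the article

for bits in (16, 8, 4):
    gb = PARAMS * bits / 8 / 1e9  # bits -> bytes -> gigabytes
    print(f"{bits}-bit weights: ~{gb:,.0f} GB")

# For comparison, the cluster the blogger described:
# eight machines with 64 GB of memory each
print(f"Cited cluster memory: {8 * 64} GB")
```

Even at 4-bit precision the weights alone would occupy roughly 336 GB, which helps explain why the cited setup pools memory across eight machines rather than relying on a single consumer PC.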

DeepSeek acknowledges the model's large size and less-than-perfect speed, attributing these limitations to current hardware constraints. The company expresses optimism that advances in hardware will naturally resolve these issues.

Their ultimate goal, according to a research paper posted on the company's website, is to achieve artificial general intelligence while maintaining a commitment to open access and long-term development.
