On Monday, OpenAI CEO Sam Altman acknowledged the capabilities of Chinese startup DeepSeek’s latest AI model, R1, praising its cost efficiency: “DeepSeek’s r1 is an impressive model, particularly around what they’re able to deliver for the price.”
DeepSeek has garnered international attention with its AI advancements. The company reported that training its DeepSeek-V3 model required an investment of less than $6 million in computing power, utilizing approximately 2,000 Nvidia H800 chips. This approach contrasts with other leading AI firms that often employ supercomputers with upwards of 16,000 GPUs for training.
The recent release of DeepSeek-R1 has further emphasized the company’s focus on cost-effective AI solutions. According to a post on DeepSeek’s official WeChat account, the R1 model is 20 to 50 times more affordable to use than OpenAI’s o1 model, depending on the specific task.
In response to DeepSeek’s advancements, Altman expressed enthusiasm about the competitive landscape, stating, “We will obviously deliver much better models and also it’s legit invigorating to have a new competitor! We will pull up some releases.”
Despite the emergence of cost-effective models like DeepSeek’s R1, Altman emphasized OpenAI’s commitment to leveraging substantial computing power for AI development. He remarked, “But mostly we are excited to continue to execute on our research road map and believe more computing is more important now than ever before to succeed at our mission.”
DeepSeek’s rapid progress has not only intensified competition in the AI sector but also prompted discussions about the future direction of AI research and the balance between cost-efficiency and computational investment.