
近日,DeepSeek公司颠覆了人工智能领域的传统认知。长久以来,业界普遍认为训练尖端模型需要超过10亿美元资金投入和数千颗最先进的芯片,认定人工智能必须闭源开发,并相信只有少数公司拥有构建人工智能模型的能力——因此严守技术机密至关重要。
但这家中国公司给出了不同答案。媒体报道显示,他们仅用2,000颗英伟达(Nvidia)芯片,以远低于预期的成本(约600万美元)完成了最新模型的训练。这印证了我们始终坚持的观点:更精简高效的模型无需庞大封闭系统也能取得实质突破。
然而中国团队的创新引出了一个更深刻的命题:谁将主导人工智能的未来?人工智能技术的发展绝对不能被少数人垄断,特别是那些并不认同企业数据保护、隐私权和透明度等基本价值观的企业。真正的解决之道不在于限制进步,而在于构建由高校、企业、科研机构和公民社会组织共同参与的开发生态。
另一种选择是什么?让人工智能的领导权落入价值观和优先事项与我们不同的人手中。这意味着放弃对这项将重塑各行各业、影响社会方方面面的关键技术的掌控。唯有实现人工智能民主化,才能催生真正的创新与进步。
如今,炒作的时代已经结束。我坚信2025年必须成为打破人工智能技术垄断的破局之年。到2026年,社会各界不应止步于应用人工智能,更要成为人工智能的共建者。
DeepSeek对人工智能领域的启示
构建这样一个未来的关键在于小型开源模型。DeepSeek给我们带来的启示是,最佳的工程设计应同时针对性能和成本进行优化。一直以来,人工智能被视为规模化的游戏——模型规模越大,效果越好。但真正的突破既关乎规模,也同样关乎效率。在IBM的研究中,我们发现针对特定用途优化的模型最高可将人工智能推理成本降低至原来的三十分之一,极大提高了人工智能模型训练的效率和可及性。
我不认为通用人工智能(AGI)即将到来,或者人工智能的未来取决于建造规模如曼哈顿般庞大、依靠核能供电的数据中心。这些观点制造了虚假的二元对立。没有任何物理法则规定人工智能必须是昂贵的。训练和推理成本并不是固定的——这是一个亟待解决的工程挑战。无论老牌企业还是初创公司都有能力降低这些成本,使人工智能变得更实用和更加普及。
这种情况早有先例。在计算机发展初期,存储和处理能力成本高昂,令人望而却步。然而,通过技术进步和规模经济效应,这些成本大幅下降,由此开启了一波又一波的创新和应用浪潮。
人工智能也将遵循同样的轨迹。这对于世界各地的企业而言是好消息。一项技术只有变得经济可行且容易获取时,才能真正发挥变革性的作用。通过采用开放、高效的人工智能模型,企业可以获得契合自身需求的高性价比解决方案,使人工智能在各行各业释放出最大潜力。(财富中文网)
阿温德·克里希纳现任IBM的董事长兼首席执行官。
Fortune.com上发表的评论文章中表达的观点,仅代表作者本人的观点,不代表《财富》杂志的观点和立场。
翻译:刘进龙
审校:汪皓
Last week, DeepSeek challenged conventional wisdom in AI. Until now, many assumed that training cutting-edge models required over $1 billion and thousands of the latest chips. That AI had to be proprietary. That only a handful of companies had the talent to build it—so secrecy was essential.
DeepSeek proved otherwise. News reports suggest they trained their latest model with just 2,000 Nvidia chips at a fraction of the expected cost—around $6 million. This reinforces what we’ve said all along: Smaller, efficient models can deliver real results without massive, proprietary systems.
But China’s breakthrough raises a bigger question: Who will shape the future of artificial intelligence? AI development cannot be controlled by a handful of players—especially when some may not share fundamental values like protection of enterprise data, privacy, and transparency. The answer isn’t restricting progress—it’s ensuring AI is built by a broad coalition of universities, companies, research labs, and civil society organizations.
What’s the alternative? Letting AI leadership slip to those with different values and priorities. That would mean ceding control of a technology that will reshape every industry and every part of society. Innovation and true progress can only come by democratizing AI.
The time for hype is over. I believe that 2025 must be the year when we unlock AI from its confines within a few players. By 2026, a broad swath of society shouldn’t just be using AI—they should be building it.
DeepSeek AI lesson
Smaller, open-source models are how that future will be built. DeepSeek’s lesson is that the best engineering optimizes for two things: performance and cost. For too long, AI has been seen as a game of scale—where bigger models meant better outcomes. But the real breakthrough is as much about size as it is about efficiency. In our work at IBM, we’ve seen that fit-for-purpose models have already led to up to 30-fold reductions in AI inference costs, making training more efficient and accessible.
I do not agree that artificial general intelligence (AGI) is around the corner or that the future of AI depends on building Manhattan-sized, nuclear-powered data centers. These narratives create false choices. There is no law of physics that dictates AI must remain expensive. The cost of training and inference isn’t fixed—it is an engineering challenge to be solved. Businesses, both incumbents and upstarts, have the ingenuity to push these costs down and make AI more practical and widespread.
We’ve seen this play out before. In the early days of computing, storage and processing power were prohibitively expensive. Yet, through technological advancements and economies of scale, these costs plummeted—unlocking new waves of innovation and adoption.
The same will be true for AI. This is promising for businesses everywhere. Technology only becomes transformative when it becomes affordable and accessible. By embracing open and efficient AI models, businesses can tap into cost-effective solutions tailored to their needs, unlocking AI’s full potential across industries.
Arvind Krishna is the chairman and CEO of IBM.
The opinions expressed in Fortune.com commentary pieces are solely the views of their authors and do not necessarily reflect the opinions and beliefs of Fortune.