Vinod Khosla and Marc Andreessen, both founders turned investors, spent part of their weekends debating each other on whether the pursuit of artificial general intelligence—the idea that a machine could become as smart as a human—should be open-source.
The debate kicked off with a post from Khosla praising OpenAI and Sam Altman, the company’s CEO.
“We have known @sama since the early days of @OpenAI and fully support him and the company,” Khosla wrote. “These lawsuits are a massive distraction from the goals of getting to AGI and its benefits.”
Andreessen responded to Khosla’s message by accusing him of “lobbying to ban open source” research in AI.
Andreessen seemed to take issue with Khosla’s support for OpenAI because the firm has walked away from its previous open-source ethos. Since the advent of AI, Andreessen has come out as a big supporter of open-source AI, advocating it as a means to safeguard against a select few Big Tech firms and government agencies controlling access to the most cutting-edge AI research.
Both in this debate and in the past, Andreessen has been dismissive of the concerns raised by some of AI’s biggest critics. Andreessen has previously chalked up these worries to fears of disruption and uncertainty rather than the technology being malicious in and of itself—a point he reiterated in his exchange on X.
“Every significant new technology that advances human well-being is greeted by a ginned-up moral panic,” Andreessen posted on X. “This is just the latest.”
Khosla, on the other hand, tends to look at AI through a geopolitical and national-security lens rather than through a strictly entrepreneurial one.
In responding to Andreessen’s claims that he isn’t in favor of open source, Khosla said the stakes were too high.
“Would you open source the Manhattan Project?” Khosla replied to Andreessen. “This one is more serious for national security. We are in a tech economic war with China and AI that is a must win. This is exactly what patriotism is about, not slogans.”
The back-and-forth discussion between Khosla and Andreessen saw the two opine on Sam Altman, OpenAI’s lawsuits, and Elon Musk, who chimed in himself at one point. The debate also explored whether anyone should be allowed to pursue any form of AI research, or if its most advanced versions should be delegated to the government. So while it may have seemed like just some online sniping between a group of extraordinarily successful Silicon Valley entrepreneurs, it contained a microcosm of the ongoing and critical debate around open-source AI.
Ultimately, neither camp wants to thoroughly ban open- or closed-source research. But part of the debate around limiting open-source research hinges on concerns it is being co-opted as a bad-faith argument to ensure regulatory capture for the biggest companies already making headway on AI—a point that legendary AI researcher and Meta’s former chief AI scientist Yann LeCun made when he entered the fray on X.
“No one is asking for closed-source AI to be banned,” LeCun wrote. “But some people are heavily lobbying governments around the world to ban (or limit) open source AI. Some of those people invoke military and economic security. Others invoke the fantasy of existential risk.”
Elsewhere in Silicon Valley, famed angel investor Ron Conway asked leading AI companies to commit to “building AI that improves lives and unlocks a better future for humanity.” So far he has enlisted the likes of Meta, Google, Microsoft, and OpenAI as signatories to the letter.
Andreessen, sticking with Khosla’s Manhattan Project analogy, raised concerns about OpenAI’s safety protocols. He believes without the same level of security that surrounded the Manhattan Project—such as a “rigorous security vetting and clearance process,” “constant internal surveillance,” and “hardened physical facilities” with “24×7 armed guards”—OpenAI’s most advanced research could be stolen by the U.S.’s geopolitical rivals.
OpenAI did not immediately respond to a request for comment.
Andreessen, though, appears to have been doing more of a thought exercise than arguing a point, writing in response to his own post, “Of course every part of this is absurd.”
Elon Musk enters the debate to criticize OpenAI’s security
At this point, OpenAI cofounder Elon Musk chimed in.
“It would certainly be easy for a state actor to steal their IP,” Musk replied to Andreessen’s post about security at OpenAI.
Khosla, too, made mention of Musk, calling his decision to sue OpenAI “sour grapes.” Last week, Musk filed a lawsuit against OpenAI, alleging it breached the startup’s founding agreement. According to Musk, OpenAI’s close relationship with Microsoft and its decision to stop making its work open-source violated the organization’s mission. OpenAI took a similar tack to Khosla, accusing Musk of having “regrets about not being involved with the company today,” according to a memo obtained by Bloomberg.
Musk responded by saying Khosla “doesn’t know what he is talking about,” regarding his departure from OpenAI in 2019.
Khosla’s venture capital firm Khosla Ventures is a longtime backer of OpenAI. In 2019, Khosla Ventures invested $50 million into OpenAI. As such, he didn’t take kindly to Musk’s lawsuit. “Like they say if you can’t innovate, litigate and that’s what we have here,” Khosla wrote on X, tagging both Musk and OpenAI.
With Musk now involved, the debate continued. Khosla remained adamant AI was more important than the invention of the nuclear bomb and therefore couldn’t afford to be entirely open-source—though he did agree with Musk and Andreessen that its top firms should have more rigorous security measures, even relying on the government for assistance.
“Agree national cyber help and protection should be given and required for all [state of the art] AI,” Khosla wrote. “AI is not just cyber defense but also about winning economically and politically globally. The future of the world’s values and political system depends on it.”
Despite his reservations about making all of AI research open-source, Khosla said he did not want development to halt. “[State of the art] AI should not be slowed because enemy nation states are orders of magnitude bigger danger in my view,” Khosla said in response to Andreessen.
But Khosla and Andreessen did find some common ground on the question of AI alignment, which refers to the set of ideologies, principles, and ethics that will inform the models on which AI technologies are developed.