A series of high-profile departures at OpenAI has raised questions as to whether the team responsible for AI safety is gradually being hollowed out.
Immediately following the announcement by chief scientist Ilya Sutskever that he was leaving the company after almost a decade, his team partner Jan Leike, one of Time's 100 most influential people in AI, also announced he was quitting.
“I resigned,” Leike posted on May 14.
The duo follow Leopold Aschenbrenner, reportedly fired for leaking information, as well as Daniel Kokotajlo, who left in April, and William Saunders earlier this year.
Several staffers at OpenAI, which did not respond to a request by Fortune for comment, posted their disappointment upon hearing the news.
“It was an honor to work with Jan the past two and a half years at OpenAI. No one pushed harder than he did to make AGI safe and beneficial,” wrote OpenAI researcher Carroll Wainwright. “The company will be poorer without him.”
High-level envoys from China and the U.S. are meeting in Geneva this week to discuss what must be done now that mankind is on the cusp of developing artificial general intelligence (AGI), the point at which AI can compete with humans in a wide variety of tasks.
Superintelligence alignment
But scientists have already turned their attention to the next stage of evolution: artificial superintelligence (ASI).
Sutskever and Leike jointly headed up a team created in July 2023 and tasked with solving the core technical challenges of ASI alignment, a euphemism for ensuring humans retain control over machines far more intelligent and capable than they are.
OpenAI pledged to commit 20% of its existing computing resources towards that goal with the aim of achieving superalignment in the next four years.
But the costs associated with developing cutting-edge AI are prohibitive.
Earlier this month, Altman said that while he’s prepared to burn billions every year in the pursuit of AGI, he still needs to ensure that OpenAI can continually secure enough funding to keep the lights on.
That money needs to come from deep-pocketed investors like Satya Nadella, CEO of Microsoft.
This means constantly delivering results ahead of rivals like Google.
This includes OpenAI’s newest flagship product, GPT-4o, which the company claims can actually “reason”—a verb laden with controversy in GenAI circles—across text, audio and video in real time.
The female voice assistant it displayed this week is so lifelike people are remarking it seems to have been lifted straight out of Spike Jonze’s AI science fiction film “Her”.
“What did Ilya see?”
A few months after the Superalignment team was formed, Sutskever, together with other non-executive directors on the board of the non-profit arm that controls the company, ousted Altman, claiming they no longer had faith in their CEO.
Nadella quickly negotiated Altman's return amid fears the company could split, and days later a rueful Sutskever apologized for his role in the mutiny.
At the time, Reuters reported it may have been linked to a secret project with the goal of developing an AI capable of higher reasoning.
Since then, Sutskever has barely been visible. The spectacular nature of the coup, along with the manner in which it was subsequently swept under the carpet, prompted widespread speculation on social media.
“What did Ilya see?” became a common refrain within the broader AI community.
Kokotajlo furthered these concerns recently by remarking he had resigned in protest after losing confidence in the company.
In a statement on May 14, Sutskever seemed to suggest, however, that he was not leaving OpenAI due to concerns over safety but to pursue other interests personal to him that he would reveal at a later date.
“The company’s trajectory has been nothing short of miraculous, and I’m confident that OpenAI will build AGI that is both safe and beneficial,” he wrote, endorsing OpenAI’s trio of top leaders, Sam Altman, Greg Brockman and Mira Murati, as well as his successor, Jakub Pachocki.