
Former OpenAI director reveals for the first time why founder Sam Altman was fired

Helen Toner, a former member of OpenAI's nonprofit board, blasted Sam Altman in an interview.


Sam Altman was briefly removed as CEO of OpenAI last November. Image credit: JEROD HARRIS—GETTY IMAGES FOR VOX MEDIA

One of the ringleaders behind the brief, spectacular, but ultimately unsuccessful coup to overthrow Sam Altman accused the OpenAI boss of repeated dishonesty in a bombshell interview that marked her first extensive remarks since November's whirlwind events.

Helen Toner, an AI policy expert from Georgetown University, sat on the nonprofit board that controlled OpenAI from 2021 until she resigned late last year following her role in ousting Altman. After staff threatened to leave en masse, he returned empowered by a new board, with only Quora CEO Adam D'Angelo remaining from the original four plotters.

Toner disputed speculation that she and her colleagues on the board had been frightened by a technological advancement. Instead she blamed the coup on a pronounced pattern of dishonest behavior by Altman that gradually eroded trust as key decisions were not shared in advance.

"For years, Sam had made it very difficult for the board to actually do that job by withholding information, misrepresenting things that were happening at the company, in some cases outright lying to the board," she told The TED AI Show in remarks published on May 28.

Even the very launch of ChatGPT, which sparked the generative AI frenzy when it debuted in November 2022, was withheld from the board, according to Toner. "We learned about ChatGPT on Twitter," she said.

Toner claimed Altman always had a convenient excuse at hand to downplay the board's concerns, which is why for so long no action was taken.

"Sam could always come up with some kind of innocuous-sounding explanation of why it wasn't a big deal, or it was misinterpreted or whatever," she continued. "But the end effect was that after years of this kind of thing, all four of us who fired him came to the conclusion that we just couldn't believe things that Sam was telling us, and that's a completely unworkable place to be in as a board."

OpenAI did not respond to a request by Fortune for comment.

Things ultimately came to a head, Toner said, after she co-published a paper in October of last year that cast Anthropic's approach to AI safety in a better light than OpenAI's, enraging Altman.

"The problem was that after the paper came out, Sam started lying to other board members in order to try and push me off the board, so it was another example that just really damaged our ability to trust him," she continued, adding that the behavior coincided with discussions in which the board was "already talking pretty seriously about whether we needed to fire him."

"Over the past years, safety culture and processes have taken a backseat to shiny products," Jan Leike said.

Taken in isolation, those and other disparaging remarks Toner leveled at Altman could be downplayed as sour grapes from the ringleader of a failed coup. The pattern of dishonesty she described comes, however, on the wings of similarly damaging accusations from a former senior AI safety researcher, Jan Leike, as well as Scarlett Johansson.

Attempts to self-regulate doomed to fail

Scarlett Johansson said Altman approached her with a request to use her voice for OpenAI's latest flagship product—a ChatGPT voice bot that users can converse with, reminiscent of the fictional character Johansson played in the movie Her. When she refused, she suspects, he may have blended in part of her voice anyway, violating her wishes. The company disputes her claims but agreed to pause the voice's use.

Leike, for his part, served as joint head of the team responsible for creating guardrails to ensure mankind can control hyperintelligent AI. He left this month, saying it had become clear to him that management had no intention of diverting valuable resources to his team as promised, and leaving a scathing rebuke of his former employer in his wake. (On May 28 he joined Anthropic, the same OpenAI rival Toner had praised in October.)

Once key members of its AI safety staff had scattered to the wind, OpenAI disbanded the team entirely, unifying control in the hands of Altman and his allies. Whether those in charge of maximizing financial results are best entrusted with implementing guardrails that may prove a commercial hindrance remains to be seen.

Although certain staffers had their doubts as well, few outside of Leike chose to speak up. Thanks to reporting by Vox earlier this month, it emerged that a key factor behind that silence was an unusual nondisparagement clause that, if broken, could void an employee's vested equity in perhaps the hottest startup in the world.

Former OpenAI employee Jacob Hilton wrote on X: "When I left @OpenAI a little over a year ago, I signed a non-disparagement agreement, with non-disclosure about the agreement itself, for no other reason than to avoid losing my vested equity."

This followed earlier statements by former OpenAI safety researcher Daniel Kokotajlo that he voluntarily sacrificed his share of equity in order not to be bound by the exit agreement. Altman later confirmed the validity of the claims.

"Although we never clawed anything back, it should never have been something we had in any documents or communication," he posted earlier in May. "This is on me and one of the few times I've been genuinely embarrassed running OpenAI; I did not know this was happening and I should have."

In regard to recent reports about how OpenAI handles equity, Altman posted: "We have never clawed back anyone's vested equity, nor will we do that if people do not sign a separation agreement (or don't agree to a non-disparagement agreement). Vested equity is vested equity, full stop."

Toner's comments come fresh on the heels of her op-ed in the Economist, in which she and former OpenAI director Tasha McCauley argued that, as the evidence showed, no AI company could be trusted to regulate itself.

"If any company could have successfully governed itself while safely and ethically developing advanced AI systems, it would have been OpenAI," they wrote. "Based on our experience, we believe that self-governance cannot reliably withstand the pressure of profit incentives."

Translator: Liang Yu

Proofreader: Xia Lin
