Elon Musk has repeatedly referred to AI as a “civilizational risk.” Geoffrey Hinton, one of the founding fathers of AI research, changed his tune recently, calling AI an “existential threat.” And then there’s Mustafa Suleyman, cofounder of DeepMind, a firm formerly backed by Musk that has been on the scene for over a decade, and coauthor of the newly released “The Coming Wave: Technology, Power, and the Twenty-first Century’s Greatest Dilemma.” One of the most prominent and longest-tenured experts in the field, he thinks such far-reaching concerns aren’t as pressing as others make them out to be, and in fact, the challenge from here on out is pretty straightforward.
The risks posed by AI have been front and center in public debates throughout 2023 since the technology vaulted into the public consciousness, becoming the subject of fascination in the press. “I just think that the existential-risk stuff has been a completely bonkers distraction,” Suleyman told MIT Technology Review last week. “There’s like 101 more practical issues that we should all be talking about, from privacy to bias to facial recognition to online moderation.”
The most pressing issue, in particular, should be regulation, he says. Suleyman is bullish on governments across the world being able to effectively regulate AI. “I think everybody is having a complete panic that we’re not going to be able to regulate this,” Suleyman said. “It’s just nonsense. We’re totally going to be able to regulate it. We’ll apply the same frameworks that have been successful previously.”
His conviction is born in part of the successful regulation of past technologies once considered cutting-edge, such as aviation and the internet. He argues: Without proper safety protocols for commercial flights, passengers would never have trusted airlines, which would have hurt business. On the internet, consumers can visit a myriad of sites, but activities like selling drugs or promoting terrorism are banned—although not eliminated entirely.
On the other hand, as the Review‘s Will Douglas Heaven noted to Suleyman, some observers argue that current internet regulations are flawed and don’t sufficiently hold big tech companies accountable. In particular, Section 230 of the Communications Decency Act, one of the cornerstones of current internet legislation, offers platforms safe harbor for content posted by third-party users. It’s the foundation on which some of the biggest social media companies are built, shielding them from any liability for what gets shared on their websites. In February, the Supreme Court heard two cases that could alter the legislative landscape of the internet.
To bring AI regulation to fruition, Suleyman wants a combination of broad, international regulation to create new oversight institutions and smaller, more granular policies at the “micro level.” A first step that all aspiring AI regulators and developers can take is to limit “recursive self-improvement,” or AI’s ability to improve itself. Limiting this specific capability would be a critical first step toward ensuring that no future development of the technology proceeds entirely without human oversight.
“You wouldn’t want to let your little AI go off and update its own code without you having oversight,” Suleyman said. “Maybe that should even be a licensed activity—you know, just like for handling anthrax or nuclear materials.”
Without governing some of the minutiae of AI, including at times the “actual code” used, legislators will have a hard time ensuring their laws are enforceable. “It’s about setting boundaries, limits that an AI can’t cross,” Suleyman says.
To make sure that happens, governments should be able to get “direct access” to AI developers to ensure they don’t cross whatever boundaries are eventually established. Some of those boundaries should be clearly marked, such as prohibiting chatbots from answering certain questions, or requiring privacy protections for personal data.
Governments worldwide are working on AI regulations
During a speech at the UN Tuesday, President Joe Biden sounded a similar tune, calling for world leaders to work together to mitigate AI’s “enormous peril” while making sure it is still used “for good.”
And domestically, Senate Majority Leader Chuck Schumer (D-N.Y.) has urged lawmakers to move swiftly in regulating AI, given the rapid pace of change in the technology’s development. Last week, Schumer invited executives from the biggest tech companies, including Tesla CEO Elon Musk, Microsoft CEO Satya Nadella, and Alphabet CEO Sundar Pichai, to Washington for a meeting to discuss prospective AI regulation. Some lawmakers were skeptical of the decision to invite executives from Silicon Valley to discuss the policies that would seek to regulate their companies.
One of the earliest governmental bodies to regulate AI was the European Union, which in June passed draft legislation requiring developers to share what data is used to train their models and severely restricting the use of facial recognition software—something Suleyman also said should be limited. A Time report found that OpenAI, which makes ChatGPT, lobbied EU officials to weaken some portions of their proposed legislation.
China has also been one of the earliest movers on sweeping AI legislation. In July, the Cyberspace Administration of China released interim measures for governing AI, including explicit requirements to adhere to existing copyright laws and establishing which types of developments would need government approval.
Suleyman for his part is convinced governments have a critical role to play in the future of AI regulations. “I love the nation-state,” he said. “I believe in the power of regulation. And what I’m calling for is action on the part of the nation-state to sort its shit out. Given what’s at stake, now is the time to get moving.”