Alex Spinelli, chief technologist for business software maker LivePerson, says the recent U.S. Capitol riot shows the potential dangers of a technology not usually associated with pro-Trump mobs: artificial intelligence.
The same machine-learning tech that helps companies target people with online ads on Facebook and Twitter also helps bad actors distribute propaganda and misinformation.
In 2016, for instance, people shared fake news articles on Facebook, whose A.I. systems then funneled them to users. More recently, Facebook's A.I. technology recommended that users join groups focused on the QAnon conspiracy, a topic that Facebook eventually banned.
“The world they live in day in and day out is filled with disinformation and lies,” says Spinelli about the pro-Trump rioters.
A.I.'s role in disinformation, along with problems in other areas including privacy and facial recognition, is causing companies to think twice about using the technology. In some cases, businesses are so concerned about the ethics of A.I. that they are killing projects or never starting them in the first place.
Spinelli says that concerns about A.I. have led him to cancel some A.I. projects at LivePerson and at previous employers, though he declined to say which earlier companies were involved. He previously worked at Amazon, advertising giant McCann Worldgroup, and Thomson Reuters.
The projects, Spinelli says, involved machine-learning systems that analyzed customer data to predict user behavior. Privacy advocates often raise concerns about such projects, which rely on huge amounts of personal information.
"Philosophically, I’m a big believer in the use of your data being approved by you,” Spinelli says.
Ethical problems in corporate A.I.
Over the past few years, artificial intelligence has been championed by companies for its ability to predict sales, interpret legal documents, and power more realistic customer chatbots. But it's also provided a steady drip of unflattering headlines.
Last year, IBM, Microsoft, and Amazon barred police use of their facial recognition software because it more frequently misidentifies women and people of color. Microsoft and Amazon both want to continue selling the software to police, but they called for federal rules about how law enforcement can use the technology.
IBM CEO Arvind Krishna went a step further by announcing that his company would permanently suspend its facial recognition software business, saying that IBM opposes any technology used "for mass surveillance, racial profiling, violations of basic human rights and freedoms."
In 2018, high-profile A.I. researchers Timnit Gebru and Joy Buolamwini published a research paper highlighting bias problems in facial recognition software. In reaction, some cosmetics companies paused A.I. projects that would determine how makeup products would look on certain people's skin, for fear the technology could discriminate against Black women, says Rumman Chowdhury, the former head of Accenture’s responsible A.I. team and now CEO of startup Parity AI.
“That was when a lot of companies cooled down on how much they wanted to use facial recognition,” Chowdhury says. “I had meetings with clients in makeup, and all of it stopped.”
Recent problems at Google have also caused companies to rethink A.I. Gebru, the A.I. researcher, left Google and then claimed that the company had censored some of her research. That research focused on two issues with the company's language-understanding A.I. software: the biases it can pick up and the huge amounts of electricity its training consumes, which could harm the environment.
This reflected poorly on Google because the search giant has experienced bias problems in the past, when its Google Photos product misidentified Black people as gorillas, and because it champions itself as an environmental steward.
Shortly after Gebru's departure, Google suspended computer access to another of its A.I. ethics researchers who has been critical of the search giant. A Google spokesperson declined to comment about the researchers or the company's ethical blunders. Instead, he pointed to previous statements by Google CEO Sundar Pichai and Google executive Jeff Dean saying that the company is conducting a review of the circumstances of Gebru's departure and is committed to continuing its A.I. ethics research.
Miriam Vogel, a former Justice Department lawyer who now heads the EqualAI nonprofit, which helps companies address A.I. bias, says many companies and A.I. researchers are paying close attention to Google’s A.I. problems. Some fear that the problems may have a chilling impact on future research about topics that don't align with their employers' business interests.
“This issue has captured everyone’s attention,” Vogel says about Gebru leaving Google. “It took their breath away that someone who was so widely admired and respected as a leader in this field could have their job at risk.”
Although Google has positioned itself as a leader in A.I. ethics, the company's missteps show it failing to live up to that reputation. Vogel hopes that companies don't overreact by firing or silencing their own employees who question the ethics of certain A.I. projects.
“I would hope companies do not take fear that by having an ethical arm of their organization that they would create tensions that would lead to an escalation at this level,” Vogel says.
A.I. ethics going forward
Still, the fact that companies are thinking about A.I. ethics is an improvement from a few years ago, when they gave the issue relatively little thought, says Abhishek Gupta, who focuses on machine learning at Microsoft and is founder and principal researcher of the Montreal AI Ethics Institute.
And no one thinks companies will completely stop using A.I. Brian Green, the director of technology ethics at the Markkula Center for Applied Ethics at Santa Clara University, near San Francisco, says it's become too important a tool to drop.
“The fear of going out of business trumps the fear of discrimination,” Green says.
And while LivePerson's Spinelli worries about some uses of A.I., his company is still investing heavily in subsets of the technology like natural language processing, in which computers learn to understand language. He hopes that being public about the company's stance on A.I. and ethics will persuade customers that LivePerson is trying to minimize any harms.
LivePerson, professional services giant Cognizant, and insurance firm Humana are members of the EqualAI organization and have publicly pledged to test and monitor their A.I. systems for problems involving bias.
Says Spinelli, “Call us out if we fail.”