
How companies can avoid artificial intelligence bias

Jonathan Vanian
2021-02-16

Experts say people should understand that today’s A.I. “has some real limitations, so use it with caution.”

If companies are serious about their artificial intelligence software working well for everyone, they must ensure that the teams developing it as well as the datasets used to train the software are diverse.

That’s one takeaway from an online panel discussion about A.I. bias hosted by Fortune on Tuesday.

It can be challenging for companies to find datasets that are both fair and reflective of everyone in society. In fact, some datasets, like those from the criminal justice system, are notoriously plagued with inequality, explained Katherine Forrest, a former judge and partner at the law firm Cravath, Swaine & Moore.

Consider a dataset of arrests in a city in which local law enforcement has a history of over-policing Black neighborhoods. Because of the underlying data, an A.I. tool developed to predict who is likely to commit a crime may incorrectly deduce that Black people are far more likely to be offenders.
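To make that mechanism concrete, here is a minimal, hypothetical Python sketch (not from the panel or the article) of how unequal policing can inflate one group’s arrest counts and lead a model trained on those arrests to assign that group a higher “risk” score, even when the underlying offense rates are identical. The group labels, detection rates, and choice of logistic regression are all illustrative assumptions.

```python
# Hypothetical simulation: over-policing one group doubles its recorded
# arrests, and a model trained on arrests learns that gap as "risk".
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# group = 1 for the over-policed neighborhood, 0 otherwise
group = rng.integers(0, 2, size=n)

# True underlying offense rate is identical (5%) for both groups.
true_offense = rng.random(n) < 0.05

# What lands in the dataset is arrests, not offenses: assumed detection
# rates of 80% in the over-policed group vs. 40% elsewhere.
detection_rate = np.where(group == 1, 0.8, 0.4)
arrested = true_offense & (rng.random(n) < detection_rate)

# Train on the biased label (arrests), using group membership as a feature.
model = LogisticRegression().fit(group.reshape(-1, 1), arrested)

pred = model.predict_proba([[0], [1]])[:, 1]
print(f"Predicted 'risk' for group 0: {pred[0]:.3f}")
print(f"Predicted 'risk' for group 1: {pred[1]:.3f}")
```

Running the sketch shows the model scoring the over-policed group at roughly twice the “risk” of the other group, even though the simulated offense rates were identical, which is the point Forrest makes next: the tool is only as good as the history baked into its data.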

“So the data assets used for all of these tools is only as good as our history,” Forrest said. “We have structural inequalities that are built into that data that are frankly difficult to get away from.”

Forrest said she has been trying to educate judges about bias problems affecting certain A.I. tools used in the legal system. But it’s challenging because there are many different software products and there is no standard for comparing them to each other.

She said that people should know that today’s A.I. “has some real limitations, so use it with caution.”

Danny Guillory, the head of diversity, equity, and inclusion for Dropbox, said one way his software company has been trying to mitigate A.I. bias is through a product diversity council. Council members analyze the company’s products to learn if they inadvertently discriminate against certain groups of people. Similar to how Dropbox workers submit products under development for privacy reviews prior to their release, employees submit products for diversity reviews.

Guillory said the company's diversity council has already discovered some bias problems in an unspecified product that had to do with “personal identifying information,” and workers were able to fix the issues.

The point is to spot bias problems early, instead of having to “retroactively fix things,” Guillory said.
