As businesses begin to automate low-level service work, companies may start creating fake tasks to test employee suitability for senior positions, says Kai-Fu Lee, the CEO of Sinovation Ventures and former president of Google China.
“We may need to have a world in which people have ‘the pretense of working,’ but actually they’re being evaluated for upward mobility,” Lee said at a virtual event hosted by Collective[i], a company that applies A.I. to sales and CRM systems.
Work at higher levels of a company, which requires deeper and more creative thinking, is harder to automate and must be completed by humans. But if entry-level work is fully automated, companies don't have a reason to hire and groom young talent. So, Lee says, companies will need to find a new way to hire entry-level employees and build a path for promotion.
It was one of several predictions Lee made about the possible social effects of widespread adoption of A.I. systems. Some were drawn from his upcoming book, AI 2041: Ten Visions for Our Future—a collection of 10 short stories, written in partnership with science fiction author Chen Qiufan, that illustrate ways that A.I. might change individuals and organizations. “Almost a book version of Black Mirror in a more constructive format,” joked Lee, a well-known expert in the field of A.I. and machine learning and author of the 2018 book AI Superpowers: China, Silicon Valley, and the New World Order.
Talk of A.I. and its role in social behavior often centers on the tendency of algorithms to reflect and exacerbate existing social biases. For example, a contest by Twitter to root out bias in its algorithms found that its image-cropping model prioritized thinner white women over people of other demographics. Data-driven models risk reinforcing social inequality, especially as more individuals, companies, and governments rely on them to make consequential decisions. As Lee noted, when a “company has too much power and data, [even if] it’s optimizing an objective function that’s ostensibly with the user interest [in mind], it could still do things that could be very bad for the society.”
Despite the potential for A.I. to do harm, Lee has faith in developers and A.I. technicians to self-regulate. He supports the development of metrics to help companies judge the performance of their A.I. systems, in a manner similar to the measurements used to determine a firm’s performance against environmental, social, and corporate governance (ESG) indicators. “You just need to provide solid ways for these types of A.I. ethics to become regularly measured things and become actionable,” he said.
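Lee did not spell out what such metrics would look like in practice. As a purely illustrative sketch — the toy data, the 0.2 threshold, and the function name below are assumptions made for this example, not anything proposed at the event — one widely used fairness measure, the demographic parity gap, can be computed from a batch of model decisions and logged on a regular reporting schedule:

```python
# Illustrative only: one way an "A.I. ethics metric" could be measured on a
# regular schedule, in the spirit of ESG-style reporting. The threshold and
# toy data are assumptions, not anything Lee or Collective[i] prescribes.
from collections import defaultdict

def demographic_parity_gap(predictions, groups):
    """Largest difference in positive-decision rates between demographic groups.

    predictions: iterable of 0/1 model decisions (e.g., approvals)
    groups:      iterable of group labels aligned with predictions
    """
    totals, positives = defaultdict(int), defaultdict(int)
    for pred, group in zip(predictions, groups):
        totals[group] += 1
        positives[group] += pred
    rates = {g: positives[g] / totals[g] for g in totals}
    return max(rates.values()) - min(rates.values()), rates

if __name__ == "__main__":
    # Toy snapshot of one reporting period's decisions and subject groups.
    preds  = [1, 0, 1, 1, 0, 1, 0, 0, 0, 0]
    groups = ["a", "a", "a", "a", "a", "b", "b", "b", "b", "b"]
    gap, rates = demographic_parity_gap(preds, groups)
    print(f"positive rates by group: {rates}")
    print(f"demographic parity gap:  {gap:.2f}")
    # Treat a threshold breach as an actionable finding, much like an ESG flag.
    if gap > 0.2:
        print("WARNING: gap exceeds the illustrative 0.2 threshold")
```

A team could track a number like this period over period and treat a sustained breach as an item for review, which is one concrete reading of “regularly measured” and “actionable.”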
Yet he noted that more work needs to be done to train programmers, including the creation of tools to help “detect potential issues with bias.” More broadly, he suggested that A.I. engineers adopt something “similar to the Hippocratic oath in medical training,” referring to the set of professional ethics that doctors adhere to during their dealings with patients, most commonly summarized as “Do no harm.”
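Lee did not describe what that tooling would consist of. As one hypothetical example — the per-sample label format and the 10% threshold are assumptions for illustration — a check of this kind could run before training and flag demographic groups that are barely represented in a dataset:

```python
# Hypothetical sketch of a pre-training check that could surface one common
# source of bias: groups that are barely represented in the training data.
# The 10% threshold and label format are assumptions for the example.
from collections import Counter

def representation_report(group_labels, min_share=0.10):
    """Return each group's share of the data and whether it falls below min_share."""
    counts = Counter(group_labels)
    total = sum(counts.values())
    return {g: (c / total, c / total < min_share) for g, c in counts.items()}

if __name__ == "__main__":
    labels = ["a"] * 880 + ["b"] * 90 + ["c"] * 30  # toy per-sample group metadata
    for group, (share, flagged) in representation_report(labels).items():
        status = "UNDER-REPRESENTED" if flagged else "ok"
        print(f"group {group}: {share:.1%} of samples -> {status}")
```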
“People working on A.I. need to realize the massive responsibilities they have on people’s lives when they program,” Lee said. “It’s not just a matter of making more money for the Internet company that they work for.”