The biggest ethical dilemma of self-driving cars: who gets saved first in a crash?

Bill Buchanan, November 9, 2015
Drivers constantly make decisions based on their judgment of risk. But when faced with moral questions such as whether to save the passengers or the pedestrians, self-driving cars as yet have no quick or sure way to decide.


    We make decisions every day based on risk – perhaps running across a road to catch a bus if the road is quiet, but not if it’s busy. Sometimes these decisions must be made in an instant, in the face of dire circumstances: a child runs out in front of your car, but there are other dangers to either side, say a cat and a cliff. How do you decide? Do you risk your own safety to protect that of others?

    Now that self-driving cars are here, and with no quick or sure way of overriding the controls (or perhaps none at all), car manufacturers are faced with an algorithmic ethical dilemma. On-board computers in cars are already parking for us, driving on cruise control, and could take control in safety-critical situations. But that means they will be faced with the difficult choices that sometimes confront human drivers.

    How to program a computer’s ethical calculus?

    • Calculate the lowest number of injuries for each possible outcome, and take that route. Every living instance would be treated the same.

    • Calculate the lowest number of injuries for children for each possible outcome, and take that route.

    • Allocate values of 20 for each human, four for a cat, two for a dog, and one for a horse. Then calculate the total score for each in the impact, and take the route with the lowest score. So a big group of dogs would rank more highly than two cats, and the car would react to save the dogs.

    What if the car also included its driver and passengers in this assessment, with the implication that sometimes those outside the car would score more highly than those within it? Who would willingly climb aboard a car programmed to sacrifice them if needs be?
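
    To make the trade-offs concrete, here is a minimal, purely illustrative sketch of the kind of weighted scoring described in the list above, including the question of whether the car's own occupants count. The weights follow the example in the text; the Route, harm_score and choose_route names and the occupant handling are assumptions made for this sketch, not anything a real vehicle is known to use.

```python
# Illustrative sketch only: a weighted "harm score" per possible manoeuvre,
# using the example weights from the text (human 20, cat 4, dog 2, horse 1).
from dataclasses import dataclass

HARM_WEIGHTS = {"human": 20, "cat": 4, "dog": 2, "horse": 1}

@dataclass
class Route:
    name: str
    casualties: dict            # expected victims outside the car, e.g. {"dog": 5}
    occupants_at_risk: int = 0  # people inside the car harmed on this route

def harm_score(route: Route, count_occupants: bool = True) -> int:
    """Total weighted harm for one possible manoeuvre."""
    score = sum(HARM_WEIGHTS[kind] * n for kind, n in route.casualties.items())
    if count_occupants:
        score += HARM_WEIGHTS["human"] * route.occupants_at_risk
    return score

def choose_route(routes: list[Route], count_occupants: bool = True) -> Route:
    """Pick the manoeuvre with the lowest total weighted harm."""
    return min(routes, key=lambda r: harm_score(r, count_occupants))

routes = [
    Route("swerve left", {"dog": 5}),                   # 5 dogs  -> score 10
    Route("swerve right", {"cat": 2}),                  # 2 cats  -> score 8
    Route("hit the barrier", {}, occupants_at_risk=1),  # driver  -> score 20 or 0
]
print(choose_route(routes).name)                         # "swerve right"
print(choose_route(routes, count_occupants=False).name)  # "hit the barrier"
```

    A single switch, whether the occupant is counted, flips the decision from hitting the two cats to sacrificing the driver, which is exactly the tension the study described next set out to probe.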

    A recent study by Jean-Francois Bonnefon from the Toulouse School of Economics in France suggested that there’s no right or wrong answer to these questions. The research used several hundred workers found through Amazon’s Mechanical Turk to analyze viewpoints on whether one or more pedestrians could be saved when a car swerves and hits a barrier, killing the driver. Then they varied the number of pedestrians who could be saved. Bonnefon found that most people agreed with the principle of programming cars to minimize the death toll, but when it came to the exact details of the scenarios they were less certain. They were keen for others to use self-driving cars, but less keen themselves. So people often feel a utilitarian instinct to save the lives of others and sacrifice the car’s occupant, except when that occupant is them.

    Intelligent machines

    Science fiction writers have had plenty of leash to write about robots taking over the world (Terminator and many others), or about worlds where everything that’s said is recorded and analyzed (such as in Orwell’s 1984). It’s taken a while to reach this point, but many staples of science fiction are in the process of becoming mainstream science and technology. The internet and cloud computing have provided the platform upon which quantum leaps of progress are made, measuring artificial intelligence against the human.

    In Stanley Kubrick’s seminal film 2001: A Space Odyssey, we see hints of a future where computers make decisions on the priorities of their mission, with the ship’s computer HAL saying: “This mission is too important for me to allow you to jeopardize it”. Machine intelligence is appearing in our devices, from phones to cars. Intel predicts that there will be 152 million connected cars by 2020, generating over 11 petabytes of data every year – enough to fill more than 40,000 250 GB hard disks. How intelligent? As Intel puts it, (almost) as smart as you. Cars will share and analyze a range of data in order to make decisions on the move. It’s true enough that in most cases driverless cars are likely to be safer than humans, but it’s the outliers that we’re concerned with.
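
    The hard-disk comparison is simple arithmetic; a quick back-of-the-envelope check, assuming decimal units (1 PB = 10**15 bytes, 1 GB = 10**9 bytes), reproduces the "more than 40,000" figure:

```python
# Back-of-the-envelope check of the figure quoted above, assuming decimal units.
data_per_year_bytes = 11 * 10**15     # "over 11 petabytes of data every year"
disk_bytes = 250 * 10**9              # one 250 GB hard disk
print(f"{data_per_year_bytes / disk_bytes:,.0f} disks")  # -> 44,000 disks
```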

    The author Isaac Asimov’s famous three laws of robotics proposed how future devices might cope with the need to make decisions in dangerous circumstances (a small sketch of the laws read as a priority ordering follows the list).

    • A robot may not injure a human being or, through inaction, allow a human being to come to harm.

    • A robot must obey the orders given it by human beings, except where such orders would conflict with the First Law.

    • A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.

    He even added a more fundamental “0th law” preceding the others:

    • A robot may not harm humanity, or, by inaction, allow humanity to come to harm.
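
    Read as a decision procedure, the four laws amount to a lexicographic priority ordering over candidate actions. The sketch below is purely illustrative: the Action fields and their values are invented stand-ins, and deciding them reliably is the genuinely hard part that the laws leave open.

```python
# Purely illustrative: Asimov's laws read as a lexicographic priority over
# candidate actions. "Do nothing" is just another candidate, which is how
# harm "through inaction" enters the ranking.
from typing import NamedTuple

class Action(NamedTuple):
    name: str
    harms_humanity: bool   # 0th law
    harms_human: bool      # 1st law
    disobeys_order: bool   # 2nd law
    harms_robot: bool      # 3rd law

def asimov_key(a: Action):
    # False sorts before True, so avoiding harm to humanity dominates avoiding
    # harm to a human, which dominates obedience, which dominates self-preservation.
    return (a.harms_humanity, a.harms_human, a.disobeys_order, a.harms_robot)

candidates = [
    Action("do nothing",       False, True,  False, False),  # inaction lets a human be harmed
    Action("shield the human", False, False, True,  True),   # disobeys an order, damages itself
]
print(min(candidates, key=asimov_key).name)  # -> "shield the human"
```

    The ordering resolves conflicts the way the laws intend, but everything interesting is hidden in how those true/false judgments would be made in the first place.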

    Asimov did not tackle our ethical dilemma of the car crash, but with better sensors to gather data, more sources of data to draw from, and greater processing power, the decision to act is reduced to a cold act of data analysis.

    Of course software is notoriously buggy. What havoc could malicious actors who compromise these systems wreak? And what happens at the point that machine intelligence takes control from the human? Will it be right to do so? Could a future buyer purchase programmable ethical options with which to customize their car? The artificial intelligence equivalent of a bumper sticker that says “I brake for nobody”? In which case, how would you know how cars were likely to act – and would you climb aboard if you did?

    Then there are the legal issues. What if a car could have intervened to save lives but didn’t? Or if it ran people down deliberately based on its ethical calculus? This is the responsibility we bear as humans when we drive a car, but machines follow orders, so who (or what) carries the responsibility for a decision? As we see with improving face recognition in smartphones, airport monitors and even on Facebook, it’s not too difficult for a computer to identify objects, quickly calculate a set of possible outcomes based on car speed and road conditions, pick one, and act. And when it does so, it’s unlikely you’ll have a choice in the matter.

    Bill Buchanan is head of the center for distributed computing, networks and security at Edinburgh Napier University. This article originally appeared on The Conversation.
