A.I. will generate massive carbon emissions. Can we have our cake and eat it too?

Jeremy Kahn
2021-04-23

How large a carbon footprint an algorithm produces during training depends largely on the algorithm's design, the type of computer hardware used to train it, and the nature of electricity generation where the training takes place.

Artificial intelligence algorithms that power some of the most cutting-edge applications in technology, such as writing coherent passages of text or generating images from descriptions, can require vast amounts of computing power to train. That in turn requires large amounts of electricity, leading many to worry that the carbon footprint of these increasingly popular ultra-large A.I. systems makes them environmentally unsustainable.

New research from scientists at the University of California at Berkeley and Google, which deploys many of these large A.I. systems, provides the most accurate estimates to date for the carbon footprint of some of these state-of-the-art systems.

For instance, GPT-3, a powerful language model created by the San Francisco-based A.I. company OpenAI, produced the equivalent of 552 metric tons of carbon dioxide during its training, according to the study. That’s the same amount that would be produced by driving 120 passenger cars for a year. Google’s advanced chatbot Meena emitted the equivalent of 96 metric tons of carbon dioxide, or about the same as powering more than 17 homes for a year.

While those figures are frighteningly large, they are smaller than some previous estimates from researchers who did not have access to the same detailed information from inside Google and OpenAI. The research paper, which was posted to the non-peer-reviewed research repository arxiv.org on April 21, also shows that the climate impact of A.I. can be mitigated.

The researchers conclude that the carbon footprint of training these algorithms varies tremendously depending on the design of the algorithm, the type of computer hardware used to train it, and the nature of electricity generation where that training takes place.

Altering all three of these factors can reduce the carbon footprint of training one of these very large A.I. algorithms by a factor of up to 1,000, the Google scientists found. Simply moving the training of an algorithm from a data center in a place where power generation is coal-intensive, like India, to one where the electrical grid runs on renewable power, such as Finland, can reduce that footprint by a factor of between 10 and 100, the study concluded.
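
The arithmetic behind that location effect is a useful sanity check. The back-of-envelope sketch below follows the basic logic of such estimates, multiplying energy drawn by the grid's carbon intensity; the power draw, fleet size, PUE, and grid-intensity figures are illustrative assumptions, not numbers from the study.

```python
def training_co2_tonnes(accelerator_kw: float, num_accelerators: int,
                        training_hours: float, pue: float,
                        grid_kgco2e_per_kwh: float) -> float:
    """Estimate CO2-equivalent emissions (metric tons) of one training run."""
    # PUE (power usage effectiveness) accounts for data center overhead
    # such as cooling; 1.1 means 10% extra energy on top of the servers.
    energy_kwh = accelerator_kw * num_accelerators * training_hours * pue
    return energy_kwh * grid_kgco2e_per_kwh / 1000.0

# Same hypothetical job on two grids: ~0.70 kg CO2e/kWh (coal-heavy)
# vs. ~0.07 kg CO2e/kWh (largely renewable) -- a tenfold gap in the result.
coal_heavy = training_co2_tonnes(0.3, 512, 720, 1.1, 0.70)
greener = training_co2_tonnes(0.3, 512, 720, 1.1, 0.07)
print(f"coal-heavy grid: {coal_heavy:.0f} t CO2e, greener grid: {greener:.0f} t CO2e")
```

With everything else held fixed, the tenfold difference in grid carbon intensity translates directly into a tenfold difference in emissions.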

“It’s like that old joke about the three most important things in real estate: location, location, location,” David Patterson, the Google scientist who is lead author on the new paper, told Fortune. “Location made such a big difference.”

Patterson, who is also an emeritus professor at U.C. Berkeley, says that’s ultimately good news because most A.I. algorithms are trained “in the cloud,” with the actual processing taking place in data centers that can be hundreds or even thousands of miles away from where the person creating the system is sitting. “In cloud computing, location is the easiest thing to change,” he says.

But if environmental sustainability becomes a major consideration in training A.I. systems, it is also likely to further cement the market position of the largest cloud service providers. That’s because companies such as Microsoft, Google, IBM, and Amazon Web Services have dozens of data centers in many different places, including in regions with colder average temperatures, which reduces the cost of cooling all those server racks, and in regions with greener energy.

The environmental impact of ultra-large A.I. systems designed for processing language was among the criticisms of such algorithms raised by a group of A.I. ethics specialists inside Google, and it played a role in the events that precipitated the ouster of Timnit Gebru and the subsequent firing of Margaret Mitchell, the two co-heads of the A.I. ethics research team.

Jeff Dean, a senior vice president at Google who heads the company’s research division and has been faulted by Gebru and her supporters for his role in forcing her out, is one of the nine authors credited on the new research paper on reducing the carbon footprint of these A.I. systems. One of his alleged criticisms of Gebru’s earlier paper is that it did not discuss ways to mitigate the negative ethical impacts of large language models.

Besides shifting to a location with a greener electricity grid, another way to improve the energy consumption of these models is to use computer chips that are specifically designed for neural networks, a kind of machine learning software loosely modeled on the human brain that is responsible for most recent advances in A.I. Today, the majority of A.I. workloads are trained on computer chips that were originally designed for rendering the graphics in video games. But increasingly, new kinds of computer chips designed just for neural networks are being installed in the data centers run by large cloud-computing companies such as Google, Microsoft, and Amazon Web Services.

Changing from graphics processing chips to these new neural network-specific chips can cut the energy needed to train an ultra-large algorithm by a factor of five, and it can be cut in half again by shifting from the earliest generation of these new A.I. chips to the latest versions of them, the researchers found.

An even bigger savings—a factor of 10—can be found by redesigning the neural network algorithms themselves so that they are what computer scientists call “sparse.” That means that most of the artificial neurons in the network connect to relatively few other neurons, and therefore need a smaller number of these neurons to update how they weight data for each new example the algorithm encounters during training.
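
As a toy illustration of what sparsity buys, the sketch below zeroes out roughly 90% of a layer's weights; the layer shape and the 10% density are assumptions chosen for illustration, not the paper's configuration.

```python
import numpy as np

# Toy "sparse" layer: zero out ~90% of the connections in a dense weight
# matrix. The layer shape and the 10% density are illustrative assumptions.
rng = np.random.default_rng(0)
dense_w = rng.normal(size=(1024, 1024))

mask = rng.random(dense_w.shape) < 0.10   # keep roughly 10% of connections
sparse_w = dense_w * mask

x = rng.normal(size=1024)
y = sparse_w @ x   # output shape unchanged; ~90% of weights contribute nothing

print(f"dense connections:  {dense_w.size:,}")
print(f"active connections: {int(mask.sum()):,}")   # roughly 10x fewer
# Hardware and software that skip the zeroed weights perform ~10x fewer
# multiply-adds per training example, which is where the energy saving comes from.
```

Taken together, the article's own numbers compound multiplicatively: a factor of 5 from specialized chips, 2 from the latest chip generation, 10 from sparsity, and 10 from a greener location works out to 5 × 2 × 10 × 10 = 1,000, matching the "factor of up to 1,000" figure above.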

Maud Texier, another Google researcher who worked on the study, says she hopes the paper helps drive the entire industry towards standardized benchmarks for measuring the energy consumption and carbon footprint of A.I. algorithms.

But she emphasizes that this is not easy. To get an accurate estimate of the carbon footprint, it is important to know not just how green the electric grid in any particular location is in general, but exactly what the mix of renewable and fossil fuel-based electricity was during the specific hours when the A.I. algorithm was being trained. Obtaining this information from cloud service providers has been difficult, she says, although the large cloud service companies are starting to provide more detailed information on carbon dioxide emissions to customers.
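
A rough sketch of the hourly accounting she describes: the same day of training is charged a different amount of CO2 depending on when the energy was drawn. All numbers below are illustrative assumptions.

```python
# Hourly carbon accounting (illustrative): weight each hour's energy draw
# by that hour's grid carbon intensity instead of using a single average.
hourly_energy_kwh = [4500.0] * 24               # assume a flat draw over one day
hourly_intensity = [0.65 if h < 6 or h >= 18 else 0.20  # kg CO2e/kWh: fossil-heavy
                    for h in range(24)]                  # overnight, solar-rich midday

kg_co2e = sum(e * i for e, i in zip(hourly_energy_kwh, hourly_intensity))
print(f"{kg_co2e / 1000:.1f} t CO2e for the day")
```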
