Silicon photonics could transform the data center

Clay Dillow 2013-09-13
Intel's new generation of optical communications technology could multiply data transmission speeds and fundamentally change the way data centers are designed. First, though, the technology will have to overcome its cost problem.

    Last week, a research and development effort reaching back well into the last decade came to a head as Intel pulled back the curtain on a new breed of optical silicon chips that could drastically boost data transmission rates within data centers and hyperscale computing arrays. But in doing so, Intel (INTC) hasn't just applied light-speed physics to the science of data transmission. Its "silicon photonics" technology could fundamentally upend the way data centers and high-powered computing facilities are designed and organized, spelling big things not only for Intel but for the entire computing enterprise.

    The idea behind silicon photonics is relatively simple: Copper wiring and other conventional data transmission methods suffer from fundamental limitations on how fast they can transfer a given amount of data, but nothing moves faster than light. If the sprawling, distributed hardware inside a modern data center or supercomputer could be linked by speed-of-light communications, its speed and efficiency could immediately make a massive leap forward. The challenge, which Intel now appears to have overcome, has always been one of miniaturization and complexity.

    Simply put, Intel has figured out a means to package tiny lasers -- as well as receivers and transmitters that can convert electrical signals to optical ones and vice-versa -- into a silicon chip and develop the technology for mass production. The iteration of silicon photonics unveiled by Intel last week can achieve data rates of 100 gigabits per second, eclipsing the standard eight-gigabits-per-second rate of copper PCI-E data cables that connect servers on a rack, or even the Ethernet networking cables that connect the racks together (those cables can generally handle roughly 40 gigabits per second at the high end).
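
    To put those line rates in perspective, the short Python sketch below compares best-case transfer times for a fixed payload. The 1 TB payload size is an illustrative assumption rather than a figure from the article, and real links lose some capacity to protocol overhead, so these are idealized numbers.

```python
# Back-of-the-envelope comparison of the line rates quoted above.
# The 1 TB payload is an assumed, illustrative figure; real links
# carry protocol overhead, so these are best-case transfer times.

PAYLOAD_BITS = 1e12 * 8  # 1 terabyte (10^12 bytes) expressed in bits

link_rates_gbps = {
    "Copper PCI-E (per the article)": 8,
    "High-end Ethernet (per the article)": 40,
    "Intel silicon photonics": 100,
}

for name, gbps in link_rates_gbps.items():
    seconds = PAYLOAD_BITS / (gbps * 1e9)
    print(f"{name}: {seconds:,.0f} s to move 1 TB")
```

    At these rates, the same terabyte that takes roughly 1,000 seconds over an 8 Gbps link moves in about 80 seconds at 100 Gbps.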

    The story here, then, is one of faster data transmission within and between servers and higher efficiency for data centers and supercomputing arrays, as well as of a potentially significant new revenue stream for Intel (8.1 million servers shipped globally last year, and companies like Amazon (AMZN), Facebook (FB), and Apple (AAPL) are pouring millions into their cloud and data capabilities). But that's not the whole story. The ability to transmit data at super-high speeds within and between server racks will be a paradigm-shifter for data center design, allowing for far more efficient and capable computing and data centers.

    "This opens up the ability to redefine the topology of systems, and that's the key thing," says SergisMushell, a principal research analyst with Gartner's technology and service provider research group. "We're going to be able to build much more massive systems. Where before we added one server at a time, we're going to be able to build massive servers."

    The current architecture of data centers is dictated by a variety of technological limitations, many of them tied to data transmission. Each rack generally requires some mix of storage, processing, and networking infrastructure in order to be effective, because physical separation between these components leads to latency. The system simply spends too much time beaming electronic signals from one physical location to another across copper or network cables, and the whole system slows down as a result.
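
    To make that latency argument concrete, here is a minimal sketch of how pure signal travel time accumulates between separated components. The cable length, signal speed, and number of round trips are assumed round numbers, not figures from the article, and in practice switching and serialization delays add considerably more.

```python
# Rough illustration of distance-induced latency between separated
# components. All three constants are assumptions for illustration.

SIGNAL_SPEED_M_PER_S = 2e8   # ~2/3 the speed of light, typical in copper
PATH_M = 100                 # assumed cable run between two components
ROUND_TRIPS = 1_000          # assumed chatty exchange between components

one_way_s = PATH_M / SIGNAL_SPEED_M_PER_S
total_ms = one_way_s * 2 * ROUND_TRIPS * 1e3
print(f"One-way propagation over {PATH_M} m: {one_way_s * 1e9:.0f} ns")
print(f"{ROUND_TRIPS:,} round trips: {total_ms:.1f} ms of pure travel time")
```

    Signals in optical fiber propagate at roughly the same fraction of light speed as in copper, so the advantage silicon photonics offers here is chiefly bandwidth: each link carries far more data, shrinking the serialization delays that pile up on top of travel time.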

    Many hardware companies are working on ways to solve this, says Paul Teich, senior analyst and CTO at Moor Insights & Strategy. Generally, these new architectures involve further integrating storage, networking, and computing/processing at an even more granular level within each rack in order to reduce latency and enhance throughput. Intel is moving in the other direction entirely.
