Behold the "face" of the future
Nothing says "the future" like a disembodied head. As developers and designers begin churning out the next generation of games and entertainment, the pace of technology demos showing what kinds of computer-generated graphics will soon be possible has picked up. And that means one thing: more creepy-yet-astonishing 3D-generated heads.

Activision (ATVI) is showing off new technology at the annual Game Developers Conference, taking place in San Francisco this week. The rendering techniques and code that create the lifelike animation were unveiled by the gaming giant's research and development division yesterday. The animated character shown here is rendered in real time on current video card hardware, suggesting innovations like these could show up in commercial products sooner rather than later.

"We will show how each detail is the secret for achieving reality," wrote researcher Jorge Jimenez on his blog before the presentation. "For us, the challenge goes beyond entertaining; it's more about creating a medium for better expressing emotions and reaching the feelings of the players. We believe this technology will bring current generation characters into next generation life."

Activision isn't alone. Chipmaker NVIDIA (NVDA) recently touted real-time face rendering at its GPU Technology Conference in California. The program, dubbed Face Works, employs face- and motion-capture technology developed at the University of Southern California's Institute for Creative Technologies. The institute's Light Stage process uses photography to capture the geometry of an actor's face to within a tenth of a millimeter. Light transmission through skin -- the key to rendering subtle emotional cues like blushing -- and reflections can be recreated as well.

At Sony's (SNE) PlayStation 4 launch event earlier this year, actor Max von Sydow made a brief appearance on stage -- as an interactive 3D model.
David Cage, founder of the innovative studio Quantic Dream, demoed what kinds of graphics would be possible on the console maker's next hardware release. (Why so many old men? It's not clear, but it may have something to do with the complexity of rendering wrinkles that move and bend.)

All of this is likely to kickstart another round of debate about the so-called "uncanny valley." That concept suggests that when human replicas -- whether robots or computer renderings -- begin to look realistically but not perfectly human, they can make real-life observers feel queasy or revolted. (The "valley" in question is the dip in a graph of the comfort level of humans presented with a rendered human likeness.) So far, that hasn't stopped engineers from pushing the boundaries of what's technologically possible -- perhaps in hopes of leapfrogging over the problem entirely.
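Mori's uncanny-valley curve was a qualitative sketch, not measured data, but the shape it describes is easy to make concrete: comfort climbs with human likeness, plunges just short of full realism, then recovers as the replica becomes indistinguishable from a person. The toy function below is purely illustrative -- the location and depth of the dip are invented for this sketch and not taken from any study.

```python
import math

def affinity(likeness):
    """Toy model of the uncanny-valley curve (illustrative only).

    `likeness` runs from 0.0 (clearly artificial) to 1.0 (a real human).
    Affinity rises with likeness, but a Gaussian 'dip' centered near
    almost-human likeness drags it below zero -- the valley. The
    constants here are invented for illustration, not fitted to data.
    """
    dip = 1.6 * math.exp(-((likeness - 0.85) ** 2) / (2 * 0.05 ** 2))
    return likeness - dip

# A cartoon character (likeness ~0.5) sits comfortably on the rising
# slope; an almost-photoreal head (~0.85) falls into the valley; a
# perfect replica (~1.0) climbs back out.
```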
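The "light transmission through skin" that NVIDIA and Jimenez highlight is usually modeled as subsurface scattering: light enters the skin, diffuses beneath the surface, and re-emerges a short distance away, with red wavelengths traveling farthest -- which is why scattering softens shading and makes blushing renderable at all. Real-time renderers commonly approximate that diffusion profile as a sum of Gaussians. The sketch below shows the idea only; the weights and variances are invented for illustration and are not taken from Face Works or Activision's code.

```python
import math

# Illustrative sum-of-Gaussians skin diffusion profile. Each entry is
# (variance in mm^2, per-channel RGB weight). The wide Gaussian is
# weighted toward red, mimicking how red light scatters farthest in
# skin; all numbers here are made up for this sketch.
PROFILE = [
    (0.05, (0.10, 0.20, 0.30)),
    (0.20, (0.30, 0.40, 0.50)),
    (1.00, (0.60, 0.40, 0.20)),
]

def diffusion(r_mm):
    """RGB fraction of light re-emerging at distance r_mm from entry."""
    rgb = [0.0, 0.0, 0.0]
    for var, weights in PROFILE:
        # 1D normalized Gaussian evaluated at distance r_mm
        g = math.exp(-r_mm * r_mm / (2 * var)) / math.sqrt(2 * math.pi * var)
        for i in range(3):
            rgb[i] += weights[i] * g
    return tuple(rgb)
```

At the entry point the narrow Gaussians dominate and the response is broadband, but a millimeter away the wide, red-heavy Gaussian takes over -- the reddish glow around shadow edges that makes rendered skin read as alive rather than plastic.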