Five artistic technologists discuss and debate the future of graphics technology in film at the annual Jon Peddie Research Siggraph press luncheon.
By Randall S. Newton
Hollywood is generating a tsunami of computer graphics data these days as it rushes into the new world of virtual moviemaking. Not just fake explosions, giant fighting robots, cuddly blue aliens, and other custom CG objects, but complete virtual characters and sets and virtual cameras to view them from any angle. Filmmakers are now, more often than not, creating this flood of CG asset data with professional off-the-shelf software from leading vendors instead of custom (and fragile) software homegrown inside each studio. As CG-only studios proliferate and the traditional Hollywood studios become more adept with technology, glass ceilings such as the uncanny valley, control over intellectual property in the cloud, and focal point awareness arise to both frustrate and challenge filmmakers.
Virtual moviemaking, or virtual production, is a new, visually dynamic, non-linear workflow. It blends virtual camera systems, advanced motion and performance capture, 3D software, and practical 3D assets with real-time render display technology, enabling filmmakers to interactively visualize and explore digital scenes for the production of feature films and game cinematics.
It was into this boiling cauldron of technological issues and possibilities that Jon Peddie Research dropped five computer graphics and film industry experts at the annual JPR Siggraph Press Luncheon on August 10. After an introduction by Jon Peddie to the trends and challenges inherent in the transition to virtual moviemaking, the five worked back and forth with the assembled journalists to define, debate, and debunk the brave new world of virtual moviemaking.
Some things were easy to agree upon. Static images, whether 2D or 3D, are no longer a major challenge; they can be photorealistic and visually appealing. The challenges kick in when the still image or model must move. Capturing 3D reality is, technically speaking, capturing XYZ and RGB, noted Geomagic CEO Ping Fu. We have already reached an inflection point for gathering static 3D data, Fu said, but “replicating a static reality is insufficient.” We can capture static reality and we can capture motion, but “we are totally not there in capturing the emotion of reality.”
The problem, said Autodesk’s Brian Pohl, is the uncanny valley, a psychological phenomenon first described by roboticist Masahiro Mori in 1970. The closer graphic arts come to expressing human realism, the more difficult it is to gain the acceptance of the audience; there is an inherent rejection of the nearly perfect. The panel generally agreed that any solution that moves virtual moviemaking past the uncanny valley will be a mix of technology and art, such as the emerging notion of digital makeup noted by Rob Powers—an idea so new it doesn’t yet have a Wikipedia page of its own.
“Technology doesn’t replace an actor, a director, or a cinematographer,” said Powers, vice president of 3D Development for Newtek. “Technology amplifies these roles.” Powers was hired on to the Avatar project as animation technical director, and wound up eventually creating and supervising the Avatar virtual art department; he has a similar role in the upcoming Peter Jackson/Steven Spielberg movie Tintin. In the minds of some, Rob Powers’ place in history is secure because he was lead animator for the 3D dancing baby made famous on the “Ally McBeal” tv series in the 1990s.
The problem with traversing the uncanny valley, creating digital makeup, or any of the other aspects of virtual moviemaking is a simple one. “The magnitude of data required is huge,” Powers noted, adding that having terabytes of data means moviemakers have more than reference material—they have a new manipulable asset for creating scenes and characters. Gathering enough data for virtual moviemaking provides the opportunity to re-assemble the creative team and move beyond the isolated, linear pre- and post-production techniques now in place. Powers calls it a “centralized sandbox where everybody communicates and crosses over to collaborate in a new way…. it results in a better film.”
Inspiring the amateur
Virtual moviemaking is not just about blockbusters like Avatar. Steve Cooper, product manager for Poser at Smith Micro Software, says his customers are the people who were so inspired by Avatar they want to create their own virtual movies. His job becomes helping users focus on the story by making the software “as invisible and as seamless as possible. Simple tools empower our users.”
Darin Grant, head of production technology for DreamWorks Animation SKG, says DWA uses virtual moviemaking in its pre- and post-production sequences. Instead of traditional storyboarding or creating rough layouts of scenes, they use real-time “prototyping,” having animators and production assistants don motion capture suits to act out specific scenes. The results, Grant said, are often “happy accidents” of improvisation that improve the movie in ways traditional animation never found. Using virtual moviemaking in animation “means you progress so much faster,” Grant noted.
Two or three times during the luncheon, questioners tried to steer discussion toward the more technical aspects of virtual filmmaking, but this panel of accomplished technologists was clearly more interested in the creative processes than in a hard-core discussion of server farms, rendering technologies, or the possible use of game engines in virtual moviemaking. There was general acknowledgement that the movie industry needs common tools and standards for data retrieval and reuse.
File formats keep changing, noted Newtek’s Powers, comparing the shift to Hollywood’s moves from silent film to sound and from black and white to color. “There is no structure in place for future relevance.”
A video of the complete JPR Siggraph luncheon session, including opening remarks by Jon Peddie, has been posted to YouTube for Jon Peddie Research by RichReport.