Nvidia adds to its game development toolbox

The gamer is the ultimate winner.

(Source: Nvidia)

Like Earth and Mars coming into alignment, this year the Game Developers Conference (GDC) and Nvidia’s GPU Technology Conference (GTC) occurred simultaneously. The result of this convergence: Nvidia announcements and product rollouts that were focused on the gaming industry or especially attractive to that segment.

Omniverse components

Let’s start with Omniverse, Nvidia’s open platform for real-time 3D design collaboration and virtual world simulation. Within Omniverse, developers can use AI- and Nvidia RTX-enabled tools to tackle complex tasks, thereby streamlining the design and content creation processes. Last week, Nvidia unveiled new Omniverse features and functions that make it easier to share assets and work collaboratively while creating the large, photorealistic, immersive worlds that players expect.

May the cloud be with you. One bit of news that was well received by many designers and content creators, both inside and outside the game development realm, was that Nvidia is expanding its Omniverse platform to a wide range of devices. Once the cloud option is available, users will no longer need an RTX-based system to use Omniverse; instead, they will be able to connect via Nvidia’s GeForce NOW service, currently used for streaming games.

In addition, the company expanded its lineup of Connectors, the plug-ins that link third-party tools to Omniverse. These include the new Unreal Engine 5 Omniverse Connector for exchanging Universal Scene Description (USD) and Material Definition Language (MDL) data between the game engine and Omniverse. Nvidia maintains a list of current Connectors on its site.
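
To get a sense of the data these Connectors exchange, here is a minimal sketch using Pixar’s open-source pxr Python module, the reference USD implementation. This is not the Connector itself (that is an Unreal Engine plug-in), and the file paths are assumed; it only shows a consuming tool reading and authoring the same USD scene data:

```python
# Minimal USD round-trip sketch using Pixar's reference "pxr" module.
# Not the Omniverse Connector; it only illustrates the USD data
# that such Connectors move between tools.
from pxr import Usd, UsdGeom

# Read: open a stage and pull vertex data from every mesh in the scene.
stage = Usd.Stage.Open("scene.usd")          # assumed path
for prim in stage.Traverse():
    if prim.IsA(UsdGeom.Mesh):
        mesh = UsdGeom.Mesh(prim)
        points = mesh.GetPointsAttr().Get()  # the mesh's vertex positions
        print(prim.GetPath(), len(points), "vertices")

# Write: author a new stage with a single transform, as an exporter might.
out = Usd.Stage.CreateNew("export.usda")
UsdGeom.Xform.Define(out, "/Root")
out.GetRootLayer().Save()
```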

Nvidia Omniverse Audio2Face comes preloaded with the Digital Mark character model to make getting started easier. Users upload an audio track, which is fed into a pre-trained deep neural network whose output drives the 3D vertices of the character mesh. (Source: Nvidia)

In more game development news, Nvidia updated its Omniverse Audio2Face, an app that uses deep learning AI to generate high-quality facial animation from an audio file. The app now supports full facial animation, and artists can control the emotional performance as well.
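
The pipeline the caption describes, audio in, vertex motion out, can be sketched in a few lines. Everything below is a hypothetical stand-in: the function names, mesh size, and emotion scaling are invented for illustration, since Audio2Face’s actual network and API are not public:

```python
import numpy as np

# Hypothetical stand-ins for the Audio2Face pipeline; the real model
# is not public, so the "network" here returns placeholder offsets.
def extract_features(audio: np.ndarray, sample_rate: int) -> np.ndarray:
    """Split raw audio into one analysis window per animation frame (30 fps)."""
    window = sample_rate // 30
    n_frames = len(audio) // window
    return audio[: n_frames * window].reshape(n_frames, window)

def pretrained_net(features: np.ndarray, emotion: str) -> np.ndarray:
    """Placeholder for the deep neural network: per-frame vertex offsets."""
    n_vertices = 5000                     # stand-in for the Digital Mark mesh
    scale = {"neutral": 0.5, "joy": 1.0, "anger": 1.5}[emotion]
    rng = np.random.default_rng(0)
    return scale * 1e-3 * rng.normal(size=(len(features), n_vertices, 3))

audio = np.zeros(48_000)                  # one second of "audio" at 48 kHz
offsets = pretrained_net(extract_features(audio, 48_000), emotion="joy")
print(offsets.shape)                      # (30, 5000, 3): frame x vertex x xyz
```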

Nvidia thinks its Audio2Face software will enable more realistic avatars for the metaverse, a current meme and forthcoming platform. The company uses its Toy Jensen (TJ) avatar to illustrate the point: a recommender or Siri-like personality in the form of a lifelike (albeit cartoonish) avatar.

Omniverse wove a real CEO—and his toy counterpart—together with stunning demos at GTC. (Source: Nvidia)

The company also presented Omniverse DeepSearch, an AI-powered service that lets Omniverse Enterprise users search their entire catalogs of untagged 3D assets using natural-language inputs and imagery.
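
Nvidia has not published DeepSearch’s internals, but a common way to build this kind of search is embedding-based retrieval: encode the query and every asset into a shared vector space, then rank assets by similarity. Here is a minimal sketch under that assumption, with random vectors standing in for a text/image encoder (e.g., a CLIP-style model):

```python
import numpy as np

# Embedding-based retrieval sketch. The vectors are random placeholders
# standing in for a learned encoder; DeepSearch's actual models are not public.
def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def search(query_vec: np.ndarray, index: dict, top_k: int = 3) -> list:
    """Rank every asset by cosine similarity to the query embedding."""
    scores = {name: cosine(query_vec, vec) for name, vec in index.items()}
    return sorted(scores, key=scores.get, reverse=True)[:top_k]

rng = np.random.default_rng(1)
index = {f"asset_{i:04d}.usd": rng.normal(size=512) for i in range(1_000)}
query = rng.normal(size=512)              # stand-in for encode("rusty barrel")
print(search(query, index))               # the three closest untagged assets
```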

Wait, there’s more

Nvidia continued its gift-giving to the game development community with additional offerings, including Streamline, an open-source, cross-IHV (independent hardware vendor) framework that simplifies the integration of super-resolution technologies and other graphics effects in games and applications: developers code against it once rather than manually integrating each vendor’s SDK.

“Today, game developers do additional work to support each hardware vendor’s post-processing effects with different APIs,” said John Spitzer, vice president of developer and performance technology at Nvidia. “With Streamline, we can remove that redundancy and accelerate the adoption of new features in games, in turn improving the experience for our users.”

Streamline’s plug-and-play framework sits between the game and render API. (Source: Nvidia)

The framework can also be used with non-super-resolution SDKs; for instance, it lets developers add Nvidia Real-time Denoisers (NRD) to their games.
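
Streamline itself is a C++ SDK, so the following is only a schematic, in Python, of the integrate-once pattern it provides. All class and function names here are invented for illustration, not Streamline’s real API:

```python
# Schematic only: mimics Streamline's integrate-once idea, in which the
# framework (not the game) selects whichever vendor SDK the hardware supports.
from abc import ABC, abstractmethod

class Upscaler(ABC):
    """The single interface the game codes against, whatever the vendor."""
    @abstractmethod
    def evaluate(self, frame: bytes) -> bytes: ...

class DLSSBackend(Upscaler):
    def evaluate(self, frame: bytes) -> bytes:
        return frame          # a real backend would invoke the DLSS SDK here

class PlainBackend(Upscaler):
    def evaluate(self, frame: bytes) -> bytes:
        return frame          # fallback path when no vendor SDK applies

def select_backend(gpu_vendor: str) -> Upscaler:
    """Streamline-style dispatch, hidden from the game's render loop."""
    return DLSSBackend() if gpu_vendor == "nvidia" else PlainBackend()

frame_out = select_backend("nvidia").evaluate(b"\x00" * 16)
```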

The Streamline SDK is available now on GitHub with support for Nvidia Deep Learning Super Sampling (DLSS) and Deep Learning Anti-Aliasing (DLAA) for boosting gaming performance and image quality. Streamline also supports DirectX 11 and DirectX 12, and Vulkan support is currently in beta. According to Nvidia, Streamline will support Image Scaling in the future.

In other news, Nvidia is offering Kickstart RT, a time-saving starter kit that makes it easier to add real-time ray tracing to game engines, yielding realistic reflections, shadows, ambient occlusion, and global illumination. Nvidia also updated its individual ray-tracing SDKs.
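
Kickstart RT plugs into an engine’s native renderer, but the idea behind one of those effects, ray-traced shadows, fits in a toy example. This is not Kickstart RT code; it is a from-scratch illustration of a single shadow-ray test:

```python
import numpy as np

# Toy shadow-ray test: a surface point is in shadow if a ray cast toward
# the light hits any occluder. Written from scratch for illustration only.
def ray_hits_sphere(origin, direction, center, radius) -> bool:
    """Quadratic hit test; assumes a unit-length ray direction and an
    origin outside the sphere."""
    oc = origin - center
    b = 2.0 * np.dot(oc, direction)
    c = np.dot(oc, oc) - radius * radius
    return b * b - 4.0 * c >= 0.0 and -b > 0.0   # real root in front of origin

point = np.array([0.0, 0.0, 0.0])
light = np.array([5.0, 5.0, 0.0])
to_light = (light - point) / np.linalg.norm(light - point)
occluder = (np.array([2.0, 2.0, 0.0]), 1.0)      # sphere between point and light
print("shadowed" if ray_hits_sphere(point, to_light, *occluder) else "lit")
```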

Nvidia is making game testing faster and easier, too, with GeForce NOW Cloud Playtest. Using GeForce NOW’s network of 30-plus data centers, developers can upload a game build to the cloud and schedule a playtest whose players and observers can be located anywhere in the world.

Updates were announced to the SDKs for Nvidia RTX Global Illumination, RTX Direct Illumination, Reflex, and Real-time Denoisers, in addition to tools including Nvidia Nsight and Nvidia Virtual Reality Capture and Replay.

Fear not: Nvidia left a little gift in gamers’ baskets, too. What do gamers want most? Games. Good games. Games with next-level graphics. And some developers have been hard at work using Nvidia’s latest tools to give gamers, particularly GeForce RTX gamers, just that.

According to Nvidia, more than 150 games used its DLSS as of last month, and that number continues to grow with titles such as Ghostwire: Tokyo, a supernatural first-person action game from Tango Gameworks. Set in present-day Tokyo, the game features stunning imagery with ray-traced reflections and shadows; when activated, Nvidia DLSS offers up to a 2× performance boost. DLSS will also be included in Evil Dead: The Game, a multiplayer action/horror/comedy title from Saber Interactive, when it launches on May 13.