Expanding the role of non-player game characters

 
Nvidia has announced ACE for Games, which gives game characters the gift of (intelligent) gab. (Source: Nvidia and Convai)

Nvidia has announced ACE for Games, which infuses NPCs with intelligence, enabling AI-powered natural language interactions that are customized and appropriately reflective of the game. ACE for Games includes three AI foundation models, which can be used in any combination, depending on the developer’s needs. The AI model foundry service was unveiled at Computex 2023 and will be available in early access.

Nvidia’s generative AI technology has changed the landscape across a number of industries, and at the recent Computex 2023, Nvidia detailed how it will deliver brand-new experiences for video games. In March, Nvidia announced its Omniverse Avatar Cloud Engine (ACE) for building and customizing avatars for use in applications such as customer service and virtual assistance. The company has now expanded that technology with ACE for Games, which adds intelligence to NPCs and gives them the ability to speak through AI-driven natural language interactions.

With ACE for Games, developers can create customized speech, conversation, and animation AI models for their games. Middleware and tool vendors can likewise do so within their software. Such capability is a sea change from today’s NPC interactions, which are typically limited to a scripted response (or two). In this new era of generative AI and powerful large language models, there’s an opportunity for these character interactions to be much more natural and engaging.

Similar to ACE itself, ACE for Games is based on three core AI foundation models that developers can use in part or in total. They can be deployed in the cloud as well as on the PC.

  • Riva—For automatic speech recognition and text-to-speech conversion to enable live speech conversations.
  • Audio2Face—For instantly generating expressive facial animation of a game character to match the speech.
  • NeMo—For building, customizing, and deploying language models.

In a nutshell, the process works like this: When a player character approaches and speaks to a non-player character, the speech is converted into text by Riva, which is then fed into NeMo to generate an intelligent natural language response. Riva then converts that response back into speech, and Audio2Face uses the audio to animate the NPC’s face to match.
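To make that loop concrete, here’s a minimal sketch in Python. The four stage functions are hypothetical stand-ins for the Riva ASR, NeMo language model, Riva TTS, and Audio2Face steps described above; they are stubs for illustration, not actual SDK calls.

```python
# Illustrative sketch of the ACE for Games loop described above. The four
# stage functions are hypothetical stand-ins for Riva ASR, a NeMo-backed
# language model, Riva TTS, and Audio2Face -- stubs, not real SDK calls.

def transcribe_speech(audio_in: bytes) -> str:
    """Riva ASR stand-in: player speech audio -> text."""
    return "What's good at your ramen shop?"  # placeholder transcript

def generate_npc_reply(player_text: str, persona: str) -> str:
    """NeMo LLM stand-in: produce an in-character, lore-aware response."""
    return f"{persona}: Try the spicy miso. It's the house favorite."

def synthesize_speech(npc_text: str, voice: str) -> bytes:
    """Riva TTS stand-in: response text -> NPC voice audio."""
    return npc_text.encode("utf-8")  # placeholder "audio" bytes

def animate_face(npc_audio: bytes) -> dict:
    """Audio2Face stand-in: derive facial-animation parameters from audio."""
    return {"frames": len(npc_audio), "blendshapes": "placeholder"}

def handle_player_speech(audio_in: bytes, npc_persona: str) -> tuple[bytes, dict]:
    """One conversational turn: player audio in, animated NPC reply out."""
    player_text = transcribe_speech(audio_in)                # 1. speech -> text (Riva)
    npc_text = generate_npc_reply(player_text, npc_persona)  # 2. text -> response (NeMo)
    npc_audio = synthesize_speech(npc_text, voice="jin")     # 3. response -> speech (Riva)
    face_animation = animate_face(npc_audio)                 # 4. speech -> facial animation (Audio2Face)
    return npc_audio, face_animation

if __name__ == "__main__":
    audio, animation = handle_player_speech(b"\x00\x01", npc_persona="Jin")
    print(audio, animation)
```

In a shipping title, each stage would instead call the corresponding Riva, NeMo, or Audio2Face service, running either in the cloud or locally on the PC, as noted above.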

According to Nvidia, the models can be customized for each game to reflect game lore and character backstories, for instance. To protect against unsafe conversations, there’s NeMo Guardrails, open-source software that helps ensure that smart applications powered by LLMs are accurate, secure, on topic, and appropriate. The Riva speech model also can be customized to match a character’s voice. These processes are then optimized for the performance and latency required for today’s gaming.
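For a sense of how such a safety layer can be wired in, here is a minimal sketch using the open-source nemoguardrails Python package. The Colang rules, ramen-shop topic, and LLM backend settings are illustrative assumptions for this article, not Nvidia’s actual game configuration.

```python
# Minimal sketch of wrapping an NPC's language model with NeMo Guardrails.
# The rules and model settings below are illustrative assumptions only.
from nemoguardrails import LLMRails, RailsConfig

colang_rules = """
define user ask off topic
  "What do you think about politics?"
  "Help me write some code."

define bot refuse off topic
  "Sorry, traveler, I only talk about my ramen shop and the city."

define flow off topic
  user ask off topic
  bot refuse off topic
"""

yaml_config = """
models:
  - type: main
    engine: openai          # placeholder LLM backend for this sketch
    model: gpt-3.5-turbo
"""

# Build the guarded model from the inline Colang and YAML configuration.
config = RailsConfig.from_content(colang_content=colang_rules, yaml_content=yaml_config)
rails = LLMRails(config)

# Off-topic requests are deflected; in-character questions pass through to the LLM.
reply = rails.generate(messages=[{"role": "user", "content": "Tell me about your shop."}])
print(reply["content"])
```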

Jason Paul, VP of GeForce platform marketing, noted that ACE for Games can be scaled to work with more than one character simultaneously appearing in the same virtual space—just how many is hard to say at this time, however. Also, the technology does not discriminate among characters and can be used for a character talking to another character, or a character interacting with a player character.

Additionally, with Audio2Face, developers can add facial animation directly to their MetaHuman characters through the use of Omniverse connectors for Unreal Engine 5.

“Generative AI has the potential to revolutionize the interactivity players can have with game characters and dramatically increase immersion in games,” said John Spitzer, vice president of developer and performance technology at Nvidia. 

To underscore the capabilities of ACE for Games, Nvidia enlisted the assistance of Convai, a company providing conversational AI for virtual worlds and games, to deliver a gaming demo. Called “Kairos,” the demo enables players to interact with an NPC named Jin, whose natural language responses are consistent and supportive of the narrative backstory.

In “Kairos,” Nvidia’s ACE modules (Riva, Audio2Face, and NeMo) were integrated into the Convai platform and fed into Unreal Engine 5 and MetaHuman to bring the NPC Jin to life. Jin and his ramen shop were created by the Nvidia Lightspeed Studios art team and rendered in UE5 using Nvidia RTX Direct Illumination (RTXDI) for ray-traced lighting and shadows, and DLSS for maximum frame rates and image quality.

ACE for Games was just unveiled and will become available in early access. Meanwhile, Nvidia has been working with various partners that are creating middleware, tools, and services connected to intelligent game characters. Some developers are already using Nvidia generative AI technology—specifically, Audio2Face—in upcoming games, including GSC Game World’s STALKER 2: Heart of Chernobyl and Fallen Leaf’s Fort Solis.