At Gamescom 2024, NVIDIA today introduced Nemotron-4 4B Instruct, its first on-device Small Language Model (SLM), which runs locally on RTX AI hardware. Now integrated into NVIDIA ACE as an NVIDIA NIM microservice, the model promises to improve the conversational capabilities of digital humans across a variety of industries, including gaming, customer service, and healthcare. NVIDIA also took a deeper dive into how SLMs work and the benefits of implementing them in NVIDIA ACE. Developers can use this technology to enhance interactions with NPCs in their games and applications, creating more realistic and immersive digital human experiences. Each interaction is made possible by chaining multiple NVIDIA technologies: Riva automatic speech recognition transcribes the player's voice, an SLM or LLM generates the response, text-to-speech voices it, and Audio2Face drives the facial animation.
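To make the flow concrete, a single conversational turn through that chain can be sketched as follows. This is a minimal illustration only: the stage functions below are hypothetical stand-ins, not real NVIDIA APIs; in ACE each stage would be a separate microservice (Riva ASR, an SLM/LLM, TTS, Audio2Face).

```python
def transcribe(audio: bytes) -> str:
    """Stand-in for Riva automatic speech recognition (hypothetical)."""
    return "hello there"

def generate_reply(text: str) -> str:
    """Stand-in for the on-device SLM, e.g. Nemotron-4 4B Instruct (hypothetical)."""
    return f"You said: {text}"

def synthesize(text: str) -> bytes:
    """Stand-in for text-to-speech (hypothetical)."""
    return text.encode("utf-8")

def animate(speech_audio: bytes) -> dict:
    """Stand-in for Audio2Face: speech audio in, facial-animation data out."""
    return {"frames": len(speech_audio)}

def handle_turn(mic_audio: bytes) -> dict:
    """One digital-human turn: ASR -> SLM -> TTS -> facial animation."""
    text = transcribe(mic_audio)
    reply = generate_reply(text)
    speech = synthesize(reply)
    return animate(speech)
```

The point of the pipeline shape is that each stage only consumes the previous stage's output, which is what lets individual stages run either in the cloud or locally on an RTX AI PC.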
ACE’s strength lies in its support for a variety of AI models, both NVIDIA’s own and third-party, and in its flexibility to run either on cloud servers or locally on RTX AI PCs and workstations. Amazing Seasun Games’ Mecha Break is the first title to leverage ACE and digital human technologies, and James, an interactive digital customer service assistant built on ACE, is available to try at ai.nvidia.com. Additionally, NVIDIA positions AI Foundry as a key tool for companies looking to train generative AI models on their own licensed data: the NVIDIA Edify multimodal AI architecture enables developers and designers to train models that generate images, videos, 3D assets, 360-degree high-dynamic-range imaging, and physically based materials from simple text instructions.
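The cloud-or-local flexibility follows from NIM microservices exposing an OpenAI-compatible chat-completions interface, so the same request works against either deployment and only the base URL changes. The sketch below just builds such a request payload; the endpoint paths and the model identifier are assumptions for illustration, not confirmed values.

```python
import json

# Assumed endpoints: only the base URL differs between deployments.
LOCAL_URL = "http://localhost:8000/v1/chat/completions"
CLOUD_URL = "https://integrate.api.nvidia.com/v1/chat/completions"

def build_request(prompt: str,
                  model: str = "nvidia/nemotron-4-4b-instruct") -> str:
    """Build an OpenAI-style chat-completions payload (model name assumed)."""
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": 128,
    }
    return json.dumps(payload)
```

Because the payload format is identical, a game client can switch an NPC's language model between a cloud LLM and a local on-device SLM without changing its request-building code.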