
Nvidia launches RTX workstation chips for content creation in the generative AI era - VentureBeat

Nvidia is launching new RTX workstation graphics processing units for the generative AI era.

In partnership with workstation manufacturers including BOXX, Dell Technologies, HP, and Lenovo, Nvidia unveiled a new line of high-performance RTX workstations at the Siggraph graphics tech event.

The GPUs are designed specifically for development, content creation, and data science in the era of generative AI and digitalization. These advanced systems are built around the Nvidia RTX 6000 Ada Generation GPUs and incorporate Nvidia AI Enterprise and Nvidia Omniverse Enterprise software.

As part of this announcement, Nvidia has also introduced three new desktop workstation Ada Generation GPUs: the Nvidia RTX 5000, RTX 4500, and RTX 4000. These GPUs bring the latest advancements in AI, graphics, and real-time rendering technology to professionals worldwide.

In a press briefing, Bob Pette, vice president of professional visualization at Nvidia, framed the launch around the computing demands of generative AI and digitalization.

“Few workloads are as challenging as generative AI and digitalization applications, which require a full-stack approach to computing,” Pette said. “Professionals can now tackle these on a desktop with the latest Nvidia-powered RTX workstations, enabling them to build vast, digitalized worlds in the new age of generative AI.”

The newly unveiled RTX workstations offer up to four Nvidia RTX 6000 Ada GPUs, each equipped with 48GB of memory. A single desktop workstation can deliver up to 5,828 TFLOPS of AI performance and 192GB of GPU memory. Depending on user requirements, the systems can be configured with either Nvidia AI Enterprise or Omniverse Enterprise software, providing the necessary power for a wide range of demanding generative AI and graphics-intensive workloads.
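
As a quick sanity check, those aggregate figures are simply four times the per-GPU numbers. A minimal sketch follows; the per-GPU TFLOPS value is just the quoted 5,828 divided by four, not an independently sourced spec:

```python
# Back-of-the-envelope check of the quoted workstation totals.
# 48 GB per GPU is stated in the article; the per-GPU AI TFLOPS figure
# below is simply the quoted 5,828 TFLOPS total divided by four.
GPUS_PER_WORKSTATION = 4
MEMORY_PER_GPU_GB = 48
AI_TFLOPS_PER_GPU = 5828 / GPUS_PER_WORKSTATION   # ~1,457 TFLOPS per RTX 6000 Ada

total_memory_gb = GPUS_PER_WORKSTATION * MEMORY_PER_GPU_GB   # 192 GB
total_ai_tflops = GPUS_PER_WORKSTATION * AI_TFLOPS_PER_GPU   # 5,828 TFLOPS

print(f"Aggregate GPU memory: {total_memory_gb} GB")
print(f"Aggregate AI performance: {total_ai_tflops:,.0f} TFLOPS")
```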

Nvidia NeMo

Nvidia RTX 5000 Ada Generation GPU.

Nvidia AI Enterprise 4.0, announced concurrently, introduces Nvidia NeMo, an end-to-end framework for building and customizing foundation models for generative AI. It also includes the Nvidia RAPIDS libraries for data science and offers frameworks, pretrained models, and tools for common enterprise AI use cases such as recommenders, virtual assistants, and cybersecurity solutions.
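
Nvidia AI Enterprise packages these components rather than adding a new API of its own. As one small illustration of the RAPIDS side, here is a minimal cuDF sketch; the file name and column names are placeholders, not anything shipped with the suite:

```python
# Minimal RAPIDS cuDF sketch: a pandas-like, GPU-accelerated groupby.
# Requires a CUDA-capable GPU and the cudf package from RAPIDS.
import cudf

# Hypothetical telemetry file; the path and columns are placeholders.
df = cudf.read_csv("events.csv")  # columns: user_id, latency_ms

# The familiar pandas-style API runs on the GPU.
summary = df.groupby("user_id")["latency_ms"].mean().sort_values()
print(summary.head())
```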

Omniverse Enterprise, another integral part of the Nvidia ecosystem, is a platform for industrial digitalization that enables teams to develop interoperable 3D workflows and OpenUSD applications. Leveraging its OpenUSD-native platform, Omniverse empowers globally distributed teams to collaborate on full-design-fidelity datasets from hundreds of 3D applications.
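
Omniverse itself is a platform rather than a single library, but the OpenUSD layer it builds on has open Python bindings (the usd-core package). A minimal sketch of authoring a USD stage, with purely illustrative scene contents, looks like this:

```python
# Minimal OpenUSD sketch using the pxr Python bindings (pip install usd-core).
# Authors a tiny stage with one transform and one sphere; the prim paths and
# file name are illustrative, not anything Omniverse-specific.
from pxr import Usd, UsdGeom

stage = Usd.Stage.CreateNew("scene.usda")
world = UsdGeom.Xform.Define(stage, "/World")
sphere = UsdGeom.Sphere.Define(stage, "/World/Sphere")
sphere.GetRadiusAttr().Set(2.0)

stage.SetDefaultPrim(world.GetPrim())
stage.GetRootLayer().Save()
```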

Jason Schnitzer, CTO at Yurts, a company known for its full-stack generative AI solutions, said in a statement, “Yurts provides a full-stack generative AI solution aligning with multiple form factors, deployment models, and budgets of our customers. We’ve achieved this by leveraging LLMs for various natural language processing tasks and incorporating the RTX 6000 Ada. From private data centers to workstation-sized solutions that fit under a desk, Yurts remains committed to scaling our platform and offering alongside Nvidia.”

Nvidia AI Workbench

Nvidia CEO Jensen Huang at Siggraph keynote.

Additionally, Nvidia will soon introduce the Nvidia AI Workbench, an easy-to-use workspace that provides developers with a unified environment for creating, fine-tuning, and running generative AI models. Users of any skill level will be able to quickly create, test, and customize pretrained generative AI models on a PC or workstation and then scale them to data centers, public clouds, or Nvidia DGX Cloud.
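
Nvidia has not detailed the AI Workbench interface beyond this description, but the local fine-tuning step it is meant to streamline can be sketched with stand-in tooling. The example below uses the Hugging Face transformers Trainer purely as a placeholder; none of the names here are part of AI Workbench:

```python
# Rough sketch of the "fine-tune a pretrained model on a workstation" step
# that AI Workbench is meant to streamline. Hugging Face transformers is a
# stand-in here; this is not the AI Workbench API itself.
from datasets import load_dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

model_name = "distilbert-base-uncased"   # placeholder base model
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=2)

dataset = load_dataset("imdb")           # placeholder dataset

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, padding="max_length")

tokenized = dataset.map(tokenize, batched=True)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="out", num_train_epochs=1,
                           per_device_train_batch_size=8),
    train_dataset=tokenized["train"].shuffle(seed=42).select(range(1000)),
)
trainer.train()  # runs locally on the workstation GPU; scale out afterwards
```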

The new Nvidia RTX 5000, RTX 4500, and RTX 4000 desktop GPUs leverage the latest Nvidia Ada Lovelace architecture technologies. These include more CUDA cores for higher single-precision floating-point throughput, third-generation RT Cores for improved ray tracing, and fourth-generation Tensor Cores for faster AI training performance. The GPUs also support DLSS 3, providing new levels of realism and interactivity for real-time graphics, as well as larger GPU memory options for error-free computing with large 3D models, rendered images, simulations, and AI datasets. Moreover, they offer extended-reality capabilities to meet the demands of creating high-performance AR, VR, and mixed-reality content.
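
For developers, these capabilities surface through the usual CUDA tooling. A quick PyTorch sketch for inspecting whichever RTX GPU is installed (the printed values depend entirely on the card present):

```python
# Quick sketch: inspect the installed GPU's compute resources with PyTorch.
# Ada Lovelace parts report compute capability 8.9; exact numbers depend on
# the card actually present.
import torch

if torch.cuda.is_available():
    props = torch.cuda.get_device_properties(0)
    print(f"Name:               {props.name}")
    print(f"Compute capability: {props.major}.{props.minor}")
    print(f"Multiprocessors:    {props.multi_processor_count}")
    print(f"Memory:             {props.total_memory / 1024**3:.0f} GB")
else:
    print("No CUDA device visible")
```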

The availability of RTX workstations featuring up to four RTX 6000 Ada GPUs, along with the Nvidia AI Enterprise and Omniverse Enterprise software, is expected starting in the fall. The Nvidia RTX 5000 GPU is already available and shipping from HP and distribution partners, while the RTX 4500 and RTX 4000 GPUs will be available in the fall from BOXX, Dell Technologies, HP, Lenovo, and their respective distribution partners.

Nvidia OVX Servers

Nvidia L40S GPU

Nvidia also introduced its Nvidia OVX servers with the Nvidia L40S GPU. Designed to meet the demands of compute-intensive applications such as AI training and inference, 3D design and visualization, video processing, and industrial digitalization, the L40S GPU is set to accelerate workflows and services across multiple industries.

The Nvidia OVX systems will support up to eight Nvidia L40S GPUs per server, each equipped with 48GB of memory. Powered by the cutting-edge Nvidia Ada Lovelace GPU architecture, the L40S GPU boasts fourth-generation Tensor Cores and an FP8 Transformer Engine, enabling over 1.45 petaflops of tensor processing power.
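
The FP8 Transformer Engine is exposed to developers through Nvidia's open-source Transformer Engine library. A minimal PyTorch sketch of running a single layer under FP8 autocast follows; the layer sizes are arbitrary, and an FP8-capable GPU such as the L40S or H100 is assumed:

```python
# Minimal sketch of FP8 execution with Nvidia's Transformer Engine library.
# Requires an FP8-capable GPU (Ada or Hopper) and the transformer_engine package.
import torch
import transformer_engine.pytorch as te
from transformer_engine.common import recipe

# Arbitrary layer sizes for illustration.
layer = te.Linear(1024, 1024, bias=True).cuda()
x = torch.randn(16, 1024, device="cuda")

fp8_recipe = recipe.DelayedScaling(fp8_format=recipe.Format.HYBRID)

with te.fp8_autocast(enabled=True, fp8_recipe=fp8_recipe):
    y = layer(x)  # the matmul runs through the FP8 Tensor Cores
print(y.shape)
```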

In the realm of generative AI workloads with billions of parameters and multiple data modalities, the L40S GPU offers up to 1.2 times more generative AI inference performance and up to 1.7 times faster training performance compared to the Nvidia A100 Tensor Core GPU.

The L40S GPU is also tailored to professional visualization workflows such as real-time rendering, product design, and 3D content creation. Its 142 third-generation RT Cores deliver 212 teraflops of ray-tracing performance, letting creative professionals build immersive visual experiences and produce photorealistic content.

The Nvidia L40S GPU is equipped with 18,176 CUDA cores, delivering nearly five times the single-precision floating-point (FP32) performance of the Nvidia A100 GPU. This significant computational power accelerates complex calculations and data-intensive analyses, making it ideal for computationally demanding workflows such as engineering and scientific simulations.
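
That "nearly five times" claim is consistent with a simple peak-throughput estimate, FP32 TFLOPS ≈ CUDA cores × 2 FLOPs per clock (FMA) × boost clock. The boost clocks below are approximate public specs, not figures from this announcement:

```python
# Rough peak-FP32 estimate: cores x 2 FLOPs/clock (FMA) x boost clock.
# Boost clocks are approximate public specs, not from this announcement.
def peak_fp32_tflops(cuda_cores, boost_clock_ghz):
    return cuda_cores * 2 * boost_clock_ghz / 1000.0

l40s = peak_fp32_tflops(18_176, 2.52)   # ~91.6 TFLOPS
a100 = peak_fp32_tflops(6_912, 1.41)    # ~19.5 TFLOPS

print(f"L40S ~{l40s:.1f} TFLOPS, A100 ~{a100:.1f} TFLOPS, ratio ~{l40s / a100:.1f}x")
```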

CoreWeave, a cloud service provider specializing in large-scale, GPU-accelerated workloads, is among the first to offer L40S instances.

Enterprises deploying L40S GPUs can leverage the Nvidia AI Enterprise software, which offers production-ready enterprise support and security for over 100 frameworks, pretrained models, toolkits, and software.

The Nvidia L40S will be available starting this fall. Global system builders, including ASUS, Dell Technologies, GIGABYTE, HPE, Lenovo, QCT, and Supermicro, will soon offer OVX systems that include the Nvidia L40S GPU.

Nvidia Maxine

Nvidia also introduced Maxine, a suite of GPU-accelerated software development kits and cloud-native microservices. Maxine aims to revamp real-time communications services and platforms by enabling professionals, teams, and creators to harness the power of AI and create high-quality audio and video effects.

Maxine’s advanced features, including Background Noise Removal, Super Resolution, and Eye Contact, enhance interpersonal communication experiences, allowing remote users to improve audio and video quality even in challenging environments with poor connectivity or while on the move. Furthermore, Nvidia partners have integrated Maxine into video editing workflows, opening up new possibilities for engaging and captivating video communication.

Maxine’s expansion into video editing introduces new features for professionals. With Maxine, users can maintain eye contact with the camera while referencing notes or scripts, enhancing their on-screen presence. Additionally, professionals can film videos in low resolution and later enhance the quality using AI-powered upscaling. Maxine also enables users to record videos in multiple languages and export them in English, facilitating multilingual content creation.

The forthcoming Maxine features to be released in early access this year include:

  • Interpreter: Translates speech from Simplified Chinese, Russian, French, German, and Spanish to English while animating the user’s image to match the English speech.
  • Voice Font: Applies the characteristics of a speaker’s voice and maps it to the audio output, enabling customization and personalization.
  • Audio Super Resolution: Improves audio quality by increasing the temporal resolution of the audio signal and extending its bandwidth, resulting in enhanced clarity and fidelity (a minimal sketch follows this list).
  • Maxine Client: Optimized for low-latency streaming, this application brings the AI capabilities of Maxine’s microservices to video-conferencing sessions on PCs. It utilizes GPU compute in the cloud to deliver seamless AI-enhanced communication experiences.
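
To make the Audio Super Resolution idea concrete, the sketch below raises only the sample rate of a clip with torchaudio; Maxine's model additionally reconstructs the missing high-frequency content, which plain resampling cannot do. The file names are placeholders:

```python
# Minimal sketch of raising an audio clip's sample rate with torchaudio.
# This only interpolates: unlike Maxine's Audio Super Resolution, it does
# not reconstruct high-frequency content missing from the recording.
import torchaudio
import torchaudio.functional as F

waveform, sample_rate = torchaudio.load("narration.wav")  # placeholder file
upsampled = F.resample(waveform, orig_freq=sample_rate, new_freq=48_000)

torchaudio.save("narration_48k.wav", upsampled, 48_000)
```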

Maxine can be deployed in the cloud, on premises, or at the edge, ensuring that high-quality communication is accessible from virtually anywhere.

Several partners and customers have already integrated Maxine into their workflows and applications, delivering exceptional video conferencing and editing experiences. Descript, a software company, leverages Maxine’s Eye Contact feature, allowing users to maintain on-screen presence while delivering their scripts.

3D video for immersive communication

Nvidia Research has also announced a development in the field of immersive communication with the introduction of AI-powered 3D video technology.

The new research, detailed in a recently published paper, demonstrates how artificial intelligence can enable a 3D video-conferencing system with minimal capture equipment, revolutionizing the accessibility and cost-effectiveness of 3D telepresence.

Traditional 3D telepresence systems have been limited by their high costs, extensive spatial requirements, and reliance on high-bandwidth volumetric video streaming. These limitations have hindered the widespread adoption of the technology. However, Nvidia Research has now presented a method that uses a vision transformer (ViT)-based encoder to convert 2D video input from a standard webcam into a real-time 3D video representation.

The breakthrough technology, powered by AI, eliminates the need for the transmission of 3D data between conference participants, ensuring that bandwidth requirements remain the same as those for a standard 2D conference. By leveraging volumetric rendering, the system automatically generates a 3D representation, known as a neural radiance field (NeRF), from the user’s 2D video. This allows participants to stream 2D videos while decoding high-quality 3D representations in real time, creating a truly immersive communication experience. Additionally, Nvidia’s Maxine platform introduces Live Portrait, enabling users to bring their portraits to life in 3D.
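
The article does not reproduce the paper's architecture, but the pipeline it describes (encode a 2D frame into a latent code, condition a neural radiance field on that code, and volume-render the result) can be sketched at toy scale. Every module below is a simplified stand-in, with a small CNN rather than the paper's vision transformer and a tiny MLP radiance field; it is not Nvidia's actual model:

```python
# Toy sketch of the encode-then-volume-render pipeline described above.
# Simplified stand-ins only: a small CNN instead of the paper's ViT encoder
# and a tiny MLP radiance field; not Nvidia's actual model.
import torch
import torch.nn as nn

class FrameEncoder(nn.Module):
    """Maps a 2D webcam frame to a compact latent code."""
    def __init__(self, latent_dim=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, latent_dim),
        )

    def forward(self, frame):              # frame: (B, 3, H, W)
        return self.net(frame)             # (B, latent_dim)

class ConditionedNeRF(nn.Module):
    """Predicts colour and density for 3D points, conditioned on the latent."""
    def __init__(self, latent_dim=128, hidden=256):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(3 + latent_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 4),          # RGB + density
        )

    def forward(self, points, latent):     # points: (B, N, 3)
        lat = latent.unsqueeze(1).expand(-1, points.shape[1], -1)
        out = self.mlp(torch.cat([points, lat], dim=-1))
        return torch.sigmoid(out[..., :3]), torch.relu(out[..., 3:])

def render_ray(rgb, sigma, deltas):
    """Classic alpha compositing along one batch of rays."""
    alpha = 1.0 - torch.exp(-sigma.squeeze(-1) * deltas)        # (B, N)
    trans = torch.cumprod(1.0 - alpha + 1e-10, dim=-1)
    trans = torch.cat([torch.ones_like(trans[..., :1]), trans[..., :-1]], dim=-1)
    weights = alpha * trans                                     # (B, N)
    return (weights.unsqueeze(-1) * rgb).sum(dim=1)             # (B, 3)

# One forward pass on random data, standing in for a webcam frame and a ray.
encoder, nerf = FrameEncoder(), ConditionedNeRF()
frame = torch.rand(1, 3, 128, 128)
samples = torch.linspace(0.0, 1.0, 64).view(1, 64, 1).expand(1, 64, 3)
deltas = torch.full((1, 64), 1.0 / 64)

latent = encoder(frame)
rgb, sigma = nerf(samples, latent)
pixel = render_ray(rgb, sigma, deltas)
print(pixel)  # the rendered colour for this ray
```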

The implications of AI-mediated 3D video conferencing are far-reaching. The technology has the potential to significantly reduce the cost of 3D capture, provide high-fidelity 3D representations, accommodate photorealistic or stylized avatars, and enable mutual eye contact during video conferences. Furthermore, this research lays the foundation for future Nvidia technologies in the field of video conferencing, promising to elevate communication and virtual interactions to new heights.
