NVIDIA and AI: a promising love story
DukeRem, 3 May 2023
#NVIDIA has announced a series of promising #AI #research #projects that will enable #developers and #artists to bring their ideas to #life in #hyperrealistic or #fantastical ways, whether still or moving, in #2D or #3D. The company has collaborated with over a dozen universities in the #US, #Europe, and #Israel to develop around 20 research papers advancing #generative AI and #neural #graphics.
These research papers will be presented at SIGGRAPH 2023, the premier computer graphics conference, scheduled to take place from August 6-10 in Los Angeles. The papers include generative AI models that turn text into personalized images, inverse rendering tools that transform still images into 3D objects, and neural physics models that use AI to simulate complex 3D elements with stunning realism.
NVIDIA researchers have regularly shared their innovations with developers on GitHub and incorporated them into products, including the NVIDIA Omniverse platform for building and operating metaverse applications and NVIDIA Picasso, a recently announced foundry for custom generative AI models for visual design. NVIDIA's graphics research has helped bring film-style rendering to games, such as the recently released Cyberpunk 2077 Ray Tracing: Overdrive Mode, the world's first path-traced AAA title.
The research advancements presented at SIGGRAPH 2023 will help developers and enterprises rapidly generate synthetic data to populate virtual worlds for robotics and autonomous vehicle training. They will also enable creators in art, architecture, graphic design, game development, and film to more quickly produce high-quality visuals for storyboarding, previsualization, and even production.
One of the most exciting outcomes of NVIDIA's research is a set of generative AI models that transform text into images, providing powerful tools for creating concept art or storyboards for films, video games, and 3D virtual worlds. NVIDIA researchers, working with Tel Aviv University, authored two of the SIGGRAPH papers, which let users supply a few example images from which the model quickly learns, allowing a high level of specificity in the generative model's output.
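To make the idea of example-driven personalization concrete, here is a minimal sketch of how it looks in practice using the open-source Hugging Face diffusers library and a publicly shared textual-inversion concept. The model checkpoint, concept repository, and placeholder token below are illustrative assumptions; this is not the code or the method from the SIGGRAPH papers.

```python
# Conceptual illustration only: composing a concept learned from a handful of
# example images into new prompts, in the spirit of the personalization research
# described above. Uses the open-source Hugging Face diffusers library.
import torch
from diffusers import StableDiffusionPipeline

# Load a public text-to-image model (checkpoint name is an assumption for illustration).
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Load a concept previously learned from a few user-provided example images via
# textual inversion; "sd-concepts-library/cat-toy" is a public example repository,
# not an asset from the papers mentioned above.
pipe.load_textual_inversion("sd-concepts-library/cat-toy")

# The learned placeholder token ("<cat-toy>") now stands in for the user's
# concept and can be composed freely with the rest of the prompt.
image = pipe("a concept sketch of <cat-toy> exploring a neon-lit city").images[0]
image.save("personalized_concept.png")
```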
Another significant breakthrough is the use of AI to transform 2D images and videos into 3D representations that creators can import into graphics applications for further editing. NVIDIA Research has partnered with the University of California, San Diego, on a method that can generate and render a photorealistic 3D head-and-shoulders model from a single 2D portrait in real time on a consumer desktop. The researchers also collaborated with Stanford University on an AI system that learns a range of tennis skills from 2D video recordings of real matches and applies those motions to 3D characters.
NVIDIA has also developed neural physics models that use AI techniques to simulate tens of thousands of hairs in high resolution and in real time, enabling accurate, interactive, physically based hair grooming. The company has extended programmable shading code with AI models that run deep inside NVIDIA's real-time graphics pipelines, such as neural texture compression, which delivers up to 16x more texture detail without consuming additional GPU memory.
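As a rough illustration of the idea behind neural texture compression, the sketch below decodes texels on demand from a compact learned latent grid using a tiny MLP. The network layout, grid resolution, and channel counts are assumptions chosen for brevity and do not reflect NVIDIA's published architecture.

```python
# Conceptual sketch only (PyTorch): store a compact learned latent grid plus a
# tiny MLP decoder, and reconstruct texels on demand instead of storing the
# full-resolution texture. Illustrative, not NVIDIA's implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyTextureDecoder(nn.Module):
    def __init__(self, latent_channels: int = 8, hidden: int = 32):
        super().__init__()
        # A low-resolution grid of learned features stands in for the compressed texture.
        self.latent = nn.Parameter(torch.randn(1, latent_channels, 64, 64) * 0.01)
        self.mlp = nn.Sequential(
            nn.Linear(latent_channels + 2, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 3),  # RGB output per texel
        )

    def forward(self, uv: torch.Tensor) -> torch.Tensor:
        # uv: (N, 2) texture coordinates in [0, 1]; sample the latent grid bilinearly.
        grid = uv.view(1, -1, 1, 2) * 2.0 - 1.0             # remap to [-1, 1] for grid_sample
        feats = F.grid_sample(self.latent, grid, align_corners=True)
        feats = feats.view(self.latent.shape[1], -1).t()     # (N, latent_channels)
        return torch.sigmoid(self.mlp(torch.cat([feats, uv], dim=-1)))

# Training would minimize reconstruction error against the reference texture, e.g.:
#   loss = F.mse_loss(decoder(uv_batch), reference_texels)
```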