
Will Generative AI Replace 3D Rendering in Games?
In the following deep dive, we primarily focus on future game graphics, exploring how they might leverage new generative technologies
DukeRem

In the last week, two intriguing articles featured in the Turtles AI magazine sparked my interest, prompting me to look more closely at the evolving relationship between AI and gaming.

The first article explored how Doom was essentially recreated using deep learning techniques and generative AI, demonstrating a novel approach to graphics generation.

The second article highlighted a forthcoming game that will incorporate natural language commands, allowing players to engage with NPCs more interactively and responsively.

These developments, one centered on enhancing visual realism through genAI and the other on advancing player-NPC interactions, point to significant potential shifts in the gaming world.

The integration of these technologies could redefine how we perceive and interact with digital environments, moving towards more dynamic and engaging experiences.

In the following deep dive, I will focus primarily on future game graphics, exploring how they might leverage new generative technologies.

I want to emphasize that these are my own speculations and ideas, so they may never become reality, but I am very interested in reading your thoughts in the comments.

Gaming and Technologies

The gaming industry has always been on the cutting edge of technology, pushing boundaries to create more immersive experiences. Techniques like 3D rendering and ray tracing have been shining examples of this effort, striving to achieve visual realism. These methods, while advanced, are computationally heavy. Even the most powerful graphics cards today, such as NVIDIA’s RTX 4090, struggle to make game graphics indistinguishable from reality. The question arises: Could there be a different path forward, one that doesn’t require an exponential increase in hardware capabilities?

One possible solution lies in genAI. Unlike traditional rendering methods, which require extensive computational power to simulate light and texture, generative AI could create photorealistic visuals with less (or at least different) processing.

By taking a simplified 3D scene and using AI algorithms to enhance it in real-time, we might achieve visuals that are both lifelike and efficient.


The Generative AI Approach: How Does It Work?

GenAI models are designed to learn patterns from vast amounts of data. In the context of gaming, these models can be trained on millions of real-world images and videos. Once trained, they can generate new frames of "real-time" video that mimic the realism of the source material.

This method could deeply change how we think about game graphics by shifting from detailed, physics-based rendering to a more "data-driven" approach.
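To make the "data-driven" idea concrete, here is a minimal, deliberately simplified PyTorch sketch: a small convolutional network that maps a cheap, simplified render (a G-buffer of albedo, normals, and depth) directly to a final RGB frame. The architecture, channel layout, and names are my own illustrative choices, not any shipped system, and the network is of course untrained:

```python
# Minimal sketch of "data-driven" frame generation (illustrative only).
# A small convolutional network maps a cheap, simplified render
# (here: 3 albedo + 3 normal + 1 depth channels) to a final RGB frame.
import torch
import torch.nn as nn

class NeuralShader(nn.Module):
    def __init__(self, in_channels: int = 7):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_channels, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv2d(32, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv2d(32, 3, kernel_size=3, padding=1),  # RGB output
            nn.Sigmoid(),  # clamp to the [0, 1] display range
        )

    def forward(self, gbuffer: torch.Tensor) -> torch.Tensor:
        return self.net(gbuffer)

# One random 720p "frame" standing in for a real G-buffer.
gbuffer = torch.rand(1, 7, 720, 1280)
frame = NeuralShader()(gbuffer)
print(frame.shape)  # torch.Size([1, 3, 720, 1280])
```

In a real system, a much larger network of this kind would be trained on pairs of simplified renders and photorealistic targets, which is exactly where the millions of real-world images and videos come in.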

Several experiments have demonstrated the potential of this technology. For example, NVIDIA has been working on a project called GauGAN, which can turn rough sketches into photorealistic images.

This type of technology, if applied to gaming, could mean that a basic wireframe model is all that’s needed as a starting point. The AI would then "fill in" the details, generating realistic textures, lighting, and shadows on the fly.
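For readers who want to experiment with the sketch-to-image idea today, here is a rough offline analogy using Hugging Face's diffusers library. To be clear, this is not GauGAN itself, it runs in seconds rather than milliseconds, and the file names and prompt are hypothetical:

```python
# Offline analogy to the "AI fills in the details" idea, using the
# diffusers img2img pipeline (not GauGAN, and far from real-time).
import torch
from PIL import Image
from diffusers import StableDiffusionImg2ImgPipeline

pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

rough = Image.open("blockout_render.png").convert("RGB")  # hypothetical input
detailed = pipe(
    prompt="photorealistic forest clearing, volumetric light, detailed bark",
    image=rough,
    strength=0.6,       # how far the model may depart from the rough render
    guidance_scale=7.5,
).images[0]
detailed.save("filled_in.png")
```

Incidentally, the same pipeline with a style prompt instead of a realism prompt hints at the "painted by Van Gogh" restyling discussed later in this piece.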

This approach could significantly reduce the amount of computation required, making high-quality graphics accessible even on lower-end 3D hardware, provided that specialized accelerators such as TPUs and NPUs become more powerful.


Comparisons to Current Technologies: DLSS and Path Tracing

To understand the potential impact of genAI in 3D rendering, I think it’s helpful to compare it to existing technologies like NVIDIA’s DLSS (Deep Learning Super Sampling).

At its core, DLSS leverages artificial intelligence, specifically deep learning, to "upscale" lower-resolution rendered images to a higher resolution. Simplifying greatly: DLSS analyzes a low-resolution input, applies a pre-trained neural network to predict high-resolution details, uses motion vectors to reconstruct movement accurately, and incorporates data from multiple frames to enhance clarity, reducing GPU workload while maintaining visual quality.
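To make that pipeline concrete, here is a conceptual PyTorch sketch of the ingredients such an upscaler consumes. It mirrors the description above, not NVIDIA's proprietary network, and the function and parameter names are mine:

```python
# Conceptual DLSS-style temporal upscaling step (illustrative only).
import torch
import torch.nn.functional as F

def upscale_step(low_res, history, motion_vectors, net):
    """low_res: (1, 3, h, w) current cheaply rendered frame.
    history: (1, 3, H, W) previous upscaled output.
    motion_vectors: (1, H, W, 2) per-pixel screen-space motion in [-1, 1].
    net: a trained fusion network; here, any callable with this signature."""
    H, W = history.shape[-2:]
    # Upsample the current frame to the target resolution.
    upsampled = F.interpolate(low_res, size=(H, W), mode="bilinear",
                              align_corners=False)
    # Reproject last frame's output along the motion vectors.
    identity = F.affine_grid(torch.eye(2, 3).unsqueeze(0), history.shape,
                             align_corners=False)
    reprojected = F.grid_sample(history, identity + motion_vectors,
                                align_corners=False)
    # The network fuses fresh detail with temporally accumulated history.
    return net(torch.cat([upsampled, reprojected], dim=1))
```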

When DLSS was first introduced, it was considered a breakthrough for allowing smoother gameplay with better graphics on less powerful hardware.

DLSS has since gone further by increasing frames per second through Frame Generation, which uses AI and deep learning to interpolate and create entirely new frames between traditionally rendered frames, enhancing smoothness and visual flow.
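The core idea behind interpolated frames fits in a few lines. The real Frame Generation pipeline relies on dedicated optical-flow hardware and a trained generation network, so treat the following as a naive sketch of the principle only:

```python
# Naive frame interpolation: warp two rendered frames half-way toward
# their midpoint along the motion field, then blend (illustrative only).
import torch
import torch.nn.functional as F

def midpoint_frame(frame_a, frame_b, flow_a_to_b):
    """frame_a, frame_b: (1, 3, H, W); flow_a_to_b: (1, H, W, 2) in [-1, 1].
    Sign conventions depend on how the flow field is defined."""
    identity = F.affine_grid(torch.eye(2, 3).unsqueeze(0), frame_a.shape,
                             align_corners=False)
    warped_a = F.grid_sample(frame_a, identity + 0.5 * flow_a_to_b,
                             align_corners=False)
    warped_b = F.grid_sample(frame_b, identity - 0.5 * flow_a_to_b,
                             align_corners=False)
    return 0.5 * (warped_a + warped_b)
```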

Similarly, path tracing, a form of ray tracing that simulates the way light paths travel through a scene, has achieved impressive results but remains incredibly demanding in terms of hardware. DLSS 3.5 improves ray tracing by using AI-driven Ray Reconstruction to enhance image quality. It replaces traditional denoisers, better handling reflections, lighting, and shadows. This AI approach reduces noise and artifacts, ensuring clearer visuals and improved performance in games with intensive ray-traced effects, while maintaining low computational costs.
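A toy example also shows why path tracing is so demanding and why learned denoisers matter: each pixel is a Monte Carlo estimate whose noise shrinks only as 1/√N with N samples per pixel. The model below is deliberately simplified (uniform random samples around a "true" radiance of 0.5), not a real renderer:

```python
# Why low sample counts look grainy: Monte Carlo noise shrinks slowly.
import random

def pixel_estimate(samples_per_pixel: int) -> float:
    # Toy "radiance" samples: uniform in [0, 1], so the true mean is 0.5.
    samples = [random.uniform(0.0, 1.0) for _ in range(samples_per_pixel)]
    return sum(samples) / samples_per_pixel

for spp in (1, 4, 64, 1024):
    estimates = [round(pixel_estimate(spp), 3) for _ in range(5)]
    print(spp, estimates)
# At 1 spp the estimates swing wildly (the familiar ray-tracing grain);
# a learned denoiser lets the renderer stop at a much lower sample count.
```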

GenAI could push these boundaries even further by not just upscaling or generating new interpolated frames, but actually creating new visual content.

It might offer a way to simulate these graphical effects more efficiently, bypassing the need for complex calculations. Instead of simulating every photon of light, an AI model could predict likely outcomes based on its training data, achieving similar effects at a fraction of the computational cost.


Challenges and Potential Setbacks

While generative AI holds much promise, several challenges need to be addressed. Consistency is a significant issue: AI-generated content can sometimes look unrealistic, contain visual artifacts, or even shift slightly from frame to frame.

Unlike traditional rendering, which is predictable and deterministic, AI outputs can vary depending on the model’s training data and the inputs it receives. This variability can be problematic in a gaming context, where visual consistency is key to maintaining immersion.

Additionally, there’s the question of ethical use, which we have already dealt with several times. GenAI requires vast amounts of data for training, and much of this data comes from real-world images and videos. Ensuring that this data is used ethically and that privacy is maintained is an ongoing concern. Moreover, the use of AI in content creation raises questions about intellectual property: if a game environment or character is generated by AI, who owns the rights to that creation?

Another challenge is integrating genAI with existing game development pipelines. Traditional game engines like Unreal Engine and Unity are built around well-established rendering techniques. Adopting generative AI would likely require significant changes to how games are developed, potentially increasing costs and development time in the short term.

Developers would need new tools and skills, and existing workflows might need to be completely rethought.


The Future of Game Graphics: What to Expect?

If genAI continues to advance at its current pace, we might see a future where game graphics are not only more realistic but also more diverse and dynamic. Games could feature environments that evolve in real-time, based on player actions or other in-game factors, without the need for extensive pre-rendering. Imagine a game where every playthrough looks and feels unique, with an AI continually generating new textures, lighting conditions, and even entire landscapes.

In the initial stages, genAI could serve as a powerful tool for dynamically increasing texture resolution in real-time and generating high-resolution textures on demand. This application could transform the gaming experience by drastically reducing the need for the massive storage space that high-quality graphics typically require today.

For instance, instead of storing a multitude of pre-rendered textures, a game could use AI algorithms to generate these textures in real-time as needed. This approach would not only save space but also reduce the game’s overall loading times and improve performance on a broader range of hardware.
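A minimal sketch of what such an on-demand pipeline could look like, assuming a hypothetical generate_texture() backed by a generative model (all names here are illustrative):

```python
# On-demand texture cache: synthesize on first use, then reuse.
from functools import lru_cache

def generate_texture(asset_id: str, resolution: int) -> bytes:
    # Placeholder: a real implementation would run a trained model here.
    return b"\x80" * (resolution * resolution * 4)  # flat RGBA stand-in

@lru_cache(maxsize=256)
def get_texture(asset_id: str, resolution: int) -> bytes:
    return generate_texture(asset_id, resolution)

get_texture("castle_wall", 2048)  # first call pays the generation cost
get_texture("castle_wall", 2048)  # repeat calls are served from the cache
```

The trade-off is clear: disk space and download size shrink, but the runtime must budget compute, ideally on a dedicated NPU, for texture synthesis.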

One of the practical applications of this technology could be seen in the generation of textures for game assets. To some extent, this is already done by the DLSS technique mentioned above, but it could be taken a step further.

For example, this approach could enable a new level of customization and artistic style. Players could experience a game world that dynamically shifts in visual style, such as being "painted by Van Gogh" or rendered in the stark, detailed style of a cyberpunk noir.

Additionally, genAI could simulate real-time damage and wear on objects within the game, such as vehicles or buildings, adapting textures in response to in-game events like collisions or environmental changes.

This could create a more immersive and responsive gaming environment where visual elements react fluidly to player actions and environmental conditions, enhancing both realism and engagement.
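As a toy version of that idea, the sketch below blends a "damage" layer into a base texture wherever accumulated wear dictates. NumPy arrays stand in for real GPU textures, a generative model could synthesize the damage layer itself, and all names are illustrative:

```python
# Event-driven wear: composite a damage layer using a per-pixel wear mask.
import numpy as np

H = W = 256
base = np.full((H, W, 3), 200, dtype=np.float32)    # clean paint
damage = np.full((H, W, 3), 60, dtype=np.float32)   # scorched metal
wear = np.zeros((H, W), dtype=np.float32)           # 0 = pristine, 1 = ruined

def register_impact(x: int, y: int, radius: int = 20, strength: float = 0.5):
    ys, xs = np.ogrid[:H, :W]
    hit = (xs - x) ** 2 + (ys - y) ** 2 <= radius ** 2
    wear[hit] = np.clip(wear[hit] + strength, 0.0, 1.0)

def composite() -> np.ndarray:
    alpha = wear[..., None]                         # (H, W, 1) blend weight
    return ((1 - alpha) * base + alpha * damage).astype(np.uint8)

register_impact(128, 128)   # e.g. a collision at the texture center
texture = composite()
```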

By using genAI to generate these visual elements dynamically, game developers could offer a more personalized and adaptable gaming experience, tailoring graphics to each player’s preferences or the specific demands of the gameplay, further pushing the boundaries of what’s possible in digital entertainment.

However, this vision of the future is not without its hurdles. The game industry is notoriously risk-averse, and adopting such a radically different approach would require a significant shift in thinking and expertise.