
Intel is making a big push into the future of graphics.

The company is introducing seven new research papers at Siggraph 2023, an annual graphics conference, one of which tries to address VRAM limitations in modern GPUs with neural rendering. The paper aims to make real-time path tracing possible.

No, Intel isn't introducing a DLSS 3 rival, but it is looking to leverage AI to render complex scenes. Intel says the "limited amount of onboard memory can limit practical rendering of complex scenes." Intel is introducing a neural level of detail representation of objects, and it says it can achieve compression rates of 70% to 95% compared to "classic source representations, while also improving quality over previous work."

It's applied as a level of detail (LoD) technique for objects, allowing them to look more realistic from farther away. As we've seen from games like Redfall recently, VRAM limitations can cause even close objects to show up with muddy textures and little detail as you pass them.

It doesn't seem dissimilar from Nvidia's Neural Texture Compression, which Nvidia also introduced through a paper submitted to Siggraph. Intel's paper, however, looks to tackle complex 3D objects, such as vegetation and hair.

In addition to this technique, Intel is also introducing an efficient path-tracing algorithm that it says will, in the future, make complex path tracing possible on mid-range GPUs and even integrated graphics. Path tracing is essentially the hard way of doing ray tracing, and we've already seen it used to great effect in games like Cyberpunk 2077 and Portal RTX. For as impressive as path tracing is, though, it's extremely demanding. You'd need a flagship GPU like the RTX 4080 or RTX 4090 to even run these games at higher resolutions, and that's with Nvidia's tricky DLSS Frame Generation enabled.

Intel's paper introduces a way to make that process more efficient. It's doing so by introducing a new algorithm that is "simpler than the state-of-the-art and leads to faster performance," according to Intel. The company is building upon the GGX mathematical function, which Intel says is "used in every CGI movie and video game." The idea behind GGX is that surfaces are made up of microfacets that reflect and transmit light in different directions. This is expensive to calculate, so Intel's algorithm essentially reduces the GGX distribution to a simple-to-calculate slope based on the angle of the camera, making real-time rendering possible. The algorithm reduces this mathematical distribution to a hemispherical mirror that is "extremely simple to simulate on a computer."

Based on Intel's internal benchmarks, it leads to upwards of a 7.5% speedup in rendering path-traced scenes. That may seem like a minor bump, but Intel seems confident that more efficient algorithms could make all the difference.
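The article doesn't spell out Intel's simplification, but the GGX (Trowbridge-Reitz) distribution it builds on is standard and well documented. As an illustrative sketch only — this is the textbook GGX normal distribution function, not Intel's new algorithm — here is how the distribution is evaluated, along with a numerical check that it integrates to 1 when projected onto the surface normal, as any valid microfacet distribution must:

```python
import math

def ggx_ndf(cos_theta: float, alpha: float) -> float:
    """GGX (Trowbridge-Reitz) normal distribution function.

    cos_theta: cosine of the angle between the macro surface normal and a
    microfacet normal. alpha: roughness (small = mirror-like, large = rough).
    """
    a2 = alpha * alpha
    denom = cos_theta * cos_theta * (a2 - 1.0) + 1.0
    return a2 / (math.pi * denom * denom)

def projected_integral(alpha: float, steps: int = 4000) -> float:
    """Midpoint-rule integral of D(m) * cos(theta) over the hemisphere.

    For a well-formed microfacet distribution this equals 1: the projected
    areas of all the microfacets add up to the macro surface area.
    """
    total = 0.0
    d_theta = (math.pi / 2.0) / steps
    for i in range(steps):
        theta = (i + 0.5) * d_theta
        # The 2*pi factor integrates out the azimuthal angle.
        total += (ggx_ndf(math.cos(theta), alpha)
                  * math.cos(theta) * math.sin(theta)
                  * 2.0 * math.pi * d_theta)
    return total

if __name__ == "__main__":
    alpha = 0.25  # a moderately glossy surface
    print(ggx_ndf(1.0, alpha))       # peak: microfacets aligned with the normal
    print(ggx_ndf(0.5, alpha))       # far smaller at 60 degrees off-normal
    print(projected_integral(alpha)) # ~1.0
```

The expense Intel is attacking comes from evaluating terms like this (plus shadowing/masking and sampling) per bounce, per pixel, which is why even a single-digit percentage saving compounds across a path-traced frame.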

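To put the quoted 70% to 95% compression rates in concrete terms, here is the arithmetic applied to a hypothetical asset budget (the 512 MB figure is an assumption for illustration, not from Intel's paper):

```python
# Illustrative only: the 512 MB budget is hypothetical, not from Intel's paper.
original_mb = 512.0  # assumed on-GPU memory for complex assets (hair, foliage)

for rate in (0.70, 0.95):
    compressed_mb = original_mb * (1.0 - rate)
    print(f"{rate:.0%} compression: {original_mb:.0f} MB -> {compressed_mb:.1f} MB")
```

At the claimed rates, the same assets would occupy roughly 153.6 MB down to 25.6 MB, which is the kind of headroom that matters on VRAM-constrained mid-range cards.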