Over the past few years, Nvidia’s Deep Learning Super-Sampling (DLSS) standard has largely delivered on its magical promise: smoother gaming performance and crisper imagery, all based on zillions of machine-farm computations that predict 3D game visuals. (You can see comprehensive DLSS breakdowns in my reviews of the RTX 3060 and RTX 3080 Ti.) The catch remains that your computer needs a compatible Nvidia “RTX” GPU to tap into the proprietary standard, which has become an ever-tougher pill to swallow in a chip-shortage world.
Still, if you run a DLSS-compatible game on an Nvidia RTX GPU, the performance gains can range from a solid 25 percent to an astonishing 90 percent, with the biggest returns typically coming at higher output resolutions. Up until this week, one demanding PC-gaming use case has somehow not been a part of the DLSS ecosystem: virtual reality.
The default pixel resolution on popular headsets like Oculus Quest 2, Valve Index, and HP Reverb G2 often surpasses that of an average 4K display, and those headsets also demand higher frame rates for the sake of comfort. Thus, the DLSS promise seems particularly intriguing in VR. When DLSS works as advertised, a given game renders fewer pixels; Nvidia’s RTX GPUs then leverage their “tensor” processing cores to fill in the missing detail in ways that, theoretically, look better than standard temporal anti-aliasing (TAA) methods.
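For a rough sense of the pixel budgets involved, here's a quick back-of-envelope comparison using the commonly cited per-eye panel resolutions for those headsets (a sketch only; actual VR render targets typically run higher still, since runtimes supersample to compensate for lens distortion):

```python
# Per-eye panel resolutions (width, height) for the headsets named above.
headsets = {
    "Oculus Quest 2": (1832, 1920),
    "Valve Index": (1440, 1600),
    "HP Reverb G2": (2160, 2160),
}

four_k = 3840 * 2160  # a standard 4K display: 8,294,400 pixels

# Total panel pixels across both eyes, per headset.
totals = {name: 2 * w * h for name, (w, h) in headsets.items()}

for name, total in totals.items():
    print(f"{name}: {total:,} pixels ({total / four_k:.0%} of 4K)")
```

And that's before supersampling, and before considering that these pixels must be redrawn 90 or more times per second for comfortable VR, versus 60 fps on a typical flat-screen setup.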