Nvidia’s upcoming flagship, the RTX 4090, was tested in Cyberpunk 2077. It did a great job, but the results were far better with DLSS 3 enabled.
The card managed to surprise us in two ways. One, the maximum clock was higher than expected, and two, DLSS 3 actually managed to lower the card’s power draw by a considerable amount.
The card was tested in 1440p in a system with an Intel Core i9-12900K CPU, running the highest possible settings that Cyberpunk 2077 has to offer, meaning with ultra ray tracing enabled and on Psycho (max) settings. First, let’s look at how the GPU was doing without DLSS 3 enabled.
At the native resolution, the game was running at an average of 59 frames per second (fps) with a latency that hovered around 72 to 75 milliseconds (ms). The RTX 4090 was able to hit a whopping 2.8GHz clock speed, and that’s without overclocking — those are stock speeds, even though the maximum advertised clock speed for the RTX 4090 is just over 2.5GHz. This means an increase of roughly 13% without an overclock. During the demo, the GPU reached 100% utilization, but the temperatures stayed reasonable at around 55 degrees Celsius.
It’s a different story once DLSS 3 is toggled on, though. As Wccftech notes in its report, the GPU was using a pre-release version of DLSS 3, so these results might still change. For now, however, DLSS 3 is looking more and more impressive by the minute.
Enabling DLSS 3 also enables the DLSS Frame Generation setting, and for this test, the Quality preset was used. Once again, the GPU hit maximum utilization and a 2.8GHz boost clock, but the temperature was closer to 50 degrees Celsius than 55. The fps gains were nothing short of massive, hitting 119 fps with an average latency of 53ms. This means the frame rate doubled while the latency was reduced by 30%.
We also have the power consumption figures for both DLSS 3 on and off, and this is where it gets even more impressive. Without DLSS 3, the GPU was consuming 461 watts of power on average, and the performance per watt (Frames/Joule) was rated at 0.135 points. Enabling DLSS 3 brought the wattage down to just 348 watts, meaning a reduction of 25%, while the performance per watt was boosted to 0.513 — nearly four times that of the test without DLSS 3.
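For readers who want to check the arithmetic, the headline efficiency claims follow directly from the reported figures. This is just a back-of-the-envelope verification using the numbers quoted above:

```python
# Back-of-the-envelope check of the reported DLSS 3 efficiency figures.
# All input numbers are taken directly from the test results quoted above.

def pct_change(before, after):
    """Percentage change from `before` to `after` (negative = reduction)."""
    return (after - before) / before * 100

watts_native = 461   # average power draw, DLSS 3 off
watts_dlss3 = 348    # average power draw, DLSS 3 on
fpw_native = 0.135   # reported frames per joule, DLSS 3 off
fpw_dlss3 = 0.513    # reported frames per joule, DLSS 3 on

power_reduction = -pct_change(watts_native, watts_dlss3)
efficiency_gain = fpw_dlss3 / fpw_native

print(f"Power reduction: {power_reduction:.1f}%")   # ~24.5%, i.e. roughly 25%
print(f"Efficiency gain: {efficiency_gain:.1f}x")   # 3.8x, "nearly four times"
```

Both results line up with the claims in the report: roughly a quarter less power drawn, at nearly four times the frames per joule.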
Wccftech also ran the same test on an RTX 3090 Ti and found similar, albeit less dramatic, results. The older GPU still saw a boost in performance (64%) and a drop in power draw (10%), but its efficiency numbers are not as impressive, suggesting that DLSS 3 will offer a real upgrade over its predecessor.
The reason behind this unexpected difference in power consumption might lie in how the GPU is utilized with DLSS 3 enabled: much of the load shifts from the FP32 shader cores to the Tensor cores, freeing up the rest of the GPU and, as a result, cutting power consumption.
It’s no news that the RTX 4090 is one power-hungry card, so it’s good to see that DLSS 3 might be able to bring those figures down a notch or two. Now, all we need is a game that can fully take advantage of this kind of performance. Nvidia’s GeForce RTX 4090 is set to release on October 12 and will arrive with a $1,599 price tag. With less than a month left until its launch, we should start seeing more comparisons and benchmarks soon.
NVIDIA’s GeForce RTX 40 series GPUs won’t just rely on brute force to deliver high-performance visuals. The company has unveiled Deep Learning Super Sampling 3 (aka DLSS 3), a new version of its AI-based rendering accelerator. Rather than generating ‘only’ pixels, the third-gen technology can create entire new frames independently. It’s a bit like the frame interpolation you see (and sometimes despise) with TVs, although this is clearly more sophisticated — NVIDIA is improving performance, not just smoothing out video.
The technique relies on both fourth-gen Tensor Cores and an “Optical Flow Accelerator” that predicts movement in a scene by comparing two high-resolution frames and generating intermediate frames. As it doesn’t involve a computer’s main processor, the approach is particularly helpful for Microsoft Flight Simulator and other games that are typically CPU-limited. A new detail setting in Cyberpunk 2077 runs at 62FPS in 4K resolution using DLSS 2 in NVIDIA’s tests, but jumps beyond 100FPS with DLSS 3.
Roughly 35 apps and games will offer DLSS 3 support early on. This includes Portal RTX, older titles like The Witcher 3: Wild Hunt and releases based on Unreal Engine 4 and 5.
It’s too soon to say how well DLSS 3 works in practice. NVIDIA is choosing games that make the most of DLSS, and the technology might not help as much with less constrained titles. Nonetheless, this might be useful for ensuring that more of your games are consistently smooth. Provided, of course, that you’re willing to spend the $899-plus GPU makers are currently asking for RTX 40-based video cards.
All products recommended by Engadget are selected by our editorial team, independent of our parent company. Some of our stories include affiliate links. If you buy something through one of these links, we may earn an affiliate commission. All prices are correct at the time of publishing.
Nvidia’s Deep Learning Super Sampling (DLSS) has been an undeniable selling point for RTX GPUs since its launch, and AMD’s attempts to fight back haven’t exactly been home runs.
But what if FidelityFX Super Resolution (FSR) could grant the huge performance gains of DLSS without all the restrictions imposed by Nvidia? If that sounds too good to be true, I wouldn’t blame you. After all, Nvidia’s special sauce of machine learning wasn’t supposed to be easily replicated.
Well, hold on to your hat, because a modder recently discovered just how easily FSR can stand in for DLSS. And after trying out the solution myself, I’m more excited about the potential of FSR than ever.
What we have now
Before we get to the mod itself, it’s worth setting the stage for how we got here. FSR was AMD’s first attempt at a DLSS killer, and unfortunately, it left a bad taste in our mouths. Despite the rapid adoption of the first-generation FSR 1.0, its performance and image quality just didn’t cut it.
All that changed with the release of the technology’s second generation. I’ve tested FSR 2.0 in its launch title, Deathloop, and the results are clear: DLSS provides a slightly higher performance boost, but FSR 2.0 is almost identical in terms of image quality. Based on Deathloop, you should use DLSS if you can, but FSR 2.0 is a very close second if you don’t have a supported GPU.
My expectations were surpassed further when I tested God of War, seeing the margin with DLSS shrink even more. In fact, FSR 2.0 was actually around 4% faster than DLSS with the Ultra Performance preset. You’re not trading much of anything with image quality, either. Even at the intense Ultra Performance preset, it’s nearly impossible to spot any differences between FSR 2.0 and DLSS while playing.
This is the real deal. The only problem? FSR 2.0 may be technically available, but it’s not seeing the rapid adoption that the first version did. It’s available in only four games right now: Deathloop, Farming Simulator 22, God of War, and Tiny Tina’s Wonderlands. The upcoming list isn’t all that exciting, either, headlined by Hitman 3, Eve Online, and the recently delayed Forspoken.
Hence, the need for a seemingly impossible solution that takes the goodness of FSR 2.0 and widely expands its effect to as many titles as possible. And that’s where the fun begins.
A look into the future
About a month ago, modder PotatoOfDoom released an FSR 2.0 “hack” for Cyberpunk 2077. What the modder realized was that DLSS and FSR 2.0 require basically the same information — motion vectors, color values, and the depth buffer. That allowed PotatoOfDoom to create a simple instruction translation, using the DLSS backbone to send FSR 2.0 instructions. It’s like how Wine works for Windows games on Linux, according to the modder.
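The modder's observation can be sketched in miniature. To be clear, the names below are hypothetical and are not the real NGX/DLSS or FSR 2.0 APIs; the point is only that because both upscalers consume the same per-frame inputs, a thin translation layer can present a DLSS-shaped entry point and forward everything to FSR 2.0:

```python
# Hypothetical sketch of why a DLSS-to-FSR 2.0 shim is feasible: both
# upscalers consume the same per-frame inputs, so a drop-in layer can
# accept a DLSS-style call and repack it for FSR 2.0. The class and
# function names here are illustrative, not the real NGX or FSR APIs.

from dataclasses import dataclass

@dataclass
class FrameInputs:
    color: object           # low-resolution color buffer
    depth: object           # depth buffer
    motion_vectors: object  # per-pixel motion vectors

def fsr2_upscale(inputs: FrameInputs):
    """Stand-in for a real FSR 2.0 dispatch call."""
    return f"upscaled({inputs.color})"

def dlss_evaluate(color, depth, motion_vectors):
    """Shim with a DLSS-shaped entry point: the game thinks it is
    calling DLSS, but the same inputs are forwarded to FSR 2.0."""
    return fsr2_upscale(FrameInputs(color, depth, motion_vectors))

print(dlss_evaluate("frame_0", "depth_0", "mv_0"))  # upscaled(frame_0)
```

In the actual mod, this translation happens at the DLL level, which is why the game's own DLSS toggle is what enables FSR 2.0.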
I’ll circle back to what these similarities between DLSS and FSR 2.0 mean, but let’s get games out of the way first. I followed the instructions and was able to implement the mod in Cyberpunk 2077, Dying Light 2, and Doom Eternal — all games that don’t currently support FSR 2.0. Doom Eternal was the only game that struggled with the mod, blocking out the DLSS option in the settings menu entirely. That was a no-go.
But Cyberpunk 2077 and Dying Light 2 were an absolute treat. The mod isn’t quite as powerful as a native implementation, but it’s still very close. The difference is less than 10% at most, even with all of the settings cranked up at 4K (including the highest ray tracing options).
Image quality was just as good, even on this self-described hack. In a still image, Dying Light 2 actually looked slightly better with FSR 2.0, and it was nearly identical in Cyberpunk 2077. The main difference, as was the case in God of War and Deathloop, is that FSR 2.0 doesn’t handle distant fine detail as well. You can see that on the phone lines in Cyberpunk 2077 below. It’s damn close, though.
DLSS and FSR 2.0 look largely the same with a still image, but it’s the motion that matters. I saw heavy ghosting in Dying Light 2 that wasn’t present with DLSS or FSR 1.0, and flat textures cause some issues with masking.
Certain elements, like the smog from the sewer in the Cyberpunk 2077 screenshot below, don’t include motion vectors. FSR 2.0 and DLSS get around the issue by masking out the element (like in Photoshop) so it isn’t included in the supersampling. Unfortunately, they go about the masking in different ways, leading to the nasty pixelation with the FSR 2.0 hack that you can see below.
Even with those issues, it’s remarkable how close DLSS and FSR 2.0 are, both on a gameplay and a technical level. PotatoOfDoom summed up how much they share in an interview with Eurogamer: “I expected to work on [adding FSR 2.0] for several days, but was pleasantly surprised that it only took me a few hours to integrate.”
The point isn’t that you should necessarily go out and use this mod to add FSR 2.0 to every game. Rather, this mod reveals the deep similarities between DLSS and FSR 2.0 — something Nvidia might not want to readily admit.
There are a lot of similarities between DLSS and FSR 2.0, even where Nvidia’s machine learning is concerned. DLSS uses a neural network and FSR 2.0 uses a hand-tuned algorithm, but both are fed the same inputs and use the same overall pipeline to render the final output. The fact that PotatoOfDoom was able to develop one mod that works across several DLSS titles in a few hours is a testament to that.
The main issue now isn’t that DLSS is bad — it’s excellent, and you should use it if you can — but that the feature is exclusive to only a few expensive graphics cards. Even when GPU prices are falling, Nvidia’s low-end and midrange models continue to sell for above list price. And a big reason why is DLSS, even if it doesn’t need to be.
General-purpose solutions like FSR 2.0 and Unreal Engine’s TSR (temporal super resolution) are the way of the future. They work with basically all modern hardware, and developers consistently insist that they only take a few hours to get working.
DLSS doesn’t need to go away, but it would be nice to see Nvidia leverage its relationships with developers to get a general-purpose supersampling feature into games that already support DLSS. And no, Nvidia Image Scaling, which is basically FSR 1.0, doesn’t count.
FSR 2.0 is genuinely impressive, but game support is holding it back. Far more games support DLSS than even FSR 1.0, and the official list of four FSR 2.0 games is embarrassingly short. I’m not excited about many of the upcoming FSR 2.0 titles, either, with the list mostly comprised of older or smaller games.
PotatoOfDoom’s mod is a hopeful sign, but we need more FSR 2.0 games for it to even stand a chance against DLSS. It might be tempting to root for AMD here, but it’s important to remember that DLSS still has a minor lead and is supported in far more games. AMD has a lot of ground to cover, and FSR 2.0 isn’t being added to games at nearly the rate that FSR 1.0 was.
Still, it will be interesting to see how the dynamic between DLSS and FSR 2.0 adjusts over the rest of the year. AMD just released the FSR 2.0 source code in June, after all. For now, DLSS is still the way to go for its game support and slightly better image quality, but it’s not a selling point on an Nvidia GPU like it once was.
This article is part of ReSpec – an ongoing biweekly column that includes discussions, advice, and in-depth reporting on the tech behind PC gaming.
With more PlayStation games coming to PC, graphics card manufacturer Nvidia has been working on making the PC versions of titles better than their console counterparts. Horizon Zero Dawn is getting the DLSS treatment, while God of War will get the same, along with a suite of other graphical improvements.
Starting today, anyone with one of Nvidia’s beefier cards in their computer can play Horizon Zero Dawn on PC with Nvidia’s DLSS tech. DLSS, or Deep Learning Super Sampling, uses A.I. rendering to boost in-game frame rates without sacrificing perceived resolution. The technique lets players run their games at high resolutions, with maxed-out settings, or even with ray tracing enabled, without shedding too many frames. Horizon Zero Dawn doesn’t have ray tracing, but the game is quite demanding, and Nvidia claims that DLSS can boost its performance by “up to 50%.”
As for next year’s PC rerelease of God of War, the blockbuster title will receive numerous changes and improvements when it moves off of consoles. Along with Nvidia’s DLSS, anyone playing God of War on PC with an Nvidia graphics card will be able to use Nvidia Reflex, which reduces latency. The game will also have a full bevy of graphics settings options, letting players turn on high-resolution shadows, higher rendering resolutions, and more. And thanks to an uncapped frame rate, players can finally play God of War at 144 frames per second.
Nvidia also shared God of War‘s PC system requirements, revealing that the title won’t be too demanding to run. God of War is set to launch on PC on January 14, 2022.
NVIDIA has released a major update for its DLSS technology. With version 2.3 of the software, the company says the AI algorithm makes smarter use of motion vectors to improve how objects look when they’re moving. The update also helps to reduce ghosting, make particle effects look clearer and improve temporal stability. The latter has traditionally been one of the weakest aspects of the technology, so DLSS 2.3 represents a major improvement. As of today, 16 games feature support for DLSS 2.3. Highlights include Cyberpunk 2077, Deathloop and Doom Eternal.
If you don’t own an RTX GPU but still want to take advantage of the performance boost you can get from upscaling a game, NVIDIA has updated its Image Scaling technology to improve both fidelity and performance. Accessible through the NVIDIA Control Panel, the tool uses spatial upscaling to do the job. That means the result isn’t as clean as the temporal method DLSS uses, but the advantage is you don’t need special hardware. To that end, NVIDIA is releasing an SDK that will allow any GPU, regardless of make, to take advantage of the technology. In that way, NVIDIA says game developers can offer the best of both worlds: DLSS for the best possible image quality and NVIDIA Image Scaling for cross-platform support.
I just reviewed AMD’s new Radeon RX 6600, which is a budget GPU that squarely targets 1080p gamers. It’s a decent option, especially in a time when GPU prices are through the roof, but it exposed a trend that I’ve seen brewing over the past few graphics card launches. Nvidia’s Deep Learning Super Sampling (DLSS) tech is too good to ignore, no matter how powerful the competition is from AMD.
In a time when resolutions and refresh rates continue to climb, and demanding features like ray tracing are becoming the norm, upscaling is essential to run the latest games in their full glory. AMD offers an alternative to DLSS in the form of FidelityFX Super Resolution (FSR). But FSR isn’t a reason to buy an AMD graphics card, and DLSS is a reason to buy an Nvidia one even if it shouldn’t be.
Nvidia’s walled garden
Nvidia only offers DLSS on its last two generations of graphics cards: the RTX 20-series and 30-series. Walling off features like this isn’t new for Nvidia. For years, it restricted its G-Sync variable refresh rate technology to monitors that included a dedicated (and costly) proprietary module, instead of adopting the open FreeSync standard developed by AMD.
Similarly, many machine learning applications are built to run using Nvidia’s CUDA GPU computing platform, not the OpenCL platform that AMD cards use. Developers have fixed the problem in software libraries like TensorFlow, but there’s still a trend with these libraries: CUDA gets first priority.
That leaves us with DLSS, which is also a technology restricted only to Nvidia hardware. There’s a good reason why — DLSS uses an A.I. model that can only run on the Tensor cores on recent Nvidia graphics cards. Right now, AMD cards don’t have these dedicated A.I. accelerators, but it’s hard to imagine Nvidia taking them into consideration if they existed.
In fairness to Nvidia, the company has taken steps to break down its proverbial walls. For example, G-Sync now works with a range of FreeSync monitors that don’t include a dedicated module. The important thing to know is that Nvidia has traditionally developed new features with only its hardware in mind, while AMD usually takes an open-source approach.
That’s true for DLSS and FSR, too. The difference between DLSS and Nvidia’s other walled-off features is that it’s significantly better than FSR.
Performance parity, and why DLSS is too good to ignore
The massive asterisk hanging over FSR is DLSS. When AMD announced FSR, it looked like an open-source competitor to DLSS that could run on AMD and Nvidia cards alike. In reality, it’s an upscaling tool based on dated tech that manages to increase frame rates, but at a significant cost to image quality.
DLSS doesn’t have that problem. Both DLSS and FSR accomplish the same goal by upscaling a low-resolution image to a high-resolution one by filling in the missing pixels. The difference is that FSR uses a baked-in algorithm with a sharpening filter while DLSS uses an A.I. model that’s been trained on what the final image should look like. Basically, DLSS has a lot more information to work with, and Nvidia graphics cards have the A.I. accelerators to take advantage of it.
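To make the "filling in the missing pixels" idea concrete, here is a deliberately naive spatial upscaler. This is not AMD's actual algorithm (FSR uses edge-adaptive upscaling plus a sharpening pass); it's a minimal sketch of the category of technique:

```python
# Naive spatial upscaling sketch: double the resolution of a 1-D scanline
# by interpolating between neighboring pixels. FSR-style spatial upscalers
# are far more sophisticated (edge-adaptive, plus sharpening), but the core
# job is the same: invent the missing pixels from the low-res neighbors.

def upscale_2x(scanline):
    out = []
    for i, px in enumerate(scanline):
        out.append(px)
        if i + 1 < len(scanline):
            # Insert a new pixel halfway between the two known ones.
            out.append((px + scanline[i + 1]) / 2)
    return out

low_res = [0, 10, 20, 10]
print(upscale_2x(low_res))  # [0, 5.0, 10, 15.0, 20, 15.0, 10]
```

A purely spatial approach like this can only ever average what it sees, which is why a model trained on what the full-resolution image should look like has an inherent advantage.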
Making FSR open source was an inclusive move for AMD, but it was also a compromise. DLSS is a reason to buy an Nvidia graphics card given its image quality, and even if AMD restricted FSR to its own platform, it wouldn’t be enough to compete with the feature set of Team Green. You can see that in the recent Back 4 Blood, where DLSS holds up much better than FSR (even if FSR offers higher frame rates overall).
To be clear, I’m not advocating for another walled garden — I don’t like the fact that Nvidia restricts DLSS to its platform, either, and as Intel’s XeSS supersampling feature shows, it’s possible to develop this tech in an inclusive way. The point is that Nvidia isn’t going to develop DLSS for other hardware, but AMD could have developed FSR to go toe-to-toe with DLSS while sticking with an open-source approach.
But it may not stay that way for long. Intel is set to release its Arc Alchemist cards soon, which include XeSS. It works like DLSS, but Intel is also offering a general-purpose version that can run on a variety of hardware. AMD could have jumped on that opportunity but didn’t. It looks like Intel is filling the gap.
In the future, I hope to see AMD, Nvidia, and Intel reach performance and feature parity. At least then we don’t have one dominant graphics card maker resting on its laurels while the rest of the market tries to catch up. AMD has said it will continue working on FSR, and XeSS will be available early next year, so hopefully that shift is right around the corner.
Nvidia’s RTX features have been among the primary selling points of its graphics cards in recent years. But now, the mastermind behind those advanced graphics features works for one of Nvidia’s new rivals in the world of gaming graphics: Intel.
Nvidia RTX consists of two primary features: real-time ray tracing and Deep Learning Super Sampling (DLSS), both of which are critical for running the latest games with all the visual glitter turned on. DLSS in particular is the bedrock that has allowed ray tracing to flourish in video games, and it’s a big reason why Nvidia still holds an edge over AMD in the space. Now, Intel looks to be joining the fray.
Intel has now hired the person behind both technologies, Anton Kaplanyan, suggesting that Intel could be working on its own DLSS competitor for its upcoming graphics cards.
Anton Kaplanyan had a short but meaningful stint at Nvidia from 2015 to 2017, during which he helped design RTX ray-tracing hardware and DLSS.
“After the hardware was done, my Nvidia Research colleagues and I realized that the hardware performance would not suffice for real-time visuals, so we started developing a completely new direction of real-time image reconstruction methods,” Kaplanyan wrote in a blog post.
Intel could be working on a similar technology for its upcoming graphics cards — the blog post is careful not to mention DLSS by name, after all. Kaplanyan’s hire is, at least in part, based on his experience with graphics and machine learning. “New differentiating technologies in graphics and machine learning is the missing cherry on the cake,” Kaplanyan wrote.
That would make sense for Intel. AMD has already fired back at Nvidia with its competing FidelityFX Super Resolution technology, and some recent job postings suggest Microsoft is working on a similar feature. With Intel’s DG2 graphics card on the horizon, the company looks like it’s ready to play ball with the latest graphics technologies.
Intel is forming an all-star roster of graphics experts. In 2017, the company picked up Raja Koduri, who’s known for working in AMD’s Radeon division on the Polaris, Vega, and Navi architectures. Koduri now heads up Intel’s graphics and software sector, leading the charge on the company’s first foray into desktop graphics cards.
Kaplanyan is likely a key part of that strategy, aiding in the development of ray tracing and the software it requires to run in real time. Before joining Intel, Kaplanyan worked as a researcher at Facebook for the company’s virtual reality (VR) endeavors. During that time, Kaplanyan published a paper on neural supersampling, which looks an awful lot like DLSS.
The future of Intel’s graphics department looks bright, assuming the pieces fall in place as they should. With ray tracing pushing graphics more than ever before, as well as the rise of high-resolution and high refresh rate monitors, a supersampling method is essential.
“I think we are at the edge of a new era in graphics — an era where visual computing will become more distributed, more heterogeneous, more power-efficient, more accessible, and more intelligent,” Kaplanyan wrote.
Nvidia just made the Deep Learning Super Sampling (DLSS) software development kit (SDK) freely available to developers. The move is likely a response to AMD’s FidelityFX Super Resolution (FSR) technology, and it shows Nvidia knows that the feature’s superior quality alone isn’t enough to compete with FSR.
Although the move certainly makes DLSS a more equal competitor to FSR, it alone isn’t enough to cement DLSS as the go-to upscaling feature. Nvidia has a commanding position with DLSS at the moment, and by borrowing a key feature from FSR, it could make AMD’s upscaling technique obsolete. Here’s how.
DLSS is winning right now
In the battle between DLSS and FSR, Nvidia’s feature is already winning. That’s not because it’s inherently better than FSR, but because Nvidia has been working on the technology for nearly three years. In that time, the list of supported DLSS titles has continued to grow, despite Nvidia asking developers to apply to use the technology.
Within days of FSR arriving on AMD’s GPUOpen platform, for example, Marvel’s Avengers received the feature, and AMD didn’t even announce the game beforehand. This rapid adoption is likely what triggered Nvidia to make its SDK freely available to developers, especially as adopters of the feature claimed that AMD’s technology was easier to work with.
DLSS is already winning in the games race, and the move to make the SDK readily available greases the wheels of adoption. The latest SDK also brings Linux support, which wasn’t previously available with DLSS. FSR has worked on Linux since it launched.
For Nvidia, it’s not about DLSS winning against FSR. It’s about maintaining a position it has already built for itself over a few years, which shouldn’t be hard to do given the quality that DLSS provides over FSR, particularly with more demanding upscaling modes.
FSR can trade blows with DLSS at the highest-quality settings. That’s not true of the lower-quality settings, however. As the internal render resolution shrinks, the problems with FSR start to become clear. Although quality certainly drops with DLSS as the internal render resolution does, Nvidia’s upscaling tech still holds up much better than FSR in the more demanding modes.
There are two main reasons for this. The first is the artificial intelligence (A.I.) training that DLSS uses. Nvidia trains a generalized A.I. model offline on high-quality reference images, which gives the upscaling algorithm more information to work with. Although that extra information isn’t as important in the high-quality modes, it becomes essential in the low ones.
In addition, DLSS uses motion vectors while FSR doesn’t. This temporal data allows DLSS to use information from previous and future frames to accurately track moving objects in a scene, reducing visual artifacts. This is especially noticeable for distant objects, like molten steel pouring in Necromunda: Hired Gun and the flicker of wispy clouds in Death Stranding.
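As a rough illustration of what motion vectors buy a temporal upscaler, consider this toy 1-D sketch: each pixel looks up the history sample its motion vector points back to and blends it with the freshly rendered sample, accumulating detail over time. Real implementations work on 2-D buffers with sub-pixel jitter and history rejection; this is just the core idea:

```python
# Toy sketch of temporal accumulation, the idea behind DLSS's use of
# motion vectors: each pixel fetches the history sample its motion vector
# points back to and blends it with the new sample. 1-D and heavily
# simplified; real upscalers also reject stale history to avoid ghosting.

def temporal_blend(current, history, motion, alpha=0.5):
    """For each pixel, fetch the history sample its motion vector points
    back to and blend it with the freshly rendered sample."""
    out = list(current)
    for x in range(len(current)):
        prev_x = x - motion[x]  # where this surface was last frame
        if 0 <= prev_x < len(history):
            out[x] = alpha * history[prev_x] + (1 - alpha) * current[x]
    return out

history = [2.0, 4.0, 6.0, 8.0]   # accumulated result from the last frame
current = [4.0, 6.0, 8.0, 10.0]  # new, noisier samples
motion  = [0, 1, 1, 1]           # everything but pixel 0 shifted right by one

print(temporal_blend(current, history, motion))  # [3.0, 4.0, 6.0, 8.0]
```

Without motion vectors, a spatial upscaler like FSR 1.0 has no principled way to reuse last frame's data, which is exactly where the artifacts described above come from.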
FSR is a very close approximation of DLSS. Once the tech is pushed to the limit, however, it’s clear that FSR is just an approximation. DLSS remains the benchmark due to its ability to take advantage of dedicated hardware, the A.I. model, and temporal information.
Quality alone may not be enough to give DLSS staying power over FSR, though.
The low-end linchpin
The DLSS versus FSR discussion really isn’t relevant on high-end hardware. Simply turning on the feature to one of the higher quality modes will render excellent image quality and a significant boost in performance. Low-end hardware is the linchpin to FSR’s rapid adoption, and it remains the most potent threat to DLSS.
Budget PC builders, who most need access to an upscaling feature, have been effectively priced out of DLSS. This is all the more frustrating because DLSS shows its clearest strengths at the lower quality settings that most benefit low-end hardware.
Even with a clear advantage in game support and quality, DLSS won’t be able to maintain its lead over FSR if budget hardware isn’t accounted for. FSR is a generalized solution that works pretty much regardless of the hardware you have, making it an obvious choice for users with inexpensive graphics cards or APUs.
Unfortunately, Nvidia can’t just adopt other graphics cards into DLSS. The feature requires Tensor cores, and they’re only available on the last two generations of Nvidia GPUs. However, Nvidia could use its experience with DLSS, A.I., and temporal upscaling to offer a feature that accounts for players who don’t have access to RTX graphics cards.
This is all the more important given the ongoing problems with finding a graphics card. When options are few and far between, builders are going to reach for whatever’s available. And when developers see that shift, they’ll be more likely to adopt a feature like FSR that works for their player base.
With DLSS, Nvidia has already optimized for the future. We’ve seen things like 8K gameplay above 60 frames per second thanks to the feature, accelerating what’s possible with PC gaming. In order to stay competitive with FSR, though, Nvidia also needs to optimize for the past.
Over the last month, Nvidia’s Deep Learning Super Sampling (DLSS) and AMD’s FidelityFX Super Resolution (FSR) have been in a battle for the limelight. Both tools offer upscaling in supported games to deliver features like ray tracing at high resolutions and frame rates. There might be a new competitor entering the ring, though, and it comes from Microsoft.
Two job postings (spotted by TechSpot) hint at a possible DLSS competitor from Microsoft. The first job posting is for a Senior Software Engineer in the Xbox division of the company.
“Xbox is leveraging machine learning to make traditional rendering algorithms more efficient and to provide better alternatives. The Xbox graphics team is seeking an engineer who will implement machine learning algorithms in graphics software to delight millions of gamers,” the job description reads.
The other posting for a Principal Software Engineer for Graphics is a bit more general, but it specifically mentions “state-of-the-art GPU capabilities on Xbox and Windows for AAA game developers,” as well as experience with machine learning and shader compilation.
A DLSS competitor from Microsoft would be good news for gamers. It’s possible Microsoft could deliver machine learning-assisted upscaling through DirectX. Microsoft’s DirectML library already helps optimize GPU resources for machine learning, and with both job postings referring to gaming on Xbox and Windows, Microsoft could be looking for ways to leverage that library in its gaming sector.
That would make sense given Microsoft’s renewed interest in the gaming market. The recently announced Windows 11 includes a swath of Xbox features, including the Direct Storage API for faster loading times and Auto HDR.
Microsoft’s version may not look the same as DLSS or FSR, however. Microsoft already allows developers to use FSR for game development on Xbox consoles, which it’s able to do thanks to FSR’s open-source approach. DLSS, on the other hand, requires proprietary hardware from Nvidia and is, according to at least one developer, more difficult to work with than FSR.
This upscaling feature would likely come through the DirectX interface, which would give developers more options for upscaling. That should mean the feature won’t require any specific hardware, which is a big deal for aging GPUs and APUs that don’t have the power to stand up to modern AAA games.
As with DLSS and FSR, though, the longevity of Microsoft’s implementation will come down to image quality and performance. In our FidelityFX Super Resolution review, we found that it delivers a solid performance increase at 4K, though it struggles with image quality at the more aggressive upscaling modes. DLSS produces a better result overall, but it requires an Nvidia graphics card.
Where Microsoft’s version, if it exists, falls on that spectrum remains to be seen. If it is coming, Xbox and Windows gamers have a lot to look forward to.
In almost every industry, competition drives innovation up and prices down, and in PC hardware, there’s little as hotly competitive as the graphics card market. But what’s good for the gamer is good for the gander, too: while AMD may be riding high on a wave of publicity, goodwill, and mindshare after the release of its FidelityFX Super Resolution upscaling algorithm, Nvidia is more than happy to coast atop that wave. Nvidia has been trying for a long time to make upscaling relevant, and by releasing a solution that’s available to almost everyone, AMD has done it instead.
But the real strength of FSR is that almost anyone can use it, whether you’re running a cutting-edge 4K GPU or an entry-level card from five years ago. It works on Nvidia cards too, and there are even some claims of it working on Intel GPUs. Consoles could follow, giving FSR an immediate and enormous potential player base.
That’s very different from Nvidia’s DLSS strategy. Although Nvidia’s RTX cards have grown more popular over the past three years, RTX 2000- and 3000-series cards are still very much in the minority. Nvidia’s own GTX 10-series and 16-series cards dominate the Steam Hardware Survey, and though Nvidia has been magnanimous enough to give them the ability to stutteringly attempt ray tracing, it’s kept DLSS firmly locked behind a hardware paywall that in 2021 has grown to truly ridiculous heights.
Which is why Nvidia must be thanking its lucky stars that AMD launched FSR to such great success. It now has a fight on its hands in convincing gamers that not only is DLSS better than FSR, but that it’s worth buying a new graphics card for.
That’s a fight that it wants to have, because it believes it can win.
Thanks to FSR, gamers can now get a taste of what upscaling technology can do, especially at the low end of the market, which Nvidia has stubbornly refused to cater to (at least for now). But when GPU prices finally stabilize and we can all afford to buy graphics cards again, those gamers looking to upgrade will far better understand the potential of DLSS and may just opt to pay for the privilege.
Nvidia Ultra Quality – More than just a name
Following the release of FSR and its subsequent near-universal praise, Nvidia has been quick with a retort of its own, pushing the story that DLSS is available in more and bigger games, like Fortnite, Minecraft, and Doom Eternal.
It has also stressed the overall greater image quality of DLSS, a claim few would argue with. But Nvidia may improve it further still, with rumors of a new Ultra Quality mode set to launch soon. Discovered in a recent release of the Unreal Engine 5 documentation, this preset seems likely to use a higher input resolution than the existing Quality preset, potentially producing image quality closer to native, though without quite the same performance benefits as the lower settings.
It seems awfully convenient that this is coming about shortly after the release of FSR, which has an Ultra Quality setting of its own. Whether Nvidia was surprised by FSR’s image quality and feels the need to improve its own, or merely doesn’t like FSR having a mode that sounds better than the existing DLSS Quality mode, will likely remain a mystery.
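For context, what these preset names actually mean is a per-axis scale factor: the game renders at a lower internal resolution and the upscaler reconstructs the output. A minimal sketch of that arithmetic, using the scale factors AMD publishes for FSR’s presets (the rumored DLSS Ultra Quality factor is unannounced, so it is deliberately not included):

```python
# Per-axis scale factors for FSR's quality presets, per AMD's public
# documentation. Higher factor = lower internal render resolution.
FSR_PRESETS = {
    "Ultra Quality": 1.3,
    "Quality": 1.5,
    "Balanced": 1.7,
    "Performance": 2.0,
}

def render_resolution(output_w, output_h, scale):
    """Internal resolution a preset renders at before upscaling to the output."""
    return round(output_w / scale), round(output_h / scale)

# Example: internal resolutions behind a 4K (3840x2160) output.
for name, scale in FSR_PRESETS.items():
    w, h = render_resolution(3840, 2160, scale)
    print(f"{name}: renders {w}x{h}, upscales to 3840x2160")
```

At 4K, for instance, the Quality preset renders internally at 2560x1440 and Performance at 1920x1080, which is why the more aggressive modes trade image quality for frame rate.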
But there’s no denying that DLSS is a smarter and more in-depth upscaling process. Its use of motion vectors gives it a real advantage over FSR, and though Nvidia’s technique does have its own unique visual artifacts to deal with, they aren’t the kind of heavy-handed sharpening that FSR users experience on most of the settings outside Ultra Quality.
You could certainly make the argument that DLSS is the premium upscaler available today. The problem still remains, however, that it’s very much a premium feature, for gamers who can afford to pay for it.
It’s no good if few people can use it
DLSS is great. It’s effective, looks good, offers great performance advantages, and it’s getting better all the time. But – and I know Nvidia feels I’m wrong on this one – it’s still barely available to anyone. Yes, the RTX 2060 is a popular card, but the RTX 2000 and 3000 series combined account for barely 15% of all Steam gamers.
If AMD had made FSR available to only its own GPUs, it would be in a similar state. But it didn’t, and it isn’t. FSR works on everything, even cards it wasn’t designed to benefit, like Nvidia’s 900-series Maxwell GPUs. DLSS works on the latest and the greatest, which is great for the greatest of gamers with the grandest of wallets, but it’s largely useless for anyone else.
FSR, though, isn’t. Whether you use it or not, whether Nvidia will optimize for it or not, FSR is good for DLSS, and it’s good for Nvidia. FSR is going to put upscaling on the map and Nvidia knows all too well that it has the potential to lead people right to its new-generation GPUs. If Nvidia wants DLSS to remain relevant, it may well need it to.
Nvidia told Digital Trends that we can expect tensor cores in all GeForce GPUs moving forward, and that if we look at recent developments on the laptop front, we can expect something similar on desktop before long. That means more affordable RTX 3050 and 3050 Ti GPUs are likely going to hit desktop in the not-too-distant future.
That’s great. It’s exactly what DLSS needs to help those it has the most potential to benefit. Nvidia will no doubt keep tensor core counts low and retain the biggest performance benefits for its premium customers, but it’s a start. It doesn’t seem likely that Nvidia will repeat its 16-series mistake of keeping its entry-level customers firmly locked in the past.
The upscaler wars are here, but should you care?
Not at all. This is one of those wonderful cases where the actions of both AMD and Nvidia are going to benefit almost every gamer. FSR is forcing Nvidia’s hand to make DLSS more inclusive and of higher quality. That in turn will keep AMD hunting for the perfect upscaler, with FSR offering a good-enough solution that looks set to redefine what upscaling is for most gamers.
If I had to put money on which will be more popular long term, I’d say FSR, thanks to its likely use on consoles, its more open standard approach, and its relative simplicity. But I don’t think DLSS is going the way of HairWorks any time soon. If all goes to plan, future Nvidia fans will be able to enable DLSS in the games they want, giving them a great quality and performance boost. For everything else, FSR will be there, giving every gamer those much-needed extra frames per second they’re always chasing.