Categories
Computing

Nvidia’s RTX 4000 get new specs, and it’s not all good news

Nvidia’s upcoming Ada Lovelace graphics cards just received a new set of rumored specifications, and this time around, it’s a bit of a mixed bag.

While the news is good for one of the GPUs, the RTX 4070 actually received a cut when it comes to its specs — but the leaker says this won’t translate to a cheaper price.

And TBP, 450/420?/300W.

— kopite7kimi (@kopite7kimi) June 23, 2022

The information comes from kopite7kimi, a well-recognized name when it comes to PC hardware leaks, who has just revealed an update to the specifications of the RTX 4090, RTX 4080, and the RTX 4070. While we’ve already heard previous whispers about the specs of the RTX 4090 and the RTX 4070, this is the first time we’re getting predictions about the specs of the RTX 4080.

Let’s start with the good news. If this rumor is true, the flagship RTX 4090 seems to have received a slight bump in the core count. The previously reported number was 16,128 CUDA cores, and this has now gone up to 16,384 cores, which translates to an upgrade from 126 streaming multiprocessors (SMs) to 128. As for the rest of the specs, they remain unchanged — the current expectation is that the GPU will get 24GB of GDDR6X memory across a 384-bit memory bus, as well as a 21Gbps memory speed.
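The SM and CUDA core figures in these leaks are self-consistent if you assume Ada Lovelace keeps Ampere's ratio of 128 CUDA cores per SM — an assumption, since Nvidia hasn't published Ada's SM layout. A quick sanity check of the leaked numbers:

```python
# Sanity check of the rumored RTX 4090 figures, assuming Ada keeps
# Ampere's 128 CUDA cores per streaming multiprocessor (SM).
CORES_PER_SM = 128

def cuda_cores(sm_count: int) -> int:
    """Total CUDA cores for a given SM count."""
    return sm_count * CORES_PER_SM

print(cuda_cores(126))  # earlier RTX 4090 rumor: 16,128 cores
print(cuda_cores(128))  # updated RTX 4090 rumor: 16,384 cores
print(cuda_cores(144))  # a hypothetical full AD102 die: 18,432 cores
```

The same ratio explains why a full 144-SM AD102 would land at 18,432 cores, the number that surfaces in later rumors about the top-end card.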

The RTX 4090 includes the AD102 GPU, which maxes out at 144 SMs, but it seems unlikely that the RTX 4090 itself will ever reach such heights. The full version of the AD102 GPU is probably going to be found in an even better graphics card, be it a Titan or simply an RTX 4090 Ti. It’s also rumored to have monstrous power requirements. This time around, kopite7kimi didn’t reveal anything new about that card, and as of now, we still don’t know for a fact that it even exists.

Moving on to the RTX 4080 with the AD103 GPU, it’s said to come with 10,240 CUDA cores and 16GB of memory. However, according to kopite7kimi, it would rely on GDDR6 memory as opposed to GDDR6X. Seeing as the leaker predicts a memory speed of 18Gbps, that would actually make it slower in that regard than the RTX 3080 with its 19Gbps memory. The core count is exactly the same as in the RTX 3080 Ti. So far, this GPU doesn’t sound very impressive, but it’s said to come with a much larger L2 cache that could potentially offer an upgrade in gaming performance versus its predecessors.


When it comes to the RTX 4070, the GPU was previously rumored to come with 12GB of memory, but now, kopite7kimi predicts just 10GB across a 160-bit memory bus. It’s said to offer 7,168 CUDA cores. While it’s certainly an upgrade over the RTX 3070, it might not quite be the generational leap some users are hoping for. It’s also supposedly not going to receive a price discount based on the reduction in specs, but we still don’t know the MSRP of this GPU, so it’s hard to judge its value.

Lastly, the leaker delivered an update on the power requirements of the GPUs, which have certainly been the subject of much speculation over the last few months. The predicted TBP for the RTX 4090 is 450 watts. It’s 420 watts for the RTX 4080 and 300 watts for the RTX 4070. Those numbers are a lot more conservative than the 600 watts (and above) that we’ve seen floating around.

What does all of this mean for us — the end users of the upcoming RTX 40-series GPUs? Not too much just yet. The specifications may yet change, and although kopite7kimi has a proven track record, they could be wrong about the specs, too. However, as things stand now, only the RTX 4090 seems to mark a huge upgrade over its predecessor, while the other two represent much more modest changes. It remains to be seen whether the pricing will reflect that.

Editors’ Choice




Repost: Original Source and Author Link


Nvidia’s GeForce RTX 3090 Ti GPU Set for January Launch

Nvidia is reportedly set to add three new variations to its GeForce RTX 30 series of GPUs, with the flagship RTX 3090 Ti apparently due for a release next month.

According to an embargoed document uncovered by VideoCardz, the highly anticipated RTX 3090 Ti GPU will be released on January 27, 2022. Also expected to be released on that same date is the GeForce RTX 3050 8GB graphics card. With CES 2022 around the corner, expect Nvidia to formally introduce these video cards at the event.

Elsewhere, Nvidia is said to be planning to announce the upgraded RTX 3070 Ti 16GB model next week on December 17, while a launch to consumers is scheduled for January 11. As for its specifications, VideoCardz notes that the GPU will have the same CUDA core count and clock speeds as the 8GB model.

The card will also come with 16GB of GDDR6X memory, according to Wccftech, which means the standard GDDR6 modules found on the current GeForce RTX 3070 graphics card are being upgraded.

Nvidia’s GeForce RTX 3050 8GB, meanwhile, is rumored to deliver 3072 CUDA cores in 24 SM units through the GA106-150 GPU, joined by 8GB of GDDR6 memory. Ultimately, such specs would make the card an attractive option in the mainstream segment of the market.

As for the powerful RTX 3090 Ti, which is obviously geared toward enthusiasts, previous rumors have given us an insight into what to expect from the card. It’s expected to feature 21Gbps GDDR6X memory based on 2GB GDDR6X memory modules. Notably, this will allow the GPU to sport 1TBps of bandwidth.
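That 1TBps figure follows directly from the per-pin memory speed and the bus width — assuming the RTX 3090 Ti keeps the 384-bit bus used on other GA102 cards, which the rumors imply but don't state outright:

```python
def bandwidth_gb_s(speed_gbps_per_pin: float, bus_width_bits: int) -> float:
    """Peak memory bandwidth in GB/s: per-pin data rate times bus width in bytes."""
    return speed_gbps_per_pin * bus_width_bits / 8

# 21Gbps GDDR6X across an assumed 384-bit bus:
print(bandwidth_gb_s(21, 384))  # 1008.0 GB/s, i.e. roughly 1TBps
```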

The card will support next-generation standards such as PCIe Gen 5.0 and draw power through a new 16-pin connector, with a 450W TDP reflecting its increased power consumption. The RTX 3090 Ti is set to be Nvidia’s first consumer video card to utilize the full GA102 GPU with all 10,752 CUDA cores enabled.

Nvidia’s keynote at CES 2022 takes place on January 4, aptly providing it with an opportunity to unveil the aforementioned Ampere graphics cards. Getting your hands on these upcoming GPUs, however, is another discussion entirely due to the current worldwide shortage. Nvidia recently stated that long-term agreements with manufacturers meant supplies could improve during the second half of 2022.



Nvidia Confirms 12GB RTX 2060, Arriving Next Week


Nvidia announced a 12GB variant of its RTX 2060 graphics card, and it should arrive on December 7. The announcement follows months of rumors and speculation that Nvidia could revitalize its popular last-gen card to ease the burden of the GPU shortage. It brings with it a significant memory upgrade.

The announcement didn’t come at a big tech event or closed-door press briefing. Instead, Nvidia revealed the card through the patch notes of GeForce driver 497.09. Originally, this seemed like a mistake. Nvidia didn’t acknowledge the card through a blog post about the driver, instead burying it on the fourth page of the patch notes (direct PDF link).


It looks like this silent announcement was Nvidia’s plan all along. You can find the 12GB RTX 2060 alongside the standard 6GB RTX 2060 on Nvidia’s product page. The updated version includes twice the amount of video memory as the base model, as well as some boosts to core count and clock speed.

The 12GB RTX 2060 matches the specs of the RTX 2060 Super, but note that Nvidia isn’t using the Super branding on the new card. The 12GB model includes 2,176 CUDA cores, a base clock speed of 1,470MHz, and a boost clock speed of 1,650MHz — the same specs as the RTX 2060 Super.

The lack of Super branding is important. Nvidia hasn’t announced the pricing of the card yet, but the company says that the price will reflect that the 12GB model is a premium version of the $349 RTX 2060. The RTX 2060 Super launched for $399, so hopefully this 12GB model will be below that mark.

Originally, we viewed a 12GB RTX 2060 as nothing but hot air. At the beginning of 2021, Nvidia introduced some additional RTX 2060 Super supply into the market. The company didn’t make a fuss about it publicly, idly letting the cards bolster supply while it focused on manufacturing additional RTX 30-series cards.

Since then, rumors of a 12GB RTX 2060 variant have run amok. There wasn’t much to lend credibility to these rumors outside of the murmurs from Twitter and YouTube leakers. That was until Gigabyte, one of Nvidia’s desktop graphics card partners, filed several listings for 12GB RTX 2060 graphics cards.

We should see various 12GB RTX 2060 models next week. It seems Nvidia is intentionally launching the card under the radar, perhaps in a bid to deter scalpers and bots from snatching up the cards when they launch.

That could be why Nvidia hasn’t revealed the pricing. Given the shortages of components, the list prices set by Nvidia and AMD are unrealistic at best and inaccurate at worst. That’s something we saw with the RX 6600 XT, where some models launched for $200 more than the price set by AMD. No official pricing from Nvidia means that board partners are free to set a reasonable price based on the cost of components.

Although these are positive efforts, we’re not sure how much they’ll help alleviate the GPU shortage. Even the base RTX 2060 has been subject to price increases, so we’ll be waiting to see if this new 12GB model changes that.



Gigabyte Inadvertently Confirms 12GB Nvidia RTX 2060 Rumors

When rumors of a 12GB Nvidia RTX 2060 Super refresh started making the rounds, we said that they probably weren’t true. But it looks like we may have been wrong. Graphics card maker Gigabyte filed a new listing with the Eurasian Economic Commission (EEC) that inadvertently confirms this card’s existence.

Twitter user @momomo_us uncovered the listing, which lists four Gigabyte graphics cards. Although the listing doesn’t call out the 12GB RTX 2060 Super by name, the model numbers all line up with previous Gigabyte RTX 2060 cards, with one notable change — 12GB of RAM. The GV-N2060WF2OC-6GD (Gigabyte’s Windforce RTX 2060), for example, is listed as GV-N2060WF2OC-12GD.


The listing comes amid mounting evidence for a refresh to Nvidia’s last-gen card. On November 14, a day before the listing went live, YouTube channel Gamers Nexus published a video saying that a 12GB RTX 2060 Super was on the way. This isn’t a channel that normally leaks new releases, but that, combined with the EEC filing and murmurs from around the community, has a 12GB RTX 2060 Super looking likely.

A dedicated leaking channel, Moore’s Law is Dead, revealed in October that the card would arrive in 2022 to take on low-end AMD RDNA 2 graphics cards. Rumors of Nvidia reintroducing the RTX 2060 in some form date back to January 2021, and they haven’t stopped since.

The question: Why? Nvidia released the RTX 3080 more than a year ago, so it’s a strange move to resurrect a GPU that’s more than two years old. There could be a good reason to bring it back, though. It’s no secret that graphics cards are tough to find right now, and Nvidia could be splitting its manufacturing efforts to get more cards out in the wild.

Evidence of the GPU shortage emerged when it was revealed that Nvidia was having manufacturing yield issues with its RTX 30-series graphics cards. Nvidia chose Samsung as its manufacturing partner, and reports circulating shortly after the launch showed that the manufacturer produced fewer usable chips than expected.

Samsung didn’t build the RTX 2060 Super — chipmaker TSMC did. TSMC is the semiconductor company behind AMD’s Ryzen 5000 processors and Radeon RX 6000 graphics cards, as well as a longtime partner for Nvidia. It looks like Nvidia could be splitting its manufacturing to bypass supply chain issues.

That’s something the company did with its GTX 10-series GPUs. The range started on TSMC’s 16nm manufacturing process, but Nvidia eventually moved to Samsung’s 14nm process. Reintroducing the RTX 2060 Super allows Nvidia to quickly produce new cards on a node the company is already familiar with.

The strange bit is the 12GB of video memory. The RTX 2060 Super originally launched with 6GB, and doubling that to 12GB probably won’t do much for gaming performance. That’s something Nvidia’s RTX 3060 proved — even with 12GB of video memory, which is more than the RTX 3080, it performs below other cards in the range.

Unfortunately, an RTX 2060 Super refresh may not be enough to alleviate supply chain issues. Nvidia has been clear that it expects the GPU shortage to continue throughout 2022, so hunting down a graphics card will continue to be a practice in patience.

It’s also possible that the 12GB RTX 2060 Super won’t ever see the light of day. Although multiple sources have confirmed the existence of the card, it’s possible that Nvidia has shelved the idea. That’s something Nvidia already did with the 20GB RTX 3080 Ti, which was reportedly canned earlier this year.



New HP Omen Laptop Comes With Alder Lake and RTX 3080 Ti

Based on a recent Geekbench test, it seems that HP may be releasing a laptop fully decked out with the latest components. The notebook comes with not just the newest Intel Alder Lake-P processor, but also an Nvidia GeForce RTX 3080 Ti graphics card.

If the rumors prove to be true, the laptop will be released with two pieces of hardware that are not yet obtainable on the consumer market, as both the GPU and the CPU are unavailable in their laptop forms as of yet.


The CPU used in this benchmark is the Intel Core i7-12700H, which is an Alder Lake laptop CPU. As reported by Wccftech last month, this CPU should have 14 cores, six of which are Golden Cove and eight of which are Gracemont. It runs on a 2.45GHz base clock and can be boosted up to 4.2GHz. This is combined with a 24 MB L3 cache and a fairly conservative TDP of 35 to 45 watts.

The graphics card is the still-unreleased mobility version of the RTX 3080 Ti, and it may come in a standard and a Max-Q model. It may be based on a new Ampere GA103 chip and should feature 58 compute units, adding up to a total of 7,424 CUDA cores. The GPU has a base clock speed of 1395MHz and 16GB of GDDR6 memory. The RTX 3080 Ti should also have a memory speed of around 12Gbps with a 256-bit bus, as well as a TDP between 150W and 200W.

Although the desktop version of RTX 3080 Ti is markedly better than the RTX 3080, comparing the desktop RTX 3080 to the mobility RTX 3080 Ti reveals that the former will still reign supreme. The desktop RTX 3080 scored 132,909 in a Vulkan test, compared to the RTX 3080 Ti for laptops with just 90,114.

Interestingly, the RTX 3080 Ti also scored less than the laptop version of the RTX 3080, although the difference is negligible. The card outperformed the Max-Q version of the RTX 3070 for laptops and the previous-gen RTX 2080 for desktops.

Intel Alder Lake pin layout.

The Intel Core i7-12700H was also compared to some other current CPUs in this Geekbench test. It scored 1,328 in single-core operations and 10,517 in multi-core. Unsurprisingly, it was vastly outperformed in single-core by the Intel Core i9-12900HK (1,851) and the Apple M1 Max chip (1,785).

It was also beaten by the previous generation of processors for laptops, including the Core i9-11980HK and the Ryzen 5980HX. However, all of these chips are more on the premium end of the scale than the Core i7-12700H. The new chip performed more favorably in multi-core operations, beating the Core i9-11980HK and the Ryzen 5980HX.

It’s important to remember that this hardware is still unreleased and the benchmarks may change. Drivers often play a part in the performance of components prior to their official release.

The exact release date for both the laptop and the GPU remains unknown, but it’s likely that we will learn more during CES 2022 in January.



NVIDIA’s new ‘GeForce Now RTX 3080’ streams games at 1440p and 120 fps

NVIDIA has unveiled its next-generation cloud gaming platform called GeForce Now RTX 3080 with “desktop-class latency” and 1440p gaming at up to 120 fps on PC or Mac. The service is powered by a new gaming supercomputer called the GeForce Now SuperPod and costs double the price of the current Priority tier.

The SuperPod is “the most powerful gaming supercomputer ever built,” according to NVIDIA, delivering 39,200 TFLOPS, 11,477,760 CUDA cores and 8,960 CPU cores. NVIDIA said it will provide an experience equivalent to 35 TFLOPS, or triple the Xbox Series X, roughly equal to a PC with an 8-core CPU, 28GB of DDR4-3200 RAM and a PCIe Gen 4 SSD. 
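The “triple the Xbox Series X” claim checks out against the console’s widely quoted 12.15 TFLOPS figure, and dividing the SuperPod’s total throughput by the per-session number gives a rough — purely illustrative, since NVIDIA doesn’t break sessions down this way — sense of scale:

```python
XBOX_SERIES_X_TFLOPS = 12.15  # Microsoft's quoted GPU throughput
SESSION_TFLOPS = 35           # NVIDIA's per-session equivalent
SUPERPOD_TFLOPS = 39_200      # quoted total SuperPod throughput

# How many Series X consoles one session is worth:
print(SESSION_TFLOPS / XBOX_SERIES_X_TFLOPS)  # ~2.88, i.e. roughly triple

# A naive upper bound on concurrent 35-TFLOPS sessions per SuperPod:
print(SUPERPOD_TFLOPS / SESSION_TFLOPS)       # 1120.0
```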

NVIDIA launches GeForce Now RTX 3080-class gaming at up to 1440p 120fps


As such, you’ll see 1440p gaming at up to 120fps on a Mac or PC, and even 4K HDR on a Shield, though NVIDIA didn’t mention the refresh rate for the latter. It’ll also support 120 fps on mobile, “supporting next-gen 120Hz displays,” the company said. By comparison, the GeForce Now Priority tier is limited to 1080p at 60 fps, with adaptive VSync available in the latest update.

It’s also promising a “click-to-pixel” latency down to 56 milliseconds, thanks to tricks like adaptive sync that reduces buffering, supposedly beating other services and even local, dedicated PCs. However, that’s based on a 15 millisecond round trip delay (RTD) to the GeForce Now data center, something that obviously depends on your internet provider and where you’re located. 

NVIDIA’s claims aside, it’s clearly a speed upgrade over the current GeForce Now Priority tier, whether you’re on a mobile device or PC. There’s a price to pay for that speed, though. The GeForce Now Priority tier started at $50 per year and recently doubled to $100, which is already a pretty big ask. But the RTX 3080 tier is $100 for six months (around double the price) “in limited quantities,” with Founders and Priority early access starting today. If it lives up to the claims, it’s cheaper than buying a new PC, in any case. 



Mastermind Behind Nvidia RTX DLSS Just Got Hired By Intel

Nvidia’s RTX features have been among the primary selling points of its graphics cards in recent years. But now, the mastermind behind those advanced graphics features works for one of Nvidia’s new rivals in the world of gaming graphics: Intel.

Nvidia RTX consists of two primary features: real-time ray tracing and Deep Learning Super Sampling (DLSS). DLSS is critical for running the latest games with ray tracing enabled and all the visual glitter turned on; it’s the bedrock that has allowed ray tracing to flourish in video games, and it’s a big reason why Nvidia still holds an edge over AMD in the space. Now, Intel looks to be joining the fray.


Intel has now hired the person behind both technologies, Anton Kaplanyan, suggesting that Intel could be working on its own DLSS competitor for its upcoming graphics cards.

Anton Kaplanyan had a short but meaningful stint at Nvidia from 2015 to 2017, during which he helped design RTX ray-tracing hardware and DLSS.

“After the hardware was done, my Nvidia Research colleagues and I realized that the hardware performance would not suffice for real-time visuals, so we started developing a completely new direction of real-time image reconstruction methods,” Kaplanyan wrote in a blog post.

Intel could be working on a similar technology for its upcoming graphics cards — the blog post is careful not to mention DLSS by name, after all. Kaplanyan’s hire is, at least in part, based on his experience with graphics and machine learning. “New differentiating technologies in graphics and machine learning is the missing cherry on the cake,” Kaplanyan wrote.

Anton Kaplanyan headshot.

That would make sense for Intel. AMD has already fired back at Nvidia with its competing FidelityFX Super Resolution technology, and some recent job postings suggest Microsoft is working on a similar feature. With Intel’s DG2 graphics card on the horizon, the company looks like it’s ready to play ball with the latest graphics technologies.

Intel is forming an all-star roster of graphics experts. In 2017, the company picked up Raja Koduri, who’s known for working in AMD’s Radeon division on the Polaris, Vega, and Navi architectures. Koduri now heads up Intel’s graphics and software sector, leading the charge on the company’s first foray into desktop graphics cards.

Kaplanyan is likely a key part of that strategy, aiding in the development of ray tracing and the software it requires to run in real time. Before joining Intel, Kaplanyan worked as a researcher at Facebook for the company’s virtual reality (VR) endeavors. During that time, Kaplanyan published a paper on neural supersampling, which looks an awful lot like DLSS.

The future of Intel’s graphics department looks bright, assuming the pieces fall in place as they should. With ray tracing pushing graphics more than ever before, as well as the rise of high-resolution and high refresh rate monitors, a supersampling method is essential.

“I think we are at the edge of a new era in graphics — an era where visual computing will become more distributed, more heterogeneous, more power-efficient, more accessible, and more intelligent,” Kaplanyan wrote.



Nvidia RTX 40 Series GPUs Might Be Even More Power Hungry

A flurry of recent rumors suggests that Nvidia’s upcoming RTX 40-series graphics cards will be even more power-hungry than what’s currently available. Leakers peg the power consumption in the range of 400W to 500W for the flagship card, which is higher than even the obscenely powerful RTX 3090.

3DCenter, which has previously covered the roller coaster of GPU prices in Europe, collected reports from multiple leakers claiming the card will use at least 400W of power. That’s certainly not out of the question, as the RTX 3090 already requires 350W of power. Assuming Nvidia wants to push even more performance out of the upcoming range, a 400W-plus power requirement could be possible.

Nvidia hasn’t announced anything about the RTX 40-series yet, so it’s likely that developers are still tweaking the final design. Kopite7kimi, one of the leakers who claimed a 400W+ power limit and is known for Nvidia leaks, said the upcoming range will be built on chipmaker TSMC’s 5nm node, breaking from the 8nm Samsung process Nvidia used on RTX 30-series graphics cards.

The next-generation architecture, tentatively named Lovelace, is rumored to arrive in late 2022 or early 2023. The rumor mill suggests that the graphics core powering the range will be capable of housing up to 18,432 CUDA cores, which is nearly 8,000 more than the RTX 3090.

AMD’s upcoming cards are rumored to require equally as much power. The RDNA 3 range is also rumored to consume between 400W and 500W of power with TSMC’s 5nm process. During a recent investors call, AMD CEO Lisa Su confirmed that 5nm is the goal and that the GPUs are on track for a 2022 launch.

Unlike Lovelace, RDNA 3 cards are rumored to use a multi-chip-module (MCM) GPU package. Essentially, the upcoming range is rumored to utilize multiple dies on the same package, unlike the RTX 40-series’ traditional monolithic design.

A diagram of an MCM on RDNA 3.

Nvidia is rumored to be working on its own MCM design, currently named Hopper. Originally, rumors pegged Hopper as the successor to the current Ampere range, though recent speculation suggests Nvidia is locked on delivering Lovelace sooner.

Both new generations are rumored to offer up to a 2.5x improvement over the current generation. As for where they’ll fall in relation to each other, it’s too soon to say.

As is the case with all early rumors and speculation, you shouldn’t take this information as law. We’re still far out from launch, so AMD and Nvidia are more than likely still finalizing the design and tweaking specs to meet their price, power, and performance targets.

Based on what we know so far, however, a higher power draw will likely be something PC builders need to deal with. Nvidia pushed past the 250W ceiling with the RTX 3080, RTX 3080 Ti, and RTX 3090, surpassing even the most powerful cards from the generations that preceded them. It’s too soon to say for sure, but you might need to invest in a new power supply when these cards finally arrive.



AMD RX 6600 XT Is 15% Faster Than the RTX 3060, but $50 More

Following months of leaks and rumors, AMD finally pulled back the curtain on the RX 6600 XT. The new graphics card is a 1080p addition to the RDNA 2 range, which should provide high frame rates at 1080p and 1440p with a little help from FidelityFX Super Resolution (FSR).

The Radeon RX 6600 XT is set to launch on August 11 for $379. In addition to board partner designs, AMD will supply units to desktop makers like Acer, Alienware, and HP. Although AMD showed off a render of a reference design, it won’t be manufacturing a reference model for the 6600 XT.

The card targets 1080p high refresh rate monitors with performance somewhere between an RTX 3060 and RTX 3060 Ti. In Doom Eternal, for example, the RX 6600 XT averaged 155 frames per second (fps) compared to 134 fps with the RTX 3060. Similarly, the card hit 92 fps in Assassin’s Creed Valhalla compared to 69 fps on Nvidia’s card. Overall, AMD claims the card is 15% faster on average.
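AMD’s 15% average claim hides a fair bit of per-game variance; computing the relative speedups from the frame rates quoted above makes that spread explicit:

```python
def speedup_percent(fps_new: float, fps_base: float) -> float:
    """Relative performance advantage of one card over another, as a percentage."""
    return (fps_new / fps_base - 1) * 100

# RX 6600 XT vs. RTX 3060, using AMD's own quoted averages:
print(round(speedup_percent(155, 134), 1))  # Doom Eternal: 15.7%
print(round(speedup_percent(92, 69), 1))    # Assassin's Creed Valhalla: 33.3%
```

So the headline 15% sits near the low end of these two data points, which is a reminder that vendor averages are drawn from a wider, unpublished set of games.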

It’s important to point out that these benchmarks come from AMD, so we’ll need to wait for further testing to draw any firm conclusions. AMD also ran the tests with Smart Access Memory (SAM) enabled, which is a feature that can boost frame rates with Ryzen 5000 and select Ryzen 3000 processors.

Here are the specs we know right now:

RX 6600 XT
GPU: Navi 23
Interface: PCIe 4.0
Compute units: 32
Stream processors: 2,048
Ray accelerators: 32
Game clock: 2,359MHz
Memory: 8GB GDDR6
Memory speed: 16Gbps
Bandwidth: Up to 256 GB/s
Memory bus: 128-bit
TDP: 160W

Although the performance is impressive, the suggested price of $379 is higher than the direct competition. That’s only $20 less than the RTX 3060 Ti and $50 more than the RTX 3060, the latter of which matches the RX 6600 XT in games like Cyberpunk 2077 and Horizon Zero Dawn. 

AMD set the price to be representative of where the market currently is. At launch, select designs from AMD’s partners will be available at $379, though the company pointed out how challenging this price is to meet given the ongoing GPU shortage.


The biggest win for the RX 6600 XT looks like FSR. At 1080p with max settings and ray tracing turned on, the card was able to surpass 100 fps in Godfall and boost frame rates by up to 74% in The Riftbreaker. It also managed to increase the frame rate in Resident Evil Village, though only by a modest 13%.

FSR also allows you to push the resolution above 1080p. With ray tracing off at 1440p, AMD showed the RX 6600 XT jumping from 113 fps to 243 fps in Resident Evil Village. Similarly, Marvel’s Avengers climbed from 57 fps at native 1440p to 96 fps in FSR’s aggressive Performance mode.

RX 6600 XT benchmarks with FSR turned on.

With FSR available, the RX 6600 XT looks like the 1080p gamer’s dream. However, availability will likely be a problem. “We are doing our best to get supply, but the demand is unprecedented,” an AMD spokesperson said.

AMD isn’t releasing a reference design for the RX 6600 XT, but models from ASRock, Gigabyte, MSI, Asus, PowerColor, and more will be available on August 11.



AMD RX 6600 XT vs. Nvidia RTX 3060 Ti vs. RTX 3060

The latest generation of graphics cards from AMD and Nvidia has raised the bar for budget gamers. The RTX 3060 Ti, RTX 3060, and RX 6600 XT represent the cream of the crop for 1080p gaming, and the cards are even capable of running some demanding games at 1440p. But which one should you choose?

AMD hasn’t officially announced the RX 6600 XT, but multiple leaks point to a release date coming soon. Before it arrives, we pitted Nvidia’s two budget GPUs against AMD’s upcoming one to see which one is the best.

Pricing and availability

Nvidia released the RTX 3060 Ti on December 1, 2020, for $399. The slightly slower RTX 3060 came later on February 25 for $329. As per usual, the price set by Nvidia is for the Founders Edition models of each card, so options from board partners may be slightly more expensive depending on their cooling ability and features.

AMD hasn’t announced the RX 6600 XT yet, but the card is expected to arrive on August 11. Competing directly with the RTX 3060 Ti, the card is rumored to cost $399. AMD hasn’t revealed the card, much less any details about it, so the price and release date are subject to change.

When it comes to availability, the good news is also the bad news. The two Nvidia cards are consistently out of stock at retailers, and we expect the RX 6600 XT to sell out immediately when it launches. That’s the bad news. The good news is that you don’t have to make a choice between the cards based on availability.

The ongoing GPU shortage has caused a lot of problems for graphics cards, though it is still possible to buy one in 2021. You’ll struggle to find most models in stock at all, and if you do find one, it probably won’t be at list price. Expect to pay a few hundred dollars on top of the list price at retailers like Micro Center and Newegg.

On the secondhand market, the situation is even worse. The RTX 3060 Ti pushes toward $900 in many cases, and the RTX 3060 can cost as much as $750. We don’t have pricing details on the RX 6600 XT yet, though it’s safe to assume it will be similarly expensive on the secondhand market.

Performance

Spec | RTX 3060 Ti | RTX 3060 | RX 6600 XT
GPU | GA104 | GA106 | Navi 23
Interface | PCIe 4.0 | PCIe 4.0 | PCIe 4.0
CUDA cores/stream processors | 4,864 | 3,584 | 2,048
Tensor cores | 152 | 112 | N/A
RT cores | 38 | 28 | 32
Base clock | 1,410MHz | 1,320MHz | 2,200MHz
Boost clock | 1,665MHz | 1,777MHz | 2,500MHz
Memory | 8GB GDDR6 | 12GB GDDR6 | 8GB GDDR6
Memory speed | 1,750MHz | 1,875MHz | TBA
Bandwidth | 448GBps | 360GBps | TBA
Memory bus | 256-bit | 192-bit | 128-bit
TDP | 200W | 170W | 180W
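The bandwidth figures follow from the memory clocks: GDDR6 moves data at eight times the listed memory clock per pin, and multiplying that effective rate by the bus width in bytes reproduces the quoted numbers. A quick cross-check of the Nvidia rows:

```python
def gddr6_bandwidth_gb_s(mem_clock_mhz: float, bus_width_bits: int) -> float:
    """Peak GDDR6 bandwidth in GB/s: 8x effective data rate per pin,
    times bus width in bytes."""
    data_rate_gbps = mem_clock_mhz * 8 / 1000  # effective Gbps per pin
    return data_rate_gbps * bus_width_bits / 8

print(gddr6_bandwidth_gb_s(1750, 256))  # RTX 3060 Ti: 448.0 GB/s
print(gddr6_bandwidth_gb_s(1875, 192))  # RTX 3060: 360.0 GB/s
```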

A spec comparison of the RTX 3060 Ti, 3060, and RX 6600 XT doesn’t reveal much. The RTX 3060 Ti and 3060 alone don’t really match each other, with the cheaper card featuring more graphics memory but less bandwidth. In addition, AMD and Nvidia use different designs, leading to a much higher clock speed on the AMD card and a bigger memory bus on the Nvidia ones.

The RX 6600 XT isn’t out yet, either, so we don’t know the official specs. The specs listed in the table above are rumored, not confirmed.

Starting with the two cards that have been released, it shouldn’t come as a surprise that the RTX 3060 Ti is faster than the RTX 3060. It’s around 15% faster, placing it on par with last-generation’s RTX 2080 Super. The RTX 3060 is more closely aligned with last-gen’s RTX 2070, performing slightly better than that card but slightly worse than the RX 5700 XT.

In our testing of the RTX 3060, we found it was around 14% behind the RTX 3060 Ti in synthetic benchmarks. That said, we still hit 84 frames per second in Battlefield V, 94 fps in Fortnite, and 114 fps in Civilization VI at 1440p with all the sliders cranked up. More demanding games like Control and Cyberpunk 2077 struggled to hit 60 fps at 1440p. Dropping to 1080p produced up to a 32% increase in frame rate, however.

We don’t have benchmarks for the RX 6600 XT yet, though a benchmark leaked not too long ago. The leak shows that the card performs within the range of the RTX 3060 Ti, but it didn’t reveal any specific frame rates. We expect performance to at least match the RTX 3060 Ti, but it’s hard to say for sure right now.

Between the three, the RTX 3060 is likely in last place. With the current GPU pricing situation, though, choosing it could mean somewhere in the range of $200 in savings. Frankly, the RTX 3060 performs much better than it has any right to, and when hundreds of dollars are on the table, a 15% performance difference doesn’t mean much.

Ray tracing, upscaling, and more

A demonstration of DLSS in Control.

For features, all three of our competitors are much closer than they were a few months ago. The standout features for the RTX 3060 Ti and RTX 3060 are Deep Learning Super Sampling (DLSS) and ray tracing, which are both part of the RTX features package. Ray tracing helps lighting look more accurate in games, and DLSS improves frame rates with artificial intelligence-assisted upscaling. Neither card is stellar at ray tracing, but both are substantially faster than their counterparts from the previous generation, while DLSS can make a huge difference in supported games’ performance.

AMD cards used to lack these features, but not any longer. The RX 6600 XT should support ray tracing like the rest of the RX 6000 range, and AMD now offers its FidelityFX Super Resolution (FSR) upscaling tech. FSR accomplishes the same goal as DLSS, and although it’s not quite as impressive, it gets very close. Ray tracing is unlikely to be hugely impressive on the 6600 XT, as AMD’s RDNA2 cards just aren’t as fast at it as Nvidia’s newer-generation options.

The Nvidia cards have the lead at the moment. AMD is late to the party when it comes to upscaling tech and ray tracing, though it’s quickly catching up to Nvidia. At the moment, we recommend one of the Nvidia cards for features. In a matter of months, however, the race will likely be much tighter.
