Nvidia addresses rumors about RTX 40 GPUs’ power consumption

The new Nvidia GeForce RTX 40 lineup includes some of the most power-hungry graphics cards on the market. Because of that, you may be wondering if you’ll need a new power supply (PSU) in order to support the borderline monstrous capabilities of the RTX 4090.

To answer some of these concerns, Nvidia released new information about the power consumption of its new GPUs. The conclusion? Well, it’s really not all that bad after all.


Prior to the official announcement of the RTX 40-series, the cards had been the subject of much power-related speculation. The flagship RTX 4090 received the most coverage of all, with many rumors pointing toward outlandish requirements in the 800-900W range. Fortunately, we now know those rumors weren't true.

The RTX 4090 has a TGP of 450W, the same as the RTX 3090 Ti, and calls for a minimum 850W PSU. The RTX 4080 16GB takes things down a few notches with a 320W TGP and a 750W power supply. Lastly, the RTX 4070 in disguise, also known as the RTX 4080 12GB, draws 285W and calls for a 700W PSU.

Nvidia claims that this is not an increase from the previous generation, but it kind of is — after all, the RTX 3090 had a TGP of 350W. With that said, it’s not as bad as we had thought, but many are still left to wonder if they need to upgrade their existing PSUs or not.

Nvidia has now assured its customers that they can stick to the PSU they currently own as long as it meets the wattage requirements for that given card.
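Nvidia's stated minimums boil down to a simple lookup. The sketch below is purely illustrative (not an official tool), using the TGP and PSU figures quoted above:

```python
# Minimum PSU wattage Nvidia quotes for each RTX 40-series card,
# as listed above (TGP shown in comments for reference).
MIN_PSU_WATTS = {
    "RTX 4090": 850,       # 450W TGP
    "RTX 4080 16GB": 750,  # 320W TGP
    "RTX 4080 12GB": 700,  # 285W TGP
}

def psu_is_sufficient(card: str, psu_watts: int) -> bool:
    """Return True if an existing PSU meets Nvidia's stated minimum."""
    return psu_watts >= MIN_PSU_WATTS[card]

print(psu_is_sufficient("RTX 4090", 850))       # True: meets the 850W minimum
print(psu_is_sufficient("RTX 4080 12GB", 650))  # False: below the 700W minimum
```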

Similarly, Nvidia doesn’t expect any problems with 8-pin to PCIe Gen 5 16-pin adapter compatibility. As Nvidia puts it on its FAQ page: “The adapter has active circuits inside that translate the 8-pin plug status to the correct sideband signals according to the PCIe Gen 5 (ATX 3.0) spec.”

There’s also another fun little fact to be found in that FAQ: Nvidia confirms that the so-called smart power adapter will detect the number of 8-pin connectors that are plugged in. When four such connectors are used versus just three, it will enable the RTX 4090 to draw more power (up to 600 watts) for extra overclocking capabilities.
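The adapter's plug-counting behavior can be sketched as a tiny function. Note the assumption here: the article only confirms the 600W limit with four connectors, so the 450W figure for three plugs is inferred from the card's stock TGP rather than stated by Nvidia:

```python
def rtx4090_power_limit(eight_pin_connectors: int) -> int:
    """Illustrative sketch of the smart adapter's behavior described above.

    The adapter's sideband signals advertise the higher 600W limit only
    when a fourth 8-pin plug is detected. The 450W fallback is an
    assumption based on the card's stock TGP, not a confirmed figure.
    """
    if eight_pin_connectors >= 4:
        return 600  # extra headroom for overclocking
    return 450      # assumed stock power limit

print(rtx4090_power_limit(3))  # 450
print(rtx4090_power_limit(4))  # 600
```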

Nvidia CEO Jensen Huang with an RTX 4090 graphics card.

There have also been questions about the durability of the PCIe 5.0 connectors, which are rated for 30 insertion cycles. That may not sound like much, but Nvidia clears this up by noting that connector ratings have been in this range for roughly the past twenty years.

Lastly, Nvidia addressed the possibility of an overcurrent or overpower risk when using the 16-pin power connector with non-ATX 3.0 power supplies. It had indeed spotted an issue during early development, but it has since been resolved. Again, seemingly nothing to worry about there.

All in all, the power consumption fears have largely been squelched. Nvidia did ramp up the power requirements, but not as significantly as expected, so as long as your PSU matches what the card asks for, you should be fine. Let’s not breathe that sigh of relief yet, though — the RTX 4090 Ti might still happen, and that will likely be one power-hungry beast.

Editors’ Choice

Repost: Original Source and Author Link


Why I’m almost ready to switch to AMD GPUs for streaming

Although AMD makes some of the best graphics cards, they’ve long been less capable than Nvidia GPUs for streaming. Nvidia GPUs have almost always offered better encoding performance, plus extra features absent on AMD cards. It’s one of the reasons I decided to switch to Nvidia graphics despite being a longtime fan of AMD; I just don’t want to give up a good streaming experience.

But all that might be different now thanks to two key updates to AMD software: a brand new encoder and AMD Noise Suppression, which are competitors to Nvidia NVENC and RTX Voice. I tested out AMD’s new tools and the results make me think switching to AMD is a possibility now.

Streaming looks just as good on AMD

From left to right, AMF vs. NVENC. Matthew Connatser/Digital Trends

When it comes to streaming, it’s crucial to have a good encoder, and if you’re streaming games you’ll probably want to use GPU encoding rather than CPU encoding. Not only does Nvidia’s NVENC encoder have good quality, but it also doesn’t use very much data, which is crucial for streaming. You want the best ratio of visual quality to data usage possible, and in this area, Nvidia’s encoder was far ahead of AMD’s. But now that the latest version of AMD’s encoder AMF is finally out, I think Nvidia has lost this advantage.

The above image is from the opening shot of 3DMark’s Time Spy benchmark, which I recorded using streaming optimized settings at 6000Kbps. I selected this specific part because there’s lots of foliage, which is often difficult to capture with good quality (especially when there’s very little data to go around), but as you can see the difference between AMF and NVENC is essentially nonexistent. AMF did well in the rest of the benchmark as well, and you wouldn’t be able to tell the difference if the two recordings weren’t labeled.

It’s especially important that AMF was able to achieve this using the same bitrate that NVENC was using. It would be pretty pointless if AMF looked good but needed a substantially higher bitrate to compensate. Twitch, arguably the most popular game streaming platform, only allows up to 6000Kbps, which is a very small amount of data to work with. With respect to recording, each video was only about 3 minutes long and each was about 100MB, which is really good for people who upload unedited stream VODs to YouTube for archival purposes.
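As a sanity check on those file sizes, a constant 6,000Kbps target sets a hard ceiling on how large a recording can get. A quick back-of-the-envelope calculation (illustrative only) shows a 3-minute clip caps out around 135MB, so the roughly 100MB files simply reflect the encoder staying under its bitrate target:

```python
def max_size_mb(bitrate_kbps: int, seconds: int) -> float:
    """Upper bound on recording size: bits per second times duration,
    converted to (decimal) megabytes."""
    return bitrate_kbps * 1000 * seconds / 8 / 1_000_000

# A 3-minute clip at a constant 6,000Kbps can be at most this large;
# real encoders undershoot, which is why the recordings landed near 100MB.
print(max_size_mb(6000, 180))  # 135.0
```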

AMD RX 6950 XT graphics card on a pink background.
Jacob Roach / Digital Trends

That being said, only AMD GPUs based on the RDNA2 architecture (which includes RX 6000 series GPUs) can take full advantage of the AMF encoder because older GPUs don’t have support for B-Frames, which help to increase image quality. This is a limitation of the hardware, not the software, so your RX 5700 XT will never be quite as good as an RX 6950 XT for streaming.

What AMD really needs to focus on going forward is updating its encoder as often as Nvidia does. The newest version of AMF sat finished as open-source code until Open Broadcaster Software (OBS) contributors finally added it to the app, and now we have to wait for the streaming services to update before everyone can use the new encoder. I’d like to see AMD take as active a role as Nvidia has in this area, not just in making updates but in distributing them.

AMD Noise Suppression is good but has poor support

A podcast microphone with headphones on top.
Getty Images

Audio quality is an important (and sometimes neglected) part of streaming, and here too Nvidia held the edge thanks to its RTX Voice software, which is basically an AI-enhanced noise gate. AMD is catching up in this area with its new Noise Suppression tool, which is supposed to do the exact same thing as RTX Voice.

Given that AMD GPUs have no AI acceleration features like Nvidia GPUs, I was skeptical that Noise Suppression would be any good. Much to my surprise, the results were quite good: my gaming keyboard was nearly inaudible, even while I was talking, and the quality of my voice wasn’t reduced. If I switched to AMD Noise Suppression, I don’t think anyone that watches my streams would be able to tell the difference.

But did AMD GPUs even need this feature? Why not just set a noise gate in OBS? Well, the problem with noise gates is that they can only work based on volume, and background noise can get quite loud, especially the clicky noises from gaming keyboards. RTX Voice is a critical part of my streaming setup because it can intelligently separate my voice and my keyboard. Now that AMD GPUs have the same exact functionality, I can actually consider streaming on AMD hardware, like my ROG Zephyrus G14.
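To see why a plain noise gate falls short, here is a minimal volume-threshold gate over audio samples (a deliberately simplified sketch, not how OBS implements it). It mutes quiet passages, but any sound louder than the threshold passes straight through, keyboard clicks included, because amplitude is the only thing it can see:

```python
def noise_gate(samples, threshold):
    """Zero out samples whose absolute amplitude falls below the threshold."""
    return [s if abs(s) >= threshold else 0 for s in samples]

voice = [0.30, 0.25, 0.40]  # speech, above the threshold
click = [0.35]              # a loud key click, also above the threshold
hiss = [0.02, 0.01]         # quiet background hiss

gated = noise_gate(voice + click + hiss, threshold=0.05)
print(gated)  # hiss is removed, but the click survives: [0.3, 0.25, 0.4, 0.35, 0, 0]
```

An AI-based tool like RTX Voice or AMD Noise Suppression instead classifies what the sound *is*, which is why it can drop a loud keyboard while keeping a voice at the same volume.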

I also like that AMD’s Noise Suppression is built into the Radeon driver suite, whereas RTX Voice is only usable by installing Nvidia Broadcast. Not only is AMD’s solution simpler, but it’s also more reliable. I can’t tell you how many times I started my stream only to realize my mic audio wasn’t coming through because Nvidia Broadcast was closed for some reason. Nvidia could learn much from AMD when it comes to driver suites, not just for this specific feature but in general.

But I do have quite a bit of criticism for AMD here when it comes to support. Not only does Noise Suppression require an RX 6000 GPU, but it also requires a Ryzen 5000 CPU or newer. The CPU requirement in particular is frustrating and almost certainly arbitrary. Not only does it lock out users running older versions of Ryzen (most of which are still fast enough in 2022), but it also excludes everyone that uses an Intel CPU. It’s impossible to justify this requirement when some of the best CPUs available today are made by Intel.

Finally caught up in key areas, for now

Having finally bridged the gap in video and audio quality features, AMD GPUs are now as capable as Nvidia GPUs for streaming in the areas that matter most. While the level of support AMD offers leaves much to be desired, with current-generation AMD hardware you can stream games with the same quality you’d see from an Nvidia-powered PC. Nvidia does offer some other features, such as a digital green screen for webcam users, but AMD doesn’t really need to match a feature that third-party software can replicate.

AMD’s focus right now should be making sure it never falls this far behind again. AMF trailed NVENC for several years, and RTX Voice has been around since 2020. Technology is always a moving target, and it’s hard to see Nvidia resting on its laurels any time soon. To compete with Nvidia, AMD can’t just publish open-source code and hope someone else ships it. AMD needs to do that work itself.



What is AI hardware? How GPUs and TPUs give artificial intelligence algorithms a boost


Most computers and algorithms, including at this point many artificial intelligence (AI) applications, run on general-purpose circuits called central processing units, or CPUs. When certain calculations are performed over and over, however, computer scientists and electrical engineers design special circuits that perform the same work faster or more accurately. Now that AI algorithms are so common and essential, specialized circuits and chips are becoming just as common and essential.

The circuits are found in several forms and in different locations. Some offer faster creation of new AI models. They use multiple processing circuits in parallel to churn through millions, billions or even more data elements, searching for patterns and signals. These are used in the lab at the beginning of the process by AI scientists looking for the best algorithms to understand the data. 

Others are being deployed at the point where the model is being used. Some smartphones and home automation systems have specialized circuits that can speed up speech recognition or other common tasks. They run the model more efficiently at the place it is being used by offering faster calculations and lower power consumption. 

Scientists are also experimenting with newer designs for circuits. Some, for example, want to use analog electronics instead of the digital circuits that have dominated computers. These different forms may offer better accuracy, lower power consumption, faster training and more. 



What are some examples of AI hardware? 

The simplest examples of AI hardware are graphics processing units, or GPUs, that have been redeployed to handle machine learning (ML) chores. Many ML packages have been modified to take advantage of the extensive parallelism available inside the average GPU. The same hardware that renders game scenes can also train ML models, because both workloads consist of many tasks that can be done at the same time.

Some companies have taken this same approach and extended it to focus only on ML. These newer chips, sometimes called tensor processing units (TPUs), don’t try to serve both game display and learning algorithms. They are completely optimized for AI model development and deployment. 

There are also chips optimized for different parts of the machine learning pipeline. Some are better suited to creating the model because they can juggle large datasets; others excel at applying a finished model to incoming data to see if it can find an answer. The latter can be optimized to use less power and fewer resources, making them easier to deploy in mobile phones and other places where users want to run AI models rather than create new ones.

Additionally, basic CPUs are starting to streamline their performance for ML workloads. Traditionally, many CPUs have focused on double-precision floating-point computations, which are used extensively in games and scientific research. Lately, some chips have emphasized single-precision floating-point computations instead, because they can be substantially faster. These newer chips trade precision for speed: scientists have found that the extra precision may not be valuable in some common machine learning tasks, and they would rather have the speed.
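The precision-for-speed trade is easy to see in miniature: a single-precision value uses half the bytes of a double, and the rounding error it introduces is typically far below what a training run cares about. A standard-library-only sketch:

```python
import struct

x = 0.1  # Python floats are stored in double precision (64-bit)

# Round-trip the value through a 32-bit single-precision float:
# half the storage, fewer significant digits.
single = struct.unpack("f", struct.pack("f", x))[0]

print(len(struct.pack("d", x)))  # 8 bytes for double precision
print(len(struct.pack("f", x)))  # 4 bytes for single precision
print(abs(single - x) < 1e-7)    # True: the rounding error is tiny
```

Halving the bytes per number also doubles how many values fit through the same memory bandwidth, which is a large part of where the speedup comes from.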

In all these cases, many of the cloud providers are making it possible for users to spin up and shut down multiple instances of these specialized machines. Users don’t need to invest in buying their own and can just rent them when they are training a model. In some cases, deploying multiple machines can be significantly faster, making the cloud an efficient choice. 

How is AI hardware different from regular hardware? 

Many of the chips designed for accelerating artificial intelligence algorithms rely on the same basic arithmetic operations as regular chips. They add, subtract, multiply and divide as before. The biggest advantage they have is that they have many cores, often smaller, so they can process this data in parallel. 

The architects of these chips usually try to tune the channels that bring data in and out of the chip, because the size and nature of AI data flows are often quite different from general-purpose computing. Regular CPUs may process many more instructions but relatively less data. AI processing chips generally work with large data volumes.

Some companies deliberately embed many very small processors in large memory arrays. Traditional computers separate the memory from the CPU; orchestrating the movement of data between the two is one of the biggest challenges for machine architects. Placing many small arithmetic units next to the memory speeds up calculations dramatically by eliminating much of the time and organization devoted to data movement. 

Some companies also focus on creating special processors for particular types of AI operations. The work of creating an AI model through training is much more computationally intensive and involves more data movement and communication. When the model is built, the need for analyzing new data elements is simpler. Some companies are creating special AI inference systems that work faster and more efficiently with existing models. 

Not all approaches rely on traditional arithmetic methods. Some developers are creating analog circuits that behave differently from the traditional digital circuits found in almost all CPUs. They hope to create even faster and denser chips by forgoing the digital approach and tapping into some of the raw behavior of electrical circuitry. 

What are some advantages of using AI hardware?

The main advantage is speed. It is not uncommon for some benchmarks to show that GPUs are more than 100 times or even 200 times faster than a CPU. Not all models and all algorithms, though, will speed up that much, and some benchmarks are only 10 to 20 times faster. A few algorithms aren’t much faster at all. 

An advantage that is growing more important is power consumption. In the right combinations, GPUs and TPUs can use less electricity to produce the same result. While GPU and TPU cards are often big power draws, they run so much faster that they can end up saving electricity overall. This is a big advantage when power costs are rising. They can also help companies produce “greener AI,” delivering the same results while using less electricity and consequently producing less CO2.
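The electricity argument is just arithmetic: energy is power multiplied by time, so a hungrier chip that finishes far sooner can still consume less overall. With illustrative numbers (not measured figures):

```python
def energy_wh(power_watts: float, seconds: float) -> float:
    """Energy consumed in watt-hours."""
    return power_watts * seconds / 3600

# Hypothetical job: a 100W CPU needs a full hour; a 300W GPU runs 50x faster.
cpu = energy_wh(100, 3600)       # 100.0 Wh
gpu = energy_wh(300, 3600 / 50)  # 6.0 Wh

print(cpu, gpu)  # 100.0 6.0: the "power-hungry" GPU uses far less total energy
```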

The specialized circuits can also be helpful in mobile phones or other devices that must rely upon batteries or less copious sources of electricity. Some applications, for instance, rely upon fast AI hardware for very common tasks like waiting for the “wake word” used in speech recognition. 

Faster, local hardware can also eliminate the need to send data over the internet to a cloud. This can save bandwidth charges and electricity when the computation is done locally. 

What are some examples of how leading companies are approaching AI hardware?

The most common forms of specialized machine learning hardware still come from the companies that manufacture graphics processing units. Nvidia and AMD make many of the leading GPUs on the market, and many of these are also used to accelerate ML. While most can accelerate tasks like rendering computer games, some are starting to ship with enhancements designed especially for AI.

Nvidia, for example, adds a number of multiprecision operations that are useful for training ML models and calls these Tensor Cores. AMD is also adapting its GPUs for machine learning and calls this approach CDNA2. The use of AI will continue to drive these architectures for the foreseeable future. 

As mentioned earlier, Google makes its own hardware for accelerating ML, called Tensor Processing Units or TPUs. The company also delivers a set of libraries and tools that simplify deploying the hardware and the models they build. Google’s TPUs are mainly available for rent through the Google Cloud platform.

Google is also adding a version of its TPU design to its Pixel phone line to accelerate any of the AI chores that the phone might be used for. These could include voice recognition, photo improvement or machine translation. Google notes that the chip is powerful enough to do much of this work locally, saving bandwidth and improving speeds because, traditionally, phones have offloaded the work to the cloud. 

Many of the cloud companies like Amazon, IBM, Oracle, Vultr and Microsoft are installing these GPUs or TPUs and renting time on them. Indeed, many of the high-end GPUs are not intended for users to purchase directly because it can be more cost-effective to share them through this business model. 

Amazon’s cloud computing systems are also offering a new set of chips built around the ARM architecture. The latest versions of these Graviton chips can run lower-precision arithmetic at a much faster rate, a feature that is often desirable for machine learning. 

Some companies are also building simple front-end applications that help data scientists curate their data and then feed it to various AI algorithms. Google’s Colab or AutoML, Amazon’s SageMaker, Microsoft’s Machine Learning Studio and IBM’s Watson Studio are just a few examples of options that hide any specialized hardware behind an interface. These companies may or may not use specialized hardware to speed up the ML tasks and deliver them at a lower price; the customer often won’t know.

How startups are tackling AI hardware

Dozens of startups are approaching the job of creating good AI chips. These examples are notable for their funding and market interest: 

  • D-Matrix is creating a collection of chips that move the standard arithmetic functions to be closer to the data that’s stored in RAM cells. This architecture, which they call “in-memory computing,” promises to accelerate many AI applications by speeding up the work that comes with evaluating previously trained models. The data does not need to move as far and many of the calculations can be done in parallel. 
  • Untether is another startup that’s mixing standard logic with memory cells to create what they call “at-memory” computing. Embedding the logic with the RAM cells produces an extremely dense — but energy efficient — system in a single card that delivers about 2 petaflops of computation. Untether calls this the “world’s highest compute density.” The system is designed to scale from small chips, perhaps for embedded or mobile systems, to larger configurations for server farms. 
  • Graphcore calls its approach to in-memory computing the “IPU” (for Intelligence Processing Unit) and relies upon a novel three-dimensional packaging of the chips to improve processor density and limit communication times. The IPU is a large grid of thousands of what they call “IPU tiles” built with memory and computational abilities. Together, they promise to deliver 350 teraflops of computing power. 
  • Cerebras has built a very large, wafer-scale chip that’s up to 50 times bigger than a competing GPU. They’ve used this extra silicon to pack in 850,000 cores that can train and evaluate models in parallel. They’ve coupled this with extremely high bandwidth connections to suck in data, allowing them to produce results thousands of times faster than even the best GPUs.  
  • Celestial uses photonics — a mixture of electronics and light-based logic — to speed up communication between processing nodes. This “photonic fabric” promises to reduce the amount of energy devoted to communication by using light, allowing the entire system to lower power consumption and deliver faster results. 

Is there anything that AI hardware can’t do? 

For the most part, specialized hardware does not execute any special algorithms or approach training in a better way. The chips are just faster at running the algorithms. Standard hardware will find the same answers, but at a slower rate.

This equivalence doesn’t apply to chips that use analog circuitry. In general, though, the approach is similar enough that the results won’t necessarily be different, just faster. 

There will be cases where it may be a mistake to trade off precision for speed by relying on single-precision computations instead of double-precision, but these may be rare and predictable. AI scientists have devoted many hours of research to understand how to best train models and, often, the algorithms converge without the extra precision. 

There will also be cases where the extra power and parallelism of specialized hardware lends little to finding the solution. When datasets are small, the advantages may not be worth the time and complexity of deploying extra hardware.

VentureBeat’s mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings.



Engadget Podcast: The repairable iPhone 14 and NVIDIA’s RTX 4000 GPUs

Surprise! The iPhone 14 is pretty repairable, it turns out. This week, Cherlynn and Devindra chat with Engadget’s Sam Rutherford about this move towards greater repairability and what it means for future iPhones. Also, they dive into NVIDIA’s powerful (and expensive!) new RTX 4080 and 4090 GPUs. Sure, they’re faster than before, but does anyone really need all that power?

Listen above, or subscribe on your podcast app of choice. If you’ve got suggestions or topics you’d like covered on the show, be sure to email us or drop a note in the comments! And be sure to check out our other podcasts, the Morning After and Engadget News!



  • The iPhone 14 is surprisingly repairable – 1:17

  • NVIDIA announces RTX 4090 and 4080 GPUs (and a Portal mod with ray tracing) – 21:08

  • Huge hack at Rockstar leaks GTA 6 videos and dev code – 34:22

  • Uber was also hacked last week by the same crew that hit Rockstar – 38:37

  • Windows 11 2022 Update – 40:21

  • Google is offering a $30 1080p HDR Chromecast with Google TV – 44:05

  • Does anyone need the Logitech G Cloud gaming handheld? – 46:59

  • Twitch is banning gambling streams on October 18 – 51:56

  • Working on – 55:34

  • Pop culture picks – 1:01:35


Hosts: Cherlynn Low and Devindra Hardawar
Guest: Sam Rutherford
Producer: Ben Ellman
Music: Dale North and Terrence O’Brien
Livestream producers: Julio Barrientos
Graphic artists: Luke Brooks and Brian Oh

All products recommended by Engadget are selected by our editorial team, independent of our parent company. Some of our stories include affiliate links. If you buy something through one of these links, we may earn an affiliate commission. All prices are correct at the time of publishing.



‘Portal’ will get ray tracing to show off NVIDIA’s 4000-series GPUs

Portal 3 may never happen, but at least we’ve got a new way to experience the original teleporting puzzle shooter. Today during his GTC keynote, NVIDIA CEO Jensen Huang announced Portal with RTX, a mod that adds support for real-time ray tracing and DLSS 3. Judging from the short trailer, it looks like the Portal we all know and love, except now the lighting around portals bleeds into their surroundings, and just about every surface is deliciously reflective.

Similar to what we saw with Minecraft RTX, Portal’s ray tracing mod adds a tremendous amount of depth to a very familiar game. And thanks to DLSS 3, the latest version of NVIDIA’s super sampling technology, it also performs smoothly with plenty of RTX bells and whistles turned on. This footage likely came from the obscenely powerful RTX 4090, but it’ll be interesting to see how well Portal with RTX performs on NVIDIA’s older 2000-series cards. Current Portal owners will be able to play the RTX mod in November.  



Huang says the company developed the RTX mod inside of its Omniverse environment. To take that concept further, NVIDIA is also launching RTX Remix, an application that will let you capture existing game scenes and tweak their objects and environments with high resolution textures and realistic lighting. The company’s AI tools can automatically give materials “physically accurate” properties—a ceiling in Morrowind, for example, becomes reflective after going through RTX Remix. You’ll be able to export remixed scenes as mods, and other players will be able to play them through the RTX renderer. 




EVGA is done making GPUs, and it’s because of Nvidia

Among Nvidia’s third-party GPU manufacturers, EVGA is perhaps the most famous. The brand is well known for high-quality RTX and GTX graphics cards with generous consumer policies, as well as power supplies, coolers, and motherboards. The partnership between Nvidia and EVGA, which lasted over two decades, is now over, however; not only will EVGA stop making Nvidia GPUs, it has no plans to make any GPUs ever again. It’s not a clean breakup, either.

EVGA Terminates NVIDIA Partnership, Cites Disrespectful Treatment

In a statement to Gamers Nexus, which broke the news, EVGA stated “this is not a financial decision, it is a principled decision.” EVGA has accused Nvidia of keeping partners out of the loop on future products, cutting GPU prices without warning, and limiting what prices GPUs can be set at. According to an Nvidia staff member who spoke to Gamers Nexus, Nvidia CEO Jensen Huang sometimes wonders “why are these guys [EVGA and other Nvidia partners] making money when they’re not doing much?”

One of the main issues is that Nvidia sells its own Founders Edition models for significantly less than models from partners. EVGA reportedly loses hundreds of dollars on each RTX 3080, 3090, and 3090 Ti it sells, as it has to cut prices to remain competitive with Nvidia. That figure only accounts for manufacturing, however.

Although EVGA says this isn’t a financial decision, finances are certainly at play. Jon Peddie Research noted that the gross margin for Nvidia has continued to increase year after year while the already small margin for GPU partner companies has declined. In its 2022 estimate, Jon Peddie Research believes Nvidia will see about 65% gross margin for its entire business while AIB partners will just see 5%. Declining margins are down to increasing costs for production, R&D, and marketing. According to Jon Peddie Research, making up low margins on volume is no longer appealing.

Gamers Nexus was skeptical of EVGA’s story, however. In its report, host Steve Burke suggested the company had probably ordered too many GPUs during the crypto boom and was burned by the sudden decline in mining. Burke notes that something similar happened with EVGA’s RTX 20-series GPUs, when the company lost money in the six-figure range.

EVGA CEO Andrew Han might also have personal reasons for ending the partnership. Gamers Nexus says that Han, who is in his 60s and has been CEO since EVGA was founded in 2000, wants to spend more time with his family as he approaches retirement and feels that Nvidia’s allegedly disrespectful attitude is no longer worth the trouble.

EVGA isn’t going out of business, yet

EVGA RTX 3060 sitting on a table.

Although 78% of EVGA’s revenue comes from its graphics business, the company says it will continue to operate its other ventures. Its next-largest business is power supplies, which makes up only 20% of revenue but carries four times the gross margin of graphics. Losing the vast majority of its revenue is still a problem, but EVGA explicitly denied that there would be any layoffs.

Han also denied that he would sell EVGA. The company is apparently in a healthy financial position. Furthermore, the CEO didn’t want to tarnish EVGA’s reputation by selling it to another company that might only be interested in profit.

While EVGA could potentially partner with AMD or Intel to preserve its AIB GPU business, the company has made it clear that it will not be making any GPUs in the future. Gamers Nexus speculates that EVGA’s CEO may have personal reasons for not wanting to pursue a partnership with Nvidia’s competitors, similar to his personal reasons for terminating the partnership with Nvidia.

As for existing EVGA GPUs, the company confirmed it would honor warranties and RMAs as long as supplies last. Its supply of RTX 30-series cards will run out by the end of the year, however, and it’s not certain how easy it will then be for EVGA to uphold its warranties, whether or not the company is willing.



Have the Intel Arc GPUs been canceled? I sure hope not

A rumor is circulating that Intel’s Arc graphics cards are being canceled, and unlike previous rumors we’ve heard on the matter, this one seems to hold some weight. The first discrete desktop GPUs from Intel have seen ups and downs since being announced around a year ago, but this is the first word we’ve gotten that the company may abandon the project.

Headlines and YouTube thumbnails don’t tell the full story here, though, and they’re primed to spread misinformation considering that Intel’s first-gen Arc Alchemist GPUs aren’t available in the U.S. yet and should launch soon. It’s impossible to say if Arc will eventually bite the dust, but there’s a compelling reason it shouldn’t.


If you’re unaware, the rumor concerning Arc’s cancellation centers around a video from YouTube channel Moore’s Law is Dead (MILD). MILD is known for rumor and leak videos around CPUs and GPUs, several of which we’ve reported on in the past. This video shares several quotes that the YouTuber says were gathered from a range of sources, including one source that said: “The decision’s been made at the top to end discrete.”

To be clear, the video talks specifically about Arc as a segment at Intel, not the upcoming Arc Alchemist GPUs. The video claims that Intel plans to cancel the project — which is formally known as AXG or Accelerated Computing Systems and Graphics — beyond the launch of the second generation Battlemage GPUs. The rumor also specifically refers to discrete GPUs; Intel won’t stop making integrated GPUs any time soon.

Raja Koduri, Intel’s executive vice president of the AXG group, responded to the rumor with a tweet showing a road map Intel shared earlier this year.

Attached is what we said in Feb'22 and are continuing to execute this strategy.

— Raja Koduri (Bali Makaradhwaja) (@RajaXg) September 12, 2022

Koduri also shared that the Intel team is shrugging off these rumors, saying “They don’t help the team working hard to bring these to market, they don’t help the PC graphics community…one must wonder, who do they help?”

This isn’t the first time Intel executives have taken to Twitter to dispel rumors circulated by the YouTube channel. In July, Intel’s graphics marketing lead Ryan Shrout took to Twitter to clarify that an Intel Arc A780 never existed, which was a rumor that MILD started. The YouTuber stuck to their guns despite Shrout shooting the rumor down.

Despite some rumors to the contrary, there is no Intel Arc A780 and there was never planned to be an A780. Let’s just settle that debate. 🤣

— Ryan Shrout (@ryanshrout) July 16, 2022

Problems are expected, solutions are rare

It’s no secret that Intel has had several issues with Arc up to this point, but it’s still much too soon to see the multi-year AXG roadmap abandoned. Many of the issues present now, like the 40 issues found in Intel’s drivers, won’t persist across generations. The problems we’re seeing with Arc Alchemist now are the worst Intel will face, as future generations can learn from previous ones to deliver a better product.

The last thing you want to be is shortsighted when investing so much in a new group. Analyst Jon Peddie penned an editorial in late July estimating that Intel had invested around $3.5 billion in the AXG group — more than it had ever invested in another business. That estimate came around the time that Intel CEO Pat Gelsinger axed six businesses, saving Intel around $1.5 billion in costs. Gelsinger shared a few days ago that the company plans to exit other businesses in the future, as well.

Two Intel Arc GPUs running side by side.
Linus Tech Tips

Although the first generation of Arc graphics cards have seen problems, Intel is addressing them at breakneck pace. Many driver issues have been fixed, and in response to fans calling for more transparency, Intel has created a dedicated Intel Arc page that shares updates on features and more details about the GPUs.

If cancellation is on the minds of Intel executives, we don’t think it’s coming soon. And given the current state of the graphics card market, it shouldn’t come at all.

Why Intel shouldn’t cancel Arc

Although some analysts have called for Intel to dissolve the AXG group, Arc still represents a third player challenging the AMD/Nvidia duopoly, and it delivers in an area that neither AMD nor Nvidia has been able to fully capture. From what we know right now, the Intel Arc A750 should beat the RTX 3060 by about 13%, which is a decent boost if the card has the right price.

Intel Arc A750M Limited Edition graphics card sits on a desk.

Assuming it’s priced right, that’s a compelling offer for gamers. Even now that GPU prices are back to normal, the RTX 3060 still sells for around $400. If Intel can deliver the A750 at around $330 (the RTX 3060’s list price), it would have a very valuable mainstream GPU at a price point Nvidia and AMD haven’t hit quite yet.
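To make the value argument concrete, here is a quick back-of-the-envelope calculation using the figures above; note that the ~13% uplift is pre-launch speculation and the $330 Arc price is hypothetical:

```python
# Back-of-the-envelope performance-per-dollar comparison.
# The ~13% uplift is pre-launch speculation and the $330 Arc price is
# hypothetical; the $400 RTX 3060 figure is the street price cited above.

rtx_3060_price = 400   # typical street price, USD
rtx_3060_perf = 100    # normalized performance baseline
arc_a750_price = 330   # hypothetical list price, USD
arc_a750_perf = 113    # ~13% faster than the RTX 3060

def perf_per_dollar(perf, price):
    """Normalized performance points per dollar spent."""
    return perf / price

advantage = perf_per_dollar(arc_a750_perf, arc_a750_price) / \
            perf_per_dollar(rtx_3060_perf, rtx_3060_price) - 1
print(f"Hypothetical A750 value advantage: {advantage:.0%}")
```

Under those assumptions, the A750 would deliver roughly a third more performance per dollar, which is the kind of gap that moves mainstream buyers.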

And a lot of that doesn’t come down to performance, but features. Arc’s Xe Super Sampling (XeSS) feature is competitive with Nvidia’s Deep Learning Super Sampling (DLSS) based on what Intel has shared, and the ray tracing capabilities of Arc Alchemist, at least from an architectural standpoint, trump what AMD is currently offering (that may change with next-gen RX 7000 GPUs, though, for what it’s worth).

With that combination of features and the right price, Intel is set up for success. Even with GPU prices down, the options between $200 and $400 are pitiful. The RX 6500 XT is one of the worst GPUs in recent memory, and despite Nvidia listing cards in this price bracket, real models rarely sell for what Nvidia advertises.

If you look at Steam hardware stats, there’s a reason that Nvidia’s GTX cards still top the charts, as they offer a value that you just can’t get with current-gen offerings from AMD. As Nvidia’s CEO said in a recent earnings call, the average price of GPUs is going up, and that’s evidenced by cards like the 12GB RTX 3080. Even without scalpers in the mix, Intel has an opportunity to deliver sub-$400 GPUs that are competitive on performance and features for a segment of the market that has largely frozen.

It’s easy to assume Intel quickly spun up the AXG group to capitalize on the colossal GPU prices toward the end of 2020, but that’s not what happened. This has been a focus for Intel for years. We’re on the edge of next-gen GPUs from AMD and Nvidia, and given what we’ve seen over the past few generations, a third player to shake up that dynamic and force prices down is exactly what we need.



NVIDIA looks set to reveal its next-gen GeForce RTX GPUs on September 20th

NVIDIA’s GPU Technology Conference goes down this month, and the company has revealed when CEO Jensen Huang’s keynote will take place: You’ll be able to watch it at 11AM ET on September 20th. The keynote will kick off with a GeForce Beyond special broadcast.

The company says the event will include “the latest breakthroughs in gaming, creating and graphics technology.” NVIDIA is expected to reveal its RTX 40-series graphics cards during the broadcast; an image the company shared to promote the event includes the GeForce RTX logo. NVIDIA previously said it would release its next-generation GPUs this year. Those will supplant graphics cards based on the current Ampere architecture.

It remains to be seen just how well the RTX 40-series cards will perform. In the meantime, prices for 30-series GPUs have dropped after the cryptocurrency market cratered.

All products recommended by Engadget are selected by our editorial team, independent of our parent company. Some of our stories include affiliate links. If you buy something through one of these links, we may earn an affiliate commission. All prices are correct at the time of publishing.



Nvidia has an exciting announcement about the RTX 4000 GPUs

If you’ve been awaiting news about the upcoming Nvidia GeForce RTX 40-Series GPU lineup, it’s officially time to get excited. Nvidia has just announced that a “special broadcast” will take place on September 20.

“PC enthusiasts, don’t miss the GeForce Beyond special broadcast,” said Nvidia in its quick teaser, adding that this is an event you won’t want to miss. Here’s what’s happening and how you can tune in too.


— NVIDIA GeForce (@NVIDIAGeForce) September 7, 2022

While Nvidia doesn’t say it outright, it makes perfect sense for this to be the long-awaited RTX 4000 announcement. The company has hinted as much during a recent earnings call, where Nvidia CEO Jensen Huang said that we can expect to hear more about the next generation of GPUs “next month,” referring to September. This might still change, of course, but all signs point to it being the plan for now.

At 8 a.m. PT on September 20, Huang will present his GTC 2022 keynote. The keynote is said to revolve around Nvidia’s latest breakthroughs in gaming, content creation, and graphics technology. However, before the keynote actually begins, a special GeForce Beyond broadcast will take place. This is, presumably, when Nvidia will break the big news.

What to expect from Nvidia’s special announcement


It seems very likely, if not almost guaranteed, that Nvidia will use this broadcast to finally reveal more about its upcoming “Ada Lovelace” GPU lineup. This doesn’t mean that we’ll get to hear about the entire range, though.

The most likely GPU to lead the way is the flagship Nvidia GeForce RTX 4090, rumored to release ahead of its less powerful counterparts, the RTX 4080 and RTX 4070. Nvidia is almost certainly readying budget-friendly graphics cards too, but most rumors pinpoint the launch of these cheaper cards for early 2023. The new graphics cards are expected to deliver a marked leap in performance, with some sources saying they could even double the performance of the RTX 30-series.

There have also been whispers of a GPU that will allegedly be even more powerful than the RTX 4090. With a monstrous TBP that is said to be as high as 900 watts, this GPU could either be an RTX 4090 Ti or a Titan GPU if Nvidia chooses to bring that back.

It’s difficult to say just how much Nvidia will reveal on September 20, but an announcement of the initial wave of GPUs seems likely. Whether we learn the exact release dates and pricing remains to be seen.

How to watch the announcement

NVIDIA CEO Jensen Huang on stage.

Anyone can watch the keynote live when it takes place on September 20. Nvidia will be streaming it on its GTC 2022 website, and it will also likely end up on YouTube closer to the date.

If you’re not able to watch it, don’t worry — we’ll keep you posted with all the latest news about Nvidia GPUs.



6 best Nvidia GPUs of all time

Nvidia sets the standard so high for its gaming graphics cards that it’s actually hard to tell the difference between an Nvidia GPU that’s merely a winner and an Nvidia GPU that’s really special.

Nvidia has long been the dominant player in the graphics card market, but the company has from time to time been put under serious pressure by its main rival AMD, which has launched several iconic GPUs of its own. Those challenges only set Nvidia up for a major comeback, however, and sometimes that comeback led to a real game-changing card.

It was hard to choose which Nvidia GPUs were truly worthy of being called the best of all time, but I’ve narrowed down the list to six cards that were truly important and made history.

GeForce 256

The very first

VGA Museum

Although Nvidia often claims the GeForce 256 was the world’s first GPU, that’s only true if Nvidia is the only company that gets to define what a GPU is. Before GeForce, Nvidia sold the RIVA series of graphics cards, and other companies were making their own competing graphics cards then, too. What Nvidia really invented was the marketing of graphics cards as GPUs; in 1999, when the 256 came out, terms like graphics card and graphics chipset were more common.

Nvidia is right that the 256 was important, however. Before the 256, the CPU played a very important role in rendering graphics, to the point where the CPU was directly completing steps in rendering a 3D environment. CPUs were not very efficient at this work, however, which is where the 256 came in with hardware transform and lighting (T&L), offloading the two most CPU-intensive parts of rendering onto the GPU. This is one of the primary reasons why Nvidia claims the 256 was the first GPU.
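To make “transform and lighting” concrete: it refers to two per-vertex computations, multiplying each vertex position by a transform matrix and evaluating a lighting equation against the vertex normal. The sketch below illustrates the kind of per-frame loop a late-’90s CPU had to run before hardware T&L; real pipelines use 4x4 matrices and richer lighting models, so this is only illustrative:

```python
# Simplified per-vertex transform-and-lighting loop, the kind of work
# the CPU did for every vertex, every frame, before hardware T&L.

def transform(vertex, matrix):
    """Apply a 3x3 transform matrix to a vertex (row-vector convention)."""
    return [sum(vertex[i] * matrix[i][j] for i in range(3)) for j in range(3)]

def diffuse_light(normal, light_dir, intensity=1.0):
    """Lambertian diffuse term: brightness = max(0, N . L) * intensity."""
    dot = sum(n * l for n, l in zip(normal, light_dir))
    return max(0.0, dot) * intensity

identity = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
vertices = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]]
normals = [[0.0, 0.0, 1.0], [0.0, 1.0, 0.0]]
light = [0.0, 0.0, 1.0]

# The per-frame loop that hardware T&L offloaded from the CPU.
for v, n in zip(vertices, normals):
    pos = transform(v, identity)
    brightness = diffuse_light(n, light)
    print(pos, brightness)
```

Multiply that loop by tens of thousands of vertices at 30 frames per second and it is easy to see why dedicated hardware for this math was a big deal.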

As a product, the GeForce 256 wasn’t exactly legendary: Anandtech wasn’t super impressed by its price-to-performance ratio at the time of its release. Part of the problem was the 256’s memory, which was single data rate, or SDR. Due to other advances, SDR was becoming insufficient for GPUs of this performance level. A faster double data rate, or DDR, version (the same DDR as in DDR5) launched just before the end of 1999, which finally met Anandtech’s performance expectations, but the increased price tag of the DDR version was hard to swallow.
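The SDR-versus-DDR gap comes down to transfers per clock: DDR moves data on both the rising and falling clock edge, doubling peak bandwidth at a similar clock. A rough sketch, assuming a 128-bit memory bus and approximate clocks for the two GeForce 256 variants (166MHz SDR, 150MHz DDR):

```python
# Peak memory bandwidth = bus width (bytes) * memory clock * transfers/clock.
# The 128-bit bus and clock figures are approximate launch-era specs for
# the two GeForce 256 variants, used here only for illustration.

BUS_WIDTH_BITS = 128

def bandwidth_gbps(clock_mhz, transfers_per_clock):
    """Peak memory bandwidth in GB/s (1 GB = 1e9 bytes)."""
    bytes_per_transfer = BUS_WIDTH_BITS // 8
    return clock_mhz * 1e6 * transfers_per_clock * bytes_per_transfer / 1e9

sdr = bandwidth_gbps(166, 1)   # single data rate: one transfer per cycle
ddr = bandwidth_gbps(150, 2)   # double data rate: two transfers per cycle
print(f"SDR: {sdr:.1f} GB/s, DDR: {ddr:.1f} GB/s")
```

Even at a slightly lower clock, the DDR card ends up with nearly twice the bandwidth, which is why the SDR version choked at this performance level.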

The GeForce 256, first of its name, is certainly historic, but not because it was an amazing product. The 256 is important because it inaugurated the modern era of GPUs. The graphics card market wasn’t always a duopoly; back in the ’90s, there were multiple companies competing against each other, with Nvidia being just one of them. Soon after the GeForce 256 launched, most of Nvidia’s rivals exited the market. 3dfx’s Voodoo 5 GPUs were uncompetitive, and before the company went bankrupt, many of its technologies were bought by Nvidia; Matrox simply quit gaming GPUs altogether to focus on professional graphics.

By the end of 2000, the only other graphics company in town was ATI. When AMD acquired ATI in 2006, it brought about the modern Nvidia and AMD rivalry we all know today.

GeForce 8800 GTX

A monumental leap forward

The GeForce 8800 GTX.
VGA Museum

After the GeForce 256, Nvidia and ATI attempted to best each other with ever faster GPUs. In 2002, however, ATI threw down the gauntlet by launching its Radeon 9000 series, and at a die size of around 200mm squared, the flagship Radeon 9700 Pro was easily the largest GPU yet. Nvidia’s flagship GeForce4 Ti 4600, at roughly 100mm squared, had no hope of keeping up, and the 9700 Pro inflicted a crushing defeat on Nvidia. Making a GPU was no longer just about the architecture, the memory, or the drivers; in order to win, Nvidia would need to make big GPUs like ATI did.

For the next four years, the size of flagship GPUs continued to increase, and by 2005, both companies had launched GPUs around 300mm squared. Although Nvidia had regained the upper hand during this time, ATI was never far behind, and its Radeon X1000 series was fairly competitive. A GPU sized at 300mm squared was far from the limit of what Nvidia could do, however. In 2006, Nvidia released its GeForce 8 series, led by the flagship 8800 GTX. Its GPU, codenamed G80, was nearly 500mm squared, and its transistor count was almost three times higher than the last GeForce flagship’s.

The 8800 GTX did to ATI what the Radeon 9700 Pro and the rest of the 9000 series did to Nvidia, with Anandtech describing the moment as “9700 Pro-like.” A single 8800 GTX was almost twice as fast as ATI’s top-end X1950 XTX, not to mention much more efficient. At $599, the 8800 GTX was more expensive than its predecessors, but its high level of performance and DirectX 10 support made up for it.

But this was mostly the end of the big GPU arms race that had characterized the early 2000s, for two main reasons. First, 500mm squared was getting pretty close to the limit of how large a GPU could be; even today, 500mm squared is relatively big for a processor. Even if Nvidia had wanted to, making a bigger GPU just wasn’t feasible. Second, ATI wasn’t working on its own 500mm squared GPU anyway, so Nvidia wasn’t in a rush to get an even bigger GPU to market. Nvidia had basically won the arms race by outspending ATI.

That year also saw the acquisition of ATI by AMD, which was finalized just before the 8800 GTX launched. Although ATI now had the backing of AMD, it really seemed like Nvidia had such a massive lead that Radeon wouldn’t challenge GeForce for a long time, perhaps never again.

GeForce GTX 680

Beating AMD at its own game

The GeForce GTX 680.

Nvidia’s next landmark release came in 2008 when it launched the GTX 200 series, starting with the GTX 280 and GTX 260. At nearly 600mm squared, the 280 was a worthy, monstrous successor to the 8800 GTX. Meanwhile, AMD and ATI signaled that they would no longer launch high-end GPUs with big dies, instead focusing on smaller GPUs in a gambit known as the small die strategy. In its review, Anandtech said “Nvidia will be left all alone with top performance for the foreseeable future.” As it turned out, the next four years were pretty rough for Nvidia.

Starting with the HD 4000 series in 2008, AMD assaulted Nvidia with small GPUs that had high value and almost flagship levels of performance, and that dynamic was maintained throughout the next few generations. Nvidia’s GTX 280 wasn’t cost effective enough, then the GTX 400 series was delayed, and the 500 series was too hot and power hungry.

One of Nvidia’s traditional weaknesses was its disadvantage when it came to process, the way processors are manufactured. Nvidia was usually behind AMD, but it had finally caught up by using the 40nm node for the 400 series. AMD, however, wanted to regain the process lead quickly and decided its next generation would be on the new 28nm node, and Nvidia decided to follow suit.

AMD won the race to 28nm with its HD 7000 series, and its flagship HD 7970 put AMD back in first place for performance. However, the GTX 680 launched just two months later, and it beat the 7970 not only in performance but also in power efficiency and even die size. As Anandtech put it, Nvidia had “landed the technical trifecta,” completely turning the tables on AMD. AMD did reclaim the performance crown by launching the HD 7970 GHz Edition later in 2012 (notable for being the first 1GHz GPU), but having the lead in efficiency and performance per square millimeter was a good sign for Nvidia.

The back and forth battle between Nvidia and AMD was pretty exciting after how disappointing the GTX 400 and 500 series had been, and while the 680 wasn’t an 8800 GTX, it signaled Nvidia’s return to being truly competitive against AMD. Perhaps most importantly, Nvidia was no longer weighed down by its traditional process disadvantage, and that would eventually pay off in a big way.

GeForce GTX 980

Nvidia’s dominance begins

The GeForce GTX 980.
Bill Roberson/Digital Trends

Nvidia found itself in a very good spot with the GTX 600 series, and it was because of TSMC’s 28nm process. Under normal circumstances, AMD would have simply gone to TSMC’s next process in order to regain its traditional advantage, but this was no longer an option. TSMC and all other foundries in the world (except for Intel) had an extraordinary amount of difficulty progressing beyond the 28nm node. New technologies were needed in order to progress further, which meant Nvidia didn’t have to worry about AMD regaining the process lead any time soon.

Following a few years of back and forth and AMD floundering with limited funds, Nvidia launched the GTX 900 series in 2014, inaugurated by the GTX 980. Based on the new Maxwell architecture, it was an incredible improvement over the GTX 600 and 700 series despite being on the same node. The 980 was between 30% and 40% faster than the 780 while consuming less power, and it was even a smidge faster than the top-end 780 Ti. Of course, the 980 also beat the R9 290X, once again landing the trifecta of performance, power efficiency, and die size. In its review, Anandtech said the 980 came “very, very close to doing to the Radeon 290X what the GTX 680 did to Radeon HD 7970.”

AMD was incapable of responding. It didn’t have a next-generation GPU ready to launch in 2014. In fact, AMD wasn’t even working on a complete lineup of brand-new GPUs to even the score with Nvidia. Instead, AMD was planning to rebrand the Radeon 200 series as the Radeon 300 series and develop one new GPU to serve as the flagship. All of these GPUs were to launch in mid-2015, ceding the entire GPU market to Nvidia for nearly a full year. Of course, Nvidia wanted to pull the rug out from under AMD and prepared a brand-new flagship.

Launching in mid-2015, the GTX 980 Ti was about 30% faster than the GTX 980, thanks to its significantly higher power consumption and larger die size of just over 600mm squared. It beat AMD’s brand-new R9 Fury X a month before that card even launched. Although the Fury X wasn’t bad, it had lower performance than the 980 Ti, higher power consumption, and much less VRAM. It was a demonstration of how far ahead Nvidia was with the 900 series; while AMD was hastily trying to get the Fury X out the door, Nvidia could have launched the 980 Ti any time it wanted.

Anandtech put it pretty well: “The fact that they get so close only to be outmaneuvered by Nvidia once again makes the current situation all the more painful; it’s one thing to lose to Nvidia by feet, but to lose by inches only reminds you of just how close they got, how they almost upset Nvidia.”

Nvidia was basically a year ahead of AMD technologically, and while what it had done with the GTX 900 series was impressive, it was also a bit depressing. People wanted to see Nvidia and AMD duke it out like they had in 2012 and 2013, but it started to look like that was all in the past. Nvidia’s next GPU would certainly reaffirm that feeling.

GeForce GTX 1080

The GPU with no competition but itself

The GeForce GTX 1080.

In 2015, TSMC finally completed its 16nm process, which could achieve 40% higher clock speeds than 28nm at the same power, or half the power of 28nm at similar clock speeds. However, Nvidia planned to move to 16nm in 2016, when the node was more mature. Meanwhile, AMD had no plans to use TSMC’s 16nm, instead launching new GPUs and CPUs on GlobalFoundries’s 14nm process. But don’t be fooled by the names: TSMC’s 16nm was, and is, better than GlobalFoundries’s 14nm. After 28nm, process names became marketing terms rather than scientific measurements. This meant that, for the first time in modern GPU history, Nvidia had the process advantage over AMD.
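Those trade-offs follow from the standard dynamic power model for CMOS logic, P ≈ C·V²·f: a smaller node cuts switching capacitance and operating voltage, and a designer can spend the savings on clock speed, on power, or on a mix of both. A toy illustration; the coefficients below are invented to mirror the claims above, not TSMC’s actual figures:

```python
# Dynamic power of CMOS logic scales roughly as P = C * V^2 * f.
# The capacitance and voltage values below are invented to mirror the
# marketing claims quoted above (either ~40% higher clocks at the same
# power, or the same clocks at roughly half the power).

def dynamic_power(capacitance, voltage, freq_ghz):
    """Relative dynamic power for given switching capacitance, voltage, clock."""
    return capacitance * voltage**2 * freq_ghz

# Hypothetical 28nm baseline, normalized to power = 1.0.
p_28 = dynamic_power(capacitance=1.00, voltage=1.00, freq_ghz=1.0)

# 16nm option A: spend the savings on clocks (+40%) at the same power.
p_16_fast = dynamic_power(capacitance=0.65, voltage=1.05, freq_ghz=1.4)

# 16nm option B: keep the clock and pocket roughly half the power.
p_16_cool = dynamic_power(capacitance=0.65, voltage=0.88, freq_ghz=1.0)

print(p_28, round(p_16_fast, 2), round(p_16_cool, 2))
```

The same formula explains why voltage is the lever that matters most: it enters the equation squared, so even modest voltage drops from a node shrink buy a lot of headroom.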

The GTX 10-series launched in mid-2016, based on the new Pascal architecture and TSMC’s 16nm node. Pascal wasn’t actually very different from Maxwell, but the jump from 28nm to 16nm was massive, like Intel going from 14nm on Skylake to 10nm on Alder Lake. The GTX 1080 was the new flagship, and it’s hard to overstate how fast it was. The GTX 980 was a little faster than the GTX 780 Ti when it came out. By contrast, the GTX 1080 was over 30% faster than the GTX 980 Ti, and for $50 less, too. The die size of the 1080 was also extremely impressive, at just over 300mm squared, nearly half the size of the 980 Ti.

With the 1080 and the rest of the 10-series lineup, Nvidia effectively took the entire desktop GPU market for itself. AMD’s 300 series and the Fury X were simply no match. At the midrange, AMD launched the RX 400 series, but these were just three low- to mid-range GPUs, a throwback to the small die strategy minus the part where Nvidia’s flagship was within striking distance, as it had been with the GTX 280 and the HD 4870. In fact, the 1080 was nearly twice as fast as the RX 480. The only GPU AMD could really beat was the mid-range GTX 1060, as the slightly cut-down GTX 1070 was just a little too fast to lose to the Fury X.

AMD did eventually launch new high-end GPUs in the form of RX Vega, a full year after the 1080 came out. With much higher power consumption and the same selling price, the flagship RX Vega 64 beat the GTX 1080 by a hair but wasn’t very competitive. However, the GTX 1080 was no longer Nvidia’s flagship. With the 1080’s relatively small die size and a full year to prepare, Nvidia had launched a brand-new flagship a whole three months before RX Vega arrived; it was a repeat of the 980 Ti. The new GTX 1080 Ti was even faster than the GTX 1080, delivering yet another 30% improvement in performance. As Anandtech put it, the 1080 Ti “further solidifie[d] Nvidia’s dominance of the high-end video card market.”

AMD’s failure to deliver a truly competitive high-end GPU meant that the 1080’s only real competition was Nvidia’s own GTX 1080 Ti. With the 1080 and the 1080 Ti, Nvidia achieved what is perhaps the most complete victory we’ve seen so far in modern GPU history. Over the preceding four years, Nvidia had kept increasing its technological advantage over AMD, and it was hard to see how Nvidia could ever lose.

GeForce RTX 3080

Correcting course

RTX 3080 graphics card on a table.

After such a long and incredible streak of wins, perhaps it was inevitable that Nvidia would succumb to hubris and lose sight of what made Nvidia’s great GPUs so great. Nvidia did not follow up the GTX 10 series with yet another GPU with a stunning increase in performance, but with the infamous RTX 20 series. Perhaps in a move to cut AMD out of the GPU market, Nvidia focused on introducing hardware accelerated ray tracing and A.I. upscaling instead of delivering better performance in general. If successful, Nvidia could make AMD GPUs irrelevant until the company finally made Radeon GPUs with built-in ray tracing.

The RTX 20-series was a bit of a flop. When the RTX 2080 and 2080 Ti launched in late 2018, there weren’t even any games that supported ray tracing or deep learning super sampling (DLSS), but Nvidia priced RTX 20-series cards as if those features made all the difference. At $699, the 2080 had a nonsensical price, and the 2080 Ti’s $1,199 price tag was even more insane. Nvidia wasn’t even competing with itself anymore.

The performance improvement in existing titles was extremely disappointing, too; the RTX 2080 was only 11% faster than the GTX 1080, though at least the RTX 2080 Ti was around 30% faster than the GTX 1080 Ti.

The next two years were a course correction for Nvidia. The threat from AMD was starting to become pretty serious; the company had finally regained the process advantage by moving to TSMC’s 7nm, and it launched the RX 5700 XT in mid-2019. Nvidia was able to head it off once again by launching new GPUs, this time the RTX 20 Super series with a focus on value, but the 5700 XT must have worried Nvidia. The RTX 2080 Ti was three times as large yet only 50% faster, meaning AMD was achieving much higher performance per square millimeter. If AMD made a larger GPU, it could be difficult to beat.
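The performance-per-area comparison above can be put in numbers. The die sizes below are approximate public figures, and the 50% uplift is the estimate cited in this article, with the RX 5700 XT normalized to 100:

```python
# Performance per square millimeter for the two cards compared above.
# Die sizes are approximate public figures; the "50% faster" uplift is
# this article's estimate, not a measured benchmark.

rtx_2080_ti_die = 754   # TU102 die size in mm^2 (approximate)
rx_5700_xt_die = 251    # Navi 10 die size in mm^2 (approximate)

rx_5700_xt_perf = 100   # normalized baseline
rtx_2080_ti_perf = 150  # ~50% faster, per the article

nvidia_density = rtx_2080_ti_perf / rtx_2080_ti_die
amd_density = rx_5700_xt_perf / rx_5700_xt_die

print(f"AMD perf per mm^2 advantage: {amd_density / nvidia_density:.1f}x")
```

On these rough numbers, AMD was extracting about twice the performance from each square millimeter of silicon, which is why a scaled-up RDNA die looked so threatening.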

Both Nvidia and AMD planned for a big showdown in 2020. Nvidia recognized AMD’s potential and pulled out all the stops: the new 8nm process from Samsung, the new Ampere architecture, and an emphasis on big GPUs. AMD meanwhile stayed on TSMC’s 7nm process but introduced the new RDNA 2 architecture and would also be launching a big GPU, its first since RX Vega in 2017. The last time both companies launched brand-new flagships within the same year was 2013, nearly a decade ago. Though the pandemic threatened to ruin the plans of both companies, neither company was willing to delay the next generation and launched as planned.

Nvidia fired first with the RTX 30-series, led by the flagship RTX 3090, but most of the focus was on the RTX 3080 since at $699 it was far more affordable than the $1,499 3090. Instead of being a repeat of the RTX 20-series, the 3080 delivered a sizeable 30% bump in performance at 4K over the RTX 2080 Ti, though the power consumption was a little high. At lower resolutions, the performance gain of the 3080 was somewhat less, but since the 3080 was very capable at 4K, it was easy to overlook this. The 3080 also benefitted from a wider variety of games supporting ray tracing and DLSS, giving value to having an Nvidia GPU with those features.

Of course, this wouldn’t matter if the 3080 and the rest of the RTX 30-series couldn’t stand up to AMD’s new RX 6000 series, which launched two months later. At $649, the RX 6800 XT was AMD’s answer to the RTX 3080. With nearly identical performance in most games and at most resolutions, the battle between the 3080 and the 6800 XT was reminiscent of the GTX 680 and the HD 7970. Each company had its advantages and disadvantages, with AMD leading in power efficiency and performance while Nvidia had better performance in ray tracing and support for other features like A.I. upscaling.

The excitement over a new episode in the GPU war quickly died out, though, because it soon became apparent that nobody could buy RTX 30 or RX 6000 cards, or any GPUs at all. The pandemic had seriously reduced supply while crypto increased demand, and scalpers snatched up as many GPUs as they could. At the time of writing, the shortage has mostly ended, but most Nvidia GPUs still sell for $100 or more over MSRP. Thankfully, higher-end GPUs like the RTX 3080 can be found closer to MSRP than lower-end 30-series cards, which keeps the 3080 a viable option.

On the whole, the RTX 3080 was a much-needed correction from Nvidia. Although the 3080 has marked the end of Nvidia’s near total domination of the desktop GPU market, it’s hard not to give the company credit for not losing to AMD. After all, the RX 6000 series is on a much better process and AMD has been extremely aggressive these past few years. And besides, it’s good to finally see a close race between Nvidia and AMD where both sides are trying really hard to win.

So what’s next?

Unlike AMD, Nvidia always keeps its cards close to its chest and rarely ever reveals information on upcoming products. We can be pretty confident the upcoming RTX 40 series will launch sometime this year, but everything else is uncertain. One of the more interesting rumors is that Nvidia will utilize TSMC’s 5nm for RTX 40 GPUs, and if this is true then that means Nvidia will have parity with AMD once again.

But I think that as long as RTX 40 isn’t another RTX 20 and provides more low-end and mid-range options than RTX 30, Nvidia should have a good enough product next generation. I would really like for it to be so good that it makes the list of best Nvidia GPUs of all time, but we’ll have to wait and see.
