Nvidia addresses rumors about RTX 40 GPUs’ power consumption

The new Nvidia GeForce RTX 40 lineup includes some of the most power-hungry graphics cards on the market. Because of that, you may be wondering if you’ll need a new power supply (PSU) in order to support the borderline monstrous capabilities of the RTX 4090.

To answer some of these concerns, Nvidia released new information about the power consumption of its new GPUs. The conclusion? Well, it’s really not all that bad after all.

Prior to the official announcement of the RTX 40-series, the cards had been the subject of much power-related speculation. The flagship RTX 4090 received the most coverage of all, with many rumors pointing toward insane requirements along the lines of 800-900W. Fortunately, we now know that those rumors weren’t true.

The RTX 4090 has a TGP of 450W, the same as the RTX 3090 Ti, and calls for a minimum 850W PSU. The RTX 4080 16GB takes things down a few notches with a 320W TGP and a 750W power supply. Lastly, the RTX 4070 in disguise, also known as the RTX 4080 12GB, draws 285W and calls for a 700W PSU.

Nvidia claims that this is not an increase from the previous generation, but it kind of is — after all, the RTX 3090 had a TGP of 350W. With that said, it’s not as bad as we had thought, but many are still left to wonder if they need to upgrade their existing PSUs or not.

Nvidia has now assured its customers that they can stick to the PSU they currently own as long as it meets the wattage requirements for that given card.
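If you want to sanity-check your own setup, the logic really is that simple. Here is a minimal, purely illustrative Python sketch built from the minimum-wattage figures quoted above; it ignores the rest of your system, which you should still budget for.

```python
# Toy check mirroring Nvidia's guidance: an existing PSU is fine as long as it
# meets the minimum wattage Nvidia quotes for the card. Values are the minimums
# stated above; a real build should also budget for the CPU and other components.
MIN_PSU_WATTS = {
    "RTX 4090": 850,
    "RTX 4080 16GB": 750,
    "RTX 4080 12GB": 700,
}

def psu_is_sufficient(card: str, psu_watts: int) -> bool:
    return psu_watts >= MIN_PSU_WATTS[card]

print(psu_is_sufficient("RTX 4090", 850))       # True
print(psu_is_sufficient("RTX 4080 16GB", 650))  # False -- below the 750W minimum
```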

Similarly, Nvidia doesn’t expect there to be any problems when it comes to 8-pin to PCIe Gen 5 16-pin adapter compatibility. As said by Nvidia on its FAQ page: “The adapter has active circuits inside that translate the 8-pin plug status to the correct sideband signals according to the PCIe Gen 5 (ATX 3.0) spec.”

There’s also another fun little fact to be found in that FAQ: Nvidia confirms that the so-called smart power adapter will detect the number of 8-pin connectors that are plugged in. When four such connectors are used versus just three, it will enable the RTX 4090 to draw more power (up to 600 watts) for extra overclocking capabilities.
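In other words, the adapter’s behavior boils down to a simple lookup. The sketch below assumes the card’s stock 450W limit applies with three plugs, since Nvidia’s FAQ only spells out the 600W ceiling for four.

```python
# Sketch of the smart adapter behavior described in Nvidia's FAQ. The 600W ceiling
# with four 8-pin plugs is stated by Nvidia; the 450W figure for three plugs is an
# assumption based on the RTX 4090's stock 450W TGP.
def rtx_4090_power_limit(eight_pin_plugs: int) -> int:
    if eight_pin_plugs >= 4:
        return 600  # extra headroom for overclocking
    return 450      # assumed stock limit with three plugs

print(rtx_4090_power_limit(3))  # 450
print(rtx_4090_power_limit(4))  # 600
```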

Nvidia CEO Jensen Huang with an RTX 4090 graphics card.

There have also been questions about the durability of the PCIe 5.0 connectors, which are rated at 30 cycles. Some might consider that rating low, but Nvidia points out that this has been the norm for power connectors for at least the past 20 years.

Lastly, Nvidia addressed the possibility of an overcurrent or overpower risk when using the 16-pin power connector with non-ATX 3.0 power supply units. It had, indeed, spotted an issue during the early stages of development, but it has since been resolved. Again, seemingly nothing to worry about there.

All in all, the power consumption fears have largely been squelched. Nvidia did ramp up the power requirements, but not as significantly as expected, so as long as your PSU matches what the card asks for, you should be fine. Let’s not breathe that sigh of relief yet, though — the RTX 4090 Ti might still happen, and that will likely be one power-hungry beast.

Engadget Podcast: The repairable iPhone 14 and NVIDIA’s RTX 4000 GPUs

Surprise! The iPhone 14 is pretty repairable, it turns out. This week, Cherlynn and Devindra chat with Engadget’s Sam Rutherford about this move towards greater repairability and what it means for future iPhones. Also, they dive into NVIDIA’s powerful (and expensive!) new RTX 4080 and 4090 GPUs. Sure, they’re faster than before, but does anyone really need all that power?

Listen above, or subscribe on your podcast app of choice. If you’ve got suggestions or topics you’d like covered on the show, be sure to email us or drop a note in the comments! And be sure to check out our other podcasts, the Morning After and Engadget News!

Topics

  • The iPhone 14 is surprisingly repairable – 1:17

  • NVIDIA announces RTX 4090 and 4080 GPUs (and a Portal mod with ray tracing) – 21:08

  • Huge hack at Rockstar leaks GTA 6 videos and dev code – 34:22

  • Uber was also hacked last week by the same crew that hit Rockstar – 38:37

  • Windows 11 2022 Update – 40:21

  • Google is offering a $30 1080p HDR Chromecast with Google TV – 44:05

  • Does anyone need the Logitech G Cloud gaming handheld? – 46:59

  • Twitch is banning gambling streams on October 18 – 51:56

  • Working on – 55:34

  • Pop culture picks – 1:01:35

Credits
Hosts: Cherlynn Low and Devindra Hardawar
Guest: Sam Rutherford
Producer: Ben Ellman
Music: Dale North and Terrence O’Brien
Livestream producers: Julio Barrientos
Graphic artists: Luke Brooks and Brian Oh

The Aorus RTX 4090 Master is the biggest GPU we’ve ever seen

Gigabyte’s Aorus RTX 4090 Master is the biggest GPU we’ve ever seen. We don’t yet know the full specs of this GeForce RTX 4090 model, but we do know we’re going to need a very large case to house this beast.

This is a monster unit. It needs four slots all to itself on a motherboard. It comes with three 11 cm fans. It is 35.8 cm (14.1 inches) long and 16.2 cm (6.4 inches) wide, meaning we could literally stack several smaller RTX cards inside of it and still have some room to spare. Videocardz.com did the math and determined they could fit 10 Radeon RX 6400 cards inside.

This huge cooler should have no trouble housing the GeForce RTX 4090, which Nvidia announced on September 20. It’s the newest graphics card featuring the Ada Lovelace architecture, offering better ray tracing, significantly improved rendering, and DLSS 3, and it comes with 24GB of GDDR6X memory.

Nvidia is promising a 2.5GHz boost clock on the 4090, which gulps down 450W of power. The RTX 4090 doesn’t come out until October 12, so we’ll need to wait to put it through real-world use, but we can expect impressive performance thanks to the big jump in CUDA core count.

But we are drooling at the thought of a maxed-out, high-end RTX 4090 inside this enormous Aorus Master shroud. The power draw must be impressive, and we’re not looking forward to the electric bill (nor the incredibly high prices), but maybe this is the card that will finally catapult gaming into the next era. After all, this is still a GPU waiting for a game worthy of its power.

Nvidia’s DLSS 3 may cut the RTX 4090’s insane power demands

Nvidia’s upcoming flagship, the RTX 4090, was tested in Cyberpunk 2077. It did a great job, but the results were far better with DLSS 3 enabled.

The card managed to surprise us in two ways. One, the maximum clock was higher than expected, and two, DLSS 3 actually managed to lower the card’s power draw by a considerable amount.

The card was tested in 1440p in a system with an Intel Core i9-12900K CPU, running the highest possible settings that Cyberpunk 2077 has to offer, meaning with ultra ray tracing enabled and on Psycho (max) settings. First, let’s look at how the GPU was doing without DLSS 3 enabled.

At the native resolution, the game was running at an average of 59 frames per second (fps) with a latency that hovered around 72 to 75 milliseconds (ms). The RTX 4090 was able to hit a whopping 2.8GHz clock speed, and that’s without overclocking — those are stock speeds, even though the maximum advertised boost clock for the RTX 4090 is just over 2.5GHz. That works out to an increase of roughly 11% over the advertised figure. During the demo, the GPU reached 100% utilization, but temperatures stayed reasonable at around 55 degrees Celsius.

It’s a different story once DLSS 3 is toggled on, though. As Wccftech notes in its report, the GPU was using a pre-release version of DLSS 3, so these results might still change. For now, however, DLSS 3 is looking more and more impressive by the minute.

Enabling DLSS 3 also enables the DLSS Frame Generation setting, and for this test, the Quality preset was used. Once again, the GPU hit maximum utilization and a 2.8GHz boost clock, but the temperature was closer to 50C rather than 55C. The fps gains were nothing short of massive, hitting 119 fps and an average latency of 53ms. This means that the frame rates doubled while the latency was reduced by 30%.

We also have the power consumption figures for both DLSS 3 on and off, and this is where it gets even more impressive. Without DLSS 3, the GPU was consuming 461 watts of power on average, and the performance per watt (Frames/Joule) was rated at 0.135 points. Enabling DLSS 3 brought the wattage down to just 348 watts, meaning a reduction of 25%, while the performance per watt was boosted to 0.513 — nearly four times that of the test without DLSS 3.
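If you want to double-check those claims, the math works out from the reported figures. Here is a quick, illustrative sanity check in Python; the inputs are Wccftech’s pre-release numbers as quoted above, so treat the outputs accordingly.

```python
# Sanity check of the reported Cyberpunk 2077 figures (pre-release DLSS 3, so
# these may change). All inputs come from the numbers quoted above.
fps_native, fps_dlss3 = 59, 119
watts_native, watts_dlss3 = 461, 348
latency_native, latency_dlss3 = 73.5, 53      # midpoint of 72-75ms vs 53ms
ppw_native, ppw_dlss3 = 0.135, 0.513          # reported frames-per-joule ratings

print(f"Frame rate gain: {fps_dlss3 / fps_native:.2f}x")                        # ~2.0x
print(f"Latency reduction: {(1 - latency_dlss3 / latency_native) * 100:.0f}%")  # ~28%
print(f"Power reduction: {(1 - watts_dlss3 / watts_native) * 100:.0f}%")        # ~25%
print(f"Performance-per-watt gain: {ppw_dlss3 / ppw_native:.1f}x")              # ~3.8x
```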

The RTX 4090 among green stripes.

Wccftech also tested this on an RTX 3090 Ti and found similar, albeit less dramatic, results. That GPU still saw a 64% boost in performance and a 10% drop in power draw, so while its efficiency gains aren’t as impressive, the comparison confirms that the RTX 4090 with DLSS 3 is a real upgrade over its predecessor.

The reason behind this unexpected difference in power consumption might lie in how the GPU is utilized with DLSS 3 enabled. Part of the load shifts from the FP32 shader cores to the Tensor cores, which lightens the load on the GPU as a whole and, as a result, cuts power consumption.

It’s no news that the RTX 4090 is one power-hungry card, so it’s good to see that DLSS 3 might be able to bring those figures down a notch or two. Now, all we need is a game that can fully take advantage of this kind of performance. Nvidia’s GeForce RTX 4090 is set to release on October 12 and will arrive with a $1,599 price tag. With less than a month left until its launch, we should start seeing more comparisons and benchmarks soon.

Why the RTX 4080 12GB feels a lot like a rebranded RTX 4070

Nvidia announced two versions of its RTX 4080 at its GTC keynote — a 12GB model and a 16GB model. On the surface, this seems simple: two configurations of the same graphics card, just with different amounts of memory.

This is, after all, what Nvidia did with its RTX 3080. There was the original 10GB RTX 3080, and then the 12GB RTX 3080 that was released earlier this year.

But the situation with the two “versions” of the RTX 4080 couldn’t be more different. Not only is there a $300 gulf in price between these two products, but Nvidia confirmed to the media today that they do, in fact, use two different GPUs. The RTX 4080 16GB uses the larger AD103 chip, while the RTX 4080 12GB uses AD104. To call these two products different “versions” of the same graphics card is a pretty serious misnomer.

Nvidia GeForce RTX 4080 16GB Nvidia GeForce RTX 4080 12GB
GPU AD103 AD104
CUDA cores 9,728 7,680
Shader / RT TFLOPS 49 / 113 40 / 82
Tensor TFLOPS 780 641
Base clock 2,210MHz 2,310MHz
Maximum clock 2,510MHz 2,610MHz
Memory size 16GB GDDR6X 12GB GDDR6X
Memory bus 256-bit 192-bit
TDP 320 watts 285 watts
Price $1,199 $899

Looking at the other specs we now have, you can see how that plays out. The RTX 4080 16GB has roughly 27% more CUDA cores, 38% more RT throughput, and 22% more Tensor TFLOPS (trillion floating-point operations per second) than the 12GB model. Of course, it also has a wider memory bus and consumes more power too. All in all, the 16GB model is a much more powerful graphics card.
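Those percentages are easy to verify from the table. Here is a rough back-of-the-envelope check in Python using the figures above, for illustration only:

```python
# Spec gap between the two RTX 4080s, computed from the table above.
specs_16gb = {"CUDA cores": 9728, "RT TFLOPS": 113, "Tensor TFLOPS": 780}
specs_12gb = {"CUDA cores": 7680, "RT TFLOPS": 82, "Tensor TFLOPS": 641}

for key in specs_16gb:
    gain = (specs_16gb[key] - specs_12gb[key]) / specs_12gb[key] * 100
    print(f"{key}: 16GB model has roughly {gain:.0f}% more")
# CUDA cores: ~27%, RT TFLOPS: ~38%, Tensor TFLOPS: ~22%
```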

So, what is going on with the naming of this 12GB RTX 4080? Well, just look at what Nvidia did with the initial launch of the RTX 30-series. At launch, the company announced the RTX 3090, 3080, and 3070: three GPUs down the stack. What it’s doing with the RTX 40-series is nearly identical, except the third card down the stack is now called an RTX 4080. That means the 12GB 4080, which retails for $899, feels a lot more like a proper RTX 4070 than anything else. That’s a problem, considering the RTX 3070 retailed for just $499.

When asked, of course, Nvidia sees the 16GB model as an “enhanced” RTX 4080, not the other way around. And maybe the company has a point, at least with how these cards are priced. The 16GB model is certainly priced as if it were an RTX 4080 Ti — or something along those lines. Nvidia has also confirmed that there will be no first-party Founders Edition of the 12GB RTX 4080.

Still, the whole thing has left a sour taste in the mouths of PC enthusiasts, many of whom see this 12GB RTX 4080 as a repackaged RTX 4070 and a way to quietly raise prices. Nvidia hasn’t been shy about the rising cost of GPUs, either, saying that falling prices are a thing of the past.

We’ll have to wait and see what Nvidia eventually does with the rest of the lineup to get the full picture, but at the very least, it’s obvious that GPU pricing is continuing to rise, even if some of the costs are buried in the specs.

How much does the RTX 4090 cost? RTX 40-series buying guide

Nvidia has finally announced the RTX 40 series, and three new RTX 40 cards will be available later this year: the flagship RTX 4090, the high-end RTX 4080 16GB, and the RTX 4080 12GB. Along with apparently massive performance improvements over last-generation RTX 30 series cards, these new GPUs come with high price tags.

How much does the RTX 4090 cost?

The flagship RTX 4090 is launching with an MSRP of $1,599, which is $100 higher than the $1,499 MSRP of the RTX 3090 and $400 lower than the $1,999 MSRP of the RTX 3090 Ti. $100 extra for Nvidia’s new flagship isn’t that much when the RTX 3090 was already so expensive, so not much has changed here.

RTX 4090 RTX 3090
Process TSMC 5nm Samsung 8nm
Architecture Ada Lovelace Ampere
CUDA cores 16,384 10,496
Memory 24GB GDDR6X 24GB GDDR6X
Boost clock speed 2520MHz 1695MHz
Bus width 384-bit 384-bit
Power 450W 350W

It’s actually surprising that the RTX 4090 doesn’t cost more because it has way more CUDA cores than the RTX 3090 and the RTX 3090 Ti. It still has more or less the same memory size and bandwidth, but that shouldn’t really be a cause for concern; Nvidia should know how much VRAM its GPUs need.

How much does the RTX 4080 cost?

New Nvidia GeForce RTX 4080 GPU over a black and green background.
Nvidia

Things are a bit more complicated with the RTX 4080, which has two different models: the 4080 16GB at $1,199 and the 4080 12GB at $899. That’s much more expensive than the RTX 3080 10GB, which launched at $699, but it’s cheaper than the RTX 3080 12GB, which launched at $1,249. That being said, the 3080 12GB has seldom been in good supply, and the price has been falling ever since the end of the GPU shortage. Compared to the standard RTX 3080 10GB, both RTX 4080 models are much more expensive.

RTX 4080 16GB RTX 4080 12GB RTX 3080
Process TSMC 5nm TSMC 5nm Samsung 8nm
Architecture Ada Lovelace Ada Lovelace Ampere
CUDA cores 9,728 7,680 8960 / 8704
Memory 16GB GDDR6X 12GB GDDR6X 12GB / 10GB GDDR6X
Boost clock speed 2505MHz 2610MHz 1710MHz
Bus width 256-bit 192-bit 384-bit / 320-bit
Power 320W 285W 350W / 320W

At first glance, the two memory configurations might sound like the RTX 3080 10GB versus the RTX 3080 12GB, which deliver very similar performance despite a large difference in price. However, the two 4080s differ greatly not just in memory size and price but also in their other specifications.

The RTX 4080 16GB has 9,728 CUDA cores, while the RTX 4080 12GB has just 7,680. The memory bandwidth on the 12GB model is also much lower since it has a 192-bit bus compared to the 256-bit bus on the 16GB version. The 12GB card does have a slightly higher clock speed, but that’s more than offset by its lower core count and memory bandwidth. The 16GB and 12GB are effectively very different GPUs, not merely different versions of the same card, hence the $300 price difference.
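To see how much that narrower bus costs in raw bandwidth, here is a rough estimate. The 21Gbps GDDR6X data rate is our assumption for illustration rather than a figure Nvidia has quoted here, so the absolute numbers are approximate; the roughly one-third gap between the two buses is the point.

```python
# Rough peak memory bandwidth: (bus width in bits / 8) bytes per transfer * data rate.
# The 21Gbps GDDR6X data rate is assumed for illustration, not an official spec.
def peak_bandwidth_gb_s(bus_width_bits: int, data_rate_gbps: float = 21.0) -> float:
    return bus_width_bits / 8 * data_rate_gbps

print(peak_bandwidth_gb_s(256))  # RTX 4080 16GB: ~672 GB/s
print(peak_bandwidth_gb_s(192))  # RTX 4080 12GB: ~504 GB/s
```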

Which RTX 40-series GPU should you buy?

The top of the Nvidia RTX 4080 cooler.

Until the reviews are in, it’s hard to recommend any of the RTX 40 series cards Nvidia has revealed so far. These are some of the most expensive GPUs ever released (which hasn’t been received well by most users), and even if RTX 40 is as fast as Nvidia says it is, these high price tags are definitely going to negatively impact the value proposition of these cards.

That being said, Nvidia’s new GPUs do seem priced sensibly relative to each other. The RTX 4080 16GB offers over 2,000 more CUDA cores, 4GB more VRAM, and more memory bandwidth than the RTX 4080 12GB for $300 more. For another $400, you could get the RTX 4090, which comes with more than 6,600 additional CUDA cores, 8GB more VRAM, and even more memory bandwidth. The 4090 is actually in a class above the 4080 16GB, unlike the RTX 3090, which was just an RTX 3080 with a few more cores and a higher TDP.
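One crude way to visualize that relative value is dollars per CUDA core, using the MSRPs and core counts above. This is a hypothetical, back-of-the-envelope comparison that ignores clocks, memory, and architecture, but it shows why the 4090 looks like the value pick within this trio.

```python
# Hypothetical dollars-per-CUDA-core comparison using the MSRPs and core counts above.
cards = {
    "RTX 4090":      (1599, 16384),
    "RTX 4080 16GB": (1199, 9728),
    "RTX 4080 12GB": (899, 7680),
}
for name, (price, cores) in cards.items():
    print(f"{name}: ${price / cores:.3f} per CUDA core")
# RTX 4090: ~$0.098, RTX 4080 16GB: ~$0.123, RTX 4080 12GB: ~$0.117
```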

If you’re going to spend hundreds of dollars on a cutting-edge GPU, it might just be worth it to go all out and get the RTX 4090. At least then you won’t be wanting for more, even if it is one of the most expensive gaming GPUs ever made. On the other hand, you still get faster ray tracing performance and DLSS 3 with the much cheaper RTX 4080 16GB and 12GB.

Nvidia GeForce RTX 4080 16GB vs RTX 4080 12GB

Nvidia has surprised us all by announcing two versions of the GeForce RTX 4080 instead of an RTX 4080 and an RTX 4070. Following the RTX 4090, the two 4080s will likely be some of the more popular GPUs in the brand-new RTX 40 “Ada Lovelace” lineup.

While the RTX 4070 hasn’t made an appearance yet, the two versions of the RTX 4080 give us plenty to get hyped for. Let’s see how they compare to one another.

Specs

While the two GPUs are both called RTX 4080, they differ quite a lot in terms of their specifications. Many leaks suggested that Nvidia would initially launch the RTX 4090, RTX 4080, and RTX 4070. Now, it seems that the RTX 4080 12GB may have inherited some of the specs that were originally leaked for the RTX 4070.

The RTX 4080 16GB obviously sports more memory, but interestingly, it’s the RTX 4080 12GB that has slightly higher clock speeds. However, the extra memory and CUDA cores on the RTX 4080 16GB will both have an impact on performance.

Nvidia GeForce RTX 4080 16GB Nvidia GeForce RTX 4080 12GB
CUDA cores 9,728 7,680
Base clock 2,210MHz 2,310MHz
Maximum clock 2,510MHz 2,610MHz
Memory size 16GB GDDR6X 12GB GDDR6X
Memory bus 256-bit 192-bit
TDP 320 watts 285 watts

Expected performance

Comparison of the RTX 4080 16GB and 12GB versions.
Nvidia

Nvidia hasn’t said much about the expected performance of the RTX 4080, so it’s hard to predict how powerful the two GPUs are going to be. We can guess based on their specs, but the real answers will come from benchmarks. Fortunately, those are likely to start leaking out soon, and once the cards are out, we should be able to test them ourselves.

The CUDA core count of the RTX 4080 12GB puts it between the RTX 3070 Ti and the RTX 3080. However, it sports more memory than the RTX 3070 Ti and also utilizes Nvidia’s latest tech, such as DLSS 3 and Shader Execution Reordering (SER). Suffice it to say that both cards will bring a performance improvement, but it’s too early to gauge just how they compare to each other.

Nvidia has teased that the RTX 4080 will be two to four times faster than the RTX 3080 Ti, but those numbers may change. It did give us one thing, though: the comparison above implies that the RTX 4080 16GB outperforms the RTX 4080 12GB in each of the three titles shown, though not by a massive margin. Both, however, dwarf the RTX 3080 Ti.

Pricing and availability

Nvidia's Ada Lovelace chip.
Nvidia

We don’t have an exact release date for the two RTX 4080 GPUs just yet, but we do know that they will be launching in November this year, so a little later than the flagship RTX 4090.

Once they arrive, the GPUs will be priced at $899 for the RTX 4080 12GB and $1,199 for the RTX 4080 16GB. Custom models from Nvidia’s board partners, such as Gigabyte, Asus, Zotac, MSI, and others, will also be available soon enough, and those might be priced higher depending on their specifications.

It’s a close call

A comparison between the graphics quality without DLSS 3 and with it.
Nvidia

Choosing between the RTX 4080 16GB and the RTX 4080 12GB is going to be a pretty close call once these GPUs are available for sale.

On the one hand, the RTX 4080 12GB is $300 cheaper, and that’s nothing to sneeze at. On the other hand, the 16GB version will, of course, offer better performance, but it’s hard to say whether that difference will be worth $300.

Based on specifications alone, the RTX 4080 16GB will be the better choice, no contest — but if you’re looking for a mix of affordable and powerful, the 12GB option might be the better pick. The RTX 3080 and RTX 3080 Ti will also retain their value if you’re focused on price rather than the latest technologies.

Don’t worry – the RTX 4090 won’t cause another GPU shortage

We’re on the eve of Nvidia GTC, where it’s all but confirmed the company will launch its next-gen RTX 4090 graphics card. The last time we were in this situation, almost two years ago to the day, Nvidia’s launch kicked off what would become the worst GPU shortage we’ve ever seen, and it’s fair if you’re nervous we might be caught in that situation again.

The RTX 4090 will almost assuredly sell out when it launches, but you don’t need to get your F5 key ready to get a GPU. There were several factors that went into the GPU shortage, none of which apply this time around. If you’ve been waiting for next-gen GPUs to pull the trigger, don’t get caught up in the launch hype — all signs suggest that the RTX 4090 won’t cause another GPU shortage.

Where demand meets supply

Simon Byrne’s Berta 2 mining rig. Techarp

The biggest difference this time around is the lack of a pandemic for supply chains to contend with. We’re down from the peak of cases earlier this year, and although there was a brief spike a couple of months back, it doesn’t seem like we’re headed for another lockdown. That helps, but the main reason we won’t see a shortage comes down to the supply chain itself.

The chip shortage, which eventually led to the GPU shortage, has mostly subsided. Supply chain issues haven’t been completely solved, but there are plenty of indications that there’s now an excess supply of chips and not enough demand for them. Nvidia hinted at this in its most recent earnings call, saying that it has “excess inventory” of RTX 30-series graphics cards and would start slashing prices to sell them off. We’re seeing the effects of that now.

Demand for PCs, and by extension graphics cards, spiked in 2020 and throughout 2021. Now that people are returning to the office, that demand is mostly gone — but the components created to meet that demand remain. That’s why we’re seeing GPU prices crash so quickly. For example, the RTX 3090 Ti, which launched in April for a list price of over $2,000, is now closer to $1,200.

Even with an unforeseen COVID spike, it’s unlikely that the supply chain would be in the dire shape it was in 2020. Not only are companies now sitting on excess inventory, but they’ve also already navigated the rocky waters of rebuilding the supply chain during the worst of the pandemic.

Switching partners

Taiwan Semiconductor
Taiwan Semiconductor (TSMC), Fab 5 building, Hsinchu Science Park, Taiwan Peellden/Wikimedia

Although the pandemic certainly worsened the GPU shortage, it wasn’t the root cause. Nvidia’s issues in the previous generation started with Samsung. RTX 30-series graphics cards use Samsung’s 8nm node, and reports shortly after the launch of these cards said that Samsung had a higher rate of defects than it anticipated.

If you’re not aware, Nvidia is “fabless,” meaning it doesn’t actually manufacture the GPUs in its graphics cards. Instead, chipmakers like Samsung handle the manufacturing while Nvidia handles the design. It was a risk going with Samsung in the previous generation, and clearly, Nvidia doesn’t want to take the same risk this time.

Nvidia is using chipmaker TSMC for the RTX 4090 and presumably all RTX 40-series GPUs. TSMC was Nvidia’s partner up until the RTX 30-series, and although we’ve seen past shortages, none of them stemmed from defective manufacturing. Going back to TSMC hopefully means fewer defective chips, which is what kicked off the GPU shortage in the first place.

The fateful ‘merge’

A cryptocurrency mining rig from a computer graphic card.
Getty Images

Manufacturing issues caused the GPU shortage, but crypto, and Ethereum in particular, extended it. Although Bitcoin steals the limelight, the Ethereum blockchain is where the majority of GPU mining took place throughout the shortage; by one estimate, around 25% of all GPU sales during the shortage went to Ethereum miners.

But Ethereum is down bad right now, which is a reason why GPU prices are coming down so quickly. That’s a good sign, but GPU prices have been influenced by crypto for the past four years, so a rebound in Ethereum could’ve spelled disaster. Thankfully, that’s not the case anymore.

Ethereum just went through its long-awaited “merge,” which slashes the energy the blockchain requires and, critically, eliminates GPU mining entirely. Although the Ethereum Foundation had been promising the shift for quite some time, it was perpetually delayed. Frankly, it didn’t seem like the “merge” would ever happen, leaving the fate of the upcoming GPU supply in limbo.

Now that the gavel is down, it’s much easier to be confident in upcoming GPU supply. Even if there are shortages, it’s unlikely that another boom in crypto will prolong and worsen the shortage, which is what we saw in 2020 and briefly toward the end of 2017.

Short-term shortages expected

Render of an Nvidia GeForce RTX 4090 graphics card.
QbitLeaks

Although it’s very unlikely we’ll see another GPU shortage on the scale of the one that happened in 2020, short-term shortages are likely. Whenever a new generation of GPUs or CPUs launches, there’s a short period of a few weeks where they’re sold out everywhere and prices skyrocket on the secondhand market. Usually, the prices drop quickly as supply stabilizes.

I’m expecting we’ll see an exaggerated version of this with the RTX 4090. Given how big of a cash cow GPUs have been over the past two years — one estimate says that scalpers brought in $61.5 million selling GPUs in 2020 alone — I wouldn’t be surprised if the initial wave of GPUs sold out immediately and went on the secondhand market for 2020 prices.

That should subside quickly, though, so don’t get caught up in the launch hype. It’s usually a bad idea to buy a GPU the day it releases anyway. The RTX 4090 won’t cause another GPU shortage on the scale of the one we just came out of, so don’t worry too much about picking one up on launch day.

How to watch Nvidia’s RTX 4090 launch at GTC 2022

Nvidia kicks off its fall GTC 2022 event next week, where we’ll probably see the launch of the RTX 4090. Although Nvidia is as tight-lipped as ever about what products it has in store, a slew of leaks and rumors suggest we’ll see the launch of the RTX 4090 — and possibly other GPUs — during the keynote.

It’s possible we’ll see more than just next-generation GPUs as well. Here’s how to watch the RTX 4090 launch live and what to expect out of the presentation.

How to watch the Nvidia RTX 4090 launch at GTC 2022

NVIDIA GTC 2022 Keynote Teaser

Nvidia CEO Jensen Huang will deliver the company’s GTC 2022 keynote on Tuesday, September 20, at 8 a.m. PT. The presentation will likely be streamed on Nvidia’s YouTube channel, but you can bookmark the stream link on Nvidia’s website as well. We’ll embed the stream here once it’s available, but in the meantime, you can watch a short teaser for the event above.

Although the executive keynote is what most people tune in for, Nvidia’s fall GTC event lasts most of the week. It runs from September 19 to September 22, fully virtual. You can attend additional developer sessions — you can register and build a schedule on Nvidia’s GTC landing page — but they’ll focus on how developers can use Nvidia’s tools, not new product announcements.

Registration is required to attend the developer sessions. The keynote doesn’t require registration, however, and should stream on Nvidia’s YouTube channel.

What to expect from the Nvidia RTX 4090 launch

The first thing you should expect from the RTX 4090 launch is, well, the RTX 4090. Although Nvidia hasn’t confirmed any details about the card, or even that it’s called the RTX 4090, we saw the full specs leak a few days ago. According to the leak, Nvidia will announce the RTX 4090 and two RTX 4080 models — a 12GB variant and a 16GB variant.

It’s all but confirmed that Nvidia will launch its next-gen GPUs, which rumors say could offer double the performance of the current generation. Some leakers say the key feature of these cards is a configurable TDP. The story goes that each card will have a base power draw that’s in line with what you’d expect from a GPU, but that users will be able to dedicate more power to the card for increased performance.

There’s a chance we’ll see more than the new cards, too. Nvidia has been teasing something called Project Beyond for a couple of weeks, posting vague videos to the GeForce Twitter account that show a desktop setup adorned with various clues. One recent video showed the PC starting a render in Adobe Media Encoder, suggesting it may have something to do with creative apps.

Speed matters…#ProjectBeyond
9.20.22
8AM PDT pic.twitter.com/Y2TM8KSJQn

— NVIDIA Studio (@NVIDIAStudio) September 16, 2022

Although it’s possible Project Beyond is just Nvidia’s branding for the RTX 4090 launch, it’s probably something different. In Nvidia’s most recent earnings call, the company said that it planned to reach “new segments of the market … with our gaming technology.”

That could mean anything, but we can still make some informed guesses. Last year, Nvidia shared a demo of games running on ARM PCs, laying the groundwork for ARM-based gaming in the future. Although Nvidia’s acquisition of ARM fell through, there’s still a good chance the companies are working closely together.

New segments of the market could mean PCs that don’t use traditional x86 CPUs like the ones offered by Intel and AMD. This is pure speculation, but ARM gaming has been a big focus of Nvidia for a while, and the company provides several of its gaming features for developers working on ARM applications.

Project Beyond could also be a tool for creators. Not only has Nvidia teased video encoding, but CEO Jensen Huang specifically called out streamers, vloggers, and other types of content creators during the company’s most recent earnings call.

We’ll need to wait until the keynote before knowing for sure, though. The RTX 4090 announcement is almost a sure deal, but it looks like Nvidia will have an extra surprise in store, as well.

NVIDIA looks set to reveal its next-gen GeForce RTX GPUs on September 20th

NVIDIA’s GPU Technology Conference goes down this month, and the company has revealed when CEO Jensen Huang’s keynote will take place. You’ll be able to watch it at 11AM ET on September 20th. The keynote will kick off with a GeForce Beyond special broadcast, which will be streamed separately as well.

The company says the event will include “the latest breakthroughs in gaming, creating and graphics technology.” NVIDIA is expected to reveal its RTX 40-series graphics cards during the broadcast — an image the company shared to promote the event includes the GeForce RTX logo. NVIDIA previously said it would release its next-generation GPUs this year. Those will supplant graphics cards built on the current Ampere architecture.

It remains to be seen just how well the RTX 40-series cards will perform. In the meantime, prices for 30-series GPUs have been falling since the cryptocurrency market cratered.
