
The Aorus RTX 4090 Master is the biggest GPU we’ve ever seen

Gigabyte’s Aorus RTX 4090 Master is the biggest GPU we’ve ever seen. We don’t yet know the full specs of this GeForce RTX 4090 model, but we do know we’re going to need a very large case to house this beast.

This is a monster unit. It needs four slots all to itself on a motherboard. It comes with three 11 cm fans. It is 35.8 cm (14.1 inches) long and 16.2 cm (6.4 inches) wide, meaning we could literally stack several smaller RTX cards inside of it and still have some room to spare. Videocardz.com did the math and determined they could fit 10 Radeon RX 6400 cards inside.


All of that bulk is there to house the GeForce RTX 4090, which Nvidia announced on September 20. The RTX 40-series is Nvidia’s newest generation of graphics cards, featuring the Ada Lovelace architecture, better ray tracing, significantly improved rendering, and DLSS 3. The RTX 4090 itself comes with 24GB of GDDR6X memory.

Nvidia is promising a 2.5GHz boost clock on the 4090, which gulps down 450W of power. The RTX 4090 doesn’t come out until October 12, so we’ll need to wait to put it through real-world use, but we can expect impressive performance thanks to a significantly higher CUDA core count than the previous generation.

But we are drooling at the thought of a maxed-out RTX 4090 under this enormous Aorus Master cooler. The power draw will be impressive, and we’re not looking forward to the electric bill (or the incredibly high prices), but maybe this is the card that finally catapults gaming into the next era. After all, this is still a GPU waiting for a game worthy of its power.


Don’t worry – the RTX 4090 won’t cause another GPU shortage

We’re sitting on the edge of Nvidia GTC, where it’s all but confirmed the company will launch its next-gen RTX 4090 graphics card. The last time we were in this situation, almost two years ago to the day, Nvidia’s launch kicked off what would become the worst GPU shortage we’ve ever seen, and it’s fair if you’re nervous we might be caught in that situation again.

The RTX 4090 will almost assuredly sell out when it launches, but you don’t need to get your F5 key ready to get a GPU. There were several factors that went into the GPU shortage, none of which apply this time around. If you’ve been waiting for next-gen GPUs to pull the trigger, don’t get caught up in the launch hype — all signs suggest that the RTX 4090 won’t cause another GPU shortage.

Where demand meets supply

Simon Byrne’s Berta 2 mining rig. Techarp

The biggest difference this time around is the lack of a pandemic for supply chains to contend with. We’re down from the peak of cases earlier this year, and although there was a brief spike a couple of months back, it doesn’t seem like we’re headed for another lockdown. That helps, but the main reason we won’t see a shortage comes down to the supply chain itself.

The chip shortage, which eventually led to the GPU shortage, has mostly subsided. Supply chain issues haven’t been completely solved, but there are a lot of indications that there’s an excess supply of chips and not enough demand for them. Nvidia hinted at this fact in its most recent earnings call, saying that it has “excess inventory” of RTX 30-series graphics cards and would start slashing prices to sell them off. We’re seeing the effects of that now.

Demand for PCs, and by extension graphics cards, spiked in 2020 and throughout 2021. Now that people are returning to the office, that demand is mostly gone — but the components created to meet that demand remain. That’s why we’re seeing GPU prices crash so quickly. For example, the RTX 3090 Ti, which launched in April at a list price of $1,999, is now closer to $1,200.

Even with an unforeseen COVID spike, it’s unlikely that the supply chain would be in the dire shape it was in 2020. Not only are companies now sitting on excess inventory, but they’ve also already navigated the rocky waters of rebuilding the supply chain during the worst of the pandemic.

Switching partners

Taiwan Semiconductor (TSMC) Fab 5 building, Hsinchu Science Park, Taiwan. Peellden/Wikimedia

Although the pandemic certainly worsened the GPU shortage, it wasn’t the root cause. Nvidia’s issues in the previous generation started with Samsung. RTX 30-series graphics cards use Samsung’s 8nm node, and reports shortly after the launch of these cards said that Samsung had a higher rate of defects than it anticipated.

If you’re not aware, Nvidia is “fabless,” meaning it doesn’t actually manufacture the GPUs in its graphics cards. Instead, chipmakers like Samsung handle the manufacturing while Nvidia handles the design. It was a risk going with Samsung in the previous generation, and clearly, Nvidia doesn’t want to take the same risk this time.

Nvidia is using chipmaker TSMC for the RTX 4090 and presumably all RTX 40-series GPUs. TSMC was Nvidia’s partner up until the RTX 30-series, and although we’ve seen past shortages, none of them stemmed from defective manufacturing. Going back to TSMC hopefully means fewer defective chips, which is what kicked off the GPU shortage in the first place.

The fateful ‘merge’

A cryptocurrency mining rig built from graphics cards. Getty Images

Manufacturing issues caused the GPU shortage, but crypto extended it. In particular, Ethereum extended it. Although Bitcoin steals the limelight, the Ethereum blockchain is where the majority of GPU mining took place throughout the shortage — around 25% of all GPU sales during the shortage went to Ethereum miners according to one estimate.

But Ethereum is down bad right now, which is a reason why GPU prices are coming down so quickly. That’s a good sign, but GPU prices have been influenced by crypto for the past four years, so a rebound in Ethereum could’ve spelled disaster. Thankfully, that’s not the case anymore.

Ethereum just went through its long-awaited “merge,” which moved the blockchain from proof of work to proof of stake, slashing its energy use and, critically, eliminating GPU mining entirely. Although the Ethereum Foundation had been promising the shift for quite some time, it was perpetually delayed. Frankly, it didn’t seem like the “merge” would ever happen, leaving the fate of the upcoming GPU supply in limbo.

Now that the gavel is down, it’s much easier to be confident in upcoming GPU supply. Even if there are shortages, it’s unlikely that another boom in crypto will prolong and worsen the shortage, which is what we saw in 2020 and briefly toward the end of 2017.

Short-term shortages expected

Render of an Nvidia GeForce RTX 4090 graphics card.
QbitLeaks

Although it’s very unlikely we’ll see another GPU shortage on the scale of the one that happened in 2020, short-term shortages are likely. Whenever a new generation of GPUs or CPUs launches, there’s a short period of a few weeks where they’re sold out everywhere and prices skyrocket on the secondhand market. Usually, the prices drop quickly as supply stabilizes.

I’m expecting we’ll see an exaggerated version of this with the RTX 4090. Given how big of a cash cow GPUs have been over the past two years — one estimate says that scalpers brought in $61.5 million selling GPUs in 2020 alone — I wouldn’t be surprised if the initial wave of GPUs sold out immediately and went on the secondhand market for 2020 prices.

That should subside quickly, though, so don’t get caught up in the launch hype. It’s usually a bad idea to buy a GPU the day it releases anyway. The RTX 4090 won’t cause another GPU shortage on the scale of the one we just came out of, so don’t worry too much about picking one up on launch day.


Intel’s Arc GPU issues run much deeper than performance

I’ve been excited for Intel’s Arc Alchemist GPUs — the first discrete gaming graphics cards Intel has ever released. But that hype has quickly faded over the last few months, as reports of subpar performance, broken drivers, and a pile of delays have plagued Intel’s entrance into the market.

It takes a lot to enter the pantheon of the best graphics cards, but Intel’s issues go well beyond performance and features. Driver bugs are rampant throughout the Arc Alchemist stack, and it’s becoming clear that Intel doesn’t have a system in place for catching those issues before new drivers go out, or even for identifying them months after the fact.

It may be hard to enter the GPU market, but Intel only has itself to blame for the state of Arc right now.

43 driver issues, all from YouTube

Worst We’ve Tested: Broken Intel Arc GPU Drivers

YouTube channel Gamers Nexus published a deep dive into Intel’s broken Arc drivers on August 1, following a series of rumors that Arc’s future was in jeopardy. It wasn’t until August 19, when Intel’s Lisa Pearce wrote a blog post answering questions about Arc, that we learned Intel actually found out about 43 driver issues from the Gamers Nexus video.

“We have received frank feedback from press during recent reviews, and we have taken it to heart. For example, we filed 43 issues with our engineering team from a review of the A380 by Gamers Nexus,” the blog post reads.

Although Intel Arc Alchemist doesn’t provide flagship performance, that never seemed like the goal. And frankly, that’s not the issue Intel is facing now. Gamers Nexus found that drivers simply wouldn’t work with some monitors, Intel Smooth Sync would cause visual glitches, and Intel Arc Control would break when overclocking — among dozens of other problems. That’s not to mention the issues Arc Alchemist has faced when it comes to older DirectX versions, as Intel only officially supports DirectX 11 and DirectX 12.

Intel pins the issues on the Arc Control “installer and how it downloaded unique components after the initial installation.” Basically, Intel says the drivers have a corrupted installation process where “unexpected failures are causing [the installation process] to be unreliable.” Intel knows about the issues and is working on them, but that’s not the main problem here.


The GPU that Gamers Nexus tested, the Arc Alchemist A380, was first rolled out on June 15. Considering that Arc Alchemist GPUs are exclusive to China right now, that presumably means buyers have been dealing with these driver issues for over two months. And yet it took a U.S.-based YouTube channel tracking down a GPU that isn’t even available stateside for Intel to address the problems. Keep in mind we’re not talking about minor issues, either. We’re talking about things that fundamentally break Arc Alchemist.

There have been previous examples of this, too, such as when a missing line of code resulted in a 100x drop in ray tracing performance on Linux. Intel may be doing press spots with channels like Linus Tech Tips and advertising the snot out of Arc Alchemist. But driver support is killing Arc right now, as Intel waits for tech press to uncover driver issues that the company should have discovered months earlier.

The pitfalls of promises

Intel announces new features of discrete Intel Arc GPUs.

The news about Intel discovering driver issues from a YouTube video gets at a larger point about Arc Alchemist — Intel overpromised. Unlike Nvidia and AMD, Intel likes to set out its road map early. We learned about Arc Alchemist in the middle of 2021, and Intel has been making promises, like saying over 50 Arc laptop designs would be available in 2022, since then.

Many of the driver issues involve features Intel promised at launch — things like Smooth Sync and built-in overclocking, neither of which matters much if the drivers are this broken in the first place. We’re also waiting on Intel’s XeSS, which was supposed to launch on May 20. This is another feature we heard about in the middle of 2021 that Intel has yet to provide any satisfying updates on.

Even if Arc Alchemist delivered perfectly on every promise, there’s no doubt that Intel would be an underdog compared to the duopoly between AMD and Nvidia. And most of that comes down to drivers. By announcing Arc early and pushing it hard in advertising, Intel backed itself into a corner where the options were to either keep delaying Arc Alchemist or underdeliver on its many promises, and it looks like Intel did a little of both.

Still waiting

Two Intel Arc GPUs running side by side.
Linus Tech Tips

When Intel announced Arc Alchemist, it put out the idea that cards would be available in the first few months of 2022. We’ve since learned that the rollout is a little more complicated. You can technically buy one of Intel’s Arc A380 graphics cards now, but we still don’t have anywhere near the full lineup, much less availability for Arc around the world.

In the context of the driver issues, that staggered rollout doesn’t look great. At this point, Intel is learning on the fly as it offers its discrete graphics cards for sale while press continue to discover issues that should have been fixed well before you could buy an Arc GPU. Intel seems to know that in part, with Pearce writing that Intel is “continuing to learn what it will take for us to be successful.”

You can see the tension in real time, as pressure mounts for Intel to release more Arc GPUs and reports of broken driver support continue to circulate. That’s on Intel’s shoulders, though. It’s clear now that Intel jumped the gun with Arc Alchemist, and as press outlets continue to discover issues, Intel can only point the finger at itself. I hope Intel gets the situation under control, but any news about Arc Alchemist has been bad news for months now.

I’ve reached out to Intel and asked how it plans to change drivers and the state in which they will be released in the future, and I’ll update this story when I hear back.


I make my GPU perform worse on purpose, and I’m not sorry

I have a confession to make: I have one of the most powerful GPUs you can buy, the AMD RX 6950 XT, and I deliberately make it underperform. Let me explain.

I appreciate that, off the back of a GPU pricing crisis that saw almost everyone unable to find a card like this (let alone afford it), this sounds super wrong. I’m lucky that one of the perks of this job is getting to test out the kind of high-end components that I wouldn’t otherwise shell out the cash for. But even then, I can’t bring myself to unleash the full power of this awesome GPU.

It’s sad to say, but in my day-to-day living with this graphics card, I found myself valuing a quieter, cooler system more than the extra performance GPUs of this kind can provide.

Taming the beast


The 6950 XT is a power-hungry card. It’s the same GPU as the 6900 XT but pushed to its limit, so of course, it runs hot. When put in an mATX case it runs loud, too, even with a big cooler. It’s not necessarily the kind of GPU I would have bought myself, but now that I have one, what am I going to do, not use it?

But that was a very real prospect after a few days of retreating to headphones once the testing was done. How could I continue to enjoy high frame rates and detail settings in games, but not have to listen to its fans hitting 2,200 RPM as soon as the game menu appears? The solution, it turns out, was to make the card run worse.

The PowerColor Red Devil 6950 XT has a BIOS switch, ostensibly for backup purposes in case you brick one of the BIOSes trying a heavy overclock. But the secondary Silent BIOS also lowers clock speeds and undervolts the GPU. Enabling that got me halfway there. With it, the fans were only hitting around 1,700 RPM, and the junction temperature dropped by around five degrees, resting under 100 degrees Celsius at load for the first time since I’d gotten the card up and running.

This was an exciting development. Maybe there was a way to have my PCB cake and eat it too. But it still wasn’t quite quiet enough for my sensibilities and temperature thresholds.

AMD Radeon Tuning Control interface.

Next, I played with the settings in AMD’s Adrenalin driver application and made further headway. The automated undervolting had a minor effect of its own, but a few RPM shaved off and a single-degree drop in temperatures wasn’t going to cut it. I could have manually adjusted the fan curves myself, but ultimately, setting the tuning profile to Quiet was enough.

Clock speeds dropped a little more, and fan speeds followed suit. Suddenly I had a card that, even during a 4K FurMark run, wasn’t going over 265W, with a maximum junction temperature of just 80 degrees. Just as important? Fan speeds never went over 1,500 RPM, keeping the card cool and quiet enough that it only just registered over the whir of the system fans.

No, performance isn’t the same. The core clock now barely breaks 1,900MHz, and my Time Spy score isn’t quite as good, but I don’t care. I have a near-silent 6950 XT that still performs better than almost any other GPU out there, and it didn’t require a custom cooler or heavy tweaking. Now, I can game in relative peace, and it takes far, far longer for my little home office to heat up. I’m living the dream.

It turns out I’m not alone

I know this post reads like the whiniest of first-world problems — believe me, I’ve read it through multiple times before posting. I’m also aware that there are other ways around this problem that don’t involve tamping down on the card’s performance, such as better system cooling or gaming with the air conditioning on.

I would have been too ashamed to write about it myself if I hadn’t found out that my fellow DT writer and high-end GPU owner, Jacob Roach, is also a sacrilegious downclocker. His daily driver gaming PC has an RTX 3090 inside, and it’s stupendously powerful (although my card’s better). But according to him, it’s often a bit much and, frankly, more than he needs.

“I’ve been limiting the frame rate with my 3090 for a while,” he said when I mentioned how I felt bad for making my RX 6950 XT perform worse than it can. “I just can’t handle the noise and heat, even if the card is capable of more.”

This is something that both of us have had to deal with as our respective countries grapple with heat waves. There might be a case for gaming with one of the best graphics cards pulling over 300W if you live somewhere tepid or it’s not the middle of summer. But when temperatures rise outside and you want to hide indoors playing games, switching on a miniature space heater to do it doesn’t feel very comfortable.

It also doesn’t sound comfortable, because just as gaming with a hot PC next to your legs or on your desk can make you hot and bothered, it can get really loud, too. My mATX case does not give this 6950 XT enough room for its triple fans to cool it effectively, so even if I do run it full tilt, I run into some thermal throttling after extended use. The longer I play, the worse it gets, for both the card and me.

The future looks warm

A hand grabbing a graphics card.
Jacob Roach / Digital Trends

While Jacob and I might not use our GPUs to their full potential, there’s no denying that the RTX 3090 and the RX 6950 XT are incredibly power-hungry and hot GPUs, and they’re not alone. The entire lineup of modern graphics cards from both camps has had a bump in TDP this generation, and if the rumors are to be believed, the next generation will only exacerbate this problem.

And it is a problem. Jacob and I are prime examples that if you don’t have a larger space with plenty of ventilation or capable A/C running all the time, playing games with some of these top graphics cards is decidedly uncomfortable, both from a noise and a heat perspective. I’m not all that excited about a graphics card that’s even hotter and potentially louder, even if it performs much better.

It’s not that I wouldn’t keep it if I were given one. But don’t be surprised if you find me downclocking it to oblivion to make my gaming sessions cooler and quieter.


Researchers say your GPU could expose private info online

In an age of increased online privacy awareness, many of us are conscious of our digital fingerprints and prefer not to be tracked. However, it may not be as simple as it previously seemed.

An international team of researchers has found that users can be tracked online based on their graphics cards. This is done through a new technique referred to as “GPU fingerprinting.”

An example of the GPU fingerprinting technique showcasing two identical GPUs that still produce different results.

This new technique, named DrawnApart by the researchers and first reported by Bleeping Computer, relies on the tiny differences between individual pieces of hardware to produce a signature that ties a device to a certain user. Through a series of these identifiers, the researchers found they were able to track individual users, as well as their online activity, just by applying the technique.

The team spans several countries and universities, including researchers from Israel, France, and Australia, who published their findings in a paper on arXiv.org. They showcased examples of the GPU fingerprinting technique, which relies on the fact that no two components are exactly the same — even if they are the same model and were made by the same manufacturer.

There are tiny differences in the performance, power consumption, and processing capabilities of every graphics card. DrawnApart takes advantage of that by using fixed workloads based on the Web Graphics Library (WebGL). This is a cross-platform JavaScript-based application programming interface (API) responsible for rendering graphics within any compatible web browser.

Using WebGL, DrawnApart targets the GPU’s shaders with a special sequence of graphics operations crafted specifically for this task. The drawing operations are ultra-precise and make it easier for the researchers to tell graphics cards apart, including cards of the same make and model.

Once the task is complete, the technique produces a trace of timing measurements that includes how long it takes the card to handle stall functions, complete vertex renders, and more. Because that timing is individual to each GPU, it makes the unit trackable.
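
If you’re curious what that looks like in practice, here’s a minimal sketch of the general idea, written in TypeScript for the browser: time a series of small WebGL draw calls from JavaScript and keep the results as a trace. To be clear, this is not the researchers’ DrawnApart code; the shaders, workload, and iteration count below are illustrative assumptions, and the real technique targets specific execution units with far more careful measurement.

// Illustrative sketch only: time small WebGL draw calls from JavaScript and
// keep the results as a per-GPU timing trace. The shaders, workload, and
// iteration count are assumptions, not the researchers' actual parameters.
function compileShader(gl: WebGLRenderingContext, type: number, src: string): WebGLShader {
  const shader = gl.createShader(type);
  if (!shader) throw new Error("Could not create shader");
  gl.shaderSource(shader, src);
  gl.compileShader(shader);
  return shader;
}

function collectTimingTrace(iterations = 64): number[] {
  const canvas = document.createElement("canvas");
  const gl = canvas.getContext("webgl");
  if (!gl) throw new Error("WebGL not available");

  // Trivial shader pair; the real attack crafts workloads that stress specific
  // execution units so that manufacturing variation shows up in the timings.
  const vs = "attribute vec2 p; void main() { gl_Position = vec4(p, 0.0, 1.0); }";
  const fs = "precision mediump float; void main() { gl_FragColor = vec4(1.0); }";

  const program = gl.createProgram();
  if (!program) throw new Error("Could not create program");
  gl.attachShader(program, compileShader(gl, gl.VERTEX_SHADER, vs));
  gl.attachShader(program, compileShader(gl, gl.FRAGMENT_SHADER, fs));
  gl.linkProgram(program);
  gl.useProgram(program);

  // A single screen-covering triangle acts as the repeated drawing workload.
  gl.bindBuffer(gl.ARRAY_BUFFER, gl.createBuffer());
  gl.bufferData(gl.ARRAY_BUFFER, new Float32Array([-1, -1, 3, -1, -1, 3]), gl.STATIC_DRAW);
  const position = gl.getAttribLocation(program, "p");
  gl.enableVertexAttribArray(position);
  gl.vertexAttribPointer(position, 2, gl.FLOAT, false, 0, 0);

  const trace: number[] = [];
  for (let i = 0; i < iterations; i++) {
    const start = performance.now();
    gl.drawArrays(gl.TRIANGLES, 0, 3);
    gl.finish(); // block until the GPU has finished this draw call
    trace.push(performance.now() - start);
  }
  return trace; // small but repeatable differences in these timings act as the fingerprint
}

Browsers also deliberately coarsen timers like performance.now(), which is part of why the actual attack leans on more elaborate measurement tricks than this sketch does.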

DrawnApart tracking duration diagram.
DrawnApart: Average tracking time by collection period graph.

The research team found that this technique provides a high degree of accuracy and is an improvement over existing tracking methods. The algorithm was tested on a large sample of more than 2,500 unique devices and 371,000 fingerprints, and the researchers noted a 67% improvement compared to using current fingerprinting methods without DrawnApart. In its current state, DrawnApart can fingerprint a graphics card in just eight seconds.

Eight seconds is ultrafast as it is, but there is potential for even more accurate and quicker tracking through the use of newer, faster APIs. The team tested using compute shader operations instead and found that the results were now up to 98% accurate and only took 150 milliseconds to achieve.

Although the findings are impressive, it’s impossible to deny that they’re also terrifying. We’ve all grown used to declining cookies on various websites, but DrawnApart proves that may soon not be enough. The research team is also keenly aware of the potential for misuse that the GPU fingerprint poses.

“This is a substantial improvement to stateless tracking, obtained through the use of our new fingerprinting method. […] We believe it raises practical concerns about the privacy of users being subjected to fingerprinting,” said the researchers in their paper.

As the GPU fingerprinting technique may not require any additional permissions, users could be subjected to it simply by browsing the internet. Khronos, the organization in charge of the WebGL standard, is already exploring ways to prevent the technique from being used maliciously.


Here’s why Intel’s A380 GPU could really be a hidden gem

The Intel Arc A380, the only Arc Alchemist graphics card that’s currently available, was just tested in various games after being overclocked.

The performance gains caused by the overclocking show that the GPU has the potential to be much better than what some previous benchmarks may have implied.

Intel’s Arc A380 has already been seen in a number of benchmarks and tests, including Intel’s own, which redeemed it slightly after a round of bad news. This time around, the GPU was put to the test by Pro Hi-Tech, a YouTuber who specializes in overclocking. That’s exactly what he did with the Arc A380 — he boosted the card to unlock some of the hidden power it seems to possess. These results could be a sign of the Arc A380 being a lot better than initially thought.

In order to overclock the GPU, the YouTuber had to take a different approach than usual, because well-known clock and voltage tools such as MSI Afterburner don’t support Intel Arc just yet. As such, he didn’t alter the GPU’s core clock directly; instead, he used Intel’s proprietary graphics utility to tweak the card’s voltage. Pro Hi-Tech adjusted the GPU Performance Boost setting to 55% and the voltage offset to +0.255mV. Before moving on to testing the boosted GPU in games, the YouTuber also enabled Resizable BAR.

These modifications brought the clock on the Intel Arc A380 up by as much as an additional 150MHz, a relatively small boost of around 6%. However, the power usage went up considerably, from around 35 watts to, at times, more than 55 watts. That’s an increase of up to 57%, which is an interesting figure given that Intel puts the official TDP of the GPU at 75 watts.

This brings us to the results of the testing. In order to give an accurate estimate of the card’s performance, the YouTuber compared the results to those of a regular Arc A380 with no overclock and to Nvidia’s GeForce GTX 1650, a card that has often been named as a direct competitor for this entry-level GPU.


Pro Hi-Tech benchmarked the Intel Arc A380 in Cyberpunk 2077, God of War, Doom Eternal, Rainbow Six Siege, Watch Dogs Legion, and World of Tanks. Each and every game showed a performance increase, which is not all that surprising, but the gains are big enough to bring the Arc GPU to a level where it’s on par with the GTX 1650.

In Cyberpunk 2077, the boosted Arc A380 actually managed to beat Nvidia, reaching 51 frames per second (fps) to the GTX 1650’s 42 fps. Some games, such as Doom Eternal, showed a massive increase in frame rate, going from 64 fps to 102 fps. On average, the stock Intel Arc A380 scored 55.1 fps across the six titles, the overclocked version hit 75.6 fps, and the GTX 1650 won by a negligible margin at 75.9 fps. The results were first spotted by Tom’s Hardware.

These benchmark results show that there might be more to Intel Arc than meets the eye. However, it’s now up to Intel to bring out that potential and tweak the performance of the GPU without requiring users to overclock it. Let’s hope that all the early benchmark data will prove to be useful and will allow Intel to optimize the Arc A380.


How an Nvidia GPU has transformed my streaming setup

Streaming on Twitch has been a hobby of mine for a while, and it’s probably the single most important thing convincing me to stick with my RTX 3060 Ti, thanks to two key technologies that Nvidia has made: NVENC and Nvidia Broadcast. The RTX 3060 Ti is one of the best graphics cards on its own, but it’s these two features that have made the difference for me.

These aren’t crucial to my streaming experience, but the quality of life improvements these two features provide make it difficult for me to even think about switching back to AMD graphics.

NVENC makes my CPU obsolete


I had been an AMD user for years until I switched to the GTX 1080 Ti in 2020, then upgraded to the RTX 3060 Ti in 2021 for better power efficiency. I was also upgrading my CPU around the same time, going from a Ryzen 7 3700X to a Ryzen 9 3950X for streaming. I was excited to finally have one of the best CPUs for streaming, and I started tweaking my settings in Open Broadcaster Software (OBS) for the best quality and performance. But my CPU usage was sky-high, performance was bad in my games, and I was dropping frames on stream.

Then I remembered that I had an Nvidia GPU, and I had heard that NVENC (Nvidia’s hardware video encoder) had gotten really good, so I decided to check it out. I looked up Nvidia’s guide on how to configure NVENC in OBS, set it up, and tested it out. The results were nothing short of amazing: The footage looked good, the performance was great, and I wasn’t dropping any frames. There was basically no performance penalty for using NVENC, and neither my CPU nor my GPU had to work noticeably harder. It was a no-brainer for me to switch from CPU to GPU encoding.

While CPU encoding can have very high quality, it’s also a very inefficient way of recording footage. GPU-accelerated encoding by contrast is slightly worse in quality but way more efficient. My specific situation was actually great for GPU encoding because my ITX PC didn’t have the thermal headroom to afford to be inefficient. I just wish I had heard about NVENC before spending hundreds on a Ryzen 9 3950X that doesn’t get to flex its muscles.
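
If you want to get a feel for that tradeoff without touching OBS, here’s a rough sketch that pushes the same clip through a CPU encoder and through NVENC using ffmpeg, driven from Node.js in TypeScript. It assumes an ffmpeg build with NVENC support, and the input file name, bitrate, and presets are placeholders I picked for illustration; this isn’t the OBS configuration from Nvidia’s guide.

// Rough comparison of CPU (x264) and GPU (NVENC) encoding using ffmpeg from
// Node.js. Assumes ffmpeg is on the PATH, was built with NVENC support, and
// that a file named "gameplay.mkv" exists; tweak the placeholders as needed.
import { spawnSync } from "node:child_process";

function encode(outputFile: string, encoderArgs: string[]): number {
  const start = Date.now();
  const result = spawnSync(
    "ffmpeg",
    ["-y", "-i", "gameplay.mkv", ...encoderArgs, "-b:v", "6M", outputFile],
    { stdio: "inherit" }
  );
  if (result.status !== 0) throw new Error(`ffmpeg failed for ${outputFile}`);
  return (Date.now() - start) / 1000; // wall-clock seconds
}

// CPU encode: x264 at a typical streaming preset -- high quality, heavy CPU load.
const cpuSeconds = encode("cpu_x264.mp4", ["-c:v", "libx264", "-preset", "medium"]);

// GPU encode: NVENC runs on the card's dedicated encoder block, so it barely
// touches the CPU cores or the 3D engine, at a small cost in quality per bit.
const gpuSeconds = encode("gpu_nvenc.mp4", ["-c:v", "h264_nvenc", "-preset", "p5"]);

console.log(`x264 took ${cpuSeconds.toFixed(1)}s, NVENC took ${gpuSeconds.toFixed(1)}s`);

The interesting part isn’t just which run finishes first; it’s the CPU usage you’ll see while each one runs, which is exactly the tradeoff described above.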

Nvidia Broadcast makes my keyboard silent

Razer webcam sitting on top of a monitor.

Like many gamers, I use a mechanical keyboard, which is great to type on but not very enjoyable to listen to. Whenever I was recording or streaming, I’d have to be very careful about whether or not my mic was on because it would always pick up my keyboard, as well as other background noises. I’d usually use push-to-talk or manually unmute my mic whenever I wanted to speak.

That was the case until I started using an app called Nvidia Broadcast, which has several features including a green screen effect without needing an actual green screen, AI-enhanced audio output, and AI-powered background noise removal, which is exactly what I was interested in. I turned the feature on, and my mic no longer picked up my keyboard or any other annoying background noises.

You might think a feature like this would be too overzealous and would require me to speak loudly or in a certain way, but I’ve never experienced any kind of annoyance with Nvidia Broadcast so far, and I’ve been using it for almost two years. On the other hand, it is a bit lenient and always picks up whenever I clear my throat (which is quite often), so I am still reaching for the mute button every now and then.

That’s not to say Broadcast isn’t amazing; totally removing the clicking of my keyboard from my streams is a massive improvement in quality for the few viewers I have.

If you’re a streamer, consider Nvidia

A gamer plays at a PC setup.
DisobeyArt/Shutterstock

I think these two features alone make Nvidia GPUs extremely compelling for streamers, and I could get more use out of Broadcast and its green screen technology if I ever decided to plug in one of the best webcams for streaming. I didn’t get an Nvidia GPU for these features, but I’m not eager to switch back to AMD.

However, AMD is catching up to Nvidia on the encoding front. The newest version of AMD’s AMF encoder is purportedly on par with Nvidia’s, but very few applications, OBS included, have been updated to utilize the new encoder. In fact, this version of AMF isn’t even all that new; it’s been out for four months. I’m not about to switch to AMD to use an encoder that has no real support and no timeline for when that support will finally arrive.

There are also alternatives to Nvidia Broadcast’s noise removal feature. RNNoise is an open-source noise removal program that appears to perform similarly to Nvidia Broadcast. However, in order to use RNNoise, you have to rely on the open source software ecosystem, and right now there just aren’t any convenient solutions using RNNoise. For example, someone developed an OBS plugin that added RNNoise but literally said “No help provided. If you can figure out how to build and use it, have fun!” Nvidia Broadcast is a simple app I can download, install, and turn on in a few clicks.

Personally, it’s Nvidia Broadcast in particular that makes it difficult for me to go back to AMD. I can deal with having a lower-quality GPU encoder or encoding with my CPU at a low-quality setting, but I don’t want my viewers to hear my keyboard all the time. This is a really simple but big quality-of-life improvement that I will continue to take advantage of, and it’s even more important if you have a camera.

AMD GPUs have a little ways to go before they’re as good as Nvidia’s for streaming. I don’t doubt they’ll get there eventually, but in the meantime, I’ll be sticking with my RTX 3060 Ti.


Nvidia reveals H100 GPU for AI and teases ‘world’s fastest AI supercomputer’

Nvidia has announced a slew of AI-focused enterprise products at its annual GTC conference. They include details of its new silicon architecture, Hopper; the first datacenter GPU built using that architecture, the H100; a new Grace CPU “superchip”; and vague plans to build what the company claims will be the world’s fastest AI supercomputer, named Eos.

Nvidia has benefited hugely from the AI boom of the last decade, with its GPUs proving a perfect match for popular, data-intensive deep learning methods. As the AI sector’s demand for data compute grows, says Nvidia, it wants to provide more firepower.

In particular, the company stressed the popularity of a type of machine learning system known as a Transformer. This method has been incredibly fruitful, powering everything from language models like OpenAI’s GPT-3 to medical systems like DeepMind’s AlphaFold. Such models have increased exponentially in size over the space of a few years. When OpenAI launched GPT-2 in 2019, for example, it contained 1.5 billion parameters (or connections). When Google trained a similar model just two years later, it used 1.6 trillion parameters.

“Training these giant models still takes months,” said Nvidia senior director of product management Paresh Kharya in a press briefing. “So you fire a job and wait for one and [a] half months to see what happens. A key challenge to reducing this time to train is that performance gains start to decline as you increase the number of GPUs in a data center.”

Nvidia says its new Hopper architecture will help ameliorate these difficulties. Named after pioneering computer scientist and US Navy Rear Admiral Grace Hopper, the architecture is specialized to accelerate the training of Transformer models on H100 GPUs by six times compared to previous-generation chips, while the new fourth-generation Nvidia NVLink can connect up to 256 H100 GPUs at nine times higher bandwidth than the previous generation.

The H100 GPU itself contains 80 billion transistors and is the first GPU to support PCIe Gen5 and utilize HBM3, enabling memory bandwidth of 3TB/s. Nvidia says an H100 GPU is three times faster than its previous-generation A100 at FP16, FP32, and FP64 compute, and six times faster at 8-bit floating point math.

“For the training of giant Transformer models, H100 will offer up to nine times higher performance, training in days what used to take weeks,” said Kharya.

The company also announced a new data center CPU, the Grace CPU Superchip, which consists of two CPUs connected directly via a new low-latency NVLink-C2C. The chip is designed to “serve giant-scale HPC and AI applications” alongside the new Hopper-based GPUs, and can be used for CPU-only systems or GPU-accelerated servers. It has 144 Arm cores and 1TB/s of memory bandwidth.

The new Grace CPU “superchip” consists of two CPUs connected together.
Image: Nvidia

In addition to hardware and infrastructure news, Nvidia also announced updates to its various enterprise AI software services, including Maxine (an SDK to deliver audio and video enhancements, intended to power things like virtual avatars) and Riva (an SDK used for both speech recognition and text-to-speech).

The company also teased that it was building a new AI supercomputer, which it claims will be the world’s fastest when deployed. The supercomputer, named Eos, will be built using the Hopper architecture and contain some 4,600 H100 GPUs to offer 18.4 exaflops of “AI performance.” The system will be used for Nvidia’s internal research only, and the company said it would be online in a few months’ time.

Over the past few years, a number of companies with strong interest in AI have built or announced their own in-house “AI supercomputers” for internal research, including Microsoft, Tesla, and Meta. These systems are not directly comparable with regular supercomputers as they run at a lower level of accuracy, which has allowed a number of firms to quickly leapfrog one another by announcing the world’s fastest.

However, during his keynote address, Nvidia CEO Jensen Huang did say that Eos, when running traditional supercomputer tasks, would rack up 275 petaflops of compute — 1.4 times faster than “the fastest science computer in the US” (Summit). “We expect Eos to be the fastest AI computer in the world,” said Huang. “Eos will be the blueprint for the most advanced AI infrastructure for our OEMs and cloud partners.”


Nvidia and AMD cut GPU orders to deal with crypto’s collapse

A new report shows that Nvidia, AMD, and Apple may all be trying to lower their chip orders from TSMC. This is a direct response to the lower demand for electronics we’ve been experiencing over the past few months (and the fall in GPU prices with crypto’s demise). Nvidia, in particular, is in a tough spot as it may not be able to reduce its orders.

If this proves to be true, it brings up a lot of things to consider. With AMD and Nvidia soon set to release the next generation of GPUs, will the lowered consumer interest result in a drop in prices, or will the potentially smaller supply simply mean there will be fewer next-gen graphics cards to buy?


The information comes from DigiTimes, which cites its own anonymous industry sources. The report (translated by Twitter user RetiredEngineer) claims that AMD, Nvidia, and Apple, which are all TSMC clients, have tried to change their chip orders — but not all three have been successful.

Apple managed to cut the initial shipment of iPhone 14 chips by around 10%. AMD, on the other hand, revised its orders for 7nm and 6nm wafers, reportedly lowering the amounts by around 20,000 wafers. This applies to shipments in the fourth quarter of 2022 and the first quarter of 2023. However, AMD hasn’t changed its order for 5nm wafers intended for PCs and servers.

Nvidia seems to be in a stickier spot than the other two tech giants. It made prepayments to TSMC to secure 5nm wafers for the upcoming RTX 4000-series of graphics cards. Now, facing a drastic drop in consumer demand, Nvidia tried to alter its order — but according to DigiTimes, TSMC wouldn’t budge. The companies came to an agreement under which the first shipments will be delayed by one quarter, but Nvidia is now supposed to find replacement customers for TSMC’s vacated production capacity. A year ago, that would have been easy, but now, it might be nearly impossible.

After many long months of the GPU shortage, graphics card prices are now falling rapidly, and retailers and manufacturers alike are left with a surplus of GPUs that no one wants to buy. The second-hand market is flooded with used GPUs that did their time mining crypto and are no longer profitable to keep running due to the crash in the cryptocurrency market.

It’s not just graphics cards that have suddenly become far less sought after. According to a forecast by Gartner, worldwide PC shipments are on track to decline by 9.5% in 2022. The personal computer market is experiencing the steepest decline of all the device segments Gartner analyzes, but mobile devices (tablets and phones) are also seeing a drop in shipments. Consumer PC demand is suffering bigger losses than business PC demand, with declines of 13.1% and 7.2% in 2022, respectively.

Will this affect the pricing of next-gen graphics cards?

Fans on the Nvidia RTX 3080.
Jacob Roach / Digital Trends

Something many aspiring PC builders are wondering about is whether the current market situation will affect the pricing of Nvidia’s RTX 4000 graphics cards. Given that the company is currently experiencing a drop in demand and seems to predict that this downward trend is going to continue, it makes sense that it might also change the pricing of next-gen GPUs.

Unfortunately, it’s hard to say with any certainty what Nvidia might do in this situation. A year ago, we were in the midst of a market where the demand was much higher than the supply. That is no longer the case, and unless the crypto market miraculously recovers, we won’t be coming back to that for quite a while. With the world economy in a shaky place and inflation on the rise, lowering the pricing might be the only thing that would kickstart GPU sales again.

Nvidia still hasn’t announced the pricing of its next-gen graphics cards — but looking at the current generation gives us a bit of an idea of what to expect. AMD’s best graphics cards are cheaper than Nvidia’s across the board, so Nvidia hasn’t exactly been striving to be competitive on price this time around. It also still hasn’t lowered the prices of all the surplus GPUs it has lying around, and with the looming launch of RTX 4000, it’s high time Nvidia tried to sell those off.

One thing is for certain — we’re finally in a buyer’s market. Whether you buy a new GPU or hold off for a little while to see the next generation hit the shelves, it’s refreshing to no longer see graphics cards hitting 300% of MSRP and selling out in seconds.


GPU benchmarks: How they can misguide a GPU upgrade

GPU prices are finally normal, and you might have found yourself in recent weeks browsing graphics cards reviews to see which ones top the charts. After all, the best graphics cards live and die based on their performance in gaming benchmarks, right?

But those benchmarks are far from a definitive answer, and in most cases, they skew the conversation away from the games you actually play and the experiences they offer.

I’m not saying we need to throw the baby out with the bathwater. GPU benchmarks offer a lot of value, and I don’t think anything needs to change about how we (or others) conduct GPU reviews. But now that it’s actually possible to upgrade your graphics card, it’s important to take all of the performance numbers in context.

Games, not benchmarks

The most popular Steam game of 2022 so far? Lost Ark, which only calls for a GTX 1050.

DT’s computing evergreen coordinator Jon Martindale made a joke concerning GPU prices the other day: “I need a new GPU so I can get 9,000 frames in Vampire Survivors.” Silly, but there’s a salient point there. When looking at performance, it’s important to recognize the fact that there are around four times as many people playing Terraria or Stardew Valley as there are playing Forza Horizon 5 or Cyberpunk 2077 at any given time.

The best games to benchmark your PC are not the most popular games that people play. Of the top 25 most popular Steam games, only two are regularly used in benchmarks: Grand Theft Auto V and Rainbow Six Siege. Virtually no “live” games are included in benchmark suites due to network variation, despite the fact that these games largely top the charts in player count, while recent, GPU-limited games are usually overrepresented.

The games that we and others have chosen as benchmarks aren’t the problem — they offer a way to push a GPU to its extreme in order to compare it to the competition and previous generations. The problem is that benchmark suites frame performance around the clearest margins. And those margins can imply performance that doesn’t hold up outside of a graphics card review.

Benchmarks are often misleading

A hand grabbing the RTX 3090 Ti graphics card.
Jacob Roach / Digital Trends

Especially when it comes to the most recent graphics cards, benchmarks can be downright misleading. Every benchmark needs at least an average frame rate, which is a problematic number in and of itself: brief spikes in frame rate are over-represented in an average. That’s why reviews also include 1% lows and 0.1% lows, which average the slowest 1% and 0.1% of frames, respectively. But those numbers still don’t say much about how often those frame rate dips occur — only how severe they are.

A frame time chart can show how often frame rate dips happen, but even that only represents the section of the game the benchmark focused on. I hope you see the trend here: The buck has to stop somewhere, even as more data points try to paint a picture of real-world performance. Benchmarks show relative performance, but they don’t say much about the experience of playing a game.
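
To make that concrete, here’s a minimal sketch of how those figures are usually computed from raw frame times (the kind of data a capture tool like PresentMon spits out), written in TypeScript. Exact definitions vary a little from outlet to outlet, so treat the math below as the general idea rather than any reviewer’s exact methodology; the example trace is made up.

// Sketch of the usual math: average fps plus 1% and 0.1% lows, computed from
// raw frame times in milliseconds. Definitions vary slightly between reviewers.
function summarize(frameTimesMs: number[]) {
  const totalMs = frameTimesMs.reduce((a, b) => a + b, 0);
  const averageFps = (frameTimesMs.length / totalMs) * 1000;

  // Sort slowest frames first; the "1% low" averages the slowest 1% of frames
  // and reports them as an fps figure (the 0.1% low works the same way).
  const slowestFirst = [...frameTimesMs].sort((a, b) => b - a);
  const lowFps = (fraction: number) => {
    const count = Math.max(1, Math.floor(slowestFirst.length * fraction));
    const worst = slowestFirst.slice(0, count);
    return 1000 / (worst.reduce((a, b) => a + b, 0) / count);
  };

  return {
    averageFps,
    onePercentLow: lowFps(0.01),
    pointOnePercentLow: lowFps(0.001),
  };
}

// Made-up example: a mostly 10ms (100 fps) run with a handful of 40ms stutters.
// The average barely moves, while the 1% and 0.1% lows expose how severe the
// dips are -- though not, as noted above, how often they happen.
const trace = Array.from({ length: 1000 }, (_, i) => (i % 200 === 0 ? 40 : 10));
console.log(summarize(trace));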

The RTX 3090 Ti is 8.5% faster than the RTX 3090 in Red Dead Redemption 2, for example. That’s true, and it’s important to keep in mind. But the difference between the cards when playing is all of seven frames. I’d be hard-pressed to tell a difference in gameplay between 77 fps and 84 fps without a frame rate counter, so while the RTX 3090 Ti is technically faster, it doesn’t impact the experience of playing Red Dead Redemption 2 in any meaningful way.

Performance benchmarks for the RTX 3090 and RTX 3090 Ti in Red Dead Redemption 2.

The recent F1 22 is another example. The game shows huge disparities in performance between resolutions with all of the settings cranked up (as you’d usually find them in a GPU review). But bump down a few GPU-intensive graphics options, and the game is so CPU-limited that it offers almost identical performance between 1080p and 4K. No need for a GPU upgrade there.

No one is lying or intentionally misleading with benchmarks, but the strict GPU hierarchy they establish is an abstraction of using your graphics card for what you bought it for in the first place. Benchmarks are important for showing differences, but they don’t say if those differences actually matter.

How to make an informed GPU upgrade

Installing a graphics card in a motherboard.

You should absolutely look at benchmarks before upgrading your GPU, as many as you can. But don’t put your money down until you answer these questions:

  • What games do I want to play?
  • What resolution do I want to play at?
  • Are there other components that I need to upgrade?
  • What’s my budget?

Relative performance is extremely important for understanding what you’re getting for your money, but better isn’t strictly better in the world of PC components. Depending on the games you’re playing, the resolution you’re playing at, and potential bottlenecks in your system, you could buy a more expensive GPU and get the exact same performance as a cheaper one.

That doesn’t mean you shouldn’t splurge. There’s a lot to be said about buying something nice just because it’s nice, even if it doesn’t offer a huge advantage. If you have the means, there’s novelty in owning something super powerful like an RTX 3090 — even if you just use it to play Vampire Survivors. Just don’t expect to notice a difference when you’re actually playing.

This article is part of ReSpec – an ongoing biweekly column that includes discussions, advice, and in-depth reporting on the tech behind PC gaming.
