Categories
Game

Engadget Podcast: The repairable iPhone 14 and NVIDIA’s RTX 4000 GPUs

Surprise! The iPhone 14 is pretty repairable, it turns out. This week, Cherlynn and Devindra chat with Engadget’s Sam Rutherford about this move towards greater repairability and what it means for future iPhones. Also, they dive into NVIDIA’s powerful (and expensive!) new RTX 4080 and 4090 GPUs. Sure, they’re faster than before, but does anyone really need all that power?

Listen above, or subscribe on your podcast app of choice. If you’ve got suggestions or topics you’d like covered on the show, be sure to email us or drop a note in the comments! And be sure to check out our other podcasts, the Morning After and Engadget News!

Topics

  • The iPhone 14 is surprisingly repairable – 1:17

  • NVIDIA announces RTX 4090 and 4080 GPUs (and a Portal mod with ray tracing) – 21:08

  • Huge hack at Rockstar leaks GTA 6 videos and dev code – 34:22

  • Uber was also hacked last week by the same crew that hit Rockstar – 38:37

  • Windows 11 2022 Update – 40:21

  • Google is offering a $30 1080p HDR Chromecast with Google TV – 44:05

  • Does anyone need the Logitech G Cloud gaming handheld? – 46:59

  • Twitch is banning gambling streams on October 18 – 51:56

  • Working on – 55:34

  • Pop culture picks – 1:01:35

Livestream

Credits
Hosts: Cherlynn Low and Devindra Hardawar
Guest: Sam Rutherford
Producer: Ben Ellman
Music: Dale North and Terrence O’Brien
Livestream producer: Julio Barrientos
Graphic artists: Luke Brooks and Brian Oh


Categories
Computing

Nvidia’s DLSS 3 may cut the RTX 4090’s insane power demands

Nvidia’s upcoming flagship, the RTX 4090, was tested in Cyberpunk 2077. It performed well at native resolution, but the results were far better with DLSS 3 enabled.

The card managed to surprise us in two ways. One, the maximum clock was higher than expected, and two, DLSS 3 actually managed to lower the card’s power draw by a considerable amount.

The card was tested at 1440p in a system with an Intel Core i9-12900K CPU, running the highest settings Cyberpunk 2077 has to offer, with ray tracing enabled and set to Psycho (the maximum). First, let’s look at how the GPU performed without DLSS 3 enabled.

At the native resolution, the game ran at an average of 59 frames per second (fps), with latency hovering around 72 to 75 milliseconds (ms). The RTX 4090 hit a whopping 2.8GHz clock speed without overclocking — those are stock speeds, even though the maximum advertised boost clock for the RTX 4090 is just over 2.5GHz. That works out to an increase of roughly 11% over the advertised clock. During the demo, the GPU reached 100% utilization, but temperatures stayed reasonable at around 55 degrees Celsius.

It’s a different story once DLSS 3 is toggled on, though. As Wccftech notes in its report, the GPU was using a pre-release version of DLSS 3, so these results might still change. For now, however, DLSS 3 is looking more and more impressive by the minute.

Enabling DLSS 3 also enables the DLSS Frame Generation setting, and for this test, the Quality preset was used. Once again, the GPU hit maximum utilization and a 2.8GHz boost clock, but the temperature was closer to 50C than 55C. The fps gains were nothing short of massive: 119 fps with an average latency of 53ms. That means the frame rate doubled while latency dropped by nearly 30%.

We also have the power consumption figures for both DLSS 3 on and off, and this is where it gets even more impressive. Without DLSS 3, the GPU was consuming 461 watts of power on average, and the performance per watt (Frames/Joule) was rated at 0.135 points. Enabling DLSS 3 brought the wattage down to just 348 watts, meaning a reduction of 25%, while the performance per watt was boosted to 0.513 — nearly four times that of the test without DLSS 3.
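
For readers who want to double-check the math, here’s a minimal Python sketch that recomputes the relative changes from the figures quoted above. The frames-per-joule values are taken as reported rather than re-derived, and the 73.5ms latency baseline is simply the midpoint of the reported 72 to 75ms range.

```python
# Sanity check on the deltas reported in the Wccftech test.
# All input figures come straight from the article.
dlss_off = {"fps": 59,  "latency_ms": 73.5, "watts": 461, "frames_per_joule": 0.135}
dlss_on  = {"fps": 119, "latency_ms": 53.0, "watts": 348, "frames_per_joule": 0.513}

print(f"frame rate:    {dlss_on['fps'] / dlss_off['fps']:.2f}x")                            # ~2.0x
print(f"latency:       {1 - dlss_on['latency_ms'] / dlss_off['latency_ms']:.0%} lower")     # ~28% lower
print(f"power draw:    {1 - dlss_on['watts'] / dlss_off['watts']:.0%} lower")               # ~25% lower
print(f"perf per watt: {dlss_on['frames_per_joule'] / dlss_off['frames_per_joule']:.1f}x")  # ~3.8x
```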

The RTX 4090 among green stripes.

Wccftech also ran the test on an RTX 3090 Ti and found similar, albeit less dramatic, results: the GPU still saw a 64% boost in performance and a 10% drop in power draw. The energy figures aren’t as impressive, which confirms that DLSS 3 will offer a real upgrade over its predecessor.

The reason behind this unexpected drop in power consumption likely lies in how the GPU is utilized with DLSS 3 enabled: part of the load normally placed on the FP32 cores shifts to the GPU’s tensor cores. This frees up some of the load on the GPU as a whole and, as a result, cuts power consumption.

It’s no news that the RTX 4090 is one power-hungry card, so it’s good to see that DLSS 3 might be able to bring those figures down a notch or two. Now, all we need is a game that can fully take advantage of this kind of performance. Nvidia’s GeForce RTX 4090 is set to release on October 12 and will arrive with a $1,599 price tag. With less than a month left until its launch, we should start seeing more comparisons and benchmarks soon.


Categories
Game

‘Portal’ will get ray tracing to show off NVIDIA’s 4000-series GPUs

Portal 3 may never happen, but at least we’ve got a new way to experience the original teleporting puzzle shooter. Today during his GTC keynote, NVIDIA CEO Jensen Huang announced Portal with RTX, a mod that adds support for real-time ray tracing and DLSS 3. Judging from the short trailer, it looks like the Portal we all know and love, except now the lighting around portals bleeds into their surroundings, and just about every surface is deliciously reflective.

Similar to what we saw with Minecraft RTX, Portal’s ray tracing mod adds a tremendous amount of depth to a very familiar game. And thanks to DLSS 3, the latest version of NVIDIA’s super sampling technology, it also performs smoothly with plenty of RTX bells and whistles turned on. This footage likely came from the obscenely powerful RTX 4090, but it’ll be interesting to see how well Portal with RTX performs on NVIDIA’s older 2000-series cards. Current Portal owners will be able to play the RTX mod in November.  

NVIDIA RTX Remix


Huang says the company developed the RTX mod inside of its Omniverse environment. To take that concept further, NVIDIA is also launching RTX Remix, an application that will let you capture existing game scenes and tweak their objects and environments with high resolution textures and realistic lighting. The company’s AI tools can automatically give materials “physically accurate” properties—a ceiling in Morrowind, for example, becomes reflective after going through RTX Remix. You’ll be able to export remixed scenes as mods, and other players will be able to play them through the RTX renderer. 


Categories
AI

Could Nvidia’s Thor chip rule automotive AI?

As cars get smarter and autonomous vehicles continue to be developed, there is an obvious need for more computing power. Maybe even the power of a Norse god of thunder.

At the Nvidia GTC conference today, the company announced its new DRIVE Thor platform for automotive. DRIVE Thor is intended to provide a platform that can support self-driving capabilities, vehicle operations such as parking assist, and in-vehicle entertainment. The system benefits from the Nvidia Grace CPU and GPU capabilities derived from the Hopper architecture. The DRIVE Thor platform replaces the Atlan system that was announced in April 2021. Nvidia expects the new DRIVE Thor technology to begin showing up in automakers’ 2025 vehicle models.

“Autonomous vehicles are one of the most complex computing challenges of our time,” Danny Shapiro, vice president of automotive at Nvidia, said during a press briefing. “To achieve the highest possible level of safety, we need diverse and redundant sensors and algorithms, which require massive compute.”


Why use multiple computers when you can use one?

Shapiro explained that modern vehicles use a wide array of computers, distributed throughout the vehicle.

For example, many cars today have advanced driver assistance systems, with parking assist, various monitoring cameras and multiple digital instrument clusters, alongside some form of entertainment system.

“In 2025, these functions will no longer be separate computers,” Shapiro said. “Drive Thor will enable manufacturers to efficiently consolidate these functions into a single system, reducing overall system costs.”

The goal with Drive Thor is to provide automakers with the compute headroom and flexibility to build software-defined autonomous vehicles that are continuously upgradable through secure over-the-air updates.

Thor’s power isn’t a hammer, it’s an inference transformer

The mythical Norse God Thor relied on his hammer Mjölnir, but there’s nothing mystical about what brings power to Nvidia’s DRIVE Thor platform.

Nvidia DRIVE Thor

According to Shapiro, Thor is the first automotive chip to incorporate an inference transformer engine. A transformer is a deep learning architecture that can quickly identify relationships between objects and has become particularly useful for computer vision.

“Thor can accelerate inference performance of transformers, which is vital for supporting the massive and complex AI workloads in self-driving vehicles,” Shapiro said. 

Going a step further, the system can handle multiple operations securely and in real time thanks to a capability called multi-compute domain isolation. Shapiro explained that the capability enables concurrent time-critical processes to run without interruption. Additionally, a vehicle manufacturer can simultaneously run Linux, QNX and Android operating systems and applications on a single computer.

Learning to self-drive with Drive SIM

The new DRIVE Thor system is one part of Nvidia’s overall automotive efforts. 

Another key part is the Drive Sim technology, which helps train the self-driving vehicles that will benefit from the Thor chip. Shapiro explained that Drive Sim uses a neural engine that can recreate and replay road situations in a digital twin model.

“Essentially, our researchers have developed an AI pipeline that can reconstruct a 3D scene from recorded sensor data,” Shapiro said. “At the end of the day, though, we’re creating a digital twin of the car and a digital twin of the environment.”


Categories
Game

NVIDIA’s DLSS 3 promises higher frame rates for CPU-intensive games

NVIDIA’s GeForce RTX 40 series GPUs won’t just rely on brute force to deliver high-performance visuals. The company has unveiled Deep Learning Super Sampling 3 (aka DLSS 3), a new version of its AI-based rendering accelerator. Rather than generating ‘only’ pixels, the third-gen technology can create entire new frames independently. It’s a bit like the frame interpolation you see (and sometimes despise) with TVs, although this is clearly more sophisticated — NVIDIA is improving performance, not just smoothing out video.

The technique relies on both fourth-gen Tensor Cores and an “Optical Flow Accelerator” that predicts movement in a scene by comparing two high-resolution frames, allowing the GPU to generate intermediate frames. As it doesn’t involve a computer’s main processor, the approach is particularly helpful for Microsoft Flight Simulator and other games that are typically CPU-limited. A new detail setting in Cyberpunk 2077 runs at 62FPS in 4K resolution using DLSS 2 in NVIDIA’s tests, but jumps beyond 100FPS with DLSS 3.
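
NVIDIA hasn’t detailed the internals of DLSS 3’s frame generation, but the underlying idea (estimating motion between two rendered frames and synthesizing an in-between frame) can be illustrated with a crude, CPU-side sketch using OpenCV’s dense optical flow. This is only a loose analogy under stated assumptions, not NVIDIA’s actual hardware path, which relies on the Optical Flow Accelerator and Tensor Cores.

```python
# Crude CPU-side illustration of optical-flow frame interpolation.
# This is NOT NVIDIA's DLSS 3 pipeline, just the general concept of
# synthesizing an in-between frame from motion between two frames.
import cv2
import numpy as np

def interpolate_midframe(frame_a: np.ndarray, frame_b: np.ndarray) -> np.ndarray:
    """Return an approximate frame halfway between frame_a and frame_b."""
    gray_a = cv2.cvtColor(frame_a, cv2.COLOR_BGR2GRAY)
    gray_b = cv2.cvtColor(frame_b, cv2.COLOR_BGR2GRAY)

    # Dense optical flow: per-pixel motion vectors from frame_a to frame_b.
    flow = cv2.calcOpticalFlowFarneback(gray_a, gray_b, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)

    h, w = gray_a.shape
    grid_x, grid_y = np.meshgrid(np.arange(w), np.arange(h))

    # Backward-warp frame_a halfway along the flow field. This is a rough
    # approximation; real interpolators also handle occlusions and holes.
    map_x = (grid_x - 0.5 * flow[..., 0]).astype(np.float32)
    map_y = (grid_y - 0.5 * flow[..., 1]).astype(np.float32)
    return cv2.remap(frame_a, map_x, map_y, cv2.INTER_LINEAR)
```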

Roughly 35 apps and games will offer DLSS 3 support early on, including Portal with RTX, older titles like The Witcher 3: Wild Hunt, and releases based on Unreal Engine 4 and 5.

It’s too soon to say how well DLSS 3 works in practice. NVIDIA is choosing games that make the most of DLSS, and the technology might not help as much with less constrained titles. Nonetheless, this might be useful for ensuring that more of your games are consistently smooth. Provided, of course, that you’re willing to spend the $899-plus GPU makers are currently asking for RTX 40-based video cards.


Categories
Computing

How to watch Nvidia’s RTX 4090 launch at GTC 2022

Nvidia kicks off its fall GTC 2022 event next week, where we’ll probably see the launch of the RTX 4090. Although Nvidia is as tight-lipped as ever about what products it has in store, a slew of leaks and rumors suggest we’ll see the launch of the RTX 4090 — and possibly other GPUs — during the keynote.

It’s possible we’ll see more than just next-generation GPUs as well. Here’s how to watch the RTX 4090 launch live and what to expect out of the presentation.

How to watch the Nvidia RTX 4090 launch at GTC 2022

NVIDIA GTC 2022 Keynote Teaser

Nvidia CEO Jensen Huang will deliver the company’s GTC 2022 keynote on Tuesday, September 20, at 8 a.m. PT. The presentation will likely be streamed on Nvidia’s YouTube channel, but you can bookmark the stream link on Nvidia’s website as well. We’ll embed the stream here once it’s available, but in the meantime, you can watch a short teaser for the event above.

Although the executive keynote is what most people tune in for, Nvidia’s fall GTC event lasts most of the week. It runs from September 19 to September 22, fully virtual. You can attend additional developer sessions — you can register and build a schedule on Nvidia’s GTC landing page — but they’ll focus on how developers can use Nvidia’s tools, not new product announcements.

Registration is required to attend the developer sessions. The keynote doesn’t require registration, however, and should stream on Nvidia’s YouTube channel.

What to expect from the Nvidia RTX 4090 launch

The first thing you should expect from the RTX 4090 launch is, well, the RTX 4090. Although Nvidia hasn’t confirmed any details about the card, or even that it’s called the RTX 4090, we saw the full specs leak a few days ago. According to the leak, Nvidia will announce the RTX 4090 and two RTX 4080 models — a 12GB variant and a 16GB variant.

It’s all but confirmed that Nvidia will launch its next-gen GPUs, which rumors say could offer double the performance of the current generation. Some leakers say the key feature of these cards is a configurable TDP. The story goes that each card will have a base power draw in line with what you’d expect from a GPU, but that users will be able to dedicate more power to the card for increased performance.

There’s a chance we’ll see more than the new cards, too. Nvidia has been teasing something called Project Beyond for a couple of weeks, posting vague videos to the GeForce Twitter account that show a desktop setup adorned with various clues. One recent video showed the PC starting a render in Adobe Media Encoder, suggesting it may have something to do with creative apps.

Speed matters…#ProjectBeyond
9.20.22
8AM PDT pic.twitter.com/Y2TM8KSJQn

— NVIDIA Studio (@NVIDIAStudio) September 16, 2022

Although it’s possible Project Beyond is just Nvidia’s branding for the RTX 4090 launch, it’s probably something different. In Nvidia’s most recent earnings call, the company said that it planned to reach “new segments of the market … with our gaming technology.”

That could mean anything, but we can still make some informed guesses. Last year, Nvidia shared a demo of games running on ARM PCs, laying the groundwork for ARM-based gaming in the future. Although Nvidia’s acquisition of ARM fell through, there’s still a good chance the companies are working closely together.

New segments of the market could mean PCs that don’t use traditional x86 CPUs like the ones offered by Intel and AMD. This is pure speculation, but ARM gaming has been a big focus of Nvidia for a while, and the company provides several of its gaming features for developers working on ARM applications.

Project Beyond could also be a tool for creators. Not only has Nvidia teased video encoding, but CEO Jensen Huang specifically called out streamers, vloggers, and other types of content creators during the company’s most recent earnings call.

We’ll need to wait until the keynote before knowing for sure, though. The RTX 4090 announcement is almost a sure deal, but it looks like Nvidia will have an extra surprise in store, as well.


Categories
AI

What Nvidia’s new MLPerf AI benchmark results really mean

Nvidia released results today against new MLPerf industry-standard artificial intelligence (AI) benchmarks for its AI-targeted processors. While the results looked impressive, it is important to note that some of the comparisons Nvidia makes with other systems are not really apples-to-apples. For instance, the Qualcomm systems run at a much smaller power footprint than the H100 and are targeted at market segments similar to the A100, where the test comparisons are much more equitable.

Nvidia tested its top-of-the-line H100 system based on its latest Hopper architecture; its now mid-range A100 system targeted at edge compute; and its smaller Jetson system targeted at individual and/or edge workloads. This is the first H100 submission, and it shows up to 4.5 times higher performance than the A100. According to the chart below, Nvidia has some impressive results for the top-of-the-line H100 platform.

Image source: Nvidia.

Inference workloads for AI inference

Nvidia used the MLPerf Inference V2.1 benchmark to assess its capabilities in various workload scenarios for AI inference. Inference is different from machine learning (ML) training, where models are created and systems “learn.”

Inference is used to run the learned models on a series of data points and obtain results. Based on conversations with companies and vendors, we at J. Gold Associates, LLC, estimate that the AI inference market is many times larger in volume than the ML training market, so showing good inference benchmarks is critical to success.


Why Nvidia would run MLPerf

MLPerf is an industry standard benchmark series that has broad inputs from a variety of companies, and models a variety of workloads. Included are items such as natural language processing, speech recognition, image classification, medical imaging and object detection. 

The benchmark is useful in that it can work across machines from high-end data centers and cloud, down to smaller-scale edge computing systems, and can offer a consistent benchmark across various vendors’ products, even though not all of the subtests in the benchmark are run by all testers. 

It can also create scenarios for running offline, single stream or multistream tests that create a series of AI functions to simulate a real-world example of a complete workflow pipeline (e.g., speech recognition, natural language processing, search and recommendations, text-to-speech, etc.). 

While MLPerf is broadly accepted, many players feel that running only portions of the test (ResNet is the most common) is a valid indicator of their performance, and those partial results are more generally available than full MLPerf runs. Indeed, we can see from the chart that many of the comparison chips do not have test results for other components of MLPerf, as the vendors chose not to run them.

Is Nvidia ahead of the market?

The real advantage Nvidia has over many of its competitors is in its platform approach. 

While other players offer chips and/or systems, Nvidia has built a strong ecosystem that includes the chips, associated hardware and a full stable of software and development systems that are optimized for their chips and systems. For instance, Nvidia has built tools like their Transformer Engine that can optimize the level of floating-point calculation (such as FP8, FP16, etc.) at various points in the workflow that is best for the task at hand, which has the potential to accelerate the calculations, sometimes by orders of magnitude. This gives Nvidia a strong position in the market as it enables developers to focus on solutions rather than trying to work on low-level hardware and related code optimizations for systems without the corresponding platforms.
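
Nvidia’s Transformer Engine has its own API, but the general idea of running selected parts of a workload in lower precision can be sketched with PyTorch’s automatic mixed precision. This is a hypothetical, generic illustration (FP8 and the Transformer Engine interface itself are omitted), not Nvidia’s actual tooling.

```python
# Generic mixed-precision sketch using PyTorch autocast, shown only to
# illustrate the idea of dropping precision where it is numerically safe.
# This is NOT Nvidia's Transformer Engine API.
import torch

model = torch.nn.Sequential(
    torch.nn.Linear(1024, 4096),
    torch.nn.GELU(),
    torch.nn.Linear(4096, 1024),
)
x = torch.randn(8, 1024)

device = "cuda" if torch.cuda.is_available() else "cpu"
model, x = model.to(device), x.to(device)
dtype = torch.float16 if device == "cuda" else torch.bfloat16

# Inside autocast, matrix multiplies run in the lower-precision dtype,
# while numerically sensitive ops stay in float32.
with torch.autocast(device_type=device, dtype=dtype):
    out = model(x)

print(out.dtype)  # lower-precision dtype inside the autocast region
```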

Indeed, competitors Intel and, to a lesser extent, Qualcomm have also emphasized a platform approach, but startups generally support only open-source options that may not match the level of capability the major vendors provide. Further, Nvidia has optimized frameworks for specific market segments that provide a valuable starting point from which solution providers can achieve faster time-to-market with reduced effort. Startup AI chip vendors can’t offer this level of resources.

Image source: Nvidia.

The power factor

The one area that fewer companies test for is the amount of power that is required to run these AI systems. High-end systems like the H100 can require 500-600 watts of power to run, and most large training systems use many H100 components, potentially thousands, within their complete system. The operating cost of such large systems is extremely high as a result. 

The lower-end Jetson consumes only about 50-60 watts, which is still too much for many edge computing applications. Indeed, the major hyperscalers (AWS, Microsoft, Google) all see this as an issue and are building their own power-efficient AI accelerator chips. Nvidia is working on lower-power chips, particularly since Moore’s Law provides power reduction capability as the process nodes get smaller. 

However, it needs to deliver products in the 10-watt-and-below range if it wants to fully compete with newer optimized edge processors coming to market and with companies that have stronger low-power credentials, like Qualcomm (and ARM-based designs generally). There will be many low-power uses for AI inference in which Nvidia currently cannot compete.

Nvidia’s benchmark bottom line

Nvidia has shown some impressive benchmarks for its latest hardware, and the test results show that companies need to take Nvidia’s AI leadership seriously. But it’s also important to note that the potential AI market is vast and Nvidia may not be a leader in all segments, particularly in the low-power segment where companies like Qualcomm may have an advantage. 

While Nvidia shows a comparison of its chips to standard Intel x86 processors, it does not have a comparison to Intel’s new Habana Gaudi 2 chips, which are likely to show a high level of AI compute capability that could approach or exceed some Nvidia products. 

Despite these caveats, Nvidia still offers the broadest product family, and its emphasis on complete platform ecosystems puts it ahead in the AI race, a lead that will be hard for competitors to match.


Categories
Computing

Nvidia’s RTX 4000 GPUs get new specs, and it’s not all good news

Nvidia’s upcoming Ada Lovelace graphics cards just received a new set of rumored specifications, and this time around, it’s a bit of a mixed bag.

While the news is good for one of the GPUs, the RTX 4070 actually received a cut when it comes to its specs — but the leaker says this won’t translate to a cheaper price.

And TBP, 450/420?/300W.

— kopite7kimi (@kopite7kimi) June 23, 2022

The information comes from kopite7kimi, a well-recognized name when it comes to PC hardware leaks, who has just revealed an update to the specifications of the RTX 4090, RTX 4080, and the RTX 4070. While we’ve already heard previous whispers about the specs of the RTX 4090 and the RTX 4070, this is the first time we’re getting predictions about the specs of the RTX 4080.

Let’s start with the good news. If this rumor is true, the flagship RTX 4090 seems to have received a slight bump in the core count. The previously reported number was 16,128 CUDA cores, and this has now gone up to 16,384 cores, which translates to an upgrade from 126 streaming multiprocessors (SMs) to 128. As for the rest of the specs, they remain unchanged — the current expectation is that the GPU will get 24GB of GDDR6X memory across a 384-bit memory bus, as well as 21Gbps bandwidth.
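
The arithmetic behind those figures is easy to reproduce: the rumored core counts line up with 128 CUDA cores per SM (a figure inferred here from the counts in the leak), and assuming the 21Gbps number refers to per-pin memory speed, the 384-bit bus implies the card’s peak memory bandwidth.

```python
# Back-of-the-envelope math for the leaked RTX 4090 figures quoted above.
CORES_PER_SM = 128  # inferred: 16,384 cores / 128 SMs (and 16,128 / 126 before)

for label, cores in [("previous rumor", 16_128), ("updated rumor", 16_384)]:
    print(f"{label}: {cores} CUDA cores -> {cores // CORES_PER_SM} SMs")

# Peak memory bandwidth = bus width (bits) / 8 * per-pin speed (Gbps).
bus_bits, speed_gbps = 384, 21
print(f"peak memory bandwidth: {bus_bits // 8 * speed_gbps} GB/s")  # 1008 GB/s
```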

The RTX 4090 is based on the AD102 GPU, which maxes out at 144 SMs, but it seems unlikely that the RTX 4090 itself will ever reach such heights. The full version of AD102 will probably be found in an even better graphics card, be it a Titan or simply an RTX 4090 Ti, and that card is also rumored to have monstrous power requirements. This time around, kopite7kimi didn’t reveal anything new about it, and as of now, we still don’t know for a fact that it even exists.

Moving on to the RTX 4080 with the AD103 GPU, it’s said to come with 10,240 CUDA cores and 16GB of memory. However, according to kopite7kimi, it would rely on GDDR6 memory as opposed to GDDR6X. Seeing as the leaker predicts it to be 18Gbps, that would actually make it slower than the RTX 3080 with its 19Gbps memory. The core count is exactly the same as in the RTX 3080 Ti. So far, this GPU doesn’t sound very impressive, but it’s said to come with a much larger L2 cache that could potentially offer an upgrade in its gaming performance versus its predecessors.

When it comes to the RTX 4070, the GPU was previously rumored to come with 12GB of memory, but now, kopite7kimi predicts just 10GB across a 160-bit memory bus. It’s said to offer 7,168 CUDA cores. While it’s certainly an upgrade over the RTX 3070, it might not quite be the generational leap some users are hoping for. It’s also supposedly not going to receive a price discount based on the reduction in specs, but we still don’t know the MSRP of this GPU, so it’s hard to judge its value.

Lastly, the leaker delivered an update on the power requirements of the GPUs, which have certainly been the subject of much speculation over the last few months. The predicted TBP for the RTX 4090 is 450 watts. It’s 420 watts for the RTX 4080 and 300 watts for the RTX 4070. Those numbers are a lot more conservative than the 600 watts (and above) that we’ve seen floating around.

What does all of this mean for us — the end users of the upcoming RTX 40-series GPUs? Not too much just yet. The specifications may still change, and although kopite7kimi has a proven track record, they could be wrong about the specs, too. However, as things stand now, only the RTX 4090 seems to mark a huge upgrade over its predecessor, while the other two represent much more modest changes. It remains to be seen whether the pricing will reflect that.


Categories
Computing

Nvidia’s new liquid-cooled GPUs are heading to data centers

Nvidia is taking some notes from the enthusiast PC building crowd in an effort to reduce the carbon footprint of data centers. The company announced two new liquid-cooled GPUs during its Computex 2022 keynote, but they won’t be making their way into your next gaming PC.

Instead, the H100 (announced at GTC earlier this year) and A100 GPUs will ship as part of HGX server racks toward the end of the year. Liquid cooling isn’t new for the world of supercomputers, but mainstream data center servers haven’t traditionally been able to access this efficient cooling method (not without trying to jerry-rig a gaming GPU into a server, that is).

In addition to HGX server racks, Nvidia will offer the liquid-cooled versions of the H100 and A100 as slot-in PCIe cards. The A100 is coming in the second half of 2022, and the H100 is coming in early 2023. Nvidia says “at least a dozen” system builders will have these GPUs available by the end of the year, including options from Asus, ASRock, and Gigabyte.

Data centers account for around 1% of the world’s total electricity usage, and nearly half of that electricity is spent solely on cooling everything in the data center. As opposed to traditional air cooling, Nvidia says its new liquid-cooled cards can reduce power consumption by around 30% while reducing rack space by 66%.

Instead of an all-in-one system like you’d find on a liquid-cooled gaming GPU, the A100 and H100 use a direct liquid connection to the processing unit itself. Everything but the feed lines is hidden in the GPU enclosure, which itself only takes up one PCIe slot (as opposed to two for the air-cooled versions).

Data centers look at power usage effectiveness (PUE) to gauge energy usage — essentially a ratio between how much power a data center is drawing versus how much power the computing is using. With an air-cooled data center, Equinix had a PUE of about 1.6. Liquid cooling with Nvidia’s new GPUs brought that down to 1.15, which is remarkably close to the 1.0 PUE data centers aim for.
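
PUE is just a ratio, so the comparison is easy to make concrete. Here’s a minimal sketch using a hypothetical 1,000kW IT load to illustrate the figures quoted above.

```python
# PUE = total facility power / power consumed by the IT equipment itself.
# A PUE of 1.0 would mean every watt goes to compute, none to cooling.
def pue(total_facility_kw: float, it_load_kw: float) -> float:
    return total_facility_kw / it_load_kw

it_load_kw = 1_000  # hypothetical IT load, just to make the ratios concrete

print(pue(1_600, it_load_kw))  # 1.6  -- the air-cooled figure quoted for Equinix
print(pue(1_150, it_load_kw))  # 1.15 -- with Nvidia's liquid-cooled GPUs
```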

Energy usage for Nvidia liquid-cooled data center GPUs.

In addition to better energy efficiency, Nvidia says liquid cooling provides benefits for preserving water. The company says millions of gallons of water are evaporated in data centers each year to keep air-cooled systems operating. Liquid cooling allows that water to recirculate, turning “a waste into an asset,” according to Zac Smith, head of edge infrastructure at Equinix.

Although these cards won’t show up in the massive data centers run by Google, Microsoft, and Amazon — which are likely using liquid cooling already — that doesn’t mean they won’t have an impact. Banks, medical institutions, and data center providers like Equinix comprise a large portion of the data centers operating today, and they could all benefit from liquid-cooled GPUs.

Nvidia says this is just the start of a journey to carbon-neutral data centers, as well. In a press release, Nvidia senior product marketing manager Joe Delaere wrote that the company plans “to support liquid cooling in our high-performance data center GPUs and our Nvidia HGX platforms for the foreseeable future.”


Categories
Computing

How to watch Nvidia’s Computex 2022 keynote

Next week, Nvidia will present its Computex 2022 keynote, where the company will discuss current and upcoming products for data centers, professional applications, and gaming. It’s not entirely clear what the company will be talking about, and although rumors range from a new low-end GPU to an announcement of next-gen GPUs, Nvidia is always very secretive, so we’ll just have to wait and see.

Here’s where you can watch Nvidia’s Computex keynote and what you can expect the company to announce.

How to watch Nvidia’s Computex 2022 keynote

Six different Nvidia executives will speak at Nvidia’s keynote, which starts at 8 p.m. PT on May 23. Computex is hosted in Taiwan, which means their afternoon is America’s late night, so you might have to stay up late to catch the presentation.

Nvidia is likely going to stream the presentation on its YouTube channel, as it typically does for Computex and events like GTC. After the stream is over, we expect a recording to be available on the YouTube page.

Following the presentation, Nvidia will host a talk specifically about Omniverse, hosted by Richard Kerris, Nvidia’s vice president of Omniverse. The talk will cover “the enormous opportunities simulation brings to 3D workflows and the next evolution of A.I.”

What to expect from Nvidia’s Computex 2022 keynote

Nvidia is notoriously tight-lipped about its upcoming products. In fact, ever since the GTX 10-series, Nvidia has always announced new gaming GPUs just weeks before launch, which is very different from rivals AMD and Intel, as they tend to announce big products more than a month away from launch. So, we’re either on the cusp of the next generation (presumably the RTX 40-series) or still some months away.

Jeff Fisher presenting the RTX 3090 Ti.

One hint comes from the list of speakers. When it comes to gaming news, we’re really interested in Jeff Fisher, Senior Vice President of GeForce. He previously announced the RTX 3090 Ti at CES 2022, which has led some to claim that this is proof he’s back again to announce the RTX 40-series. But it’s hard to imagine Nvidia CEO Jensen Huang not announcing the launch of a new generation of gaming GPUs in his famous kitchen. If Fisher is announcing a new gaming GPU, it’s more likely to be the rumored GTX 1630.

There are five other speakers at Nvidia’s keynote, but they’re expected to talk about data centers, professional GPUs, and automotive, not gaming GPUs. Unfortunately, if you’re not really into enterprise-grade hardware, you probably aren’t the target demographic of this keynote. Still, Nvidia does what Nvidia wants and we can never be too sure what it’s going to show at a big event like Computex.
