NVIDIA’s new ‘GeForce Now RTX 3080’ streams games at 1440p and 120 fps

NVIDIA has unveiled its next-generation cloud gaming platform called GeForce Now RTX 3080 with “desktop-class latency” and 1440p gaming at up to 120 fps on PC or Mac. The service is powered by a new gaming supercomputer called the GeForce Now SuperPod and costs double the price of the current Priority tier.

The SuperPod is “the most powerful gaming supercomputer ever built,” according to NVIDIA, delivering 39,200 TFLOPS, 11,477,760 CUDA cores and 8,960 CPU cores. NVIDIA said it will provide an experience equivalent to 35 TFLOPS, or triple the Xbox Series X, roughly equal to a PC with an 8-core CPU, 28GB of DDR4-3200 RAM and a PCIe Gen4 SSD.
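Those headline numbers can be sanity-checked with some quick arithmetic. The TFLOPS figures below are NVIDIA's claims and the Xbox Series X number is Microsoft's public spec; the session-count division is our own back-of-the-envelope estimate, not anything NVIDIA has stated:

```python
# Back-of-the-envelope check of NVIDIA's SuperPod claims (our arithmetic).
superpod_tflops = 39_200      # total claimed SuperPod throughput
per_user_tflops = 35          # claimed per-session experience
xbox_series_x_tflops = 12.15  # Microsoft's public spec for the Series X GPU

# "Triple the Xbox Series X": 35 / 12.15 is about 2.9x, so "triple" is a
# fair round-up on NVIDIA's part.
ratio_vs_xbox = round(per_user_tflops / xbox_series_x_tflops, 2)

# Dividing total throughput by the per-user figure caps one SuperPod at
# roughly 1,120 simultaneous RTX 3080-class sessions.
max_sessions = superpod_tflops // per_user_tflops

print(ratio_vs_xbox, max_sessions)  # 2.88 1120
```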

As such, you’ll see 1440p gaming at up to 120fps on a Mac or PC, and even 4K HDR on a Shield, though NVIDIA didn’t mention the refresh rate for the latter. It’ll also support 120 fps on mobile, “supporting next-gen 120Hz displays,” the company said. By comparison, the GeForce Now Priority tier is limited to 1080p at 60 fps, with adaptive VSync available in the latest update.

It’s also promising “click-to-pixel” latency down to 56 milliseconds, thanks to tricks like adaptive sync that reduce buffering, supposedly beating other services and even local, dedicated PCs. However, that figure assumes a 15-millisecond round-trip delay (RTD) to the GeForce Now data center, something that obviously depends on your internet provider and where you’re located.
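NVIDIA's 56ms total only works if that 15ms round trip holds. Only those two numbers come from NVIDIA; the per-stage breakdown in this sketch is purely hypothetical, just to illustrate how the budget might be spent and how a slower connection moves the total:

```python
# Hypothetical click-to-pixel latency budget. Only the 15 ms round trip and
# the 56 ms total are NVIDIA's figures; the split of the remaining ~41 ms
# across stages below is our illustrative guess, not a measured breakdown.
stages_ms = {
    "network round trip": 15,            # NVIDIA's stated assumption
    "server-side render (~120 fps)": 8,
    "video encode": 6,
    "video decode": 6,
    "stream pacing / buffering": 9,
    "display processing + scanout": 12,
}
total_ms = sum(stages_ms.values())
print(total_ms)  # 56, matching the claimed click-to-pixel figure

# The network term moves the total one-for-one: a 40 ms round trip on a
# worse ISP route pushes the same pipeline to 81 ms.
worse_ms = total_ms - stages_ms["network round trip"] + 40
print(worse_ms)  # 81
```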

NVIDIA’s claims aside, it’s clearly a speed upgrade over the current GeForce Now Priority tier, whether you’re on a mobile device or PC. There’s a price to pay for that speed, though. The Priority tier started at $50 per year and recently doubled to $100, which is already a pretty big ask. But the RTX 3080 tier is $100 for six months (around double the Priority price) “in limited quantities,” with Founders and Priority early access starting today. If it lives up to the claims, it’s still cheaper than buying a new PC, in any case.

All products recommended by Engadget are selected by our editorial team, independent of our parent company. Some of our stories include affiliate links. If you buy something through one of these links, we may earn an affiliate commission.

Repost: Original Source and Author Link


Mastermind Behind Nvidia RTX DLSS Just Got Hired By Intel

Nvidia’s RTX features have been among the primary selling points of its graphics cards in recent years. But now, the mastermind behind those advanced graphics features works for one of Nvidia’s new rivals in the world of gaming graphics: Intel.

Nvidia RTX consists of two primary features: real-time ray tracing and Deep Learning Super Sampling (DLSS). DLSS is critical for running the latest games with ray tracing enabled and all the visual glitter turned on. It’s the bedrock that has allowed ray tracing to flourish in video games, and it’s a big reason why Nvidia still holds an edge over AMD in the space. Now, Intel looks to be joining the fray.


Intel has now hired the person behind both technologies, Anton Kaplanyan, suggesting that Intel could be working on its own DLSS competitor for its upcoming graphics cards.

Anton Kaplanyan had a short but meaningful stint at Nvidia from 2015 to 2017, during which he helped design RTX ray-tracing hardware and DLSS.

“After the hardware was done, my Nvidia Research colleagues and I realized that the hardware performance would not suffice for real-time visuals, so we started developing a completely new direction of real-time image reconstruction methods,” Kaplanyan wrote in a blog post.

Intel could be working on a similar technology for its upcoming graphics cards — the blog post is careful not to mention DLSS by name, after all. Kaplanyan’s hire is, at least in part, based on his experience with graphics and machine learning. “New differentiating technologies in graphics and machine learning is the missing cherry on the cake,” Kaplanyan wrote.


That would make sense for Intel. AMD has already fired back at Nvidia with its competing FidelityFX Super Resolution technology, and some recent job postings suggest Microsoft is working on a similar feature. With Intel’s DG2 graphics card on the horizon, the company looks like it’s ready to play ball with the latest graphics technologies.

Intel is forming an all-star roster of graphics experts. In 2017, the company picked up Raja Koduri, who’s known for working in AMD’s Radeon division on the Polaris, Vega, and Navi architectures. Koduri now heads up Intel’s graphics and software sector, leading the charge on the company’s first foray into desktop graphics cards.

Kaplanyan is likely a key part of that strategy, aiding in the development of ray tracing and the software it requires to run in real time. Before joining Intel, Kaplanyan worked as a researcher at Facebook for the company’s virtual reality (VR) endeavors. During that time, Kaplanyan published a paper on neural supersampling, which looks an awful lot like DLSS.

The future of Intel’s graphics department looks bright, assuming the pieces fall in place as they should. With ray tracing pushing graphics more than ever before, as well as the rise of high-resolution and high refresh rate monitors, a supersampling method is essential.

“I think we are at the edge of a new era in graphics — an era where visual computing will become more distributed, more heterogeneous, more power-efficient, more accessible, and more intelligent,” Kaplanyan wrote.



Nvidia RTX 40 Series GPUs Might Be Even More Power Hungry

A flurry of recent rumors suggests that Nvidia’s upcoming RTX 40-series graphics cards will be even more power-hungry than what’s currently available. Leakers peg the power consumption in the range of 400W to 500W for the flagship card, which is higher than even the obscenely powerful RTX 3090.

3DCenter, which has previously covered the roller coaster of GPU prices in Europe, rounded up multiple leakers claiming the card will use at least 400W of power. That’s certainly not out of the question, as the RTX 3090 already requires 350W. Assuming Nvidia wants to push even more performance out of the upcoming range, a 400W+ power requirement could be possible.

Nvidia hasn’t announced anything about the RTX 40-series yet, so it’s likely the final design is still being tweaked. Kopite7kimi, one of the leakers claiming a 400W+ power limit and a well-known source of Nvidia leaks, said the upcoming range will be built on chipmaker TSMC’s 5nm node, breaking from the 8nm Samsung process Nvidia used for RTX 30-series graphics cards.

The next-generation architecture, tentatively named Lovelace, is rumored to arrive in late 2022 or early 2023. The rumor mill suggests that the graphics core powering the range will be capable of housing up to 18,432 CUDA cores, which is nearly 8,000 more than the RTX 3090.
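For reference, the RTX 3090's public spec is 10,496 CUDA cores; the subtraction behind "nearly 8,000 more" is ours:

```python
# "Nearly 8,000 more than the RTX 3090" (our arithmetic on the rumored spec).
lovelace_cuda = 18_432   # rumored Lovelace flagship core
rtx_3090_cuda = 10_496   # Nvidia's public spec for the RTX 3090
print(lovelace_cuda - rtx_3090_cuda)  # 7936
```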

AMD’s upcoming cards are rumored to require just as much power. The RDNA 3 range is also rumored to consume between 400W and 500W on TSMC’s 5nm process. During a recent investor call, AMD CEO Lisa Su confirmed that 5nm is the goal and that the GPUs are on track for a 2022 launch.

Unlike Lovelace, RDNA 3 cards are rumored to use a multi-chip-module (MCM) GPU package. Essentially, the upcoming range is rumored to utilize multiple dies on the same package, unlike the RTX 40-series’ traditional monolithic design.


Nvidia is rumored to be working on its own MCM design, currently named Hopper. Originally, rumors pegged Hopper as the successor to the current Ampere range, though recent speculation suggests Nvidia is focused on delivering Lovelace sooner.

Both new generations are rumored to offer up to a 2.5x improvement over the current generation. As for where they’ll fall in relation to each other, it’s too soon to say.

As is the case with all early rumors and speculation, you shouldn’t take this information as law. We’re still far out from launch, so AMD and Nvidia are more than likely still finalizing the design and tweaking specs to meet their price, power, and performance targets.

Based on what we know so far, however, a higher power draw will likely be something PC builders need to deal with. Nvidia pushed past the 250W ceiling with the RTX 3080, RTX 3080 Ti, and RTX 3090, surpassing even the most powerful cards from the generations that preceded them. It’s too soon to say for sure, but you might need to invest in a new power supply when these cards finally arrive.



AMD RX 6600 XT Is 15% Faster Than the RTX 3060, but $50 More

Following months of leaks and rumors, AMD finally pulled back the curtain on the RX 6600 XT. The new graphics card is a 1080p addition to the RDNA 2 range, which should provide high frame rates at 1080p and 1440p with a little help from FidelityFX Super Resolution (FSR).

The Radeon RX 6600 XT is set to launch on August 11 for $379. In addition to board partner designs, AMD will supply units to desktop makers like Acer, Alienware, and HP. Although AMD showed off a render of a reference design, it won’t be manufacturing a reference model for the 6600 XT.

The card targets 1080p high refresh rate monitors with performance somewhere between an RTX 3060 and RTX 3060 Ti. In Doom Eternal, for example, the RX 6600 XT averaged 155 frames per second (fps) compared to 134 fps with the RTX 3060. Similarly, the card hit 92 fps in Assassin’s Creed Valhalla compared to 69 fps on Nvidia’s card. Overall, AMD claims the card is 15% faster on average.
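Working out the per-game uplift from AMD's own numbers (the arithmetic is ours, not AMD's) shows Valhalla well above the headline average, so the 15% figure presumably spans a larger suite of titles:

```python
# Per-game uplift implied by AMD's quoted fps numbers (our arithmetic).
doom_pct = round((155 / 134 - 1) * 100, 1)    # Doom Eternal
valhalla_pct = round((92 / 69 - 1) * 100, 1)  # Assassin's Creed Valhalla
print(doom_pct, valhalla_pct)  # 15.7 33.3
# Valhalla's 33% gain sits well above the claimed 15% average, suggesting
# the average covers more titles than the two shown here.
```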

It’s important to point out that these benchmarks come from AMD, so we’ll need to wait for further testing to draw any firm conclusions. AMD also ran the tests with Smart Access Memory (SAM) enabled, which is a feature that can boost frame rates with Ryzen 5000 and select Ryzen 3000 processors.

Here are the specs we know right now:

RX 6600 XT
GPU Navi 23
Interface PCIe 4.0
Compute units 32
Stream processors 2,048
Ray accelerators 32
Game clock 2,359MHz
Memory 8GB GDDR6
Memory speed 16Gbps
Bandwidth Up to 256 GB/s
Memory bus 128-bit
TDP 160W

Although the performance is impressive, the suggested price of $379 is higher than the direct competition. That’s only $20 less than the RTX 3060 Ti and $50 more than the RTX 3060, the latter of which matches the RX 6600 XT in games like Cyberpunk 2077 and Horizon Zero Dawn. 

AMD set the price to be representative of where the market currently is. At launch, select designs from AMD’s partners will be available at $379, though the company pointed out how challenging this price is to meet given the ongoing GPU shortage.


The biggest win for the RX 6600 XT looks like FSR. At 1080p with max settings and ray tracing turned on, the card was able to surpass 100 fps in Godfall and boost frame rates by up to 74% in The Riftbreaker. It also managed to increase the frame rate in Resident Evil Village, though only by a modest 13%.

FSR also allows you to push the resolution above 1080p. With ray tracing off at 1440p, AMD showed the RX 6600 XT jumping from 113 fps to 243 fps in Resident Evil Village. Similarly, Marvel’s Avengers climbed from 57 fps at native 1440p to 96 fps in FSR’s aggressive Performance mode.
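Those before-and-after figures imply sizable uplifts. The fps numbers are AMD's; the percentage arithmetic is ours:

```python
# Uplift implied by AMD's quoted before/after FSR numbers (our arithmetic).
re_village_pct = round((243 / 113 - 1) * 100, 1)  # Resident Evil Village, 1440p
avengers_pct = round((96 / 57 - 1) * 100, 1)      # Marvel's Avengers, Performance mode
print(re_village_pct, avengers_pct)  # 115.0 68.4
```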


With FSR available, the RX 6600 XT looks like the 1080p gamer’s dream. However, availability will likely be a problem. “We are doing our best to get supply, but the demand is unprecedented,” an AMD spokesperson said.

AMD isn’t releasing a reference design for the RX 6600 XT, but models from ASRock, Gigabyte, MSI, Asus, PowerColor, and more will be available on August 11.



AMD RX 6600 XT vs. Nvidia RTX 3060 Ti vs. RTX 3060

The latest generation of graphics cards from AMD and Nvidia has raised the bar for budget gamers. The RTX 3060 Ti, RTX 3060, and RX 6600 XT represent the cream of the crop for 1080p gaming, and the cards are even capable of running some demanding games at 1440p. But which one should you choose?

AMD hasn’t officially announced the RX 6600 XT, but multiple leaks point to a release date coming soon. Before it arrives, we pitted Nvidia’s two budget GPUs against AMD’s upcoming one to see which one is the best.

Pricing and availability

Nvidia released the RTX 3060 Ti on December 1, 2020, for $399. The slightly slower RTX 3060 came later on February 25 for $329. As per usual, the price set by Nvidia is for the Founders Edition models of each card, so options from board partners may be slightly more expensive depending on their cooling ability and features.

AMD hasn’t announced the RX 6600 XT yet, but the card is expected to arrive on August 11. Competing directly with the RTX 3060 Ti, the card is rumored to cost $399. Since AMD hasn’t confirmed the card, much less any of its details, the price and release date are subject to change.

The news on availability is mixed. The two Nvidia cards are consistently out of stock at retailers, and we expect the RX 6600 XT to sell out immediately when it launches. That’s the bad news. The good news, such as it is, is that you won’t have to make a choice based on availability.

The ongoing GPU shortage has made graphics cards hard to buy, though it is still possible to get one in 2021. You’ll struggle to find most models in stock at all, and if you do, they probably won’t be at list price. Expect to pay a few hundred dollars on top of the list price at retailers like Micro Center and Newegg.

On the secondhand market, the situation is even worse. The RTX 3060 Ti pushes toward $900 in many cases, and the RTX 3060 can cost as much as $750. We don’t have pricing details on the RX 6600 XT yet, though it’s safe to assume it will be similarly expensive on the secondhand market.


                                RTX 3060 Ti    RTX 3060    RX 6600 XT
GPU                             GA104          GA106       Navi 23
Interface                       PCIe 4.0       PCIe 4.0    PCIe 4.0
CUDA cores/stream processors    4,864          3,584       2,048
Tensor cores                    152            112         N/A
RT cores                        38             28          32
Base clock                      1,410MHz       1,320MHz    2,200MHz
Boost clock                     1,665MHz       1,777MHz    2,500MHz
Memory speed                    1,750MHz       1,875MHz    TBA
Bandwidth                       448GBps        360GBps     TBA
Memory bus                      256-bit        192-bit     128-bit
TDP                             200W           170W        180W

A spec comparison of the RTX 3060 Ti, 3060, and RX 6600 XT doesn’t reveal much. Even the RTX 3060 Ti and 3060 don’t map neatly onto each other, with the cheaper card featuring more graphics memory but less bandwidth. In addition, AMD and Nvidia use different designs, leading to a much higher clock speed on the AMD card and a bigger memory bus on the Nvidia ones.

The RX 6600 XT isn’t out yet, either, so we don’t know the official specs. The specs listed in the table above are rumored, not confirmed.

Starting with the two cards that have been released, it shouldn’t come as a surprise that the RTX 3060 Ti is faster than the RTX 3060. It’s around 15% faster, placing it on par with last-generation’s RTX 2080 Super. The RTX 3060 is more closely aligned with last-gen’s RTX 2070, performing slightly better than that card but slightly worse than the RX 5700 XT.

In our testing of the RTX 3060, we found it was around 14% behind the RTX 3060 Ti in synthetic benchmarks. That said, we still hit 84 frames per second in Battlefield V, 94 fps in Fortnite, and 114 fps in Civilization VI at 1440p with all the sliders cranked up. More demanding games like Control and Cyberpunk 2077 struggled to hit 60 fps at 1440p. Dropping to 1080p produced up to a 32% increase in frame rate, however.

We don’t have benchmarks for the RX 6600 XT yet, though a benchmark leaked not too long ago. The leak shows that the card performs within the range of the RTX 3060 Ti, but it didn’t reveal any specific frame rates. We expect performance to at least match the RTX 3060 Ti, but it’s hard to say for sure right now.

Between the three, the RTX 3060 is likely in last place. With the current GPU pricing situation, though, choosing it could mean around $200 in savings. Frankly, the RTX 3060 performs much better than it has any right to, and when hundreds of dollars are on the table, a 15% performance difference doesn’t mean much.

Ray tracing, upscaling, and more


For features, all three of our competitors are much closer than they were a few months ago. The standout features for the RTX 3060 Ti and RTX 3060 are Deep Learning Super Sampling (DLSS) and ray tracing, which are both part of the RTX features package. Ray tracing helps lighting look more accurate in games, and DLSS improves frame rates with artificial intelligence-assisted upscaling. Neither card is stellar at ray tracing, but both are substantially faster at it than their counterparts from the previous generation, and DLSS can make a huge difference to performance in supported games.

AMD cards used to lack these features, but not any longer. The RX 6600 XT should support ray tracing like the rest of the RX 6000 range, and AMD now offers its FidelityFX Super Resolution (FSR) upscaling tech. FSR accomplishes the same goal as DLSS, and although it’s not quite as impressive, it gets very close. Ray tracing is unlikely to be hugely impressive on the 6600 XT, as AMD’s RDNA 2 cards just aren’t as fast at it as Nvidia’s newer-generation options.
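For a rough intuition of what spatial upscaling involves: FSR 1.0 is, broadly, an edge-aware upscale followed by a sharpening pass. The toy sketch below (nearest-neighbor upscale plus an unsharp mask) only illustrates that general "upscale, then sharpen" shape; it is not AMD's actual EASU/RCAS algorithm, which is far more sophisticated:

```python
import numpy as np

def toy_spatial_upscale(img, scale=2, sharpen=0.5):
    """Toy 'upscale then sharpen' illustration (NOT AMD's EASU/RCAS).

    img: 2D float array (grayscale), values in [0, 1].
    """
    # Step 1: naive nearest-neighbor upscale to the target resolution.
    up = np.repeat(np.repeat(img, scale, axis=0), scale, axis=1)
    # Step 2: unsharp mask -- boost each pixel's difference from a 3x3 box blur.
    pad = np.pad(up, 1, mode="edge")
    blur = sum(pad[i:i + up.shape[0], j:j + up.shape[1]]
               for i in range(3) for j in range(3)) / 9.0
    return np.clip(up + sharpen * (up - blur), 0.0, 1.0)

frame = np.random.rand(540, 960)           # pretend 960x540 render target
out = toy_spatial_upscale(frame, scale=2)  # pseudo-"1080p" output
print(out.shape)  # (1080, 1920)
```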

The Nvidia cards have the lead at the moment. AMD is late to the party when it comes to upscaling tech and ray tracing, though it’s quickly catching up to Nvidia. At the moment, we recommend one of the Nvidia cards for features. In a matter of months, however, the race will likely be much tighter.



Amazon responds to RTX 3090 New World nightmare: How to avoid a bricked GPU [Update]

It’s been a rough day for some unfortunate PC gamers. Amazon’s upcoming MMO, New World, entered closed beta yesterday, and it didn’t take long before some participants reported that the game bricked their GPUs. These weren’t aging GPUs trying to run a game beyond their capability, but rather top-of-the-line RTX 3090 GPUs.

Most of the reports seem to suggest that the EVGA RTX 3090 FTW3 Ultra is having the most problems, though there have been user reports that suggest other GPUs are vulnerable as well. After some speculation from the player base about what could be causing the issue – players think it’s uncapped framerate in menus exacerbating a power delivery design flaw in these GPUs – Amazon has now responded to the claims and given users some tips on how to avoid damaging their hardware.

In a post to the New World forums, an Amazon customer service rep explains that the studio believes the issue “is related with driver settings and frame rate limiters.” First, it’s recommended that users disable the overrides in their driver settings, hit “Apply,” and then restart the game client.

Beyond that, Amazon also suggests capping FPS to rein in the GPU’s utilization. To do that, go into the game’s settings, open the “Visuals” menu, and set the Max FPS option to “60.” Finally, Amazon suggests double-checking in the NVIDIA Control Panel that the Max Frame Rate setting for New World shows either “Use Global Settings (Off)” or “Off.”
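The cap matters because an uncapped menu scene is so cheap to draw that a high-end GPU can render it thousands of times per second, holding the card at sustained worst-case load. The sketch below is a generic illustration of the kind of menu frame cap Amazon describes, not New World's actual code:

```python
import time

def run_menu(render_frame, cap_fps=60, duration_s=0.25):
    """Minimal frame-cap sketch (illustrative only, not New World's code).

    Without a cap, a trivial menu scene renders as fast as the GPU allows,
    keeping it pinned at full load. Sleeping off the leftover frame time
    bounds how often the scene is drawn.
    """
    frame_budget = 1.0 / cap_fps
    frames = 0
    end = time.monotonic() + duration_s
    while time.monotonic() < end:
        start = time.monotonic()
        render_frame()                 # draw the (cheap) menu scene
        elapsed = time.monotonic() - start
        if elapsed < frame_budget:     # cap: sleep off the remainder
            time.sleep(frame_budget - elapsed)
        frames += 1
    return frames

n = run_menu(lambda: None, cap_fps=60, duration_s=0.25)
print(n)  # roughly 15 frames in 0.25 s, i.e. ~60 fps instead of thousands
```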

The hope is that these fixes solve the problem entirely because this is a costly issue to encounter – not to mention one that’s made more complicated by the fact that these top-end GPUs are incredibly difficult to find at the moment. We’ll see if users report ongoing GPU issues with New World now that Amazon has detailed some fixes, so stay tuned.

Update: Amazon has now issued a statement regarding reports of bricked GPUs. “Hundreds of thousands of people played in the New World Closed Beta yesterday, with millions of total hours played,” Amazon said in a statement to SlashGear. “We’ve received a few reports of players using high-performance graphics cards experiencing hardware failure when playing New World.”

“New World makes standard DirectX calls as provided by the Windows API. We have seen no indication of widespread issues with 3090s, either in the beta or during our many months of alpha testing. The New World Closed Beta is safe to play. In order to further reassure players, we will implement a patch today that caps frames per second on our menu screen. We’re grateful for the support New World is receiving from players around the world, and will keep listening to their feedback throughout Beta and beyond.”

So, not only do we have the tips listed above, but now this statement also confirms that Amazon will implement a patch that puts a hard cap on framerate while in menus. Hopefully that’s the end of any GPU issues players have while running New World.



If you have an RTX 3090, Amazon’s New World MMO could kill it

Amazon’s New World MMO is currently in closed beta, and that has led to a surge of interest in the game. New World launches at the end of August, so this closed beta is a chance for prospective buyers to see the mostly-finished product, and a lot of folks are taking Amazon up on its offer of beta access in exchange for pre-orders. However, if you have an RTX 3090 graphics card in your rig, you might want to sit out the New World beta for now, as there have been reports of the game bricking GPU hardware.

As spotted by PC Gamer, there are threads up on both the New World subreddit and the New World official forums in which players report that playing New World has bricked their GPUs. The issue seems to mostly revolve around EVGA cards – specifically the EVGA RTX 3090 FTW3 Ultra – though there have been less frequent reports of other cards as well.

Obviously, it’s never a good time to have a GPU die, but right now is a terrible time given the current PC hardware shortage. While it’s been difficult to pin down a precise cause, there’s talk among the player base that uncapped framerates in New World‘s menus (or even while players are waiting in queue) is to blame. If you’re going to play New World and you want to avoid bricking your high-end GPU, limiting your frame rate to 60 or even 30 might be a good place to start.

Talk of GPU failures while playing New World really started making the rounds when Twitch streamer Gladd experienced that exact thing on-stream. Shortly afterward, he published a tweet in which he confirmed that his graphics card had been fried “completely.” PC Gamer notes that there were known issues with early-model RTX GPUs that had defective power delivery components, but a driver update pushed by NVIDIA supposedly quashed those issues.

Until the community, the developers behind New World, and GPU manufacturers get to the bottom of this issue, it’s probably best to avoid the New World beta if you’ve got an RTX 3090, particularly an EVGA FTW3 Ultra. We’ll see what happens from here, but with just about five weeks left to go for release, this could potentially be a big problem for New World.



Leaked AMD RX 6600 XT Benchmark Matches the RTX 3060 Ti

Following an alleged launch date for the RX 6600 XT and RX 6600, photos and benchmarks have leaked for the upcoming card. The benchmarks show the RX 6600 XT roughly matching Nvidia’s RTX 3060 Ti, which should be the card’s direct competitor.

In a now-removed forum post on Baidu Tieba, a user posted screenshots of both the card and the benchmarks for it. With the Ludashi benchmarking tool — available in China — the RX 6600 XT earned a graphics score of 414,621. That’s above an RTX 3060 Ti model on the leaderboards, which earned a score of 413,902, but below another RTX 3060 Ti model, which earned a score of 415,516.


Ludashi measures overall system performance, not just the graphics card, so it’s important to take these numbers with a grain of salt. The leaderboards hold multiple results for most graphics cards, and it’s possible that other factors like heat could limit performance. Regardless, it looks like the RX 6600 XT is at least competitive with the RTX 3060 Ti, which is good news.
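To underline how close those three results are, the spread between the two RTX 3060 Ti entries is itself only a fraction of a percent (the percentage arithmetic on the leaked numbers is ours):

```python
# Spread between the leaked Ludashi scores (our arithmetic).
rx_6600_xt = 414_621
rtx_3060_ti_low = 413_902
rtx_3060_ti_high = 415_516

spread_pct = round((rtx_3060_ti_high - rtx_3060_ti_low) / rtx_3060_ti_low * 100, 2)
print(spread_pct)  # 0.39 -- run-to-run noise territory for a system-wide score
```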

In addition to the benchmark, the leaker posted several photos of the card they tested. The photos show a rather bland plastic shroud, with only a small Radeon logo in the center to identify it. The design runs counter to a previously leaked photo of the RX 6600 XT, not to mention the design of the rest of the RX 6000 range, suggesting it’s a card from a board partner.


This particular card features a dual-fan design over an aluminum heatsink, unlike the single-fan design we expected for the RX 6600 XT. Given the heatsink and the cheap-looking shroud, it’s likely this isn’t the design AMD is going with for its first-party cards. It’s still possible, however.

The pictures did reveal that the RX 6600 XT will have 8GB of video memory, which lines up with previous rumors. The RX 6600 XT will reportedly come with 8GB of GDDR6 memory on a 128-bit bus and the new Navi 23 XT GPU core. The core is rumored to offer 32 compute units (CUs) for a total of 2,048 stream processors.

The RX 6600, which will reportedly launch at the same time as the XT model, will feature a slightly cut GPU core. It’s rumored to come with 28 CUs, totaling 1,792 stream processors. Reports suggest that it will come with the same 8GB of GDDR6 memory on a 128-bit bus.

AMD hasn’t announced either card yet or provided any information on pricing. However, multiple leaks over the past month point to a release date coming soon.



NVIDIA shows ‘Wolfenstein: Youngblood’ with RTX ray-tracing on ARM

NVIDIA recently said that it was working with MediaTek to bring RTX graphics to ARM-based laptops, and now it has shown what that might look like for gamers. At the Game Developers Conference (GDC), it unveiled a technical demo with an RTX-powered MediaTek ARM processor running Wolfenstein: Youngblood.

NVIDIA showed off real-time ray-traced reflections and DLSS in the game, running on an ARM-based platform for the first time. It also showed off a demo called Bistro (from Amazon’s Lumberyard game engine) running real-time ray tracing on ARM, with RTX direct illumination (RTXDI) and NVIDIA Real-Time Denoisers (NRD) features turned on. The demos ran on a MediaTek Kompanio 1200 ARM-based platform combined with a GeForce RTX 3060 GPU.

NVIDIA made the tech work by porting several RTX SDKs to ARM devices. Those include Deep Learning Super Sampling (DLSS) to boost frame rates, RTX direct illumination, the NVIDIA Real-Time Denoisers, the RTX memory utility (RTXMU) and RTX global illumination (RTXGI). NVIDIA said that the RTXDI, NRD and RTXMU SDKs for ARM with Linux are now available for developers, with RTXGI and DLSS coming soon.

Of course, you won’t get to see any of this until manufacturers add RTX hardware to their ARM-based laptops, Chromebooks or other devices. Game manufacturers will also need to implement the tech for ARM-based games. However, both the Wolfenstein: Youngblood developer and game engine company seem bullish. 

“RTX support for ARM and Linux opens up new opportunities for game developers to provide more immersive experiences on a wider variety of platforms,” said Unity senior technical product manager Mathieu Muller. “An id Tech-based game running on an ARM CPU with ray tracing enabled is a significant step in a journey that will result in many more gaming platforms being available to all game developers,” added MachineGames CTO Jim Kjellin.

Of course, NVIDIA’s relationship with ARM is set to get a whole lot closer, as it agreed last year to buy the company for $40 billion. However, the deal is subject to regulatory approval, and NVIDIA rival (and ARM customer) Qualcomm has objected to it. On top of that, ARM employs 3,000 people in the UK, and that country’s regulator is currently investigating the sale.



Nvidia Might Be Working on RTX 3080 Super for Laptops

A new rumor suggests Nvidia might be working on the RTX 3080 Super and RTX 3070 Super for laptops. The rumor falls in line with a leaked roadmap from Lenovo last month, which listed the upcoming ThinkPad X1 Extreme Gen 4 sporting either an RTX 3080 Super or RTX 3070 Super. We may know the names of the cards, but that’s about it.

The rumor comes from Videocardz, which spotted a tweet from Greymon55 saying that the range is set to launch next year. The Twitter account was only set up this month, but it has already caught the attention of some well-known leakers.

The tweet alone doesn’t say much, but the Lenovo leak lends it some credibility. The original leak shows that you can configure the X1 Extreme Gen 4 with an Nvidia GTX 1650 Ti, RTX 3060, RTX 3070 Super, or RTX 3080 Super. Meanwhile, Lenovo’s X1 Extreme Gen 4 product page lists the RTX 3080, RTX 3070, RTX 3060, or RTX 3050 Ti as graphics options in the upcoming machine.

The RTX 3080 Super and RTX 3070 Super will allegedly come with 16GB and 8GB of GDDR6 memory, respectively. That’s the only spec we know about, but these Super variants, if they exist, will likely come with more CUDA cores. The RTX 2080 Super mobile, for example, came with 128 more CUDA cores than the RTX 2080 mobile. The cards will likely use the same Ampere architecture, but they could come with a redesigned GPU core.

Looking at last-gen’s launch cadence, it’s possible that Nvidia could announce Super variants in late 2021 or early 2022. The RTX 2080 mobile released in January 2019, and the RTX 2080 Super followed in April 2020. Similarly, the RTX 3080 mobile was announced in January 2021, putting the RTX 3080 Super mobile on track for an early 2022 release.

Nvidia hasn’t announced or hinted at anything at this point, though, and it’s still too soon to say these cards are coming. Last year, Nvidia was apparently working on a 20GB version of the RTX 3080 Ti and a 16GB version of the RTX 3070 Ti, both of which never made it to market. The cards were reportedly canceled to make way for the RTX 3080 Ti and RTX 3070 Ti that are available today.

If previous launches are anything to go by, Nvidia is likely working on an update to its mobile RTX 30-series range. However, it’s possible that the design will be reworked, rebranded, or completely scrapped before next year rolls around.
