Categories
Computing

Intel’s upcoming Raptor Lake may hit the enviable 6GHz mark

Intel’s 13th-generation Raptor Lake chips may be capable of boosting past the 6GHz mark, if one tipster is to be believed. The company’s current Core i9-12900 CPUs already max out well above 5GHz.

The rumor comes courtesy of tipster @OneRaichu on Twitter, who claims at least one SKU of the CPU will be capable of a 6GHz turbo boost thanks to Intel’s Efficient Thermal Velocity Boost (ETVB) technology. That would make it the first x86 chip to hit that frequency out of the box.

🥵6 GHz turbo MAYBE will appear in one SKU. (in ETVB mode)🤣
I guess it should not be normal sku. https://t.co/SFubzjdXNG

— Raichu (@OneRaichu) June 21, 2022

Further evidence of ETVB surfaced when Intel updated its Extreme Tuning Utility (XTU) overclocking application to include “future platform” support for the feature. As Wccftech notes, the overclocking features listed in the XTU changelog will be available to 12th-gen Alder Lake CPUs as well.

As a refresher, Intel’s regular TVB “opportunistically” increases clock speeds by up to 100MHz if the CPU is running within its thermal limit and enough turbo headroom is available. This is how Alder Lake CPUs are able to get into the mid-5GHz range. ETVB is likely an evolution of TVB that allows even higher frequency boosts depending on how cool the CPU is running.

This probably isn’t surprising considering some of the early benchmarks we’ve seen for Raptor Lake. In the Sandra benchmarking tool, the Core i9-13900 crushed the current Core i9-12900. However, we must caution that the chip tested was an early engineering sample, so the actual performance numbers could vary upon release.

Obviously, AMD isn’t resting on its laurels, with Team Red readying its own Ryzen 7000 chips built on the new Zen 4 architecture. AMD showed off impressive results at Computex 2022, beating Intel’s Core i9-12900K by 31%. It also showed a Zen 4 chip boosting up to 5.5GHz while playing Ghostwire Tokyo.

AMD CEO Lisa Su noted that even with such impressive results, Ryzen 7000 chips will be capable of clock speeds “significantly” above 5GHz. That’s not even counting any kind of overclocking potential. That said, if Intel is able to hit 6GHz without overclocking, that will still be a remarkable feat.


Categories
Computing

Intel’s Arc Alchemist GPU requirements are raising eyebrows

Intel has released the requirements for its Arc Alchemist range for desktops, which reveal a rather peculiar tidbit.

Team Blue’s Arc Alchemist desktop GPUs will seemingly require the Resizable BAR feature to ensure “optimal performance.”

As reported by VideoCardz, the requirements document lists support for a total of three Intel CPU platforms: 12th-gen Core Alder Lake with 600-series motherboards, 11th-gen Core Rocket Lake with 500-series boards, and 10th-gen Core Comet Lake with 400-series chipsets.

The guide naturally doesn’t make specific mention of any other platforms from competitors, but VideoCardz suggests that AMD systems with Smart Access Memory could eventually be covered, as “support for more platforms will be added at a later time.”

As for the Resizable BAR requirement listed in the document, Intel says the feature is necessary to deliver “optimal performance in all applications.”

However, Intel has not confirmed whether the Arc Alchemist lineup of desktop GPUs will function without Resizable BAR. That uncertainty is understandably worrying for the GPU community, since ReBAR is not enabled on every motherboard, as VideoCardz notes.

As such, this could cause some confusion among those who buy an Arc Alchemist GPU in the future.

Elsewhere, the document shows that motherboards will need to offer a full-size PCI Express 3.0 (or newer) x16 slot to be compatible with Arc boards.

An Intel Arc Alchemist laptop with the Arc logo displayed.
Intel

What’s going on with Arc Alchemist?

Intel’s Arc Alchemist launch for both laptops and desktops has not gotten off to a great start, and that’s putting it lightly. After what can only be described as a botched release pattern for the mobile version of Arc GPUs, the highly anticipated range of Team Blue’s desktop GPUs was delayed for the umpteenth time.

The Arc A3 GPU series is set to usher in the desktop lineup, although those cards can only be acquired by buying a prebuilt PC. Furthermore, they are initially available only in China, and the promised expansion into other regions has yet to materialize.

And with Intel deciding to forgo the recent Computex event, which would have been a perfect opportunity to reintroduce Arc Alchemist to the world, you have to start wondering what the real story is behind Arc’s unprecedented launch troubles.

In any case, time is running out: Next-gen graphics cards from Nvidia and AMD are due to launch in the coming months, and they will, by all accounts, offer much more attractive GPUs at prices similar to the Arc range.


Categories
Computing

Ryzen 7000 could finally threaten Intel’s mobile dominance

Ryzen 7000 is due later this year, and we’re expecting a pretty tight race on the desktop between it and Intel’s upcoming Raptor Lake CPUs. But based on what we’ve seen so far, we’re not expecting either company to achieve total victory with these new CPUs.

However, it could be a very different story for the best laptops. Based on what AMD and Intel have disclosed so far, there’s very good reason to believe Ryzen 7000 will give AMD the upper hand in laptop performance, thanks to significantly improved efficiency. Meanwhile, Intel’s upcoming Raptor Lake CPUs aren’t expected to improve efficiency much, putting Team Blue in a bad position for the near future.

Ryzen 7000 is already looking good for mobile


Although the info we’ve gotten from AMD concerning Ryzen 7000 is largely about desktop CPUs, it’s very much applicable to the upcoming mobile CPUs slated to launch in 2023, as both Ryzen 7000 desktop and laptop chips use the same Zen 4 architecture.

The key things AMD is promising with Ryzen 7000 desktop CPUs are 15% higher single-threaded performance, 35% higher multi-threaded performance, and most crucially, 25% higher performance per watt. This last point is by far the biggest hint we have regarding Ryzen 7000 mobile performance. Power consumption in laptops isn’t changing, so we can mostly treat 25% more performance per watt as 25% more performance.
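
As a back-of-the-envelope sketch of that reasoning, the arithmetic looks like this (the power budget and baseline figures below are hypothetical, purely for illustration):

# At a fixed power budget, a performance-per-watt gain translates almost
# directly into a performance gain. All numbers here are illustrative.
power_budget_watts = 35                               # hypothetical sustained laptop limit
baseline_perf_per_watt = 100                          # arbitrary baseline units
zen4_perf_per_watt = baseline_perf_per_watt * 1.25    # AMD's claimed +25%

baseline_perf = baseline_perf_per_watt * power_budget_watts
zen4_perf = zen4_perf_per_watt * power_budget_watts
print(zen4_perf / baseline_perf - 1)                  # 0.25, i.e. roughly 25% faster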

Considering the differences between desktop and laptop CPUs, things look even more positive for Ryzen 7000 mobile. Desktop CPUs tend to be less power efficient than laptop CPUs because they’re pushed to much higher power levels to reach peak performance. If AMD had run the same comparison on a mobile CPU, it’s highly likely the company would have touted a figure higher than 25%.

When it comes to improving performance for mobile CPUs, a simple increase in power efficiency is king. Laptops have limited power and thermal budgets, so doing more work at any given wattage translates directly into higher sustained performance. It’s why Ryzen 4000 and Intel’s 12th-gen chips were significant leaps over their predecessors.

Ryzen 4000 chip in AMD's CEO hands.

When it comes to news specifically about mobile CPUs, AMD has been tight-lipped. There was one important announcement recently, though — namely, the confirmation that Ryzen 7000 mobile will feature two different CPU families: Dragon Range for high-end laptops and Phoenix Point for the upper-midrange and below. At the company’s Financial Analyst Day event, AMD also confirmed Phoenix Point would be on the 4nm node and would feature RDNA 3 graphics. Given that Phoenix Point is a sub-45-watt CPU, it probably has 8 cores, just like Ryzen 4000, 5000, and 6000 mobile CPUs.

But what about Dragon Range? On the surface, it may appear that Dragon Range is just a powered-up version of Phoenix Point. But if we’re to take AMD’s word for it, Dragon Range might be something much more.

AMD says it aims to deliver the “highest core, thread, and cache ever for a mobile gaming CPU.” Unless AMD wants to pretend Intel’s 16-core Alder Lake HX chips simply don’t exist, Dragon Range must have at least 16 cores for that boast to make any sense. Dragon Range also uses regular DDR5 instead of the efficient but slower LPDDR5 that Phoenix Point uses.

Raptor Lake doesn’t look like a big mobile upgrade

Someone holding the Core i9-12900KS processor.
Jacob Roach / Digital Trends

When Alder Lake launched in late 2021, it brought Intel back to parity with AMD. New 12th-gen CPUs power basically all of the high-end gaming laptops, and lower-power Alder Lake chips dominate the premium segment. Alder Lake isn’t as efficient as Ryzen 6000, but it’s efficient enough to be competitive, and it leads in both single-threaded and multi-threaded performance.

But things are probably pretty bleak for Intel in the near future. Intel does hold the lead in single-threaded performance, but this metric is becoming less and less important with each new generation. Since Ryzen 7000 promises a big performance improvement across the board, Intel stands to lose ground if the company can’t match AMD’s pace. Unfortunately for Intel, it looks like that’s exactly what’s going to happen.

Intel’s upcoming Raptor Lake CPU is basically a refresh of Alder Lake that adds eight more E-cores. It might also feature some architectural tweaks and a larger cache, but that’s where the magic stops. Crucially for Intel, Raptor Lake is built on the same 10nm-class node as Alder Lake. It’s unlikely that Raptor Lake will deliver any significant efficiency improvements, which Intel desperately needs for its lower-power CPUs not only to beat Ryzen 6000 but to stand a chance against Ryzen 7000 Phoenix Point APUs.

Things don’t look so good at the high end either. While eight more E-cores sound impressive, those E-cores aren’t very fast, even if they are efficient. Without a node improvement to go hand in hand with the increase in core count, Raptor Lake will likely need more power to feed all 24 of its cores, which means efficiency could go down (a terrible thing for laptops). In an environment where efficiency is king, can a 24-core Raptor Lake really stand up to 16-core Dragon Range CPUs? It’s an uphill battle for Intel, even if Raptor Lake surpasses our expectations.

Intel still has the market

MSI Raider GE76 laptop with Fortnite.

The one thing Intel can comfortably rely on is its large presence in the industry, which ensures Intel will get more design wins than AMD even if Ryzen 7000 is more impressive. However, every generation that Intel fails to match AMD is another generation that AMD gains market share in laptops. In some segments, AMD is getting dangerously close to achieving parity with Intel. In 2019, AMD only had 15% share in the gaming laptop market, but in 2021, it reached 32%. AMD is also doing well in other segments, such as the premium laptop segment, where AMD went from 6% share in 2019 to 23% in 2021.

So, if Ryzen 7000 really is as powerful as it looks, 2023 is going to be another difficult year for Intel. It won’t be until mid-to-late 2023 that 7nm Meteor Lake CPUs arrive, and for Intel’s sake, hopefully that won’t be too late.


Categories
Computing

Intel’s New Meteor Lake CPU May Be the New Apple M1 Max

Intel Alder Lake processors have taken the market by storm, securing their place among the best processors of the year. However, it’s no surprise that Intel is already looking to the future.

The 13th and 14th generations of Intel processors are in the works. New images have emerged, showcasing the upcoming 14th-gen Intel CPUs. The photos display several different chips that are likely to release in 2022 and 2023.

Image credit: CNET

Stephen Shankland from CNET took a tour of the inside of Intel’s chipmaking factory, the Intel Fab 42 located in Chandler, Arizona. He came back with several high-quality images of the upcoming chips that won’t hit the market for at least another year, and in some cases, even two years.

The first chip, dubbed Sapphire Rapids, is a server processor set to release in 2022 as part of Intel’s Xeon lineup. It includes four larger chiplets that contain the processing engines and four smaller memory modules. The entire package is connected with Intel’s Embedded Multi-die Interconnect Bridge (EMIB) links.

Among the upcoming chips, Shankland also spotted Intel’s Ponte Vecchio, which is set to release in 2022. This is a high-performance data center accelerator that Intel claims is going to be twice as powerful as initially planned.

A 300mm wafer of Meteor Lake test chips.
Image credit: CNET

Perhaps the most interesting reveal is the wafer of the upcoming Meteor Lake chip. Pictured above is a 300mm wafer that features hundreds of test chips of Intel Meteor Lake-M, likely Intel’s power-efficient series of 14th-generation processors. Although it’s not confirmed that these chips belong to the M-series, their small size certainly hints at it.

Meteor Lake-M processors are rumored to operate on ultralow power, needing only 5W to 15W to function. While the images are clear, it’s hard to judge the purpose of each and every tile on the chip.

The chip has previously been confirmed to be built using Intel’s Foveros packaging technology, which stacks chiplets to combine up to three tiles into a full processor. The first tile would be the compute die, followed by a system-on-a-chip (SoC) LP die and, lastly, a graphics die. Meteor Lake-M might also feature anywhere between 96 and 192 execution units (EUs).

The design of Intel’s 14th-generation processors is interesting. The SoC-style approach makes it similar to Apple’s latest and greatest, the M1 Max chip. Intel’s 12th-gen CPUs currently perform very well when compared to Apple’s M1 Max. As Apple has plans of its own when it comes to improving its signature chip, it’s likely that the two tech giants will continue to go head-to-head in the CPU race.

Considering that current-generation Intel Alder Lake processors feature up to 96 execution units, Meteor Lake with its rumored 192 EUs has the potential to be incredibly powerful. However, before these CPUs see the light of day, Intel will release Raptor Lake, likely in the last quarter of 2022.


Categories
Computing

Why the M1 Is Intel’s True Rival For Alder Lake and Beyond

There have been two major CPU announcements in the past couple of weeks — Apple’s M1 Pro and M1 Max, and today, Intel’s 12th-gen Alder Lake platform. Although the two platforms serve different purposes, Apple and Intel are in hot competition with each other, even if that competition isn’t direct.

These two platforms are more alike than they may seem, which could shift the balance of power in the CPU market. For decades, it has been a matchup between Intel and AMD. Apple is a new competitor in the ring, which is something that Intel recognized with the launch of Alder Lake.

AMD is resting on its laurels, which might pay off in the short term. Going forward, though, hybrid CPU architectures are what will dominate desktop and mobile platforms. Here’s why.

M1 Max and Alder Lake: More alike than different

Intel’s 12th-gen Alder Lake chips and Apple’s M1 range both use hybrid architectures. Sure, Intel uses an x86 instruction set while Apple uses the ARM instruction set, but both ranges of processors drive toward a similar goal: Increase performance and efficiency by putting the right workload on the right core.

If you’re unfamiliar, a hybrid CPU combines performant (P) cores and efficient (E) cores onto a single processor. This design — known as big.LITTLE — was pioneered by chip designer ARM, and you can find it in nearly all mobile devices available today. Apple brought that design to laptops and desktops, and now Intel is following suit.

Intel actually tried this concept a couple of years back with Lakefield, but the range never got off the ground. Intel only made two Lakefield chips, and they only showed up in a few laptops like the Galaxy Book S. Alder Lake is different. It uses a hybrid architecture, but it keeps the same improved P-cores you’d find in a typical CPU generation.

Although it’s tempting to throw more fast cores at a processor to improve performance, that’s not the best way to go about things. Small workloads, background tasks, and simple calculations don’t need such powerful cores. The result is that P-cores end up sharing bandwidth with low-priority tasks instead of focusing their resources on the most important work at hand.

That’s what makes hybrid architectures different. The P-cores can focus on the big, important tasks while the E-cores handle all of the minute background tasks. The results speak for themselves. Phones now use the latest chip-making technology, not computers, and Apple’s M1 chip — which is basically a tricked-out mobile chip — manages to outperform its Intel predecessors while staying cooler and consuming less power.
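
To make the idea concrete, here is a toy Python sketch of priority-based core assignment. It is purely conceptual: the task list and round-robin logic are invented for illustration and are not how Thread Director or any real operating system scheduler works.

from collections import deque

# Toy hybrid scheduler: heavy foreground work goes to P-cores, light
# background work goes to E-cores. Purely illustrative.
p_cores = deque(["P0", "P1", "P2", "P3"])
e_cores = deque(["E0", "E1", "E2", "E3"])

tasks = [
    ("game_render_thread", "heavy"),
    ("video_export", "heavy"),
    ("cloud_sync", "light"),
    ("chat_notifications", "light"),
]

for name, weight in tasks:
    pool = p_cores if weight == "heavy" else e_cores
    core = pool[0]
    pool.rotate(-1)  # simple round-robin within each pool
    print(f"{name} -> {core}")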

Intel sees the writing on the wall. The company hasn’t been shy about pointing to Apple, not AMD, as its true competitor in the future. Meanwhile, AMD continues to stick with architectures built around lots of fast cores rather than a hybrid approach.

The true competitor

MacBook Pro laptops.

Intel CEO Pat Gelsinger has made one thing clear since returning to Intel: Apple is the competition, not AMD. In an interview from October, Gelsinger made that crystal clear: “We ultimately see the real competition [is] to enable the ecosystem to compete with Apple.”

Apple has used its own silicon in mobile devices dating back to the original iPhone. But it wasn’t until the M1 chip replaced Intel’s options in MacBooks, the iMac, and the Mac Mini that Intel started to change its stance. In a recent interview, Gelsinger said that was ultimately a good move. “They moved the core of their product line to their own M1 and, you know, its derivative family because they thought they could do a better chip. And they’ve done a good job with that.”

Gelsinger says the ultimate goal is to “win them back,” which requires making a chip that outperforms the M1 — or whatever future generation Apple is on — with higher efficiency and similar power draw. Apple has little incentive to switch back to Intel. For that, Intel has to make chips that are too good to ignore.

Alder Lake looks like a paradigm shift for Intel, and if leaked benchmarks are accurate, the mobile chips could outperform Apple’s M1 Max. It’s important to recognize that Alder Lake is part of a larger strategy for Intel, though. The company has shared its road map through 2025, and it’s filled with hybrid.

AMD hasn’t been as clear about its roadmap, likely because it doesn’t need to be. With desktop and server leadership, AMD is sitting cozy at the moment. For now, we know that AMD’s next-generation Ryzen 6000 chips won’t use a hybrid architecture. AMD has suggested that hybrid still needs work, and has pointed the finger at hybrid architectures as a marketing ploy to “have a bigger number.”

It’s true that hybrid needs work, mainly to optimize the operating system’s scheduler to handle each core type appropriately. Apple has clearly done some work on that front, and Intel worked with Microsoft to optimize Windows 11 for Alder Lake’s Thread Director feature. We’ll just have to wait until Alder Lake is here to see if that work will pay off.

Regardless, it’s clear Intel is looking forward. Whether it’s guided by marketing or a genuine shot at market leadership doesn’t matter: Intel is chasing Apple, and AMD is still chasing Intel. I don’t know whose gambit will pay off. But I do know that Apple is leaving Intel and AMD in the dust, and Intel is the only one talking about it right now.

Hybrid is the wave of the future

Render of Intel Alder Lake chip.

With the launch of Alder Lake, Intel has shown that hybrid is here to stay. Apple is continuing to develop its own hybrid chips, and Intel will continue doing the same for the next few years. Early murmurs suggest AMD could use a hybrid architecture on its Zen 5 CPUs — the generation after Ryzen 6000 — but that’s a couple of years off, at least.

Intel has made some big claims about Alder Lake — the same multi-threaded performance as 11th-gen chips at less than a quarter of the power, up to a 47% improvement when multitasking, and up to double the content creation performance of the previous generation. Some of that is on the back of Intel’s new manufacturing process. However, a lot of it comes from Alder Lake’s high core counts and hybrid architecture.

As long as AMD and Intel are making chips, they’ll be compared to each other. With Intel’s switch to a hybrid architecture, though, it’s clear that the company sees a new challenger approaching — one it used to call a partner. If Intel’s performance claims are true, Alder Lake will take the fight to Apple. And if that battle pays off, AMD will likely follow suit.


Categories
Game

Intel’s revised roadmap looks beyond 1 nanometer chips

Forget about “Enhanced SuperFin,” the previous name for the node powering Intel’s upcoming 10nm Alder Lake processors. Now, that node is just called “Intel 7,” according to the company’s revised roadmap. But don’t go thinking that means Intel is somehow delivering a 7nm processor early — its long-delayed 7nm Meteor Lake chip still won’t ship until 2023, and its node has been renamed to “Intel 4.” Confused yet? It’s almost like Intel is trying to attach a new number to these upcoming products so we’ll forget it’s losing the shrinking-transistor war against AMD.

But Intel’s prospects are more interesting as we look ahead to 2024, when the company expects to finalize the design for its first chips with transistors smaller than 1 nanometer. They’ll be measured in angstroms instead. The “Intel 20A” node will be powered by “RibbonFET” transistors, the company’s first new transistor architecture since the arrival of FinFET in 2011. It’ll be coupled with PowerVia, a technology that moves power delivery to the rear of the chip wafer, which should make signal transmission more efficient.

Intel CEO Pat Gelsinger
Credit: Intel

“Building on Intel’s unquestioned leadership in advanced packaging, we are accelerating our innovation roadmap to ensure we are on a clear path to process performance leadership by 2025,” Intel’s new CEO Pat Gelsinger (above) said during the “Intel Accelerated” livestream today. “We are leveraging our unparalleled pipeline of innovation to deliver technology advances from the transistor up to the system level. Until the periodic table is exhausted, we will be relentless in our pursuit of Moore’s Law and our path to innovate with the magic of silicon.”

Before it reaches the angstrom era of chips, though, the company also plans to release a processor on the “Intel 3” node in 2023. You can think of it as a souped-up version of its 7nm process, with around an 18 percent performance-per-watt improvement over Intel 4. It’ll likely fill the timing gap between Meteor Lake chips in 2023 and the Intel 20A products in 2024. Intel is also daring to call its shot beyond 2024: It’s working on an “Intel 18A” node that will further improve on its RibbonFET design.

For consumers, this roadmap means you can expect chips to get steadily faster and more efficient over the next five years. If anything, the announcements today show that Intel is trying to move beyond the 10nm and 7nm delays that have dogged it for ages. 

As we’ve previously argued, it’s ultimately a good thing for the tech industry if Intel can finally regain its footing. Its $20 billion investment in two Arizona-based fabrication plants was a clear sign that Gelsinger aimed to bring the company into new territory. But now that it’s laid out a new timeline, there’ll be even more pressure for Intel not to let things slip once again. 


Categories
AI

Today I learned about Intel’s AI sliders that filter online gaming abuse

Last month, during its virtual GDC presentation, Intel announced Bleep, a new AI-powered tool that it hopes will cut down on the amount of toxicity gamers have to experience in voice chat. According to Intel, the app “uses AI to detect and redact audio based on user preferences.” The filter works on incoming audio, acting as an additional user-controlled layer of moderation on top of what a platform or service already offers.

It’s a noble effort, but there’s something bleakly funny about Bleep’s interface, which lists in minute detail all of the different categories of abuse that people might encounter online, paired with sliders to control the quantity of mistreatment users want to hear. Categories range anywhere from “Aggression” to “LGBTQ+ Hate,” “Misogyny,” “Racism and Xenophobia,” and “White nationalism.” There’s even a toggle for the N-word. Bleep’s page notes that it’s yet to enter public beta, so all of this is subject to change.

Filters include “Aggression,” “Misogyny” …
Credit: Intel

… and a toggle for the “N-word.”
Image: Intel

With the majority of these categories, Bleep appears to give users a choice: would you like none, some, most, or all of this offensive language to be filtered out? Like choosing from a buffet of toxic internet slurry, Intel’s interface gives players the option of sprinkling in a light serving of aggression or name-calling into their online gaming.

Bleep has been in the works for a couple of years now — PCMag notes that Intel talked about this initiative way back at GDC 2019 — and Intel is working with AI moderation specialist Spirit AI on the software. But moderating online spaces with artificial intelligence is no easy feat, as platforms like Facebook and YouTube have shown. Although automated systems can identify straightforwardly offensive words, they often fail to consider the context and nuance of certain insults and threats. Online toxicity comes in many constantly evolving forms that can be difficult for even the most advanced AI moderation systems to spot.

“While we recognize that solutions like Bleep don’t erase the problem, we believe it’s a step in the right direction, giving gamers a tool to control their experience,” Intel’s Roger Chandler said during its GDC demonstration. Intel says it hopes to release Bleep later this year, and adds that the technology relies on its hardware-accelerated AI speech detection, suggesting that the software may depend on Intel hardware to run.


Categories
AI

Intel’s image-enhancing AI is a step forward for photorealistic game engines


Intel recently unveiled a deep learning system that turns 3D rendered graphics into photorealistic images. Tested on Grand Theft Auto 5, the neural network showed impressive results. The game’s developers have already done a great job of recreating Los Angeles and southern California in detail. But with Intel’s new machine learning system, the graphics turn from high-quality synthetic 3D to real-life depictions (with very minor glitches).

What’s even more impressive is that Intel’s AI does this at a relatively high framerate, as opposed to photorealistic render engines that can take minutes or hours for a single frame. And these are just preliminary results; the researchers say they can optimize the deep learning models to work much faster.

Does it mean that real-time photorealistic game engines are on the horizon, as some analysts have suggested? I would not bet on it yet, because several fundamental problems remain unsolved.

Deep learning for image enhancement

Before we can evaluate the feasibility of running real-time image enhancement, let’s have a high-level look at the deep learning system Intel has used.

The researchers at Intel have not provided full implementation details about the deep learning system they have developed. But they have published a paper on arXiv and posted a video on YouTube that provide useful hints on the kind of computation power you would need to run this model.

The full system, displayed below, is composed of several interconnected neural networks.

Intel deep learning photorealistic enhancement full architecture

The G-buffer encoder transforms different render maps (G-buffers) into a set of numerical features. G-buffers are maps for surface normal information, depth, albedo, glossiness, atmosphere, and object segmentation. The neural network uses convolution layers to process this information and output a vector of 128 features that improve the performance of the image enhancement network and avoid artifacts that other similar techniques produce. The G-buffers are obtained directly from the game engine.
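
As a rough illustration of what such an encoder could look like, here is a minimal PyTorch sketch. The channel counts, layer sizes, and pooling scheme are assumptions made for this example, not Intel’s published design.

import torch
from torch import nn

class GBufferEncoder(nn.Module):
    """Toy encoder: compress stacked G-buffer maps into a 128-feature vector."""

    def __init__(self, gbuffer_channels: int = 16, feature_dim: int = 128):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(gbuffer_channels, 64, kernel_size=3, stride=2, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(64, 128, kernel_size=3, stride=2, padding=1),
            nn.ReLU(inplace=True),
        )
        self.pool = nn.AdaptiveAvgPool2d(1)  # collapse the spatial dimensions
        self.fc = nn.Linear(128, feature_dim)

    def forward(self, gbuffers: torch.Tensor) -> torch.Tensor:
        x = self.conv(gbuffers)
        x = self.pool(x).flatten(1)
        return self.fc(x)  # shape: (batch, 128)

# Example: one 16-channel G-buffer stack at quarter-HD resolution
features = GBufferEncoder()(torch.randn(1, 16, 270, 480))
print(features.shape)  # torch.Size([1, 128])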

intel ai photorealistic image enhancement g-buffers

The image enhancement network takes as input the game’s rendered frame and the features from the G-buffer encoder and generates the photorealistic version of the image.

The remaining components, the discriminator and the LPIPS loss function, are used during training. They grade the output of the enhancement network by evaluating its consistency with the original game-rendered frame and by comparing its photorealistic quality with real images.

Inference costs for image enhancement

First, let’s see whether gamers would be able to run this technology on their computers if and when it becomes available. For that, we need to estimate inference costs: how much memory and computing power you need to run the trained model. For inference, you’ll only need the G-buffer encoder and the image enhancement network; the discriminator and the loss function can be cut.

Intel deep learning photorealistic enhancement inference architecture

The enhancement network accounts for the bulk of the work. According to Intel’s paper, this neural network is based on HRNetV2, a deep learning architecture meant for processing high-resolution images. High-resolution neural networks produce fewer visual artifacts than models that down-sample images.

According to Intel’s paper, “The HRNet processes an image via multiple branches that operate at different resolutions. Importantly, one feature stream is kept at relatively high resolution (1/4 of the input resolution) to preserve fine image structure.”

This means that, if you’re running the game at full HD (1920×1080), then the top row layers will be processing inputs at 480×270 pixels. The resolution halves on each of the lower rows. The researchers have changed the structure of each block in the neural network to also compute inputs from the G-buffer encoder (the RAD layers).

intel photorealistic deep learning image enhancement network

According to Intel’s paper, the G-buffer’s inputs include “one-hot encodings for material information, dense continuous values for normals, depth, and color, and sparse continuous information for bloom and sky buffers.”

The researchers note elsewhere in their paper that the deep learning model can still perform well with a subset of the G-buffers.

So, how much memory does the model need? Intel’s paper doesn’t state the memory size, but according to the HRNetV2 paper, the full network requires 1.79 gigabytes of memory for a 1024×2048 input. The image enhancement network used by Intel has a smaller input size, but we also need to account for the extra parameters introduced by the RAD layers and the G-buffer encoder. Therefore, it would be fair to assume that you’ll need at least one gigabyte of video memory to run deep learning–based image enhancement for full HD games and probably more than two gigabytes if you want 4K resolution.
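
A quick sanity check on that estimate: scaling the HRNetV2 figure by pixel count (a simplification, since memory use doesn’t scale perfectly linearly and Intel’s RAD layers and G-buffer encoder add parameters on top) lands in the same ballpark.

# Scale the HRNetV2 memory figure (1.79 GB at 1024x2048) by pixel count.
# This ignores Intel's extra layers, so treat it as a rough lower bound.
hrnet_memory_gb = 1.79
hrnet_pixels = 1024 * 2048

def scaled_memory_gb(width, height):
    return hrnet_memory_gb * (width * height) / hrnet_pixels

print(round(scaled_memory_gb(1920, 1080), 2))  # ~1.77 GB for a full HD frame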

HRNet memory requirements

One gigabyte of memory is not much given that gaming computers commonly have graphics cards with 4-8 GB of VRAM. And high-end graphics cards such as the GeForce RTX series can have up to 24 GB of VRAM.

But it is also worth noting that 3D games consume much of the graphics card’s resources. Games store as much data as possible on video memory to speed up render times and avoid swapping between RAM and VRAM, an operation that incurs a huge speed penalty. According to one estimate, GTA 5 consumes up to 3.5 GB of VRAM at full HD resolution. And GTA was released in 2013. Newer games such as Cyberpunk 2077, which have much larger 3D worlds and more detailed objects, can easily gobble up to 7-8 GB of VRAM. And if you want to play at high resolutions, then you’ll need even more memory.

So basically, with the current mid- and high-end graphics cards, you’ll have to choose between low-resolution photorealistic quality and high-resolution synthetic graphics.

But memory usage is not the only problem deep learning–based image enhancement faces.

Delays caused by non-linear processing

A much bigger problem, in my opinion, is the sequential and non-linear nature of deep learning operations. To understand this problem, we must first compare 3D graphics processing with deep learning inference.

Three-dimensional graphics rely on very large numbers of matrix multiplications. A rendered frame of 3D graphics starts from a collection of vertices, which are basically sets of numbers that represent the properties (e.g., coordinates, color, material, normal direction, etc.) of points on a 3D object. Before every frame is rendered, the vertices go through a series of matrix multiplications that map their local coordinates to world coordinates, then to camera-space coordinates, and finally to image-frame coordinates. An index buffer bundles vertices into groups of three to form triangles. These triangles are rasterized (transformed into pixels), and every pixel then goes through its own set of matrix operations to determine its color based on material color, textures, reflection and refraction maps, transparency levels, and so on.

Above: The 3D render pipeline (Source: LearnEveryone)

This sounds like a lot of operations, especially when you consider that today’s 3D games are composed of millions of polygons. But there are two reasons you get very high framerates when playing games on your computer. First, graphics cards have been designed specifically for parallel matrix multiplications. As opposed to the CPU, which has at most a few dozen computing cores, graphics processors have thousands of cores, each of which can independently perform matrix multiplications.

Second, graphics transformations are mostly linear. And linear transformations can be bundled together. For instance, if you have separate matrices for world, view, and projection transformations, you can multiply them together to create one matrix that performs all three operations. This cuts down your operations by two-thirds. Graphics engines also use plenty of tricks to further cut down operations. For instance, if an object’s bounding box falls out of the view frustum (the pyramid that represents the camera’s perspective), it will be excluded from the render pipeline altogether. And triangles that are occluded by others are automatically removed from the pixel rendering process.
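
Here is a small NumPy sketch of that fusion trick, with random matrices standing in for real world, view, and projection transforms:

import numpy as np

rng = np.random.default_rng(0)
world, view, projection = rng.standard_normal((3, 4, 4))  # stand-in transforms
vertices = rng.standard_normal((10_000, 4))               # homogeneous coordinates

# Per vertex: three sequential transforms...
step_by_step = vertices @ world.T @ view.T @ projection.T
# ...or one combined matrix computed once per frame.
combined = projection @ view @ world
fused = vertices @ combined.T

print(np.allclose(step_by_step, fused))  # True: linear transforms compose freely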

Deep learning also relies on matrix multiplications. Every neural network is composed of layers upon layers of matrix computations. This is why graphics cards have become very popular among the deep learning community in the past decade.

But unlike 3D graphics, the operations of deep learning can’t be combined. Layers in neural networks rely on non-linear activation functions to perform complicated tasks. Basically, this means that you can’t compress the transformations of several layers into a single operation.

For instance, say you have a deep neural network that takes a 100×100 pixel input image (10,000 features) and runs it through seven layers. A graphics card with several thousand cores might be able to process all pixels in parallel. But it will still have to perform the seven layers of neural network operations sequentially, which can make it difficult to provide real-time image processing, especially on lower-end graphics cards.
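
A small NumPy example makes the difference concrete. The layer sizes are arbitrary; the point is only that the ReLU between the two matrix multiplications prevents them from being fused into one:

import numpy as np

rng = np.random.default_rng(1)
x = rng.standard_normal(256)            # a small input vector
w1 = rng.standard_normal((256, 256))    # weights of layer 1
w2 = rng.standard_normal((256, 256))    # weights of layer 2

def relu(v):
    return np.maximum(v, 0.0)

sequential = w2 @ relu(w1 @ x)  # what the GPU actually has to compute
fused = (w2 @ w1) @ x           # what a purely linear pipeline could precompute

print(np.allclose(sequential, fused))  # False: the non-linearity blocks fusion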

Therefore, another bottleneck we must consider is the number of sequential operations that must take place. If we consider the top row of the image enhancement network, there are 16 residual blocks that are sequentially linked. In each residual block, there are two convolution layers, RAD blocks, and ReLU operations, all sequentially linked. That amounts to 96 layers of sequential operations. And the image enhancement network can’t start its work before the G-buffer encoder outputs its feature encodings. Therefore, we must add at least the two residual blocks that process the first set of high-resolution features. That’s eight more layers added to the sequence, which brings us to at least 104 layers of sequential operations for image enhancement.

This means that, in addition to memory, you need high clock speeds to run all these operations in time. Here’s an interesting quote from Intel’s paper: “Inference with our approach in its current unoptimized implementation takes half a second on a GeForce RTX 3090 GPU.”

The RTX 3090 has 24 GB of VRAM, which means the slow, 2 FPS render rate is not due to memory limitations but rather due to the time it takes to sequentially process all the layers of the image enhancer network. And this isn’t a problem that will be solved by adding more memory or CUDA cores, but by having faster processors.
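
Some rough arithmetic on that figure (assuming frame time scales inversely with processing speed, which is a simplification) shows how large the gap to playable framerates still is:

# The paper reports ~0.5 seconds per frame on an RTX 3090 in an
# unoptimized implementation.
seconds_per_frame = 0.5
current_fps = 1 / seconds_per_frame  # ~2 fps

for target_fps in (30, 60):
    print(f"{target_fps} fps would need a ~{target_fps / current_fps:.0f}x speedup")
# 30 fps would need a ~15x speedup; 60 fps would need a ~30x speedup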

Again, from the paper: “Since G-buffers that are used as input are produced natively on the GPU, our method could be integrated more deeply into game engines, increasing efficiency and possibly further advancing the level of realism.”

Integrating the image enhancer network into the game engine would probably give a good boost to the speed, but it won’t result in playable framerates.

For reference, we can go back to the HRNet paper. Its researchers used a dedicated Nvidia V100, a massive and extremely expensive GPU designed specifically for deep learning inference. With no memory limitations and no competing in-game computations, the V100’s inference time was 150 milliseconds per input, or roughly 7 fps, which is nowhere near enough for smooth gameplay.

Development and training neural networks

Another vexing problem is the development and training costs of the image-enhancing neural network. Any company that would want to replicate Intel’s deep learning models will need three things: data, computing resources, and machine learning talent.

Gathering training data can be very problematic. Luckily for Intel, someone had solved it for them. They used the Cityscapes dataset, a rich collection of annotated images captured from 50 cities in Germany. The dataset contains 5,000 finely annotated images. According to the dataset’s paper, each of the annotated images required an average of 1.5 hours of manual effort to precisely specify the boundaries and types of objects contained in the image. These fine-grained annotations enable the image enhancer to map the right photorealistic textures onto the game graphics. Cityscapes was the result of a huge effort supported by government grants, commercial companies, and academic institutions. It might prove to be useful for other games that, like Grand Theft Auto, take place in urban settings.

Above: The Cityscapes dataset is a collection of finely annotated images of urban settings

But what if you want to use the same technique in a game that doesn’t have a corresponding dataset? In that case, it will be up to the game developers to gather the data and add the required annotations (a photorealistic version of Rise of the Tomb Raider, maybe?).

Compute resources will also pose a challenge. Training a network the size of the image enhancer for tasks such as image segmentation would be feasible for a few thousand dollars — not a problem for large gaming companies. But when you want to do a generative task such as photorealistic enhancement, training becomes much more challenging. It requires a lot of testing and tweaking of hyperparameters, and many more epochs of training, which can blow up the costs. Intel tuned and trained its model exclusively for GTA 5. Games that are similar to GTA 5 might be able to slash training costs by fine-tuning Intel’s trained model on the new game. Others might need to experiment with entirely new architectures. Intel’s deep learning model works well for urban settings, where objects and people are easily separable. But it’s not clear how it would perform in natural settings, such as jungles and caves.

Most gaming companies don’t have machine learning engineers on staff, so they’ll have to outsource the work or hire new engineers, which adds more costs. Each studio will have to decide whether the huge cost of adding photorealistic rendering is worth the improved gaming experience.

Intel’s photorealistic image enhancer shows how far you can push machine learning algorithms to perform interesting feats. But it will take a few more years before the hardware, the companies, and the market will be ready for real-time AI-based photorealistic rendering.

Ben Dickson is a software engineer and the founder of TechTalks. He writes about technology, business, and politics.

This story originally appeared on Bdtechtalks.com. Copyright 2021


Categories
Computing

Intel’s Own M.2 5G Modems Coming to Laptops Later This Year

With Microsoft’s growing embrace of ARM processors in the Windows ecosystem, Intel is gearing up for a major battle in the hopes that its x86 architecture will remain triumphant. At Computex, the company announced a new initiative for thin-and-light Windows laptops that adds 5G mobile broadband connectivity via an Intel-developed modem called the Intel 5G solution 5000.

“We’ve taken the world’s best processor for thin-and-light Windows laptops and made the experience even better with the addition of our two new 11th Gen Intel Core processors with Intel Iris Xe graphics,” said Chris Walker, Intel corporate vice president and general manager of Mobility Client Platforms, in a statement ahead of the Taipei-based technology conference.

“In addition, we know real-world performance and connectivity are vital to our partners and the people that rely on PCs every day, so we’re continuing that momentum with more platform capabilities and choice in the market with the launch of our first 5G product for PCs — the Intel 5G solution 5000.”

The company is working with MediaTek and Fibocom to launch the Intel 5G solution 5000. Laptops with Intel’s 5G modem will arrive later this year from manufacturers such as Acer, Asus, and HP, according to the company. More than 30 laptop designs are expected to support Intel’s 5G modem through 2022. These notebooks will pair the modem with Intel’s 11th Gen Core U- and Core H-series processors.

The modems are the company’s first M.2 5G solution and come with worldwide carrier certification, which means that laptop owners with the Intel 5G solution 5000 will be able to roam globally and get 5G connectivity where available. Intel had previously sold off its smartphone modem business to Apple in a $1 billion transaction, but the company continues to develop 5G mobile solutions for laptops.

The addition of 5G mobile connectivity to Intel’s mobile chipsets will help the company broaden its Intel Evo initiative, which initially launched as Project Athena. Evo, designed to help Intel combat the rising threat of rival Qualcomm’s ARM-based Snapdragon processors for PCs, comes with guidelines for performance, battery life, connectivity, and design. Laptops bearing the Evo branding are designed to be thin and light while delivering strong performance and long battery life — features that are also promoted by ARM-based notebooks running the Snapdragon 8cx family of processors. ARM-based laptops also support Qualcomm’s 4G and 5G mobile broadband connectivity.

And with Apple’s push for ARM-based computing with the launch of its M1 silicon, Intel has a lot at stake in making laptops running its chipsets more appealing. In addition to the 5G modem launch, Intel announced two new 11th Gen mobile processors at Computex to fend off competition from AMD. The company unveiled the Core i7-1195G7 and Core i5-1155G7, which are described as CPUs for productivity, content creation, and gaming.

These are the first 5.0GHz processors in Intel’s U-series, and the company expects more than 60 consumer laptop designs to launch with the new CPUs by the holiday season. Combined, that means Intel will count as many as 250 mobile designs across its entire family of U-series processors.

Claiming up to 25% better application performance than the competition, the company said the two new processors deliver solid 1080p gameplay, with impressive results in titles like Valheim. Here, Intel claimed the chips deliver up to 2.7x the frame rate of rival AMD’s Ryzen 7 5800U.


Categories
Computing

How to Watch Intel’s Computex 2021 Keynote

Intel CEO Pat Gelsinger
Credit: Walden Kirsch/Intel Corporation

Intel announced that it will be hosting a keynote of its own at this year’s Computex conference. Though the conference is scheduled to run from June 1 to June 5, Intel’s keynote will happen a day prior, local time, to kick things off. Like other events happening around Computex, Intel’s keynote will be virtual this year due to the ongoing pandemic.

Though the company did not reveal what users can expect, the good news is that the virtual format allows those at home the opportunity to follow Intel’s livestream and watch the announcements as they unfold in real time. Gamers, laptop users, and those in the market to upgrade to a new Windows desktop will likely find something from Intel to get excited about. Here’s how to watch Intel’s livestream:

How to watch Intel’s Computex 2021 keynote

If you intend on watching Intel’s announcement live, take note that the company’s keynote will happen a day before Computex officially kicks off. Intel announced that its event will take place at 10 a.m. local time in Taipei, Taiwan, on Monday, May 31. This means that those in the United States can tune in at 7 p.m. PT (10 p.m. ET) on Sunday, May 30.

The company posted a link to its livestream ahead of the keynote, which can be viewed from YouTube. We’ve also embedded the stream at the top, so you can watch Intel’s presentation from this page.

If you’re unable to watch Intel’s keynote live, be sure to follow Digital Trends, as we’ll be covering all of the news and latest announcements from Intel and others in the PC industry.

What to expect from Intel’s Computex 2021 announcements

The company has been tight-lipped about what you can expect. In its press release, Intel stated that it will reveal key insights and strategies from new CEO Pat Gelsinger.

“Join Intel Executive Vice President Michelle Johnston Holthaus for Intel’s first virtual COMPUTEX keynote and a firsthand look at how the strategies of new CEO Pat Gelsinger, along with the forces of a rapidly accelerating digital transformation, are unleashing a new era of innovation at Intel — right when the world needs it most,” Intel wrote in its media advisory. “Johnston Holthaus will welcome Intel’s Steve Long, corporate vice president of Client Computing Group Sales, and Lisa Spelman, corporate vice president and general manager of the Xeon and Memory Group, to outline how Intel innovations help expand human potential by expanding technology’s potential.”

The company revealed that it will present news from its work in the data center, cloud, connectivity, artificial intelligence, and the intelligent edge.

Given that Intel recently launched its new Tiger Lake-H series processors and is hard at work on its 12th Gen Alder Lake platform, which it previewed earlier this year at CES, we can likely expect updates on both fronts, especially performance improvements driven by Intel’s heterogeneous processor architecture. Since Intel’s keynote is only scheduled for 30 minutes, we expect it to be tightly packed with news. Alder Lake is expected to debut later this year, so we don’t anticipate any new systems with the 12th Gen processor debuting at Computex. The company could also talk about its latest graphics strategy as it pushes forward with its Xe graphics architecture and its progress on new discrete graphics cards.

In addition to its opening keynote, Intel also announced a second talk at Computex scheduled for 10 a.m. local time on June 2, which translates to 7 p.m. PT on June 1 for U.S. audiences. That session will be focused on A.I. and high-performance computing, and the talk will be delivered by Nash Palaniswamy, Intel’s vice president of the Sales, Marketing and Communications Group and general manager of AI, HPC and Datacenter Accelerators Solutions and Sales. A link to Intel’s June 2 presentation is also embedded above for your convenience.
