Intel Arc Alchemist desktop GPUs may be worse than we thought

Today marks yet another round of bad news for Intel Arc Alchemist, this time pertaining to the Arc A380, the first discrete desktop GPU Intel has released. Upon announcing the card, Intel compared it to the budget AMD Radeon RX 6400, promising that the A380 would provide up to a 25% performance uplift over the RX 6400.

Intel’s claims have been closely examined, and unfortunately, the A380 fails to meet those expectations. While the Intel Arc GPU is faster than the AMD RX 6400, it only wins by 4%. The other cards from the lineup have also been given another look.


Intel has just recently released its first Arc Alchemist desktop GPU, the A380. For the time being, the card is only available in China, and is only being shipped in pre-built desktop PCs. However, Intel has promised to soon move on to the next stage, which is to release it on the DIY market in China, and then finally, globally.

As part of the release announcement, Intel shared a performance slide for the GPU, showing the average frames per second (fps) when gaming at 1080p on medium settings. With that, Intel promised that the A380 should be up to 25% faster than the AMD Radeon RX 6400 — but the slide didn’t contain any matching figures to back up that statement. This prompted 3DCenter to verify that information, and unfortunately, it’s bad news all around for Intel Arc.

It seems that the general public may have missed an important factor in relation to Intel’s claims — the promise of an up to 25% increase in performance only applies to a performance versus price comparison. In short, since the RX 6400 is slightly more expensive than the A380, the actual performance boost is much smaller than expected.

3DCenter compared the data available for Intel Arc A380 and for the RX 6400. The Intel GPU is priced at 1,030 yuan (around $153) while the AMD graphics card costs 1,199 yuan ($178). According to 3DCenter, Intel’s claims mostly check out when it comes to performance per yuan — the Arc A380 wins by around 21%, making it more cost-effective. However, the raw performance gains are significantly smaller, amounting to around 4%.
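The gap between the two figures is simple arithmetic. A quick sketch using the approximate prices and the roughly 4% raw-performance lead reported above (illustrative figures only):

```python
# Rough check of 3DCenter's cost-effectiveness figure, using the
# approximate prices and raw-performance gap cited in the article.
a380_price = 1030      # yuan
rx6400_price = 1199    # yuan
raw_perf_ratio = 1.04  # A380 ~4% faster in raw fps

perf_per_yuan_ratio = raw_perf_ratio * (rx6400_price / a380_price)
print(f"A380 perf-per-yuan advantage: {perf_per_yuan_ratio - 1:.0%}")
# prints roughly 21%, matching 3DCenter's figure
```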

Intel Arc lineup -- expectations versus possible reality.

As a result of those findings, 3DCenter went on to take a closer look at some of the other claims made about the performance of the entire Intel Arc lineup. These comparisons are just as difficult to verify as Intel’s own, but it seems wise to keep your expectations muted where Intel Arc desktop GPUs are concerned.

The flagship Intel Arc A780, with the full 32 Xe-cores across a 256-bit bus and 16GB of GDDR6 memory, was often compared to the Nvidia GeForce RTX 3070, and sometimes, even the RTX 3070 Ti. However, 3DCenter now says that the GPU will be “slightly worse than RTX 3060 Ti.” The other GPUs in the range are also knocked down a notch with these updated predictions, with the most entry-level A310 now being called “significantly slower than Radeon RX 6400.”

It’s hard to deny that things are looking a little bleak for Intel’s first discrete gaming GPU launch. After numerous delays, a staggered launch, and most importantly, with questionable levels of performance, it might be difficult for Intel Arc to find its footing in a GPU market dominated by Nvidia and AMD. However, despite the wait, it’s still early days, and further driver optimizations might bring Intel Arc amongst the best GPUs yet — especially if the company keeps the price competitive.

Editors’ Choice

Repost: Original Source and Author Link


Hertzbleed vulnerability steals data from AMD and Intel CPUs

Researchers just outlined a new vulnerability that affects processor chips — and it’s called Hertzbleed. If used to conduct a cybersecurity attack, this vulnerability can help the attacker steal secret cryptographic keys.

The scale of the vulnerability is somewhat staggering: According to the researchers, most Intel and AMD CPUs might be impacted. Should we be worried about Hertzbleed?


The new vulnerability was first discovered and described by a team of researchers from Intel as part of its internal investigations. Later on, independent researchers from UIUC, UW, and UT Austin also contacted Intel with similar findings. According to their findings, Hertzbleed might affect most CPUs. The two processor giants, Intel and AMD, have both acknowledged the vulnerability, with Intel confirming that it affects all of its CPUs.

Intel has issued a security advisory that provides guidance to cryptographic developers on how to strengthen their software and libraries against Hertzbleed. So far, AMD hasn’t released anything similar.

What exactly is Hertzbleed and what does it do?

Hertzbleed is a chip vulnerability that allows for side-channel attacks, which can in turn be used to steal data from your computer. It works by tracking the processor’s power and boost mechanisms and observing the power signature of a cryptographic workload. A cryptographic key is a piece of information, securely stored in a file, that a cryptographic algorithm uses to encrypt and decrypt data.

In short, Hertzbleed is capable of stealing secure data that normally remains encrypted. Through observing the power information generated by your CPU, the attacker can convert that information to timing data, which opens the door for them to steal crypto keys. What’s perhaps more worrying is that Hertzbleed doesn’t require physical access — it can be exploited remotely.

It’s quite likely that modern processors from other vendors are also exposed to this vulnerability, because as outlined by the researchers, Hertzbleed tracks the power algorithms behind the Dynamic Voltage Frequency Scaling (DVFS) technique. DVFS is used in most modern processors, and thus, other manufacturers such as ARM are likely affected. Although the research team notified them of Hertzbleed, they are yet to confirm whether their chips are exposed.
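The chain from data-dependent power to remotely observable timing can be made concrete with a toy simulation. The numbers and the power model below are invented for illustration; this is not an actual attack on any real cipher:

```python
# Toy model: secret-dependent power draw -> DVFS frequency change -> timing.
def simulated_power(bit):
    # Hypothetical: processing a 1-bit draws a little more power than a 0-bit.
    return 1.2 if bit else 1.0

def simulated_frequency(power, base_ghz=3.0):
    # Hypothetical DVFS: higher sustained power forces a lower clock.
    return base_ghz / power

def run_time(secret_bits, cycles_per_bit=1000):
    # Total time depends on the clock, which depends on the secret bits.
    t = 0.0
    for bit in secret_bits:
        t += cycles_per_bit / simulated_frequency(simulated_power(bit))
    return t

# A remote attacker cannot read power directly, but can measure time,
# and the timing difference reveals information about the secret.
print(run_time([0] * 64) < run_time([1] * 64))  # True
```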

Putting all of the above together certainly paints a worrying picture, because Hertzbleed affects such a large number of users and so far, there is no quick fix to be safe from it. However, Intel is here to put your mind at ease on this account — it’s highly unlikely that you will be the victim of Hertzbleed, even though you are likely exposed to it.

According to Intel, it takes anywhere between several hours and several days to steal a cryptographic key. Even someone who wanted to try might not be able to, because the attack requires advanced high-resolution power-monitoring capabilities that are difficult to replicate outside of a lab environment. Most hackers won’t bother with Hertzbleed when plenty of other vulnerabilities are discovered so frequently.

How to make sure Hertzbleed won’t affect you?

Hertzbleed vulnerability mitigation methods depicted in a chart.

As mentioned above, you are probably secure even without doing anything in particular. If Hertzbleed gets exploited, it’s unlikely that regular users will be affected. However, if you want to play it extra safe, there are a couple of steps you can take — but they come at a severe performance price.

Intel has detailed a number of mitigation methods to be used against Hertzbleed. The company doesn’t seem to be planning to deploy any firmware updates, and the same can be said about AMD. As per Intel’s guidelines, two ways exist to be fully protected from Hertzbleed, and one of them is super easy to do — you just have to disable Turbo Boost on Intel processors and Precision Boost on AMD CPUs. In both cases, this will require a trip to the BIOS and disabling boost mode. Unfortunately, this is really bad for your processor’s performance.
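On Linux, the same Turbo Boost toggle is also exposed through sysfs by the intel_pstate driver, so a trip to the BIOS isn't strictly necessary there. A minimal sketch (requires root; the helper name is ours, and the control file exists only on systems using intel_pstate):

```python
from pathlib import Path

# Writing "1" to intel_pstate's no_turbo file disables Turbo Boost until
# reboot (or until "0" is written back). Default path is the Linux sysfs one.
def disable_turbo(ctl=Path("/sys/devices/system/cpu/intel_pstate/no_turbo")):
    if not ctl.exists():
        raise RuntimeError("intel_pstate control file not found")
    ctl.write_text("1")
```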

The other methods listed by Intel will either only result in partial protection or are very difficult, if not impossible, for regular users to apply. If you don’t want to tweak the BIOS for this and sacrifice your CPU’s performance, you most likely don’t have to. However, keep your eyes open and stay sharp — cybersecurity attacks take place all the time, so it’s always good to be extra careful. If you’re tech-savvy, check out the full paper on Hertzbleed, first spotted by Tom’s Hardware.



Intel Meteor Lake will pack more punch for the same power

Intel has just given us a much larger glimpse into its future Meteor Lake lineup. At the 2022 IEEE VLSI Symposium, the company talked about the 14th generation of its processors, detailing the future process node and the improvements the new Intel 4 process should bring.

The teaser certainly sounds promising. Intel claims that Meteor Lake CPUs will provide 20% higher clock speeds than the previous generation, all while maintaining the same power requirements.


Intel Meteor Lake is still quite far off — the company confirms that the new chips are on track to meet the 2023 launch deadline, although no specifics have been given at this time. Before we ever see Meteor Lake, we will see the launch of Intel Raptor Lake in the fall. However, unsurprisingly, both Intel and the tech world at large are looking to the future — and as far as the 14th generation of Intel chips goes, the future looks pretty exciting.

During the 2022 IEEE VLSI Symposium, Intel took the public on a deep dive into the upcoming Intel 4 process node, which is what Meteor Lake is based on. As a successor to the Intel 7 (used for Alder Lake and Raptor Lake), it will require a new socket, and it will feature a new architecture. Intel claims that the changes introduced in that generation will deliver huge performance gains while keeping the power consumption at a similar level to what we’ve grown used to with 12th-gen CPUs.

The company teased that Meteor Lake will deliver up to 21.5% higher frequencies at the same power requirements as the Intel 7 process. Similarly, when scaled down to the same frequency as Intel 7, Meteor Lake will sport an up to 40% power reduction. This is going to be achieved through various changes in the chip’s architecture, such as a 2x improvement in area scaling, meaning roughly double the transistor density of Intel 7, at least for the high-performance libraries.
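The two headline numbers are roughly consistent with each other under the classic rule of thumb that dynamic power grows about cubically with frequency (an approximation we are assuming here, not something Intel stated):

```python
# Back-of-envelope: if Intel 4 needs ~40% less power at the same clock,
# how much extra frequency does the same power budget buy under P ∝ f³?
power_at_iso_freq = 0.60  # Intel 4 power relative to Intel 7, same clock
freq_gain_at_iso_power = (1 / power_at_iso_freq) ** (1 / 3)
print(f"Implied iso-power frequency gain: {freq_gain_at_iso_power - 1:.1%}")
# ~18.6%, in the same ballpark as the quoted "up to 21.5%"
```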

With the new process node, Intel will largely use extreme ultraviolet (EUV) lithography as a way to simplify manufacturing. Simply put, this reduces the number of steps needed to manufacture the node by a significant amount. It should result in higher yields and reduce production errors. As a result of EUV, Intel noted a 5% reduction in process steps and a 20% lower total mask count.

The Intel 4 name is a code name for Intel’s 7nm process node, which means a switch from 10nm to 7nm for Intel. The new chips will utilize Intel’s Foveros 3D packaging technology and will feature a four-tile setup joined by through-silicon via (TSV) connections. These four tiles will be split into the input/output (I/O) tile, the system-on-a-chip tile, the compute tile, and the graphics tile.

Intel Meteor Lake slide, part two.

Intel has shared a blown-up image of the compute die for Meteor Lake, complete with six blue-colored performance cores (Redwood Cove) and two clusters of four Crestmont efficiency cores, colored in purple. In the middle of the chip, you can see the L3 cache and the interconnect circuitry. The company has yet to divulge the exact description of the I/O and the SOC tiles.

In addition to teasing the Intel 4 process, the manufacturer also talked about what comes next — moving on to Intel 3. Intel 3 will come with enhanced transistors and interconnects, and it’s worth noting that I4 will be forward compatible with I3, so it won’t require a full redesign. Intel will stay true to the EUV technology, with more EUV layers that simplify the design even further. According to the current estimations, the I3 node will be around 18% faster than the I4. Once Intel is done with I3, it will move on to the 20A and 18A nodes and even more exciting technologies.

All in all, Intel’s sneak peek is very detailed and quite technical, so if you’re a fan of that, make sure you read the full write-up prepared by Tom’s Hardware. Although Meteor Lake is a while off, there’s still plenty to be hyped for this year. We’ve got the Intel Raptor Lake coming up, and around the same time, AMD is slated to launch the Ryzen 7000 series of CPUs.



AMD’s Ryzen road map spells out how it plans to beat Intel

AMD showed off its Ryzen road map on desktop and mobile during its Financial Analyst Day on Thursday, laying out how it plans to beat Intel to have the best processor. The road map reveals several key details about the upcoming Ryzen 7000 processors, as well as future CPUs for laptops and desktops. Although AMD didn’t provide hard performance numbers, the company still revealed expected performance for its Ryzen 7000 CPUs.

In particular, AMD says Ryzen 7000 comes with between an 8% and 10% increase in instructions per clock (IPC), and that it has a 25% performance-per-watt advantage over Ryzen 5000 CPUs. AMD also reconfirmed the greater-than-15% single-core performance increase it announced at Computex, which the company says is a very conservative estimate. In a pre-brief, AMD said it wants to underscore the “greater than” part of the claim.
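Those separate claims compose multiplicatively. A rough sanity check; the clock-speed figure below is our own assumption, since the article doesn't state one:

```python
# Single-thread uplift ≈ IPC gain × frequency gain.
ipc_gain = 1.09   # midpoint of AMD's 8-10% IPC claim
freq_gain = 1.06  # hypothetical ~6% clock uplift (assumed, not announced)

single_thread = ipc_gain * freq_gain
print(f"Estimated single-thread uplift: {single_thread - 1:.1%}")
# comes out above 15%, consistent with AMD calling its Computex figure conservative
```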

Overall, AMD says Ryzen 7000 is 35% faster than the previous generation, which is a massive jump. It doesn’t stop at Ryzen 7000, though. In addition to 3D V-Cache coming back to Ryzen, AMD’s Ryzen road map (above) reveals some details about Zen 5 CPUs as well. AMD says they’re coming in 2024 and will offer a much more significant step up in performance.

These chips will use a 4nm manufacturing process for desktops, but that’s about all we know for now. The only major development is that Zen 5 CPUs could use a multi-node architecture, similar to Intel Alder Lake. AMD didn’t outright confirm this is the case, but it talked up its fourth-gen Infinity architecture that enables multi-node designs.

In addition, the road map confirms that Threadripper processors built on Zen 4 are in the works. A leaked road map hinted at Threadripper 7000 earlier this year, and those chips are expected to launch in early 2023. You might not be able to buy them for your next PC build, though. Threadripper 5000 processors, for example, are currently only available in the Lenovo P620 workstation.

AMD's Ryzen mobile roadmap through 2024.

AMD provided a road map for its laptop processors, too (above). The company just launched Ryzen 6000 mobile, so now AMD’s sights are set on Phoenix Point chips in early 2023. We don’t have a confirmed name for this range yet, but AMD says they’ll use Zen 4 cores like Ryzen 7000 and be built using a 4nm manufacturing process.

Perhaps more exciting, these next-gen mobile CPUs will come with RDNA 3 graphics built-in — that’s the architecture behind AMD’s upcoming RX 7000 GPUs. Laptops have become a larger focus for AMD over the past few generations. Although Ryzen 6000 doesn’t beat Intel across the board, next-gen processors may.

Phoenix Point processors will target a power range of 35W to 45W for high-performance laptops, but AMD has previously confirmed an even more powerful lineup of mobile chips dubbed Dragon Range. These should launch around the same time as Phoenix Point, though they aren’t included on AMD’s new road map.

Beyond Phoenix Point, AMD will launch Strix Point built on Zen 5 CPU cores. We don’t know the manufacturing process yet, and details are light, as they are with Zen 5 desktop CPUs. The biggest announcement was that they will include RDNA 3+ graphics, which seems to be an enhanced version of AMD’s upcoming graphics architecture.

AMD Ryzen processor render.

Both Phoenix Point and Strix Point will also introduce an AI engine developed by Xilinx — a company that AMD recently acquired. It’s tough to say what specifically the engine will do, but it will likely target features that improve battery life, webcam performance, and system noise. AMD hasn’t revealed any details, though.

AMD is gearing up for a fight following the release of Intel’s 12th-gen Alder Lake processors. Looking forward, Ryzen 7000 will compete with Intel Raptor Lake, Intel’s next generation of processors. Intel is sticking with the same manufacturing process as Alder Lake for that generation, which could give AMD a leg up. Intel has a road map of its own, however, so it’ll be an interesting few years.



Nvidia Is Reportedly Making a New GPU to Combat Intel Arc

As Intel plans to release the first batch of Arc Alchemist discrete graphics cards in early 2022, rumors suggest Nvidia is already planning to sabotage the launch.

Nvidia and AMD have long traded blows with one another on the GPU market. However, Intel is going to be joining the fight in early 2022 with the release of its Arc graphics cards. But as anticipation builds, Nvidia has something up its sleeve to counter them: The release of its new RTX 3050.

This information comes from a tweet by Kopite7Kimi, a known Twitter user who leaked the original RTX 3000 Super lineup in September.

Update that:
RTX 3050
8G GD6

— kopite7kimi (@kopite7kimi) December 2, 2021

It is also worth noting that this latest info, along with the tweet about the RTX 3000 Super lineup, should be taken with skepticism since Nvidia has yet to comment or confirm any of these leaks.

While this may seem like Nvidia has stolen the show, the Twitter user TUM_APISAK posted the following information.

Intel Arc A380 Graphics
2.45GHz 6GB

perf 1650S#IntelArc #DG2

— APISAK (@TUM_APISAK) December 2, 2021

As cryptic as that tweet is, it mentions “perf 1650S,” which most likely means that the Intel Arc A380 is positioned to combat the Nvidia GTX 1650 Super, which has 4GB of GDDR6.

The RTX 3050 line isn’t entirely foreign; in October, we took a look at the MSI Summit E16 Flip, which featured an RTX 3050. The most notable difference between this model and the leaked discrete variant is that the mobile version features only 4GB of GDDR6 whereas the latter is rumored to feature 8GB of GDDR6.

What we do know for a fact is that Intel is releasing its Arc GPUs in early 2022, and with CES just around the corner, more information is sure to surface. It is also worth mentioning that due to the GPU shortage, these cards will likely still be hard to obtain and sell for higher than their retail price even if they are not impressive.



Intel Arc Alchemist A380 Discrete Graphics Card: Specs Leak

Intel’s upcoming discrete GPUs, dubbed Intel Arc Alchemist, are coming next year, and some new leaks reveal what kind of performance we can expect from them.

According to the leak, one of the upcoming GPUs, the A380, is likely to offer performance similar to that of Nvidia’s GTX 1650 Super, an entry-level video card from Nvidia’s previous generation of graphics.

Image credit: Wccftech

The information comes from TUM_APISAK on Twitter, a well-known source for graphics card-related rumors and leaks. The tweet in question talks about some of the specifications of the upcoming Intel Arc A380 graphics card and reveals the expected naming convention Intel might use. It seems that Intel is going to name the new cards A***, with the numbers changing to correspond to the performance tier of that specific card.

What we’re seeing in TUM_APISAK’s reveal is most likely the desktop variant of this graphics card. In terms of specifications, the A380 is said to be based on an Alchemist (XE-HPG DG2) GPU. It will be fabricated on the TSMC 6nm process node. Its 8 Xe cores will house 128 execution units (EUs). The top model of this lineup will allegedly have 512 EUs and 32 Xe cores.

The card is also rumored to have an impressive clock speed of 2.45GHz. Whether this frequency will be the boost clock or the base clock remains to be seen, but such speeds put the A380 within range of AMD Navi 22 and Navi 23 graphics cards. In addition, the card will have 6GB of GDDR6 memory. It has also been said that all Arc Alchemist cards will come with ray-tracing and the XeSS feature set, a form of image upscaling on Intel cards.

There was no mention of the bus, but previous leaks suggest a 96-bit interface. In the desktop version of the card, we can expect to see 16Gbps pin speeds, adding up to 192GB/s of bandwidth. The laptop version is said to be slightly slower, with 14Gbps pin speeds and 168GB/s of bandwidth. The Intel Arc Alchemist A380 is likely going to be fairly conservative with power, with a TDP of 75W.
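The bandwidth figures follow directly from bus width and per-pin speed, so the leaked numbers are easy to check (assuming the rumored 96-bit interface):

```python
# Memory bandwidth (GB/s) = bus width (bits) × per-pin speed (Gbit/s) / 8.
def bandwidth_gbs(bus_width_bits, gbps_per_pin):
    return bus_width_bits * gbps_per_pin / 8

print(bandwidth_gbs(96, 16))  # desktop: 192.0 GB/s
print(bandwidth_gbs(96, 14))  # laptop: 168.0 GB/s
```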

Intel Arc A380 Graphics
2.45GHz 6GB

perf 1650S#IntelArc #DG2

— APISAK (@TUM_APISAK) December 2, 2021

TUM_APISAK hasn’t provided any benchmarks, but he did suggest that the performance of this card is going to rival that of the Nvidia GeForce GTX 1650 Super. While that is a rather dated card by now, it continues to be one of the best budget graphics cards out there. This bodes well for the Arc Alchemist.

The pricing of the card hasn’t yet been revealed, but the launch is still a few months away. Remember that the performance and specifications reported here are unconfirmed for now. If the leaks prove to be true, this card is likely to be rather inexpensive, with a price of around $250 or less.



Future Intel Laptops Could Mandate 8-Megapixel Webcams

Tired of a miserly low-resolution webcam on your Intel-powered laptop? That could soon be a thing of the past, if leaked specifications for Intel’s Evo 4.0 platform are anything to go by, as much better picture quality is apparently in the offing.

According to NotebookCheck, the fourth generation of Intel’s Evo platform — which could be introduced with the upcoming Raptor Lake series of processors pegged for the third quarter of 2022 — will mandate 8-megapixel cameras on all laptops running this spec. In other words, if laptop manufacturers want to work with Intel to be Evo-accredited, they will need to up their webcam game.

Riley Young/Digital Trends

High resolution isn’t the only thing that could become a requirement. NotebookCheck claims other specs are likely to be part of the Evo 4.0 specification, including an 80-degree field of view, plus a passing grade on the VCX benchmark.

What is VCX, you ask? Well, Intel is now part of the VCX forum (short for Valued Camera eXperience), which scores laptop webcams based on certain benchmarks. These include texture loss, motion control, sharpness, dynamic range, the camera’s performance under various lighting conditions, and more. At the end, a final score is given. And it now seems that Intel will be expecting manufacturers’ webcams to hit a minimum score (as yet unknown) in order to pass muster.

Interestingly, NotebookCheck’s report says that any webcams placed below the user’s eye line will be awarded negative points in the VCX test. Someone better tell the Huawei MateBook X Pro.

With Intel’s Raptor Lake series set for later in 2022, could we see some of these webcam improvements in this year’s Alder Lake-based laptops? That’s certainly possible. Intel will allegedly have VCX benchmark scores ready by the first quarter of 2022, so we might see a few devices appear that meet these standards before Raptor Lake steps into the limelight. Just don’t bet the farm on it.

Alongside Intel, Microsoft has also reportedly begun enforcing minimum standards for its partner devices. Like Intel, the company wants manufacturers to hit certain specs for webcams, microphones, and speakers. With two giants of the industry pushing manufacturers to up their game, we could finally be able to bid flimsy webcams and crackly mics adieu.



LinkedIn and Intel tech leaders on the state of AI


Disclosure: The author is the managing director of Connected Data World.

AI is on a roll. Adoption is increasing across the board, and organizations are already seeing tangible benefits. However, the definition of what AI is and what it can do is up for grabs, and the investment required to make it work isn’t always easy to justify. Despite AI’s newfound practicality, there’s still a long way to go.

Let’s take a tour through the past, present, and future of AI, and learn from leaders and innovators from LinkedIn, Intel Labs, and cutting-edge research institutes.

Connecting data with duct tape at LinkedIn

Mike Dillinger is the technical lead for Taxonomies and Ontologies at LinkedIn’s AI Division. He has a diverse background, ranging from academic research to consulting on translation technologies for Fortune 500 companies. For the last several years, he has been working with taxonomies at LinkedIn.

LinkedIn relies heavily on taxonomies. As the de facto social network for professionals, LinkedIn has made launching a skill-building platform a central piece of its strategy. Following CEO Ryan Roslansky’s statement, LinkedIn Learning Hub was recently announced, powered by the LinkedIn Skills Graph, dubbed “the world’s most comprehensive skills taxonomy.”

The Skills Graph includes more than 36,000 skills, more than 14 million job postings, and the largest professional network with more than 740 million members. It empowers LinkedIn users with richer skill development insights, personalized content, and community-based learning.

For Dillinger, however, taxonomies may be overrated. In his upcoming keynote at Connected Data World 2021, Dillinger is expected to refer to taxonomies as the duct tape of connecting data. This alludes to Perl, the programming language that was often referred to as the duct tape of the internet.

“Duct tape is good because it’s flexible and easy to use, but it tends to hide problems rather than fix them,” Dillinger said.

A lot of effort goes into building taxonomies, making them correct and coherent, then getting sign-off from key stakeholders. But this is when problems start appearing.

Key stakeholders such as product managers, taxonomists, users, and managers take turns punching holes in what was carefully constructed. They point out issues of coverage, accuracy, scalability, and communication. And they’re all right from their own point of view, Dillinger concedes. So the question is — what gives?

Dillinger’s key thesis is that taxonomies are simply not very good as a tool for knowledge organization. That may sound surprising at first, but coming from someone like Dillinger, it carries significant weight.

Dillinger goes a long way to elaborate on the issues with taxonomies, but perhaps more interestingly, he also provides hints for a way to alleviate those issues:

“The good news is that we can do much better than taxonomies. In fact, we have to do much better. We’re building the foundations for a new generation of semantic technologies and artificial intelligence. We have to get it right,” says Dillinger.

Dillinger goes on to talk about more reliable building blocks than taxonomies for AI. He cites concept catalogs, concept models, explicit relation concepts, more realistic epistemological assumptions, and next-generation knowledge graphs.

It’s the next generation, Dillinger says, because today’s knowledge graphs do not always use concepts with explicit human-readable semantics. These have many advantages over taxonomies, and we need to work on people, processes, and tools levels to be able to get there.
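Dillinger's contrast is easy to see in miniature. A toy sketch (our own illustration, not LinkedIn's data model): a taxonomy offers a single implicit "is-a" relation, while a knowledge graph stores explicitly named relations that can answer richer questions.

```python
# A taxonomy is effectively a tree with one relation: child -> parent.
taxonomy = {
    "Python": "Programming Language",
    "Programming Language": "Skill",
}

# A knowledge graph stores (subject, relation, object) triples with
# explicit, human-readable relation names.
knowledge_graph = [
    ("Python", "is_a", "Programming Language"),
    ("Python", "used_for", "Data Science"),
    ("Data Science", "requires", "Statistics"),
]

# The graph can express and answer questions the taxonomy cannot:
uses = [o for s, r, o in knowledge_graph if s == "Python" and r == "used_for"]
print(uses)  # ['Data Science']
```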

Thrill-K: Rethinking higher machine cognition

The issue of knowledge organization is a central one for Gadi Singer as well. Singer is VP and director of Emergent AI at Intel Labs. With one technology after another, he has been pushing the leading edge of computing for the past four decades and has made key contributions to Intel’s computer architectures, hardware and software development, AI technologies, and more.

Singer said he believes that the last decade has been phenomenal for AI, mostly because of deep learning, but there’s a next wave that is coming: a “third wave” of AI that is more cognitive, has a better understanding of the world, and higher intelligence. This is going to come about through a combination of components:

“It’s going to have neural networks in it. It’s going to have symbolic representation and symbolic reasoning in it. And, of course, it’s going to be based on deep knowledge. And when we have it, the value that is provided to individuals and businesses will be redefined and much enhanced compared to even the great things that we can do today”, Singer says.

In his upcoming keynote for Connected Data World 2021, Singer will elaborate on Thrill-K, his architecture for rethinking knowledge layering and construction for higher machine cognition.

Singer distinguishes recognition, as in the type of pattern-matching operation using shallow data and deep compute at which neural networks excel, from cognition. Cognition, Singer argues, requires understanding the very deep structure of knowledge.

To be able to process even seemingly simple questions requires organizing an internal view of the world, comprehending the meaning of words in context, and reasoning on knowledge. And that’s precisely why even the more elaborate deep learning models we have currently, namely language models, are not a good match for deep knowledge.

Language models contain statistical information, factual knowledge, and even some common sense knowledge. However, they were never designed to serve as a tool for knowledge organization. Singer believes there are some basic limitations in language models that make them good, but not great for the task.

Singer said that what makes for a great knowledge model is the ability to perform well across five areas: scalability, fidelity, adaptability, richness, and explainability. He adds that sometimes there’s so much information learned in language models that we can extract it and use it to enhance dedicated knowledge models.

To translate the principles of having a great knowledge model to an actual architecture that can support the next wave of AI, Singer proposes an architecture for knowledge and information organized at three levels, which he calls Thrill-K.

The first level is for the most immediate knowledge, which Singer calls the Giga scale, and believes should sit in a neural network.

The next level of knowledge is the deep knowledge base, such as a knowledge graph. This is where intelligible, structured, explicit knowledge is stored at the Terascale, available on demand for the neural network.

And, finally, there’s the world information and the world knowledge level, where data is stored at the Zetta scale.
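As a rough summary of the three levels described above (the capacity labels are the article's order-of-magnitude terms, not hardware specifications), the layering might be sketched like this:

```python
# Sketch of Thrill-K's three-level knowledge layering.
thrill_k_levels = {
    1: {"store": "neural network parameters", "scale": "Giga",
        "access": "instantaneous"},
    2: {"store": "structured knowledge base (e.g., knowledge graph)",
        "scale": "Tera", "access": "on demand"},
    3: {"store": "world information and world knowledge", "scale": "Zetta",
        "access": "retrieved from external sources"},
}

for level, info in sorted(thrill_k_levels.items()):
    print(f"Level {level}: {info['scale']}-scale, {info['store']}")
```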

Knowledge, Singer argues, is the basis for making reasoned, intelligent decisions. It can adapt to new circumstances and new tasks, because the data and the knowledge are not structured for a particular task; they are there with all their richness and expressivity.

It will take concerted effort to get there, and Intel Labs on its part is looking into aspects of NLP, multi-modality, common sense reasoning, and neuromorphic computing.

Systems that learn and reason

If knowledge organization is something that both Dillinger and Singer value as a key component in an overarching framework for AI, for Frank van Harmelen it’s the centerfold in his entire career. Van Harmelen leads the Knowledge Representation & Reasoning Group in the Computer Science Department of the VU University Amsterdam.

He is also principal investigator of the Hybrid Intelligence Centre, a $22.7 million (€20 million), ten-year collaboration between researchers at six Dutch universities into AI that collaborates with people instead of replacing them.

Van Harmelen notes that after the breakthroughs of machine learning (deep learning or otherwise) in the past decade, the shortcomings of machine learning are also becoming increasingly clear: unexplainable results, data hunger, and limited generalizability are all becoming bottlenecks.

In his upcoming keynote at Connected Data World 2021, Van Harmelen will look at how the combination with symbolic AI in the form of very large knowledge graphs can give us a way forward: towards machine learning systems that can explain their results, need less data, and generalize better outside their training set.

The emphasis in modern AI is less on replacing people with AI systems, and more on AI systems that collaborate with people and support them. For Van Harmelen, however, it’s clear that current AI systems lack background knowledge, contextual knowledge, and the capability to explain themselves, which makes them not very human-centered:

“They can’t support people and they can’t be competent partners. So what’s holding AI back? Why are we in this situation? For a long time, AI researchers have locked themselves into one of two towers. In the case of AI, we could call these the symbolic AI tower and the statistical AI tower”.

If you’re in the statistical AI camp, you build your neural networks and machine learning programs. If you’re in the symbolic AI camp, you build knowledge bases and knowledge graphs and you do inference over them. Either way, you don’t need to talk to people in the other camp, because they’re wrong anyway.

What’s actually wrong, argues Van Harmelen, is this division. Our brains work in both ways, so there’s no reason why approximating them with AI should rely exclusively on either approach. In fact, those approaches complement each other very well in terms of strengths and weaknesses.

Symbolic AI, most famously knowledge graphs, is expensive to build and maintain, as it requires manual effort. Statistical AI, most famously deep learning, requires lots of data, and oftentimes lots of effort too. Both suffer from the “performance cliff” issue: their performance drops sharply under certain circumstances, though the circumstances and the manner of failure differ.

Van Harmelen provides many examples of practical ways in which symbolic and statistical AI can complement each other. Machine learning can help build and maintain knowledge graphs, and knowledge graphs can provide context to improve machine learning:

“It is no longer true that symbolic knowledge is expensive and we cannot obtain it all. Very large knowledge graphs are witness to the fact that this symbolic knowledge is very well available, so it is no longer necessary to learn what we already know.

We can inject what we already know into our machine learning systems, and by combining these two types of systems produce more robust, more efficient, and more explainable systems,” says Van Harmelen.

The pendulum has been swinging back and forth between symbolic and statistical AI for decades now. Perhaps it’s a good time for the two camps to reconcile and start a conversation. To build AI for the real world, we’ll have to connect more than data. We’ll also have to connect people and ideas.




Gigabyte Fixes Major Gaming Problem On Intel Alder Lake


Although Intel Alder Lake processors have been collecting stellar reviews, some games have had issues running on the new CPUs. The design of Intel’s 12th-generation processors makes a number of games impossible to play.

Gigabyte joins other top motherboard vendors, such as MSI, in providing a fix that will let users play some, if not all, of the affected titles through its new DRM Fix Tool. Meanwhile, Intel continues working on its own solution alongside game developers.


Intel Alder Lake CPUs are generally powerful gaming beasts, in some cases outperforming their competitors by as much as 60%. Unfortunately, there is a fairly long list of games that simply don’t work on the new processors. The reason lies in the hybrid architecture of Intel’s 12th-Gen chips.

The issue is caused by the DRM (Digital Rights Management) software in these games. Intel Alder Lake CPUs feature a mix of two core types: Golden Cove P-cores (Performance) and Gracemont E-cores (Efficiency). Some DRM middleware identifies these two kinds of cores as two separate systems, which prevents the games from running even though both the P-cores and the E-cores are part of the same processor.

Depending on the game, this incompatibility with the latest hybrid CPU technology can completely prevent it from running, cause crashes and bugs, or simply lower gaming performance. The fix, already used on MSI motherboards, is to temporarily disable Alder Lake’s efficiency cores, and that is what Gigabyte is offering with its new DRM Fix Tool.

Gigabyte’s new software, targeted at owners of the vendor’s new Z690 motherboards, switches off Alder Lake’s E-cores while gaming. This lets pre-Alder Lake games run normally, as they once again recognize the processor as a single system.
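The failure mode and the fix can be modeled in a few lines. This is purely illustrative; real DRM checks are proprietary and far more involved than a core-type count:

```python
# Toy model of the Alder Lake DRM problem: older DRM treats each distinct
# enabled core type as a separate system and refuses to launch the game.

def drm_allows_launch(cores):
    """Pretend-DRM check over a list of (core_type, enabled) pairs."""
    distinct_types = {core_type for core_type, enabled in cores if enabled}
    return len(distinct_types) <= 1  # blocks launch when it "sees" two systems

# Alder Lake-style hybrid CPU: 8 P-cores plus 8 E-cores, all enabled.
alder_lake = [("P-core", True)] * 8 + [("E-core", True)] * 8
print(drm_allows_launch(alder_lake))  # False: the game refuses to start

# The DRM Fix Tool approach: temporarily disable the E-cores,
# so only one core type remains visible to the check.
e_cores_off = [(t, enabled and t != "E-core") for t, enabled in alder_lake]
print(drm_allows_launch(e_cores_off))  # True: the game launches again
```

The trade-off is clear from the model: while the fix is active, the E-cores contribute nothing, so background multitasking performance is sacrificed for compatibility.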

Gigabyte motherboards that can use the new DRM Fix tool.

Gigabyte issued a press release to announce the launch of the new tool. The manufacturer promises that its new Windows-based software is easy to control and doesn’t require any complicated installation. Most users won’t have to tinker with their BIOS in order to run Gigabyte’s DRM Fix, but some motherboards may require it.

In the press release, Gigabyte invites customers to download the latest version of BIOS, which is required to run the new tool. A download link for DRM Fix Tool has also been provided, alongside a list of motherboards and the required BIOS version for each model.

Earlier this month, Intel acknowledged this gaming issue and posted a fix to enable Legacy Game Compatibility Mode. However, the solution requires entering the BIOS and covers a few steps, so it’s less than ideal — but it’s better than nothing, at least while more vendors, game devs, and Intel itself work on a permanent solution.



Intel Arc Alchemist Might Make Sub-$200 GPUs a Reality Again

According to a new leak from Moore’s Law Is Dead, Intel’s upcoming Arc Alchemist graphics card could finally offer an affordable, sub-$200 GPU to consumers.

The well-known leaker revealed in a YouTube video that a variant of Intel’s Arc Alchemist entry-level graphics card, which will run on the company’s Xe-HPG GPU architecture, will be based on the 128-EU model. It’ll reportedly feature a clock speed ranging between 2.2GHz and 2.5GHz on chipmaker TSMC’s 6nm process node.

MLID also said that the GPU will utilize 6GB of GDDR6 memory clocked at 16Gbps over a 96-bit memory bus for the desktop variant. The laptop model, meanwhile, is expected to deliver 4GB of GDDR6 memory across a 64-bit bus at 14Gbps.
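The leak doesn’t quote memory bandwidth, but it follows directly from those figures: peak bandwidth is the per-pin data rate times the bus width, divided by 8 bits per byte. A quick sanity check:

```python
# Theoretical peak GDDR6 bandwidth from the leaked specs.

def gddr6_bandwidth_gb_s(data_rate_gbps, bus_width_bits):
    """Peak memory bandwidth in GB/s: data rate per pin x bus width / 8."""
    return data_rate_gbps * bus_width_bits / 8

desktop = gddr6_bandwidth_gb_s(16, 96)  # 16Gbps over a 96-bit bus
laptop = gddr6_bandwidth_gb_s(14, 64)   # 14Gbps over a 64-bit bus
print(desktop, laptop)  # 192.0 GB/s desktop, 112.0 GB/s laptop
```

Those numbers put the rumored card firmly in entry-level territory, consistent with the sub-$200 positioning discussed below.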

Notably, Moore’s Law Is Dead predicts the GPU could cost $179 or less. Given the purported components of the entry-level Arc Alchemist, he expects Intel could attach a price point as low as $150 to the graphics card.

If that estimate holds when the product is officially announced, it would mark the return of inexpensive graphics cards priced at $200 or below. The only current-generation GPU that comes close to that price point is Nvidia’s RTX 3060, with an MSRP of $329.

One of the reasons the graphics card could cost below $200 is its thermal design power: the GPU will allegedly draw only 75 watts. AMD’s most efficient card, the RX 6600, draws 132W, so Intel’s looks to be much more efficient overall.

As for other specs related to the Arc Alchemist, the cut-down models will reportedly supply 96 EUs with a 64-bit bus interface. As Wccftech notes, there have been rumors pertaining to a variant providing 4GB of GDDR6 memory, but MLID doesn’t rule out a 3GB desktop model.

The 128-EU model of the GPU is expected to launch on laptops at the end of February or in March, followed by a desktop release sometime during the second quarter of 2022. Intel will thus go head-to-head with AMD, with Team Red also set to announce its own entry-level card, the Navi 24 RDNA 2 Radeon RX GPU, in the first few months of 2022.

With the current shortage of GPUs and the resulting price increases, hopefully the upcoming launch of entry-level graphics cards will at least provide an affordable option for consumers until the unprecedented state of affairs improves in 2023.
