Nvidia and Booz Allen develop Morpheus platform to supercharge security AI 


One of the biggest challenges facing modern organizations is the fact that security teams aren’t scalable. Even well-resourced security teams struggle to keep up with the pace of enterprise threats when monitoring their environments without the use of security artificial intelligence (AI).

However, today at the 2022 Nvidia GTC conference, Nvidia and enterprise consulting firm Booz Allen announced a partnership to release a GPU-accelerated AI cybersecurity processing framework called the Morpheus platform. 


So far, Booz Allen has used Morpheus to create Cyber Precog, a GPU-accelerated software platform for building AI models at the network’s edge, which offers data ingestion at 300x the rate of CPU-based systems, and boosts AI training by 32x and AI inference by 24x. 



The new solution will enable public and private sector organizations to address cybersecurity challenges such as closing the cyberskills gap with GPU-optimized AI, enabling far more processing than would be possible on CPUs alone. 

Finding threats with digital fingerprinting 

Identifying malicious activity in a network full of devices is extremely difficult to do without the help of automation. 

Research shows that 51% of IT security and SOC decision-makers feel their team is overwhelmed by the volume of alerts, with 55% admitting that they aren’t entirely confident in their ability to prioritize and respond to them. 

Security AI has the potential to lighten the loads of SOC analysts by automatically identifying anomalous — or high-risk — activity, and blocking it. 

For instance, the Morpheus software framework enables developers to inspect network traffic in real time, and identify anomalies based on digital fingerprinting. 

“We call it digital fingerprinting of users and machines, where you basically can get to a very granular model for every user or every machine in the company, and you can basically build the model on how that person should be interacting with the system,” said Justin Boitano, VP, EGX of Nvidia. 

“So if you take a user like myself, and I use Office 365 and Outlook every day, and suddenly me as a user starts trying to log in into build systems or other sources of IP in the company, that should be an event that alerts our security teams,” Boitano said. 

It’s an approach that gives the solution the ability to examine network traffic for sensitive information, detect phishing emails, and alert security teams with AI processing powered by large BERT models that couldn’t run on CPUs alone. 

Entering a crowded security AI category: UEBA, XDR, EDR 

As a solution, Morpheus is competing against a wide range of security AI solutions, from user and entity behavior analytics (UEBA) solutions to extended detection and response (XDR) and endpoint detection and response (EDR) solutions designed to discover potential threats.

One of the products competing against Morpheus in the realm of threat detection is CrowdStrike’s Falcon Enterprise, which combines next-gen antivirus (NGAV), endpoint detection and response, threat hunting, and threat intelligence in a single solution to continuously and automatically identify threats in enterprise environments.

CrowdStrike recently reported $431 million in revenue for the 2022 fiscal year. 

Another potential competitor is IBM QRadar, an XDR solution that uses AI to identify security risks with automatic root cause analysis and MITRE ATT&CK mapping, while providing analysts with support in the form of automated triaging and contextual intelligence. IBM reported $16.7 billion in revenue in 2021. 

Nvidia recently announced second-quarter revenue of $6.7 billion. By combining the strength of Nvidia’s GPUs with Booz Allen’s expertise, the Morpheus framework is uniquely positioned to let enterprises run heavier analytic data processing at the network edge and supercharge threat detection. 




Meta wants to supercharge Wikipedia with an AI upgrade

Wikipedia has a problem. And Meta, the not-too-long-ago rebranded Facebook, may just have the answer.

Let’s back up. Wikipedia is one of the largest-scale collaborative projects in human history, with more than 100,000 volunteer human editors contributing to the construction and maintenance of a mind-bogglingly large, multi-language encyclopedia consisting of millions of articles. Upward of 17,000 new articles are added to Wikipedia each month, while tweaks and modifications are continuously made to its existing corpus of articles. The most popular Wiki articles have been edited thousands of times, reflecting the very latest research, insights, and up-to-the-minute information.

The challenge, of course, is accuracy. The very existence of Wikipedia is proof positive that large numbers of humans can come together to create something positive. But in order to be genuinely useful and not a sprawling graffiti wall of unsubstantiated claims, Wikipedia articles must be backed up by facts. This is where citations come in. The idea – and for the most part this works very well – is that Wikipedia users and editors alike can confirm facts by adding or clicking hyperlinks that track statements back to their source.

Citation needed

Say, for example, I want to confirm the entry on President Barack Obama’s Wikipedia article stating that Obama traveled to Europe and then Kenya in 1988, where he met many of his paternal relatives for the first time. All I have to do is to look at the citations for the sentence and, sure enough, there are three separate book references that seemingly confirm that the fact checks out.

By contrast, “citation needed” is probably the two most damning words in all of Wikipedia, precisely because they suggest there’s no evidence the claim wasn’t conjured out of the digital ether. The words “citation needed” affixed to a Wikipedia claim are the equivalent of telling someone a fact while making finger quotes in the air.

Citations don’t tell us everything, though. If I were to tell you that, last year, I was the 23rd highest-earning tech journalist in the world and that I once gave up a lucrative modeling career to write articles for Digital Trends, it appears superficially plausible because there are hyperlinks to support my delusions.

The fact that the hyperlinks don’t support my alternative facts at all, but instead lead to unrelated pages on Digital Trends, is only revealed when you click them. The 99.9 percent of readers who have never met me might leave this article with a slew of false impressions, not the least of which is the surprisingly low barrier to entry to the world of modeling. In a hyperlinked world of information overload, in which we increasingly splash around in what Nicholas Carr calls “The Shallows,” the mere existence of citations appears to be a factual endorsement.

Meta wades in

But what happens when Wikipedia editors add citations that don’t actually link to pages supporting the claims? As an illustration, a recent Wikipedia article on Blackfeet Tribe member Joe Hipp described how Hipp was the first Native American boxer to challenge for the WBA World Heavyweight title and linked to what seemed to be an appropriate webpage. However, the webpage in question mentioned neither boxing nor Joe Hipp.

In the case of the Joe Hipp claim, the Wikipedia factoid was accurate, even if the citation was inappropriate. Nonetheless, it’s easy to see how this could be used, either deliberately or otherwise, to spread misinformation.

[Image: Mark Zuckerberg introduces Facebook’s new name, Meta.]

It’s here that Meta thinks that it’s come up with a way to help. Working with the Wikimedia Foundation, Meta AI (the social media giant’s AI research and development lab) has developed what it claims is the first machine learning model able to automatically scan hundreds of thousands of citations at once to check whether they support the corresponding claims. While this would be far from the first bot Wikipedia uses, it could be among the most impressive.

“I think we were driven by curiosity at the end of the day,” Fabio Petroni, research tech lead manager for the FAIR (Fundamental AI Research) team of Meta AI, told Digital Trends. “We wanted to see what was the limit of this technology. We were absolutely not sure if [this AI] could do anything meaningful in this context. No one had ever tried to do something similar [before].”

Understanding meaning

Trained using a dataset consisting of 4 million Wikipedia citations, Meta’s new tool is able to effectively analyze the information linked to a citation and then cross-reference it with the supporting evidence. And this isn’t just a straightforward text string comparison, either.

“There is a component like that, [looking at] the lexical similarity between the claim and the source, but that’s the easy case,” Petroni said. “With these models, what we have done is to build an index of all these webpages by chunking them into passages and providing an accurate representation for each passage … That is not representing word-by-word the passage, but the meaning of the passage. That means that two chunks of text with similar meanings will be represented in a very close position in the resulting n-dimensional space where all these passages are stored.”
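Meta’s system uses a learned neural encoder so that paraphrases with the same meaning land near each other in the embedding space, which is exactly what Petroni says distinguishes it from simple lexical matching. The Python sketch below only mimics the structure of the pipeline — chunk pages into passages, map each to a fixed-dimension vector, find the nearest passage to a claim — using a trivial hashed bag-of-words stand-in for the real encoder; every name and page here is invented for illustration:

```python
import hashlib
import math

def embed(text, dim=64):
    """Stand-in for a learned passage encoder: hashes each word into a
    fixed-size vector. A real system uses a neural model so that
    paraphrases land nearby; this toy only captures word overlap."""
    vec = [0.0] * dim
    for word in text.lower().split():
        h = int(hashlib.md5(word.encode()).hexdigest(), 16)
        vec[h % dim] += 1.0
    norm = math.sqrt(sum(x * x for x in vec)) or 1.0
    return [x / norm for x in vec]

def chunk(page, size=8):
    """Split a page into fixed-length word passages."""
    words = page.split()
    return [" ".join(words[i:i + size]) for i in range(0, len(words), size)]

def build_index(pages):
    """Index every passage of every page alongside its vector."""
    return [(p, embed(p)) for page in pages for p in chunk(page)]

def nearest(index, claim, k=1):
    """Return the k passages whose vectors are closest (cosine) to the claim."""
    qv = embed(claim)
    scored = sorted(index, key=lambda pe: -sum(a * b for a, b in zip(qv, pe[1])))
    return [p for p, _ in scored[:k]]

index = build_index([
    "Joe Hipp was the first Native American boxer to fight for the WBA heavyweight title",
    "The WBA sanctions professional boxing matches around the world",
])
top = nearest(index, "first Native American to challenge for the heavyweight title")[0]
```

With vectors normalized to unit length, the dot product in `nearest` is cosine similarity, so “a very close position in the resulting n-dimensional space” translates directly into a sort key.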

[Image: a single-panel comic from xkcd about Wikipedia citations.]

Just as impressive as the ability to spot fraudulent citations, however, is the tool’s potential for suggesting better references. Deployed as a production model, this tool could helpfully suggest references that would best illustrate a certain point. While Petroni balks at it being likened to a factual spellcheck, flagging errors and suggesting improvements, that’s an easy way to think about what it might do.

But as Petroni explains, there is still much more work to be done before it reaches this point. “What we have built is a proof of concept,” he said. “It’s not really usable at the moment. In order for this to be usable, you need to have a fresh index that indexes much more data than what we currently have. It needs to be constantly updated, with new information coming every day.”

This could, at least in theory, include not just text, but multimedia as well. Perhaps there’s a great authoritative documentary that’s available on YouTube the system could direct users toward. Maybe the answer to a particular claim is hidden in an image somewhere online.

A question of quality

There are other challenges, too. Notable in its absence, at least at present, is any attempt to independently grade the quality of sources cited. This is a thorny area in itself. As a simple illustration, would a brief, throwaway reference to a subject in, say, the New York Times prove a more suitable, high-quality citation than a more comprehensive, but less-renowned source? Should a mainstream publication rank more highly than a non-mainstream one?

Google’s trillion-dollar PageRank algorithm – certainly the most famous algorithm ever built around citations – had this built into its model by, in essence, equating a high-quality source with one that had a high number of incoming links. At present, Meta’s AI has nothing like this.
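For reference, the link-based quality signal PageRank formalizes can be stated in a few lines: a page’s score is the probability that a random surfer ends up there, so pages with many incoming links from well-linked pages score highest. This is a minimal power-iteration sketch with made-up site names, not Google’s production algorithm:

```python
def pagerank(links, damping=0.85, iters=50):
    """Minimal power-iteration PageRank over a dict mapping each page
    to the list of pages it links to."""
    pages = list(links)
    rank = {p: 1.0 / len(pages) for p in pages}
    for _ in range(iters):
        # Every page keeps a teleport share, then receives a damped
        # share of rank from each page linking to it.
        new = {p: (1.0 - damping) / len(pages) for p in pages}
        for p, outs in links.items():
            share = rank[p] / len(outs) if outs else 0.0
            for q in outs:
                new[q] += damping * share
        rank = new
    return rank

# "nytimes" is linked to by both blogs, so it ends up ranked highest.
ranks = pagerank({
    "nytimes": ["blog_a"],
    "blog_a": ["nytimes"],
    "blog_b": ["nytimes"],
})
```

Grafting something like this onto a citation checker is the kind of algorithmic trust signal Petroni alludes to below, rather than a fixed allow-list of domains.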

If this AI was to work as an effective tool, it would need to have something like that. As a very obvious example of why, imagine that one was to set out to “prove” the most egregious, reprehensible opinion for inclusion on a Wikipedia page. If the only evidence needed to confirm that something is true is whether similar sentiments could be found published elsewhere online, then virtually any claim could technically prove correct — no matter how wrong it might be.

“[One area we are interested in] is trying to model explicitly the trustworthiness of a source, the trustworthiness of a domain,” Petroni said. “I think Wikipedia already has a list of domains that are considered trustworthy, and domains that are considered not. But instead of having a fixed list, it would be nice if we can find a way to promote these algorithmically.”

