Nvidia and Booz Allen develop Morpheus platform to supercharge security AI 

One of the biggest challenges facing modern organizations is that security teams don’t scale. Even well-resourced security teams struggle to keep up with the pace of enterprise threats when monitoring their environments without the help of security artificial intelligence (AI).

However, today at the 2022 Nvidia GTC conference, Nvidia and enterprise consulting firm Booz Allen announced a partnership to release the Morpheus platform, a GPU-accelerated AI cybersecurity processing framework.


So far, Booz Allen has used Morpheus to create Cyber Precog, a GPU-accelerated software platform for building AI models at the network edge, which offers data ingestion at 300x the rate of CPUs and boosts AI training by 32x and AI inference by 24x.

The new solution will enable public- and private-sector organizations to address the cyberskills gap with AI optimized for GPUs, allowing far more processing to take place than would be possible on CPUs alone.

Finding threats with digital fingerprinting 

Identifying malicious activity in a network full of devices is extremely difficult to do without the help of automation. 

Research shows that 51% of IT security and SOC decision-makers feel their team is overwhelmed by the volume of alerts, with 55% admitting that they aren’t entirely confident in their ability to prioritize and respond to them. 

Security AI has the potential to lighten the load on SOC analysts by automatically identifying anomalous or high-risk activity and blocking it.

For instance, the Morpheus software framework enables developers to inspect network traffic in real time, and identify anomalies based on digital fingerprinting. 

“We call it digital fingerprinting of users and machines, where you basically can get to a very granular model for every user or every machine in the company, and you can basically build the model on how that person should be interacting with the system,” said Justin Boitano, vice president of EGX at Nvidia.

“So if you take a user like myself, and I use Office 365 and Outlook every day, and suddenly me as a user starts trying to log into build systems or other sources of IP in the company, that should be an event that alerts our security teams,” Boitano said.
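To make the idea concrete, here is a minimal, illustrative sketch of per-user fingerprinting in Python: it builds a baseline of which applications each user normally touches and scores new events by how surprising they are for that user. This is not the Morpheus API; the UserFingerprint class, the sample data, and the scoring scheme are all hypothetical.

```python
# A toy model of per-user "digital fingerprinting": learn a baseline of which
# applications a user normally interacts with, then flag events that are
# unusually surprising for that user. Not the actual Morpheus API.
from collections import Counter, defaultdict
import math

class UserFingerprint:
    def __init__(self):
        self.app_counts = defaultdict(Counter)  # user -> Counter of apps used
        self.totals = Counter()                 # user -> total observed events

    def observe(self, user, app):
        """Record one historical (user, application) interaction."""
        self.app_counts[user][app] += 1
        self.totals[user] += 1

    def surprise(self, user, app, vocab_size=100):
        """Negative log-probability of this app for this user (Laplace smoothed).
        Higher means more anomalous relative to that user's baseline."""
        count = self.app_counts[user][app]
        total = self.totals[user]
        prob = (count + 1) / (total + vocab_size)
        return -math.log(prob)

fp = UserFingerprint()
# Baseline: this user lives in Office 365 and Outlook.
for _ in range(500):
    fp.observe("justin", "outlook")
for _ in range(300):
    fp.observe("justin", "office365")

print(fp.surprise("justin", "outlook"))       # low surprise: normal behavior
print(fp.surprise("justin", "build-system"))  # high surprise: worth alerting on
```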

The approach also gives the solution the ability to examine network traffic for sensitive information, detect phishing emails, and alert security teams, with AI processing powered by large BERT models that couldn’t run on CPUs alone.
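The BERT-based inspection could look roughly like the following sketch, which assumes a BERT checkpoint fine-tuned for phishing-versus-benign email classification. The model name is a placeholder rather than a real published checkpoint, and this is not the actual Morpheus pipeline.

```python
# Hedged sketch of BERT-based phishing detection using the Hugging Face
# transformers library. "your-org/bert-phishing-detector" is a placeholder
# for a hypothetical fine-tuned checkpoint.
from transformers import pipeline

# device=0 runs inference on the first GPU, which is where the throughput
# gains described in the article come from; use device=-1 for CPU.
classifier = pipeline(
    "text-classification",
    model="your-org/bert-phishing-detector",  # placeholder fine-tuned model
    device=0,
)

emails = [
    "Your account has been locked. Click here immediately to verify your password.",
    "Attached are the meeting notes from Tuesday's project sync.",
]
for email, result in zip(emails, classifier(emails)):
    print(result["label"], round(result["score"], 3), "|", email[:60])
```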

Entering the security AI cluster category: UEBA, XDR, EDR 

As a solution, Morpheus is competing against a wide range of security AI tools, from user and entity behavior analytics (UEBA) to extended detection and response (XDR) and endpoint detection and response (EDR) products designed to discover potential threats.

One of the offerings competing with Nvidia in the realm of threat detection is CrowdStrike’s Falcon Enterprise, which combines next-gen antivirus (NGAV), endpoint detection and response, threat hunting, and threat intelligence in a single solution that continuously and automatically identifies threats in enterprise environments.

CrowdStrike recently reported $431 million in revenue during its 2022 fiscal year.

Another potential competitor is IBM QRadar, an XDR solution that uses AI to identify security risks with automatic root cause analysis and MITRE ATT&CK mapping, while providing analysts with automated triaging and contextual intelligence. IBM reported $16.7 billion in revenue in 2021.

With Nvidia recently reporting second-quarter revenue of $6.7 billion, and with the framework combining the strength of Nvidia’s GPUs and Booz Allen’s expertise, Morpheus is uniquely positioned to help enterprises run more analytic data processing at the network edge and supercharge threat detection.

Allen Institute launches GENIE, a leaderboard for human-in-the-loop language model benchmarking

There’s been an explosion in recent years of natural language processing (NLP) datasets aimed at testing various AI capabilities. Many of these datasets have accompanying leaderboards, which provide a means of ranking and comparing models. But the adoption of leaderboards has thus far been limited to setups with automatic evaluation, like classification and knowledge retrieval. Open-ended tasks requiring natural language generation, such as language translation, where there are often many correct solutions, lack techniques that can reliably and automatically evaluate a model’s output quality.

To remedy this, researchers at the Allen Institute for Artificial Intelligence, the Hebrew University of Jerusalem, and the University of Washington created GENIE, a leaderboard for human-in-the-loop evaluation of text generation. GENIE posts model predictions to a crowdsourcing platform (Amazon Mechanical Turk), where human annotators evaluate them according to predefined, dataset-specific guidelines for fluency, correctness, conciseness, and more. In addition, GENIE incorporates various automatic machine translation, question answering, summarization, and common-sense reasoning metrics including BLEU and ROUGE to show how well they correlate with the human assessment scores.
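A toy version of that correlation analysis, assuming the sacrebleu, rouge_score, and scipy packages, might look like the sketch below; the reference texts, model predictions, and human ratings are fabricated for illustration and are not drawn from the GENIE leaderboard.

```python
# Compute per-example automatic metrics (BLEU, ROUGE-L) and check how well
# they rank-correlate with human judgments. All data below is made up.
from sacrebleu import sentence_bleu
from rouge_score import rouge_scorer
from scipy.stats import spearmanr

references = [
    "the cat sat on the mat",
    "he went to the store yesterday",
    "the meeting was postponed until friday",
    "she finished reading the report last night",
]
predictions = [
    "a cat is sitting on the mat",
    "yesterday he visited the shop",
    "the meeting is cancelled",
    "she finished the report last night",
]
human_scores = [4.0, 3.5, 2.0, 4.5]  # hypothetical 1-5 ratings from annotators

rl = rouge_scorer.RougeScorer(["rougeL"], use_stemmer=True)

bleu = [sentence_bleu(p, [r]).score for p, r in zip(predictions, references)]
rouge = [rl.score(r, p)["rougeL"].fmeasure for p, r in zip(predictions, references)]

# Rank correlation between each automatic metric and the human judgments.
print("BLEU  vs human:", spearmanr(bleu, human_scores).correlation)
print("ROUGE vs human:", spearmanr(rouge, human_scores).correlation)
```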

As the researchers note, human-evaluation leaderboards raise a couple of novel challenges, first and foremost potentially high crowdsourcing fees. To avoid deterring submissions from researchers with limited resources, GENIE aims to keep submission costs around $100, with initial submissions to be paid by academic groups. In the future, the coauthors plan to explore other payment models including requesting payment from tech companies while subsidizing the cost for smaller organizations.

To mitigate another potential issue — the reproducibility of human annotations over time across various annotators — the researchers use techniques including estimating annotator variance and spreading the annotations over several days. Experiments show that GENIE achieves “reliable scores” on the included tasks, they claim.
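One simple way to think about estimating annotator variance (a rough sketch, not GENIE’s actual estimator) is to decompose a matrix of ratings into per-item quality, per-annotator strictness, and residual noise:

```python
# Fabricated ratings: rows are generated texts, columns are annotators (1-5 scale).
import numpy as np

ratings = np.array([
    [4, 5, 4],
    [3, 3, 2],
    [5, 5, 4],
    [2, 3, 2],
], dtype=float)

item_means = ratings.mean(axis=1, keepdims=True)      # quality of each output
annotator_bias = (ratings - item_means).mean(axis=0)   # per-annotator strictness
residual = ratings - item_means - annotator_bias       # leftover noise

print("annotator bias:", np.round(annotator_bias, 2))
print("variance across items:     ", round(float(item_means.var()), 3))
print("variance across annotators:", round(float(annotator_bias.var()), 3))
print("residual variance:         ", round(float(residual.var()), 3))
```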

“[GENIE] standardizes high-quality human evaluation of generative tasks, which is currently done in a case-by-case manner with model developers using hard-to-compare approaches,” Daniel Khashabi, a lead developer on the GENIE project, explained in a Medium post. “It frees model developers from the burden of designing, building, and running crowdsourced human model evaluations. [It also] provides researchers interested in either human-computer interaction for human evaluation or in automatic metric creation with a central, updating hub of model submissions and associated human-annotated evaluations.”


The coauthors believe that the GENIE infrastructure, if widely adopted, could alleviate the evaluation burden for researchers while ensuring high-quality, standardized comparison against previous models. Moreover, they anticipate that GENIE will facilitate the study of human evaluation approaches, addressing challenges like annotator training, inter-annotator agreement, and reproducibility — all of which could be integrated into GENIE to compare against other evaluation metrics on past and future submissions.


“We make GENIE publicly available and hope that it will spur progress in language generation models as well as their automatic and manual evaluation,” the coauthors wrote in a paper describing their work. “This is a novel deviation from how text generation is currently evaluated, and we hope that GENIE contributes to further development of natural language generation technology.”
