
Nvidia and Booz Allen develop Morpheus platform to supercharge security AI 



One of the biggest challenges facing modern organizations is the fact that security teams aren’t scalable. Even well-resourced security teams struggle to keep up with the pace of enterprise threats when monitoring their environments without the use of security artificial intelligence (AI).

However, today at the 2022 Nvidia GTC conference, Nvidia and enterprise consulting firm Booz Allen announced a partnership to release a GPU-accelerated AI cybersecurity processing framework called the Morpheus platform.


So far, Booz Allen has used Morpheus to create Cyber Precog, a GPU-accelerated software platform for building AI models at the network’s edge. The company says Cyber Precog ingests data at 300x the rate of CPU-based tooling, boosts AI training by 32x, and speeds AI inference by 24x.


The new solution will enable public- and private-sector organizations to address the cyberskills gap with AI optimized for GPUs, enabling far more processing than would be possible on CPUs alone.

Finding threats with digital fingerprinting 

Identifying malicious activity in a network full of devices is extremely difficult to do without the help of automation. 

Research shows that 51% of IT security and SOC decision-makers feel their team is overwhelmed by the volume of alerts, with 55% admitting that they aren’t entirely confident in their ability to prioritize and respond to them. 

Security AI has the potential to lighten the loads of SOC analysts by automatically identifying anomalous — or high-risk — activity, and blocking it. 

For instance, the Morpheus software framework enables developers to inspect network traffic in real time, and identify anomalies based on digital fingerprinting. 

“We call it digital fingerprinting of users and machines, where you basically can get to a very granular model for every user or every machine in the company, and you can basically build the model on how that person should be interacting with the system,” said Justin Boitano, VP, EGX of Nvidia. 

“So if you take a user like myself, and I use Office 365 and Outlook every day, and suddenly me as a user starts trying to log in into build systems or other sources of IP in the company, that should be an event that alerts our security teams,” Boitano said. 
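Boitano’s description amounts to a per-user behavioral baseline. A minimal sketch of the idea, with hypothetical names and features (Morpheus’s actual GPU-accelerated models are far richer than this toy):

```python
from collections import defaultdict

class UserFingerprint:
    """Toy per-user baseline: the set of systems a user normally touches.
    Real digital fingerprinting learns far richer features (timing,
    volumes, sequences) per user and per machine."""
    def __init__(self):
        self.baseline = defaultdict(set)

    def observe(self, user, system):
        # Learn normal behavior during a training window.
        self.baseline[user].add(system)

    def is_anomalous(self, user, system):
        # Any access outside the learned baseline is flagged for review.
        return system not in self.baseline[user]

fp = UserFingerprint()
for system in ["office365", "outlook"]:
    fp.observe("jboitano", system)

fp.is_anomalous("jboitano", "outlook")       # False: normal daily usage
fp.is_anomalous("jboitano", "build-system")  # True: should raise an alert
```

A set-membership check stands in here for what is, in practice, a learned model scoring how unlikely each event is for that specific user.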

It’s an approach that gives the solution the ability to examine network traffic for sensitive information, detect phishing emails, and alert security teams with AI processing powered by large BERT models that couldn’t run on CPUs alone. 

Entering the security AI cluster category: UEBA, XDR, EDR 

As a solution, Morpheus is competing against a wide range of security AI solutions, from user and entity behavior analytics (UEBA) solutions to extended detection and response (XDR) and endpoint detection and response (EDR) solutions designed to discover potential threats.

One competitor to Nvidia in the realm of threat detection is CrowdStrike, whose Falcon Enterprise platform combines next-gen antivirus (NGAV), endpoint detection and response, threat hunting, and threat intelligence in a single solution to continuously and automatically identify threats in enterprise environments.

CrowdStrike recently reported $431 million in revenue for the fourth quarter of its 2022 fiscal year.

Another potential competitor is IBM QRadar, an XDR solution that uses AI to identify security risks with automatic root cause analysis and MITRE ATT&CK mapping, while providing analysts with support in the form of automated triaging and contextual intelligence. IBM reported $16.7 billion in revenue for the fourth quarter of 2021.

With Nvidia recently announcing second-quarter revenue of $6.7 billion, and the Morpheus framework now combining the strength of Nvidia’s GPUs with Booz Allen’s expertise, the platform is uniquely positioned to let enterprises run heavier analytic processing at the network edge and supercharge threat detection.

VentureBeat’s mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings.



NeuReality and IBM team up to develop AI inference platforms

[Updated 5:44am PST]

NeuReality, an Israel-based semiconductor company developing high-performance AI inference technology, has signed an agreement with IBM to further develop that technology.

The technology aims to deliver cost and power consumption improvements for deep learning use cases of inference, the companies said. This development follows NeuReality’s emergence from stealth earlier in February with an $8 million seed round to accelerate AI workloads at scale.

AI inference is a growing area of focus for enterprises because it is the stage at which trained neural networks are applied to real inputs and produce results. IBM and NeuReality claim their partnership will allow the deployment of computer vision, recommendation systems, natural language processing, and other AI use cases in critical sectors like finance, insurance, healthcare, manufacturing, and smart cities. They also claim the agreement will accelerate deployments of today’s ever-growing AI use cases, which are already running in public and private cloud datacenters.

NeuReality has competition in Cast AI, a technology company offering a platform that “allows developers to deploy, manage, and cost-optimize applications in multiple clouds simultaneously.” Some other competitors include Comet.ml, Upright Project, OctoML, Deci, and DeepCube. However, this partnership with IBM will see NeuReality become the first start-up semiconductor product member of the IBM Research AI Hardware Center and a licensee of the Center’s low-precision high performance Digital AI Cores.

VentureBeat connected via email with Moshe Tanach, CEO and co-founder of NeuReality, to get a broader view on the direction of this partnership.

Delivering a new reality to datacenters and near edge compute solutions

NeuReality’s agreement with IBM includes cooperation around NR1, the first Server-on-a-Chip ASIC implementation of NeuReality’s AI-centric architecture. The NR1 is a high-performance, linearly scalable, network-attached device that provides AI workload processing as a service, NeuReality says. In simpler terms, the NR1 targets cloud and enterprise datacenters, along with carriers, telecom operators, and other near-edge compute environments, enabling them to deploy AI use cases more efficiently. It is based on NeuReality’s first-generation FPGA-based NR1-P prototype platform, introduced earlier this year.

In line with NeuReality’s vision to make AI accessible to all, the technology aims to remove the system bottlenecks of today’s solutions and deliver disruptive cost and power consumption benefits for inference systems and services, the company said. The collaboration with IBM will ensure NeuReality’s already available FPGA-based NR1-P platform supports software integration and system-level validation before the NR1 production platform becomes available next year, the companies said.

“Having the NR1-P FPGA platform available today allows us to develop IBM’s requirements and test them before the NR1 Server-on-a-Chip’s tapeout. Being able to develop, test and optimize complex datacenter distributed features, such as Kubernetes, networking, and security before production is the only way to deliver high quality to our customers. I am extremely proud of our engineering team who will deliver a new reality to datacenters and near edge solutions. This new reality will allow many new sectors to deploy AI use cases more efficiently than ever before,” Tanach added.

A marker of NeuReality’s continued momentum

According to Dr. Mukesh Khare, Vice President of Hybrid Cloud research at IBM Research, “In light of IBM’s vision to deliver the most advanced Hybrid Cloud and AI systems and services to our clients, teaming up with NeuReality, which brings a disruptive AI-centric approach to the table, is the type of industry collaboration we are looking for. The partnership with NeuReality is expected to drive a more streamlined and accessible AI infrastructure, which has the potential to enhance people’s lives.”

As part of the agreement, IBM becomes a design partner of NeuReality and will work on the product requirements for the NR1 chip, system, and SDK that will be implemented in the next revision of the architecture. Together the two companies will evaluate NeuReality’s products for use in IBM’s Hybrid Cloud, including AI use cases, system flows, virtualization, networking, security, and more.

Following NeuReality’s announcement of its first-of-its-kind AI-centric architecture back in February and its collaboration with Xilinx to bring its new AI-centric, FPGA-based NR1-P platforms to market in September, this agreement with IBM marks the company’s upward trajectory and continued momentum.

 




Researchers develop algorithm to identify well-liked brands



Measuring sentiment can provide a snapshot of how customers feel about companies, products, or services. It’s important for organizations to pay attention: 86% of people say that authenticity is a key factor when deciding which brands they like and support. And in an Edelman survey, 81% of consumers said that they need to be able to trust a brand in order to buy products from it.

While sentiment analysis technology has been around for a while, researchers at the University of Maryland’s Robert H. Smith School of Business claim to have improved upon prior methods with a new system that leverages machine learning. They say that their algorithm, which sorts through social media posts to understand how people perceive brands, can comb through more data and better measure favorability.

Sentiment analysis isn’t a perfect science, but social media provides rich signals that can help shape brand strategies. Surveys indicate that 46% of people have used social media to air complaints to a company.

“There is a vast amount of social media data available to help brands better understand their customers, but it has been underutilized in part because the methods used to monitor and analyze the data have been flawed,” Wendy W. Moe, University of Maryland associate dean of master’s programs, who created the algorithm with colleague Kunpeng Zhang, said in a statement. “Our research addresses some of the shortcomings and provides a tool for companies to more accurately gauge how consumers perceive their brands.”

Algorithmic analysis

Zhang and Moe’s method sifts through data from posts on a brand’s page, including how many users have expressed positive or negative sentiment, “liked” something, or shared something. It predicts how people will feel about the brand in the future, scaling to billions of pages of user-brand interaction data and millions of users.

The algorithm specifically looks at users’ interactions with brands to measure favorability: whether people view a brand in a positive or negative way. It also accounts for bias, jointly inferring brand favorability and each user’s general positivity from their comments in the user-brand interaction data.
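The bias-adjustment idea can be illustrated with a deliberately simplified score (hypothetical data and scoring, not the researchers’ actual model): subtract each user’s habitual positivity from the sentiment of their interactions with the brand, then average.

```python
from statistics import mean

def user_bias(comments_by_user):
    # A user's positivity bias: their mean sentiment across all comments
    # on any brand's page, with sentiment scored in [-1, 1].
    return {u: mean(scores) for u, scores in comments_by_user.items()}

def brand_favorability(interactions, bias):
    # Average bias-adjusted sentiment over all user-brand interactions.
    return mean(s - bias[u] for u, s in interactions)

all_comments = {"alice": [0.9, 0.8, 0.7],  # habitually enthusiastic
                "bob": [-0.2, 0.0]}        # habitually critical
interactions = [("alice", 0.9), ("bob", 0.3)]
score = brand_favorability(interactions, user_bias(all_comments))
# score is about 0.25: Bob's mild 0.3 counts for more than
# Alice's routine 0.9, because each is judged against the user's norm
```

The design point is the one the researchers describe: raw sentiment counts overweight chronically positive users, so favorability is measured relative to each user’s own baseline.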

Zhang and Moe say that brands can apply the algorithm to a range of platforms, such as Facebook, Twitter, and Instagram, as long as the platform provides user-brand interaction data and lets users comment on, share, and like content. Importantly, the algorithm doesn’t use private information such as user demographics, relying instead on publicly available user-brand interaction data.

“A brand needs to monitor the health of their brand dynamically,” Zhang said in a statement. “Then they can change marketing strategy to impact their brand favorability or better respond to competitors. They can better see their current location in the market in terms of their brand favorability. That can guide a brand to change marketing [practices].”

Zhang and Moe’s research is detailed in the paper “Measuring Brand Favorability Using Large-Scale Social Media Data,” forthcoming in the journal Information Systems Research.




Justice Department has charged a Latvian woman it says helped develop Trickbot malware

The US Department of Justice has charged a Latvian woman for her role in allegedly developing the Trickbot malware, which was responsible for infecting millions of computers, targeting schools, hospitals, public utilities, and governments, the agency said in a news release.

The DOJ alleges that Alla Witte was part of a criminal organization known as the Trickbot Group that operated in Russia, Belarus, Ukraine, and Suriname. She allegedly helped develop the malware, which was used to enable ransomware demands and payments. Victims would receive a notice that their computers were encrypted, the DOJ said, and were directed to buy special software through a bitcoin address linked to the Trickbot Group to have their files decrypted.

According to the DOJ, the Trickbot malware was designed to capture online banking login credentials to gain access to other personal information including credit card numbers, emails, passwords, Social Security numbers, and addresses. The group allegedly used stolen personal information “to gain access to online bank accounts, execute unauthorized electronic funds transfers and launder the money through U.S. and foreign beneficiary accounts,” the DOJ said.

Federal law enforcement agencies warned hospitals and healthcare providers last October of a credible ransomware threat by attackers using Trickbot to deploy ransomware such as Ryuk and Conti.

Witte was arrested February 6th in Miami. She is charged with 19 counts including conspiracy to commit computer fraud and aggravated identity theft, conspiracy to commit wire and bank fraud affecting a financial institution, aggravated identity theft, and conspiracy to commit money laundering.



Asapp releases dataset to help develop better customer service AI



For call center applications, dialogue state tracking (DST) has historically served as a way to determine what a caller wants at a given point in a conversation. But in the real world, the work of a call center agent is much more complex than simply recognizing intents. Agents often have to look up knowledge base articles, review customer histories, and inspect account details all at the same time. Yet none of these aspects is accounted for in popular DST benchmarks. A more realistic environment might use a “dual constraint,” in which an agent needs to accommodate customer requests while considering company policies when taking actions.

In an effort to address this, AI research-driven customer experience company Asapp is releasing the Action-Based Conversations Dataset (ABCD), designed to help develop task-oriented dialogue systems for customer service applications. ABCD contains more than 10,000 labeled human-to-human dialogues spanning 55 intents, each requiring sequences of actions constrained by company policies to accomplish tasks.

According to Asapp, ABCD differs from other datasets in that it asks call center agents to adhere to a set of policies. With the dataset, the company proposes two new tasks:

  • Action State Tracking (AST), which keeps track of dialogue state when an action has taken place during that turn.
  • Cascading Dialogue Success (CDS), a measure of an AI system’s ability to understand actions in context as a whole, which includes the context from other utterances.

AST ostensibly improves upon DST metrics by detecting intents from customer utterances while taking agent guidelines into account. For example, if a customer is entitled to a discount and requests 30% off, but the guidelines stipulate 15%, then 30% is an apparently reasonable but ultimately incorrect choice. To measure a system’s ability to handle these situations, AST adopts overall accuracy as its evaluation metric.
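In practice, AST evaluation reduces to exact-match accuracy over action turns. A minimal sketch with hypothetical action labels (simplified relative to the paper’s formulation):

```python
def ast_accuracy(predicted, gold):
    """Exact-match accuracy over action turns: a prediction counts only
    if both the action and its values match the gold annotation."""
    correct = sum(p == g for p, g in zip(predicted, gold))
    return correct / len(gold)

gold = [("offer-discount", ["15%"]), ("verify-identity", ["zip-code"])]
pred = [("offer-discount", ["30%"]),   # plausible, but violates policy
        ("verify-identity", ["zip-code"])]
ast_accuracy(pred, gold)  # 0.5: the 30% offer ignores the guideline
```

The discount example above is exactly the failure this metric is meant to expose: the intent was detected, but the value chosen disobeys the agent guidelines, so the turn scores zero.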

Meanwhile, CDS aims to gauge a system’s skill at understanding actions in context. Whereas AST assumes an action occurs in the current turn, CDS first predicts the type of turn (e.g., utterances, actions, and endings) and then its subsequent details. When the turn is an utterance, the detail is to respond with the best sentence chosen from a list of possible sentences. When the turn is an action, the detail is to choose the appropriate values. And when the turn is an ending, the system should know to end the conversation, according to Asapp.

A CDS score is calculated on every turn, and the system is evaluated based on the percent of remaining steps correctly predicted, averaged across all available turns.
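One plausible reading of that scoring rule can be sketched as follows (hypothetical turn labels; the paper’s exact definition may differ in details):

```python
def cds_score(predicted, gold):
    """From each turn, count how many of the remaining steps are
    predicted correctly before the first mistake, as a fraction of the
    steps remaining, then average those fractions over all turns."""
    n = len(gold)
    fractions = []
    for start in range(n):
        correct = 0
        for p, g in zip(predicted[start:], gold[start:]):
            if p != g:
                break  # cascading credit stops at the first error
            correct += 1
        fractions.append(correct / (n - start))
    return sum(fractions) / n

gold = ["greet", "lookup-account", "offer-15%", "end-call"]
pred = ["greet", "lookup-account", "offer-30%", "end-call"]
cds_score(pred, gold)  # one wrong action drags down every earlier turn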

Improving customer experiences

The ubiquity of smartphones and messaging apps — and the constraints of the pandemic — have contributed to increased adoption of conversational technologies. Fifty-six percent of companies told Accenture in a survey that conversational bots and other experiences are driving disruption in their industry. And a Twilio study showed that 9 out of 10 consumers would like the option to use messaging to contact a business.

Even before the pandemic, autonomous agents were on the way to becoming the rule rather than the exception, partly because consumers prefer it that way. According to research published last year by Vonage subsidiary NewVoiceMedia, 25% of people prefer to have their queries handled by a chatbot or other self-service alternative. And Salesforce says roughly 69% of consumers choose chatbots for quick communication with brands.

Unlike other large open-domain dialogue datasets, which are typically built for general-purpose chatbots, ABCD focuses on increasing the count and diversity of actions and text within the customer service domain. Call center contributors to the dataset were incentivized through cash bonuses structured to mimic real service environments and realistic agent behavior, according to Asapp.

Rather than relying on datasets that expand upon an array of knowledge base lookup actions, ABCD presents a corpus for building more in-depth and task-oriented dialogue systems, Asapp says. The company expects that the dataset and new tasks will create opportunities for researchers to explore better and more reliable models for task-oriented dialogue systems.

“For customer service and call center applications, it is time for both the research community and industry to do better. Models relying on DST as a measure of success have little indication of performance in real-world scenarios, and discerning customer experience leaders should look to other indicators grounded in the conditions that actual call center agents face,” the company wrote in a press release. “We can’t wait to see what the community creates from this dataset. Our contribution to the field with this dataset is another major step to improving machine learning models in customer service.”




Microsoft Build touts Power Apps, Cosmos DB enhancements to develop code faster



At Microsoft’s Build conference this week, CEO Satya Nadella was focused on speed. “It’s all about that developer velocity,” he promised, as the company unveiled tools and services that would enable developers to turn ideas into software stacks faster.

The annual event has something for both traditional developers and newer developers who use spreadsheets and other “low-code” tools. Key announcements this week included the integration of AI technologies with Microsoft Power FX low-code programming language and enhancements to Cosmos DB.

Turning data into dashboards

The Microsoft Power Platform lets non-technical users build apps, automate processes, and analyze data themselves rather than waiting for developers to do it for them. Power BI is a collection of low-code and no-code tools that turn complex data into reports and interactive dashboards. Analysts can use Power Apps to build data applications and processes.

Integrating AI into Power FX will make it easier to use natural-language input and “programming by example” techniques when developing with Power Apps. Because Power FX is a formula-based language built on Microsoft Excel’s formula syntax, people can write custom logic without having to learn a traditional programming language.

Power FX is a “low-code programming language for everyone,” Microsoft program manager Greg Lindhorst said.

While this approach offers plenty of benefits, there are limits to how much coding the world can handle. The Excel lovers who can create elaborate interactive spreadsheets will be overjoyed to write even more complex functions that can trigger more elaborate dashboards. But casual spreadsheet users will find there is still a steep learning curve as they struggle to keep track of complex syntax and other gotchas that drive neophytes mad.

It’s low code, not no code, after all.

Teams as more than video

Power Apps is natively integrated into all Microsoft cloud offerings, including Microsoft Teams (Office 365), business apps (Dynamics 365), and the developer cloud (Azure). With an embedded app studio, Teams is more than just a place for chat and video calls. At Microsoft Build, the company tried to position the remote collaboration tool as a fully customizable platform for delivering apps.

This integration could increase the amount of custom code within an organization. While this feature won’t mean much to average users, giving teammates the ability to not just chat but also create code could be amazing. Power users would be able to share their code over Teams, and others can extend it.

A few clever hacks can save millions of hours of work.

Enhancing Cosmos DB

Cosmos DB is one of Microsoft’s flagship tools on Azure, and it remains one of the simplest and most flexible ways for developers to store data. Microsoft emphasized cost containment and serverless options for Cosmos DB.

The biggest addition may be a customized cache. In the past, Azure users could put a Redis instance in front of the database to absorb bursts of similar queries; the new cache is purpose-built for Cosmos DB.

The cache is priced like a regular instance, based on its compute power and the size of its RAM, the most important parameter in deciding how much data to cache. When the cache hits, no Cosmos DB cost is incurred, effectively trading the seemingly unbounded cost of database queries for the fixed monthly cost of a caching machine.
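The trade-off is simple break-even arithmetic. With purely illustrative numbers (not Azure’s actual pricing):

```python
def breakeven_queries_per_month(cache_monthly_cost, db_cost_per_million):
    # Monthly query volume above which a fixed-cost cache is cheaper
    # than paying the database per query. Illustrative prices only.
    return cache_monthly_cost / db_cost_per_million * 1_000_000

# E.g., a $200/month cache vs. $0.25 per million queries it absorbs:
breakeven_queries_per_month(200.0, 0.25)  # 800 million queries/month
```

Below that volume the per-query model wins; above it, the fixed-cost cache does, and the hit rate of the cache shifts the break-even point further in its favor.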

Caching helps most with high loads that arrive in big, concentrated bursts. The Cosmos DB team is also emphasizing serverless deployment for intermittent applications and those still in testing; the serverless version became generally available at Microsoft Build.

Cosmos DB users tend to be more serious developers with a grander set of assignments than power-spreadsheet users working with the Power BI platform. The new features are aimed at making it easier and faster for developers to start storing data in Cosmos DB and also help contain the costs (or even drive them lower).

Software development for everyone

Nadella’s goal is to push software development into every corner of the world. He pointed out that the number of developers at non-technology companies has grown faster than the number at technology companies, making low-code tools ideal for those environments.

“In the automotive industry, there were more software engineers than mechanical engineers hired over the last year,” he noted in his keynote address.




Intel unveils Project Athena Open Labs to develop next-gen thin, light, always-connected laptops

At CES 2019, Intel announced plans to help develop next-generation ultrabooks, code-named “Project Athena.” Now, with Project Athena Open Labs, Intel’s throwing open its doors to help make that vision a reality.

Over the next few weeks, Intel said it plans to open what it calls Project Athena Open Labs at its facilities in Taipei and Shanghai and at its office in Folsom, California. The goal is to help potential partners and customers develop Project Athena notebooks. Component vendors will be able to submit their parts for compliance testing against Intel’s Athena requirements, some of which have been previously disclosed.

Though Intel obviously designs its own components, it also works closely with partners and customers to assist them with their own designs. Intel’s programs for top-tier customers include the Innovation Excellence Program, a crack team of engineers that collaborate with a particular customer on a specific project, such as the HP Spectre Folio.

But while a PC giant like HP can field its own in-house engineering teams, some smaller companies may genuinely need assistance in pushing a final PC design to market. It’s these smaller companies that generally take advantage of approved component lists that Intel compiles, including reference designs that can serve as a template to manufacture a finished PC.

Project Athena notebooks are due in the second half of 2019, with support from Acer, Asus, Dell, HP, Lenovo, Microsoft, Samsung, Sharp, and even Google. Intel already hosted 500 members at what it calls the Project Athena Ecosystem Symposium this week in Taiwan to ready the first wave of Project Athena designs for these top-tier customers.

What this means to you: Disclosing the existence of the Project Athena Open Labs isn’t so much an announcement as a public vote of confidence for Project Athena PCs. What Intel’s trying to do is establish Athena notebooks the same way we take thin-and-light PCs or ultrabooks for granted—and not just for people willing to spend $2,500, either.


HP’s Spectre Folio is evidence of the good things that can come from a partnership between Intel engineers and its customers. 

Athena assistance for those who need it

The Athena Open Labs will serve as a resource for the second tier of smaller PC vendors hoping to develop Athena designs, including “white box” vendors that sell PCs under a more popular brand name. It’s these no-name vendors that will really determine whether “Athena” designs are merely a blip, or a long-term influence on the PC industry at large. 

Intel executives have expressed similar ambitions. “We expect that, over time, not just to hit our north star of Project Athena designs, but also improve the experience of all laptops in the market, which is very similar to the experience we observed with the ultrabook,” said Josh Newman, the general manager of mobile innovation segments for Intel, in January.



Google partners with Automation Anywhere to develop RPA products



Google today announced a strategic, multi-year collaboration with robotic process automation (RPA) startup Automation Anywhere to accelerate the adoption of RPA with enterprises “on a global scale.” The partnership will make the Automation Anywhere platform available on Google Cloud and see the two companies mutually develop AI- and RPA-powered solutions, closely aligning go-to-market teams.

RPA — technology that automates monotonous, repetitive chores traditionally performed by human workers — is big business. Forrester estimates that RPA and other AI subfields created jobs for 40% of companies in 2019 and that a tenth of startups now employ more digital workers than human ones. According to a McKinsey survey, at least a third of activities could be automated in about 60% of occupations. And in its recent Trends in Workflow Automation report, Salesforce found that 95% of IT leaders are prioritizing workflow automation, with 70% seeing the equivalent of more than 4 hours saved each week per employee.

As part of the agreement, Google plans to integrate Automation Anywhere’s RPA technologies, including low- and no-code development tools, AI workflow builders, and API management, with Google Cloud services like Apigee, AppSheet, and AI Platform. The two companies say they’ll also jointly develop solutions geared toward industry-specific use cases, with a focus on financial services, supply chains, healthcare and life sciences, telecommunications, retail, and the public sector.

For its part, Automation Anywhere has pledged to migrate its cloud-native, web-based automation platform to Google Cloud as its primary cloud provider, in exchange for becoming Google Cloud’s preferred RPA partner. Automation Anywhere will additionally make its products available in the Google Cloud Marketplace, enabling them to be deployed across cloud, hybrid, and on-premises environments and providing customers with a view of assets and environments.

As Google Cloud CEO Thomas Kurian notes, RPA has become a key part of businesses’ digital transformation efforts. Front office employees at call centers, financial services companies, human resources offices, IT departments, and more handle thousands of manual, repetitive tasks each day, like invoice processing, lending decisions, and employee onboarding. In the back office, IT teams and developers spend time managing APIs, entering data, and ensuring that apps can connect with legacy systems. Along with machine learning, computer vision, deep learning, and analytics, RPA can help businesses streamline these workloads through AI-powered software bots capable of managing a range of front- and back-office tasks.

“As businesses increasingly run in the cloud, RPA provides the means to streamline processes across both cloud-native applications and legacy, on-premises systems — ultimately helping employees spend less time on repetitive tasks and more time supporting business-critical projects,” Kurian said in a press release. “We are proud to partner with Automation Anywhere to help businesses quickly deploy and scale RPA capabilities on Google Cloud, and to address business challenges with solutions specially designed for industries.”

The deal with Automation Anywhere appears to be Google’s answer to Microsoft’s acquisition of RPA startup Softomotive and IBM’s purchase of WDG Automation. Microsoft recently brought RPA to Windows 10 with new Power Platform products. With a market opportunity anticipated to be worth $3.97 billion by 2025, according to Grand View Research, RPA is fast becoming too large to ignore. Automation Anywhere rival Blue Prism has raised over $120 million, Kryon $40 million, and FortressIQ $30 million. And in July, UiPath nabbed $750 million, bringing its total raised to $2 billion at a post-money valuation of $35 billion.

VentureBeat

VentureBeat’s mission is to be a digital town square for technical decision-makers to gain knowledge about transformative technology and transact.

Our site delivers essential information on data technologies and strategies to guide you as you lead your organizations. We invite you to become a member of our community, to access:

  • up-to-date information on the subjects of interest to you
  • our newsletters
  • gated thought-leader content and discounted access to our prized events, such as Transform 2021: Learn More
  • networking features, and more

Become a member

Repost: Original Source and Author Link


Nvidia and Harvard develop AI tool that speeds up genome analysis



Researchers affiliated with Nvidia and Harvard today detailed AtacWorks, a machine learning toolkit designed to bring down the cost and time needed for rare and single-cell experiments. In a study published in the journal Nature Communications, the coauthors showed that AtacWorks can run analyses on a whole genome in just half an hour compared with the multiple hours traditional methods take.

Most cells in the body carry a complete copy of a person’s DNA, with billions of base pairs crammed into the nucleus. But an individual cell pulls out only the subsection of genetic components it needs to function, with cell types like liver, blood, or skin cells using different genes. The regions of DNA that determine a cell’s function are relatively easy to access, while the rest are wound tightly around proteins.

AtacWorks, which is available from Nvidia’s NGC hub of GPU-optimized software, works with ATAC-seq, a method for finding open areas in the genome pioneered by Harvard professor Jason Buenrostro, one of the paper’s coauthors. ATAC-seq measures the intensity of a signal at every spot on the genome, and peaks in the signal correspond to regions of open DNA. But the fewer cells available, the noisier the data appears, making it difficult to identify which areas of the DNA are accessible.

ATAC-seq typically requires tens of thousands of cells to get a clean signal. Applying AtacWorks produces the same quality of results with just tens of cells, according to the coauthors.
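To see why cell count matters, consider a toy simulation (purely illustrative, not AtacWorks itself): a hypothetical “clean” coverage track with two accessible regions, and a sparse, Poisson-noisy track standing in for the same experiment run with far fewer cells. Naive thresholding on the low-cell track misses much of the true signal.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical "clean" ATAC-seq-like coverage track with two open regions.
n = 1000
clean = np.zeros(n)
clean[200:250] = 40.0   # open-chromatin peak
clean[600:640] = 35.0   # second peak

# Fewer cells -> fewer reads: scale the rate down and draw Poisson counts.
noisy = rng.poisson(clean * 0.05 + 0.5).astype(float)

def call_peaks(signal):
    """Naive peak calling: positions more than 2 std devs above the mean."""
    return set(np.flatnonzero(signal > signal.mean() + 2 * signal.std()))

true_peaks = call_peaks(clean)
noisy_peaks = call_peaks(noisy)

# Simple thresholding on the sparse track recovers only part of the signal.
recall = len(true_peaks & noisy_peaks) / len(true_peaks)
print(f"peak recall from the low-cell track: {recall:.2f}")
```

The constants here (peak heights, the 0.05 downsampling factor, the 2-sigma threshold) are arbitrary choices for the sketch; the point is only that sparser data makes accessible regions harder to separate from background.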

AtacWorks was trained on labeled pairs of matching ATAC-seq datasets, one high-quality and one noisy. Given a downsampled copy of the data, the model learned to predict an accurate high-quality version and identify peaks in the signal. Using AtacWorks, the researchers found that they could spot accessible chromatin, a complex of DNA and protein whose primary function is packaging long molecules into more compact structures, in a noisy sequence of 1 million reads nearly as well as traditional methods did with a clean dataset of 50 million reads.
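The paired-data idea can be sketched in miniature. The following is a hedged illustration of supervised denoising from matching (noisy, clean) tracks, substituting a simple learned 1-D linear filter fit by least squares for the paper’s deep neural network; all signal shapes and noise levels are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(1)

def make_pair(n=2000, width=9, noise_sd=5.0):
    """Simulate a smooth 'clean' coverage track and a noisy observation."""
    clean = np.convolve(rng.poisson(0.02, n) * 30.0, np.ones(width), "same")
    noisy = clean + rng.normal(0.0, noise_sd, n)
    return noisy, clean

def windows(x, k):
    """Design matrix whose rows are sliding windows of length 2k + 1."""
    pad = np.pad(x, k)
    return np.stack([pad[i:i + len(x)] for i in range(2 * k + 1)], axis=1)

# Fit filter coefficients mapping noisy windows to the clean signal.
k = 5
noisy_tr, clean_tr = make_pair()
coef, *_ = np.linalg.lstsq(windows(noisy_tr, k), clean_tr, rcond=None)

# Apply the learned filter to a held-out noisy track.
noisy_te, clean_te = make_pair()
denoised = windows(noisy_te, k) @ coef

mse_raw = float(np.mean((noisy_te - clean_te) ** 2))
mse_dn = float(np.mean((denoised - clean_te) ** 2))
print(f"MSE raw: {mse_raw:.1f}  denoised: {mse_dn:.1f}")
```

Even this crude linear stand-in reduces the error against the clean track, which is the core of the training setup: given enough paired examples, a model learns a mapping from downsampled data to a high-quality prediction.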

AtacWorks could allow scientists to conduct research with a smaller number of cells, reducing the cost of sample collection and sequencing. Analysis, too, could become faster and cheaper. Running on Nvidia Tensor Core GPUs, AtacWorks took under 30 minutes for inference on a genome, a process that would take 15 hours on a system with 32 CPU cores.

In the Nature Communications paper, the Harvard researchers applied AtacWorks to a dataset of stem cells that produce red and white blood cells — rare subtypes that couldn’t be studied with traditional methods. With a sample set of only 50 cells, the team was able to use AtacWorks to identify distinct regions of DNA associated with cells that develop into white blood cells, and separate sequences that correlate with red blood cells.

“With very rare cell types, it’s not possible to study differences in their DNA using existing methods,” Nvidia researcher Avantika Lal, first author on the paper, said. “AtacWorks can help not only drive down the cost of gathering chromatin accessibility data, but also open up new possibilities in drug discovery and diagnostics.”



Mobileye partners with Lohr and Transdev to develop self-driving shuttles



Intel subsidiary Mobileye today announced a partnership with Transdev ATS and Lohr Group to develop and deploy shared autonomous vehicle transportation services. As part of the deal, Transdev and Lohr plan to integrate Mobileye’s driverless system into Lohr’s i-Cristal shuttle, with plans to launch autonomous public transportation powered by fleets of self-driving shuttles first in Europe and later internationally.

Some experts predict the pandemic will hasten the adoption of autonomous vehicles for delivery. Self-driving cars, vans, and trucks promise to minimize the risk of spreading disease by limiting driver contact, which is perhaps why people across the globe trust the safety of autonomous vehicles more than they did three years ago, according to several surveys by Deloitte.

Over the next year, Mobileye says it’ll work with Transdev and Lohr to ship i-Cristal autonomous shuttles equipped with Mobileye’s autonomous technology. The i-Cristal shuttle, which is electric, features space for up to 16 people, is accessible via a ramp, and can travel at speeds up to 31 miles per hour.

The three companies will initially test vehicles on roadways in France and Israel, aiming for production in 2022 and the launch of commercial operations by 2023.

“Our collaboration with Transdev ATS and Lohr Group serves to grow Mobileye’s global footprint as the autonomous vehicle (AV) technology partner of choice for pioneers in the transportation industry,” Johann Jungwirth, VP of mobility-as-a-service at Mobileye, said in a press release. “Mobileye, Transdev ATS and Lohr Group are shaping the future of shared autonomous mobility, and we look forward to bringing our self-driving solutions to regions all over the world.”

Mobileye, which Intel acquired for $15.3 billion in March 2017, is building two independent self-driving systems. One is based entirely on cameras while the other incorporates radar, lidar sensors, modems, GPS, and other components. The former runs on two of the company’s EyeQ 5 system-on-chips processing 11 cameras, along with lidar and radar for redundancy. The next generation of the chip, EyeQ6, is expected to arrive in 2023 and will continue to be made by Taiwan Semiconductor Manufacturing Co. using its 7-nanometer chipmaking process.

Mobileye’s system isn’t unlike Tesla’s Autopilot, which uses 8 cameras, 12 ultrasonic sensors, and front-facing radar in tandem with an onboard computer to achieve a level of high-speed autonomy. Tesla “shadow-tests” its cars’ autonomous capabilities by collecting anonymized data from hundreds of thousands of customer-owned vehicles “during normal driving operations.” Tesla’s cars, however, omit lidar; CEO Elon Musk has called the laser-based sensors “a crutch.”

Mobileye’s system-on-chip lidar system features digital and “state-of-the-art” signal processing, different scanning modes, rich raw detections, and multi-frame tracking. Using AI algorithms, the company claims its technology can reduce computing requirements while allowing autonomous vehicles to recognize up to 500,000 detections per second.

Mobileye recently announced four new locations — Tokyo, Shanghai, Paris, and potentially New York City — where it plans to test its autonomous vehicle technologies. Beyond this, the company previewed new radar and lidar sensor technologies that will ostensibly detect weak targets far away in the presence of “strong” nearby targets, like a motorcyclist obscured by a semitrailer. By 2022, Mobileye is aiming to deploy robo-taxi fleets in three major cities, including 100 cars with lidar and radar in Tel Aviv, with the hardware cost per robo-taxi coming in at $10,000 to $20,000. By 2025, Mobileye expects to bring the cost of a self-driving system below $5,000.
