What is AI hardware? How GPUs and TPUs give artificial intelligence algorithms a boost

Most computers and algorithms — including, at this point, many artificial intelligence (AI) applications — run on general-purpose circuits called central processing units, or CPUs. But when certain calculations are performed often enough, computer scientists and electrical engineers design special circuits that can do the same work faster or more accurately. Now that AI algorithms are so common and essential, the specialized circuits and chips built for them are becoming common and essential too. 

The circuits are found in several forms and in different locations. Some offer faster creation of new AI models. They use multiple processing circuits in parallel to churn through millions, billions or even more data elements, searching for patterns and signals. These are used in the lab at the beginning of the process by AI scientists looking for the best algorithms to understand the data. 

Others are being deployed at the point where the model is being used. Some smartphones and home automation systems have specialized circuits that can speed up speech recognition or other common tasks. They run the model more efficiently at the place it is being used by offering faster calculations and lower power consumption. 

Scientists are also experimenting with newer designs for circuits. Some, for example, want to use analog electronics instead of the digital circuits that have dominated computers. These different forms may offer better accuracy, lower power consumption, faster training and more. 

What are some examples of AI hardware? 

The simplest examples of AI hardware are the graphical processing units, or GPUs, that have been redeployed to handle machine learning (ML) chores. Many ML packages have been modified to take advantage of the extensive parallelism available inside the average GPU. The same hardware that renders scenes for games can also train ML models because in both cases there are many tasks that can be done at the same time. 
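
To illustrate how an ML framework hands the same parallel work to either a CPU or a GPU, here is a minimal, hedged PyTorch sketch: it trains a tiny classifier and simply switches devices when a GPU is present. The model, data and sizes are invented for illustration, not taken from any benchmark.

```python
# Minimal sketch (not a benchmark): the same training loop runs on CPU or GPU,
# and the framework parallelizes the matrix math across whatever cores exist.
import torch
import torch.nn as nn

device = "cuda" if torch.cuda.is_available() else "cpu"  # fall back gracefully

# Synthetic data: 10,000 samples, 128 features, 2 classes (illustrative only).
X = torch.randn(10_000, 128, device=device)
y = torch.randint(0, 2, (10_000,), device=device)

model = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 2)).to(device)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.CrossEntropyLoss()

for epoch in range(5):
    optimizer.zero_grad()
    loss = loss_fn(model(X), y)   # thousands of multiply-adds run in parallel
    loss.backward()
    optimizer.step()

print(f"device={device}, final loss={loss.item():.4f}")
```

The only change needed to move the workload from CPU to GPU is the device string, which is exactly why ML packages could be adapted to GPUs so readily.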

Some companies have taken this same approach and extended it to focus only on ML. These newer chips, sometimes called tensor processing units (TPUs), don’t try to serve both game display and learning algorithms. They are completely optimized for AI model development and deployment. 

There are also chips optimized for different parts of the machine learning pipeline. Some may be better at creating models because they can juggle large datasets, while others excel at applying a trained model to incoming data to see whether it can find an answer. The latter can be optimized to use less power and fewer resources, making them easier to deploy in mobile phones or other places where users want to rely on AI without creating new models. 

Additionally, there are basic CPUs that are starting to streamline their performance for ML workloads. Traditionally, many CPUs have focused on double-precision floating-point computations because they are used extensively in games and scientific research. Lately, some chips are emphasizing single-precision floating-point computations because they can be substantially faster. The newer chips are trading off precision for speed because scientists have found that the extra precision may not be valuable in some common machine learning tasks — they would rather have the speed.
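
To see why designers trade precision for speed, the small NumPy sketch below times the same matrix multiplication in double and single precision and reports how little the answers differ. The matrix size is arbitrary, and the exact speedup will vary by machine.

```python
# Illustrative only: compare float64 vs. float32 matrix multiplication.
import time
import numpy as np

n = 2048                               # arbitrary size for the demo
a64 = np.random.rand(n, n)             # NumPy arrays are float64 by default
b64 = np.random.rand(n, n)
a32, b32 = a64.astype(np.float32), b64.astype(np.float32)

t0 = time.perf_counter(); c64 = a64 @ b64; t64 = time.perf_counter() - t0
t0 = time.perf_counter(); c32 = a32 @ b32; t32 = time.perf_counter() - t0

# The answers differ slightly; for many ML workloads that gap doesn't matter.
max_diff = np.max(np.abs(c64 - c32))
print(f"float64: {t64:.3f}s  float32: {t32:.3f}s  max difference: {max_diff:.2e}")
```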

In all these cases, many of the cloud providers are making it possible for users to spin up and shut down multiple instances of these specialized machines. Users don’t need to invest in buying their own and can just rent them when they are training a model. In some cases, deploying multiple machines can be significantly faster, making the cloud an efficient choice. 

How is AI hardware different from regular hardware? 

Many of the chips designed for accelerating artificial intelligence algorithms rely on the same basic arithmetic operations as regular chips. They add, subtract, multiply and divide as before. Their biggest advantage is that they pack in many cores, often smaller ones, so they can process this data in parallel. 

The architects of these chips usually try to tune the channels for bringing data in and out of the chip, because the size and nature of the data flows are often quite different from general-purpose computing. Regular CPUs may process many more instructions but relatively little data; AI processing chips generally work with large data volumes. 

Some companies deliberately embed many very small processors in large memory arrays. Traditional computers separate the memory from the CPU; orchestrating the movement of data between the two is one of the biggest challenges for machine architects. Placing many small arithmetic units next to the memory speeds up calculations dramatically by eliminating much of the time and organization devoted to data movement. 

Some companies also focus on creating special processors for particular types of AI operations. The work of creating an AI model through training is much more computationally intensive and involves more data movement and communication. When the model is built, the need for analyzing new data elements is simpler. Some companies are creating special AI inference systems that work faster and more efficiently with existing models. 
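
The split between training and inference shows up directly in ML frameworks: training needs gradients and optimizer state, while inference only needs a forward pass. The hedged PyTorch sketch below, with a made-up placeholder model, shows the lighter inference path that dedicated inference chips are built to accelerate.

```python
# Sketch: inference skips gradient tracking, so it needs far less memory and compute.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 2))  # placeholder model
model.eval()                       # switch layers like dropout/batchnorm to inference behavior

new_data = torch.randn(32, 128)    # a batch of incoming data elements

with torch.no_grad():              # no gradients: the workload is just a forward pass
    scores = model(new_data)
    predictions = scores.argmax(dim=1)

print(predictions[:10])
```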

Not all approaches rely on traditional arithmetic methods. Some developers are creating analog circuits that behave differently from the traditional digital circuits found in almost all CPUs. They hope to create even faster and denser chips by forgoing the digital approach and tapping into some of the raw behavior of electrical circuitry. 

What are some advantages of using AI hardware?

The main advantage is speed. It is not uncommon for some benchmarks to show that GPUs are more than 100 times or even 200 times faster than a CPU. Not all models and all algorithms, though, will speed up that much, and some benchmarks are only 10 to 20 times faster. A few algorithms aren’t much faster at all. 
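
Speedups like these are easy to measure informally. The sketch below times one matrix multiplication on the CPU and, if a CUDA-capable GPU happens to be available, the same operation on the GPU; the sizes are arbitrary and the ratio will differ widely across hardware and algorithms.

```python
# Rough, informal timing -- not a rigorous benchmark like MLPerf.
import time
import torch

n = 4096
a = torch.randn(n, n)
b = torch.randn(n, n)

t0 = time.perf_counter()
_ = a @ b
cpu_time = time.perf_counter() - t0

if torch.cuda.is_available():
    a_gpu, b_gpu = a.cuda(), b.cuda()
    _ = a_gpu @ b_gpu                      # warm-up so one-time setup isn't timed
    torch.cuda.synchronize()
    t0 = time.perf_counter()
    _ = a_gpu @ b_gpu
    torch.cuda.synchronize()               # wait for the async GPU work to finish
    gpu_time = time.perf_counter() - t0
    print(f"CPU: {cpu_time:.3f}s  GPU: {gpu_time:.3f}s  speedup: {cpu_time / gpu_time:.1f}x")
else:
    print(f"CPU: {cpu_time:.3f}s (no GPU available for comparison)")
```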

One advantage that is growing more important is power consumption. In the right combinations, GPUs and TPUs can use less electricity to produce the same result. While GPU and TPU cards are often big power consumers, they run so much faster that they can end up saving electricity. This is a big advantage when power costs are rising. They can also help companies produce “greener AI” by delivering the same results while using less electricity and consequently producing less CO2. 

The specialized circuits can also be helpful in mobile phones or other devices that must rely upon batteries or less copious sources of electricity. Some applications, for instance, rely upon fast AI hardware for very common tasks like waiting for the “wake word” used in speech recognition. 

Faster, local hardware can also eliminate the need to send data over the internet to a cloud. This can save bandwidth charges and electricity when the computation is done locally. 

What are some examples of how leading companies are approaching AI hardware?

The most common forms of specialized hardware for machine learning continue to come from the companies that manufacture graphical processing units. Nvidia and AMD create many of the leading GPUs on the market, and many of these are also used to accelerate ML. While these chips can accelerate tasks like rendering computer games, some are starting to come with enhancements designed especially for AI. 

Nvidia, for example, adds a number of multiprecision operations that are useful for training ML models and calls these Tensor Cores. AMD is also adapting its GPUs for machine learning and calls this approach CDNA2. The use of AI will continue to drive these architectures for the foreseeable future. 

As mentioned earlier, Google makes its own hardware for accelerating ML, called Tensor Processing Units or TPUs. The company also delivers a set of libraries and tools that simplify deploying the hardware and the models they build. Google’s TPUs are mainly available for rent through the Google Cloud platform.

Google is also adding a version of its TPU design to its Pixel phone line to accelerate any of the AI chores that the phone might be used for. These could include voice recognition, photo improvement or machine translation. Google notes that the chip is powerful enough to do much of this work locally, saving bandwidth and improving speeds because, traditionally, phones have offloaded the work to the cloud. 

Many of the cloud companies like Amazon, IBM, Oracle, Vultr and Microsoft are installing these GPUs or TPUs and renting time on them. Indeed, many of the high-end GPUs are not intended for users to purchase directly because it can be more cost-effective to share them through this business model. 

Amazon’s cloud computing systems are also offering a new set of chips built around the ARM architecture. The latest versions of these Graviton chips can run lower-precision arithmetic at a much faster rate, a feature that is often desirable for machine learning. 

Some companies are also building simple front-end applications that help data scientists curate their data and then feed it to various AI algorithms. Google’s Colab or AutoML, Amazon’s SageMaker, Microsoft’s Machine Learning Studio and IBM’s Watson Studio are just a few examples of options that hide any specialized hardware behind an interface. These companies may or may not use specialized hardware to speed up the ML tasks and deliver them at a lower price, but the customer may never know. 

How startups are approaching AI hardware

Dozens of startups are approaching the job of creating good AI chips. These examples are notable for their funding and market interest: 

  • D-Matrix is creating a collection of chips that move the standard arithmetic functions to be closer to the data that’s stored in RAM cells. This architecture, which they call “in-memory computing,” promises to accelerate many AI applications by speeding up the work that comes with evaluating previously trained models. The data does not need to move as far and many of the calculations can be done in parallel. 
  • Untether is another startup that’s mixing standard logic with memory cells to create what they call “at-memory” computing. Embedding the logic with the RAM cells produces an extremely dense — but energy efficient — system in a single card that delivers about 2 petaflops of computation. Untether calls this the “world’s highest compute density.” The system is designed to scale from small chips, perhaps for embedded or mobile systems, to larger configurations for server farms. 
  • Graphcore calls its approach to in-memory computing the “IPU” (for Intelligence Processing Unit) and relies upon a novel three-dimensional packaging of the chips to improve processor density and limit communication times. The IPU is a large grid of thousands of what they call “IPU tiles” built with memory and computational abilities. Together, they promise to deliver 350 teraflops of computing power. 
  • Cerebras has built a very large, wafer-scale chip that’s up to 50 times bigger than a competing GPU. They’ve used this extra silicon to pack in 850,000 cores that can train and evaluate models in parallel. They’ve coupled this with extremely high bandwidth connections to suck in data, allowing them to produce results thousands of times faster than even the best GPUs.  
  • Celestial uses photonics — a mixture of electronics and light-based logic — to speed up communication between processing nodes. This “photonic fabric” promises to reduce the amount of energy devoted to communication by using light, allowing the entire system to lower power consumption and deliver faster results. 

Is there anything that AI hardware can’t do? 

For the most part, specialized hardware does not execute any special algorithms or approach training in a better way. The chips are just faster at running the algorithms. Standard hardware will find the same answers, but at a slower rate.

This equivalence doesn’t apply to chips that use analog circuitry. In general, though, the approach is similar enough that the results won’t necessarily be different, just faster. 

There will be cases where it may be a mistake to trade off precision for speed by relying on single-precision computations instead of double-precision, but these may be rare and predictable. AI scientists have devoted many hours of research to understand how to best train models and, often, the algorithms converge without the extra precision. 

There will also be cases where the extra power and parallelism of specialized hardware lends little to finding the solution. When datasets are small, the advantages may not be worth the time and complexity of deploying extra hardware.

How to leverage AI to boost care management success

Sixty percent of American adults live with at least one chronic condition, and 12% with five or more. They spend exponentially more on healthcare than those without any chronic conditions. For instance, 32% of adults with five or more chronic conditions make at least one ER visit each year. On top of that, 24% have at least one inpatient stay, in addition to an average of 20 outpatient visits — up to 10 times more than those without chronic conditions. In fact, 90% of America’s $4 trillion healthcare expenditures are for people with chronic and mental health conditions, according to the Centers for Disease Control and Prevention (CDC).

The fundamental way healthcare organizations reduce these costs, improve patient experience and ensure better population health is through care management. 

In short, care management refers to the collection of services and activities that help patients with chronic conditions manage their health. Care managers proactively reach out to patients under their care and offer preventative interventions to reduce hospital ER admissions. Despite their best efforts, many of these initiatives provide suboptimal outcomes.

Why current care management initiatives are ineffective

Much of care management today is performed based on past data

For instance, care managers identify the patients with the highest costs over the previous year and begin their outreach programs with them. The biggest challenge with this approach, according to our internal research, is that roughly 50-60% of this year's high-cost patients were low-cost in the previous year. With such a reactive care management approach, and without appropriate outreach, a large number of at-risk patients are left unattended. 

The risk stratification that the care management team uses today is a national model

These models are not localized, so the social determinants of health in individual locations are not taken into account.

The care management team’s focus is chiefly on transitions of care and avoiding readmissions

Our experience working with different clients also shows that readmissions account for only 10-15% of total admissions. What is lacking is a focus on proactive care management and on preventing avoidable future emergency room and hospital admissions, which is key to success in value-based care models.

In any given year, high-cost patients can become low-cost

Without such granular understanding, outreach efforts can be ineffective in curbing the cost of care.

How AI can boost care management success

Advanced analytics and artificial intelligence (AI) open up a significant opportunity for care management. Health risks are complex, driven by a wide range of factors well beyond just one’s physical or mental health. For example, a person with diabetes is at higher risk if they also have a low income and limited access to medical services. Therefore, identifying at-risk patients needs to take additional factors into account to encompass those most in need of care.

Machine learning (ML) algorithms can evaluate a complex range of variables such as patient history, past hospital/ER admissions, medications, social determinants of health, and external data to identify at-risk patients accurately. They can stratify and prioritize patients based on their risk scores, enabling care managers to design their outreach to be effective for those who need it most. 
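
As a minimal sketch of this kind of risk stratification (not any vendor's actual model), the example below uses scikit-learn with invented column names such as prior ER visits, medication count and a social-determinants index; a production system would use far richer features and careful validation.

```python
# Hedged sketch: train a risk model and rank patients for outreach.
# Column names and data are invented for illustration.
import numpy as np
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 5_000
patients = pd.DataFrame({
    "prior_er_visits": rng.poisson(1.0, n),
    "prior_inpatient_stays": rng.poisson(0.3, n),
    "active_medications": rng.integers(0, 12, n),
    "sdoh_index": rng.random(n),            # stand-in for social determinants of health
})
# Synthetic label: whether the patient had a high-cost year (illustration only).
high_cost = (patients["prior_er_visits"] + 3 * patients["prior_inpatient_stays"]
             + rng.normal(0, 1, n)) > 2

X_train, X_test, y_train, y_test = train_test_split(patients, high_cost, random_state=0)
model = GradientBoostingClassifier().fit(X_train, y_train)

# Risk scores let care managers prioritize outreach to the highest-risk patients.
risk_scores = model.predict_proba(X_test)[:, 1]
outreach_order = np.argsort(-risk_scores)[:100]   # top 100 patients by predicted risk
print(X_test.iloc[outreach_order].head())
```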

At an individual level, an AI-enabled care management platform can offer a holistic view of each patient, including their past care, current medication, risks, and accurate recommendations for their future course of action. For the patient in the example above, AI can equip care managers with HbA1C readings, medication possession ratio, and predictive risk scores to deliver proper care at the right time. It can also guide the care manager regarding the number of times they should reach out to each patient for maximum impact.

Unlike traditional risk stratification mechanisms, modern AI-enabled care management systems are self-learning. When care managers enter new information about the patient — such as a recent hospital visit, a change in medication or new habits — the AI adapts its risk stratification and recommendation engines for more effective outcomes. This means that the ongoing care for every patient improves over time.
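
A hedged sketch of that incremental behavior is shown below using scikit-learn's partial_fit, which folds new records into an existing model as care managers log them. The feature layout is invented, and real systems would add drift monitoring and re-validation.

```python
# Sketch: update a risk model incrementally as new patient information arrives.
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(1)
model = SGDClassifier(loss="log_loss")     # logistic-regression-style risk model
                                           # (the loss is named "log" in older scikit-learn)

# Initial fit on historical records (synthetic features for illustration).
X_hist = rng.random((1_000, 4))
y_hist = (X_hist[:, 0] + X_hist[:, 1] > 1).astype(int)
model.partial_fit(X_hist, y_hist, classes=[0, 1])

# Later: a care manager logs a new hospital visit or medication change,
# and the model folds the fresh record into its risk estimates.
X_new = rng.random((1, 4))
y_new = np.array([1])
model.partial_fit(X_new, y_new)
print("updated risk estimate:", model.predict_proba(X_new)[0, 1])
```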

Why payers and providers are reluctant to embrace AI in care management

In theory, the impact of AI in care management is significant — both governments and the private sector are bullish on the possibilities. Yet, in practice, especially among those who use the technology every day, i.e., care managers, there appears to be reluctance. With good reason.

Lack of localized models

For starters, many of today’s AI-based care management solutions aren’t patient-centric. Nationalized models are ineffective for most local populations, throwing predictions off by a considerable margin. Without accurate predictions, care managers lack reliable tools, creating further skepticism. Carefully designed localized models are fundamental to the success of any AI-based care management solution.

Not driven by the care manager’s needs

On the other hand, AI today is not ‘care manager-driven’ either. A ‘risk score’ — the number indicating a patient’s risk — on its own gives the care manager little to act on. AI solutions need to speak the user’s language so that users become comfortable with the suggestions. 

Healthcare delivery is too complex and critical to be left to the black box of an ML algorithm. It needs to be transparent about why each decision was made — there must be explainability that is accessible to the end-user. 

Inability to demonstrate ROI

At the healthcare organizational level, AI solutions must also demonstrate ROI. They must impact the business by moving the needle on its key performance indicators (KPIs). This could include reducing the cost of care, easing the care manager’s burden, minimizing ER visits, and other benefits. These solutions must provide healthcare leaders with the visibility they need into hospital operations as well as delivery metrics.

What is the future of AI in care management?

Despite current challenges and failures in some early AI projects, what the industry is experiencing is merely teething troubles. As a rapidly evolving technology, AI is adapting itself to the needs of the healthcare industry at an unprecedented pace. With ongoing innovation and receptiveness to feedback, AI can become a superpower in the arsenal of healthcare organizations.

Especially in proactive care management, AI can play a significant role. It can help identify at-risk patients and offer care that prevents complications or emergencies. It can enable care managers to monitor progress and give ongoing support without patients ever visiting a hospital to receive it. This will, in turn, significantly reduce the cost of care for providers. It will empower patients to lead healthy lives over the long term and promote overall population health.

Pradeep Kumar Jain is the chief product officer at HealthEM AI.

MLCommons releases new benchmarks to boost ML performance

Understanding the performance characteristics of different hardware and software for machine learning (ML) is critical for organizations that want to optimize their deployments.

One of the ways to understand the capabilities of hardware and software for ML is by using benchmarks from MLCommons — a multi-stakeholder organization that builds out different performance benchmarks to help advance the state of ML technology. 

The MLCommons MLPerf testing regimen has a series of different areas where benchmarks are conducted throughout the year. In early July, MLCommons released benchmarks for ML training, and today it is releasing its latest set of MLPerf benchmarks for ML inference. With training, a model learns from data, while inference is about how a model “infers” or gives a result from new data, such as a computer vision model that uses inference for image recognition.

The benchmarks come from the MLPerf Inference v2.1 update, which introduces new models, including SSD-ResNeXt50 for computer vision, and a new testing division for inference over the network to help expand the testing suite to better replicate real-world scenarios.

“MLCommons is a global community and our interest really is to enable ML for everyone,” Vijay Janapa Reddi, vice president of MLCommons said during a press briefing. “What this means is actually bringing together all the hardware and software players in the ecosystem around machine learning so we can try and speak the same language.”

He added that speaking the same language is all about having standardized ways of claiming and reporting ML performance metrics.

How MLPerf measures ML inference benchmarks

Reddi emphasized that benchmarking is a challenging activity in ML inference, as there are any number of different variables that are constantly changing. He noted that MLCommons’ goal is to measure performance in a standardized way to help track progress.

Inference spans many areas that are considered in the MLPerf 2.1 suite, including recommendations, speech recognition, image classification and object detection capabilities. Reddi explained that MLCommons pulls in public data, then has a trained ML network model for which the code is available. The group then determines a target quality score that submitters of different hardware platforms need to meet. 
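
A heavily simplified sketch of that idea appears below (it is not the actual MLPerf LoadGen harness): a submission-style check pairs a throughput measurement with a target quality score the system must meet. The model, dataset and threshold are placeholders.

```python
# Simplified illustration of the benchmark idea: measure speed, but only count
# the result if the model also meets a target quality score. Not MLPerf code.
import time
import numpy as np

TARGET_ACCURACY = 0.75          # placeholder for the required quality target

def model_predict(batch):
    """Stand-in for a trained network; real submissions run an actual model."""
    return (batch.sum(axis=1) > 0).astype(int)

# Placeholder "public dataset" with known labels.
rng = np.random.default_rng(0)
data = rng.normal(size=(10_000, 64))
labels = (data.sum(axis=1) > 0).astype(int)

start = time.perf_counter()
predictions = model_predict(data)
elapsed = time.perf_counter() - start

accuracy = (predictions == labels).mean()
throughput = len(data) / elapsed

if accuracy >= TARGET_ACCURACY:
    print(f"valid result: {throughput:,.0f} samples/sec at {accuracy:.1%} accuracy")
else:
    print(f"rejected: accuracy {accuracy:.1%} below the {TARGET_ACCURACY:.0%} target")
```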

“Ultimately, our goal here is to make sure that things get improved, so if we can measure them, we can improve them,” he said.

Results? MLPerf Inference has thousands

The MLPerf Inference 2.1 suite benchmark is not a listing for the faint of heart, or those that are afraid of numbers — lots and lots of numbers.

In total the new benchmark generated over 5,300 results, provided by a laundry list of submitters including Alibaba, Asustek, Azure, Biren, Dell, Fujitsu, Gigabyte, H3C, HPE, Inspur, Intel, Krai, Lenovo, Moffett, Nettrix, NeuralMagic, Nvidia, OctoML, Qualcomm, Sapeon and Supermicro.

“It’s very exciting to see that we’ve got over 5,300 performance results, in addition to over 2,400 power measurement results,” Reddi said. “So there’s a wealth of data to look at.”

The volume of data is overwhelming and includes systems that are just coming to market. For example, among Nvidia’s many submissions are several for the company’s next generation H100 accelerator that was first announced back in March.

“The H100 is delivering phenomenal speedups versus previous generations and versus other competitors,” Dave Salvator, director of product marketing at Nvidia, commented during a press briefing that Nvidia hosted.

While Salvator is confident in Nvidia’s performance, he noted that from his perspective it’s also good to see new competitors show up in the latest MLPerf Inference 2.1 benchmarks. Among those new competitors is Chinese artificial intelligence (AI) accelerator vendor Biren Technology. Salvator noted that Biren brought in a new accelerator that he said made a “decent” first showing in the MLPerf Inference benchmarks.

“With that said, you can see the H100 outperform them (Biren) handily and the H100 will be in market here very soon before the end of this year,” Salvator said.

Forget about AI hype, enterprises should focus on what matters to them

The MLPerf Inference numbers, while verbose and potentially overwhelming, also have a real meaning that can help to cut through AI hype, according to Jordan Plawner, senior director of Intel AI products.

“I think we probably can all agree there’s been a lot of hype in AI,” Plawner commented during the MLCommons press briefing. “I think my experience is that customers are very wary of PowerPoint claims or claims based on one model.”

Plawner noted that some models are great for certain use cases, but not all use cases. He said that MLPerf helps him and Intel communicate to customers in a credible way with a common framework that looks at multiple models. While attempting to translate real-world problems into benchmarks is an imperfect exercise, MLPerf has a lot of value.

“This is the industry’s best effort to say here [is] an objective set of measures to at least say — is company XYZ credible,” Plawner said.

Can AI help Oracle take on Salesforce to boost B2B sales?

When Oracle announced its next generation of Fusion Sales in late July, as part of its Oracle Fusion Cloud Customer Experience (CX) powered by artificial intelligence (AI), a PR representative wrote in an email to VentureBeat that the product “raises the bar for the entire industry and stomps all over Salesforce’s territory.” 

While Salesforce declined to comment on Oracle’s claim, it is clear that Oracle is looking to use AI and machine learning (ML) to compete with the customer relationship management (CRM) giant as well as fend off related startups like Gong and Salesloft. The company says it believes its Fusion Sales is the next generation of CRM, focusing on helping sellers in an era of business-to-business (B2B) sales transformation. 

“Increasingly, we’ve realized that the way we built Fusion as a more modern cloud stack not only allows you to orchestrate processes all the way through from the front to the back, but to use machine learning to help people get their jobs done better with CRM tools,” said Rob Tarkoff, executive vice president and general manager of Oracle’s Fusion Cloud Customer Experience. 

The first generation of big tech digital sales tools (which include Salesforce and Microsoft Dynamics) were traditionally about sales forecasting and included a variety of third-party integrations, he explained. Now, Fusion Sales can help sales professionals plan campaigns, target key accounts across both advertising and marketing, and move through a unified selling effort that includes content management, advertising and sales orchestration. 

“We know that we’re not the largest provider of CRM tools – that is Salesforce,” Tarkoff told VentureBeat.  “…but we think that if we drive these innovations, we can raise the bar for the rest of the industry to respond to that.”

Oracle seeks to transform B2B sales post-pandemic

Historically, B2B sales were what Tarkoff refers to as the “last bastion of relationship-based selling.”

“Salespeople and customers had long-term relationships primarily formed physically in person,” he said, adding that this model has changed dramatically: “Obviously today, it’s a lot more about digital engagement – people have confidence in buying a product without ever meeting a sales rep even for large ticket purchases.” 

As a result, B2B sales have become more about using data to orchestrate processes that are personalized for buyers, who have likely already done 70-80% of their research. Reference stories from other customers help companies validate the quality of their offerings. 

“It’s really about how effectively you use references to sell because nobody wants to be the risk-taker, so we’ve turned reference selling into the key part of the B2B flow,” he said. “It’s about finely tuning a personalized set of engagements and references that are much more relevant.”

Ultimately, he explained, the sales rep’s role is no longer to educate the B2B buyer on products but to have a conversation about what like-minded customers did successfully and why they should join the ranks. In addition, it is important to unify what used to be separate sets of activities for sales and marketing.

“You start to unify around really the only thing that matters in B2B, which is having enough mature, qualified opportunities and knowing enough about the journey of those prospects or customers to most effectively convert them to buyers,” Tarkoff said. “It’s turning that into a set of data points that help you determine, through artificial intelligence and machine learning, what is a truly conversation-ready opportunity.” 

While that may sound mechanical, he points out that B2B sales have become much more prescriptive and orchestrated.

“It’s less about having an outgoing personality and winning over your customer with your charm,” he said. 

Using AI to support data-driven decision-making

According to Robert Blaisdell, senior director and analyst at Gartner, by 2026, 65% of B2B sales organizations will transition from intuition-based strategy to data-driven decision-making, using technology like Oracle’s that unites workflows, data and analytics. 

“Most of the big trends we see with AI focus on supporting B2B sales reps in their daily sales tasks by saving time and effort while also providing insights,” he told VentureBeat via email.

These insights can include recommending which leads to prioritize or providing insights about a sales lead or customer, and also enabling a greater sense of empathy from sellers to improve customer engagement with hyper-personalization.
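
As a hedged illustration of the “which leads to prioritize” idea (not Oracle's, Gartner's or any vendor's actual model), the sketch below scores synthetic opportunities from invented engagement features and sorts them into a seller's queue.

```python
# Illustrative lead-scoring sketch with invented features; not a vendor's model.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(7)
n = 2_000
leads = pd.DataFrame({
    "pages_viewed": rng.integers(0, 40, n),
    "demo_requested": rng.integers(0, 2, n),
    "emails_opened": rng.integers(0, 15, n),
    "days_since_last_touch": rng.integers(0, 90, n),
})
# Synthetic historical outcome: did the lead convert?
converted = ((leads["demo_requested"] == 1) & (leads["pages_viewed"] > 10)).astype(int)

model = LogisticRegression(max_iter=1000).fit(leads, converted)
leads["conversion_score"] = model.predict_proba(leads)[:, 1]

# The seller's queue: the most "conversation-ready" opportunities come first.
print(leads.sort_values("conversion_score", ascending=False).head())
```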

“When you look at the impact AI has had on other areas of business, such as supply chain management, customer service engagement, and marketing outreach, we are just beginning to see the impact AI could have on sales effectiveness and efficiency – the potential is great,” he said. 

Today, Blaisdell says he sees AI being implemented throughout many facets of broader sales technology.

“CSOs are working to free up time for sellers, sales leaders, marketing and customer success teams to deal with delicate customer cases that require acute problem-solving skills, empathy and creativity,” Blaisdell said, adding that the use is often seen in improved revenue intelligence, increased sales engagement and better conversation intelligence technologies. 

“These are driven by capabilities that prioritize opportunities based on certain criteria, determine a seller’s next best action to advance or close a deal, or highlight trends to help sales managers zero in on what to coach sellers,” he said. 

Oracle focuses on data quality for machine learning

Tarkoff said Oracle is using the power of the company’s customer data platform (CDP) to “build extensive profiles on each of our prospects that can then be activated more effectively through the machine learning models we bring in, so we’re constantly testing new models.”  

That hinges on the quality of the dataset provided to those models, he explained.

“That’s where we’ve seen the most advancement because one of the problems with machine learning and AI is you have to constantly be refining your dataset to make sure you’re training the models properly,” he said.

Blaisdell pointed out that Oracle allows customers to bring in their own models.

“It’s hard for us to say we can build all the models better than every company if they know their industry,” Tarkoff said. “They want to be able to take their CDP and build on the fly changes and additional attributes and modify the attributes.” 

Oracle’s core approach to its Fusion Applications, built on Oracle Cloud, has always been to build as many advanced machine learning models as possible into flows, from the database layer all the way up to the applications layer. 

“The best and the greatest advancement here is that we are surfacing all those insights in the form of guided flows for a sales rep to follow rather than having to hire teams of data scientists to interpret what’s coming out,” he said. “We built that all into a guided UI that, I think, will get to the next level of machine learning-influenced outcomes because we’ve done the work to make it easier for the salesperson.” 

What sales organizations should consider 

While AI has great potential in B2B sales, Gartner’s Blaisdell says that when it comes to choosing AI tools, organizations need to consider the most pressing set of priorities that AI can solve. 

“Implementing and gaining results beyond the hype can be a challenge if everything is tried to be achieved at once,” he said, and recommended that sales organizations focus on one to three positive outcomes from instituting AI to ensure that process and organizational change can be leveraged with AI. 

One of the main reasons for this is that insights from AI are only as good as the data it uses, he explained. 

“Many sales organizations miss the mark when it comes to consistent high-quality data due to low seller data literacy and lackadaisical input,” Blaisdell said. “If the goal of investment into AI is ultimately to yield insights that shape better decision-making, sales organizations need to ensure their current dataset is clean along with instituting governance policies that helps [ensure] consistent correct data is utilized regardless of the source.” 

The future of AI and B2B sales

While the use of AI for sales organizations has been trending for years, the pandemic was a catalyst for increased use, Blaisdell added. The need for sales organizations to become efficient and effective in a quickly evolving unknown environment drove a rapid evolution in the technology and increased need for usage, he said. 

“We see that trend continuing, but at a steadier pace,” he said. “The future holds where AI can contribute more, helping align sales organizations toward an increased buyer preference for seller-free engagement and multithreaded sales experiences between both seller and digital channels.” 

Nvidia teams up with UN to boost data science in 10 African nations

Nvidia has teamed up with the United Nations to boost data science in 10 African nations.

The public-private effort provides nations with data science workstations and AI education to support census, public health, climate projects and data analytics, said Geoffrey Levene, director of worldwide AI initiatives at Nvidia, in an interview with VentureBeat.

Nvidia is collaborating with the UN Economic Commission for Africa (UNECA) to equip governments and developer communities in 10 nations with data science training and technology to support more informed policymaking and accelerate how resources are allocated.

“It’s the chicken or the egg,” Levene said. “You really can’t train people to use GPUs unless they have access to GPUs.”

The initiative will empower the countries’ national statistical offices — agencies that handle population censuses data, economic policies, healthcare and more — by providing AI hardware, training for data scientists and ecosystem support, Levene said.

Known as the United AI Alliance, the initiative is led by the UNECA; the Global Partnership for Sustainable Development Data (the Global Partnership), which facilitates data partnerships for public good; and Nvidia.

Future Tech, a Long Island, New York-based IT solution provider and member of the Nvidia Partner Network, is the alliance’s inaugural funding and global distribution partner.

“Population data is critical information for policy decisions, whether it’s for urban planning, climate action or monitoring the spread of COVID-19,” said Oliver Chinganya, director of the African Centre for Statistics at UNECA, in a statement. “Without a strong digital infrastructure, many of these nations struggled to collect and report data during the pandemic.”

Better public health data can help countries track real-time COVID infection rates, detect hotspots and target their response efforts. And beyond the pandemic, strengthening data systems will allow local experts to connect population statistics to agricultural data, climate trends and economic indicators.

Laying the groundwork

Future Tech is covering the cost of procurement and overseeing the distribution and deployment of Nvidia-certified systems and data science workstations powered by Nvidia RTX and Nvidia Quadro RTX GPUs for each country — starting with Ghana, Kenya, Rwanda, Senegal and Sierra Leone. Up next will be Guinea, Mali, Nigeria, Somalia and Togo.

“Public-sector institutions play a critical role in providing the data used for policymaking at all levels. But often they face huge gaps in infrastructure and expertise required to tap the benefits of the data revolution,” said Future Tech founder and CEO Bob Venero, in a statement.

To further support the countries’ data science capabilities, Nvidia is teaming up with local universities, research institutes and data science communities to build a pipeline of developers that can extract insights from census information and other data sources.

Nvidia is putting together a curriculum of free Deep Learning Institute courses — starting with fundamentals such as accelerated computing with CUDA Python and accelerated data science workflows — tailored to the needs of each country’s national statistical office. It’s also providing access to workshops and data science teaching kits for each of the nations.
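
An “accelerated computing with CUDA Python” fundamentals exercise typically looks something like the hedged sketch below: a vector-add kernel written with Numba's CUDA support. It assumes an Nvidia GPU and the numba package, and is only meant to show the flavor of the material, not the curriculum's actual code.

```python
# Flavor of a CUDA Python fundamentals exercise (requires an Nvidia GPU and numba).
import numpy as np
from numba import cuda

@cuda.jit
def vector_add(a, b, out):
    i = cuda.grid(1)                 # global thread index
    if i < out.size:
        out[i] = a[i] + b[i]

n = 1_000_000
a = np.random.rand(n).astype(np.float32)
b = np.random.rand(n).astype(np.float32)
out = np.zeros_like(a)

threads_per_block = 256
blocks = (n + threads_per_block - 1) // threads_per_block
vector_add[blocks, threads_per_block](a, b, out)   # numba copies the arrays to the GPU

print("max error:", np.max(np.abs(out - (a + b))))
```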

This work extends the company’s support of AI and data science in Africa through the Nvidia Inception startup program and the Nvidia Emerging Chapters initiative, which bolsters developer communities in emerging markets with education and technical resources.

Using data to drive environmental and social progress

Around the world, the pandemic has accelerated the transition to digitization. The United AI Alliance is supporting this transformation by working with grassroots groups at the core of AI development in Africa, with the goal of enabling data practitioners in every region to build meaningful solutions to local challenges.

Many of the continent’s developers are part of local technology communities, including groups like the Kenya-based AI Center of Excellence or nonprofit organization Data Science Africa, Nvidia said.

United AI Alliance is pairing many of these developers with governments to drive new data analysis projects.

“To help countries fully harness the data revolution for sustainable development, we must meet developers where they are,” said Claire Melamed, CEO of the Global Partnership, in a statement. “This collaboration is a great example of the Sustainable Development Goals in action, with technology creating a path to quality education and providing meaningful work as well as individual economic growth.”

While the project’s initial focus is in Africa, the collaborators plan to roll out the same model in Southeast Asia and Latin America.

Levene said the partnership came about last year as Nvidia was discussing how to expand its capabilities in Africa, drawing on its existing relationship with the Global Partnership. Nvidia and Future Tech decided to become the primary partners and fund the program to bring hardware infrastructure to Africa.

“The mission is pretty simple. It’s starting with data science and national statistical offices of at least 10 nations, but we want to expand beyond these 10 nations. And we’re going to expand across all of Africa, but also Southeast Asia and Latin America as well,” Levene said. “The idea is to uplevel capabilities for AI and science in emerging nations. And right now, there is a focus on national statistical offices.”

He added, “We’re hoping a better policy is developed from that census data. For instance, between census surveys, you might have shifts of where poverty pockets might be in the nation. And with the insight that data science can bring to the data, you can see how populations might be flowing. And therefore you might be able to make better policy that can have better intended consequences.”

In the 10 highlighted countries, flooding is often a major problem. With data science, those countries will be better able to plan infrastructure and tackle big engineering challenges. It can also help with farming, climate change, and food shortages.

How self-supervised learning may boost medical AI progress

Self-supervised learning has been a fast-rising trend in artificial intelligence (AI) over the past couple of years, as researchers seek to take advantage of large-scale unannotated data to develop better machine learning models. 

In 2020, Yann LeCun, Meta’s chief AI scientist, said supervised learning, which entails training an AI model on a labeled dataset, would play a diminishing role as self-supervised learning came into wider use. 

“Most of what we learn as humans and most of what animals learn is in a self-supervised mode, not a reinforcement mode,” he told a virtual session audience during the International Conference on Learning Representation (ICLR) 2020. And in a 2021 Meta blog post, LeCun explained that self-supervised learning “obtains supervisory signals from the data itself, often leveraging the underlying structure in the data.” Because of that, it can make use of a “variety of supervisory signals across co-occurring modalities (e.g., video and audio) and across large datasets — all without relying on labels.” 

Growing use of self-supervised learning in medicine

Those advantages have led to notable growth in the use of self-supervised learning in healthcare and medicine, thanks to the vast amount of unstructured data available in that industry – including electronic health records and datasets of medical images, bioelectrical signals, and sequences and structures of genes and proteins. Previously, the development of medical applications of machine learning had required manual annotation of data, often by medical experts. 

This was a bottleneck to progress, said Pranav Rajpurkar, assistant professor of biomedical informatics at Harvard Medical School. Rajpurkar leads a research lab focused on deep learning for label-efficient medical image interpretation, clinician-AI collaboration design, and open benchmark curation. 

“We’ve seen a lot of exciting advancements with our stable data sets,” he told VentureBeat.

But a “paradigm shift” was necessary to go from 100 algorithms that do very specific medical tasks to the thousands needed without going through a laborious, intensive process. That is where self-supervised learning, with its ability to predict any unobserved or hidden part of an input from any observed or unhidden part of an input, has been a game-changer. 
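
A toy, hedged sketch of that “predict the hidden part from the observed part” idea is below: a tiny PyTorch masking objective on synthetic vectors. Real medical models use far larger networks and real unlabeled data; the point here is only that no human labels appear anywhere.

```python
# Toy self-supervised pretext task: reconstruct masked-out features from the rest.
# No labels are used anywhere -- the "supervision" comes from the data itself.
import torch
import torch.nn as nn

torch.manual_seed(0)
X = torch.randn(4_096, 32)                 # unlabeled data (synthetic stand-in)

encoder = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 32))
optimizer = torch.optim.Adam(encoder.parameters(), lr=1e-3)

for step in range(200):
    mask = (torch.rand_like(X) > 0.3).float()   # hide roughly 30% of each input
    corrupted = X * mask
    reconstructed = encoder(corrupted)
    # Loss is measured only on the hidden positions the model had to predict.
    loss = ((reconstructed - X) ** 2 * (1 - mask)).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

print("final reconstruction loss on masked features:", loss.item())
```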

Highlighting self-supervised learning

In a recent review paper in Nature Biomedical Engineering, Rajpurkar, along with cardiologist, scientist and author Eric Topol and student researcher Rayan Krishnan, highlighted self-supervised methods and models used in medicine and healthcare, as well as promising applications of self-supervised learning for the development of models leveraging multimodal datasets, and the challenges of collecting unbiased data for their training. 

The paper, Rajpurkar said, was aimed at “communicating the opportunities and challenges that underlie the shift in paradigm we’re going to see over the upcoming years in many applications of AI, most certainly including medicine.” 

With self-supervised learning, Rajpurkar explained, he “… can learn about a certain data source, whether that’s a medical image or signal, by using unlabeled data. That allows me a great starting point to do any task I care about within medicine and beyond without actually collecting large stable datasets.”

Big achievements unlocked

In 2019 and 2020, Rajpurkar’s lab saw some of the first big achievements that self-supervised learning was unlocking for interpreting medical images, including chest X-rays. 

“With a few modifications to algorithms that helped us understand natural images, we reduced the number of chest X-rays that had to be seen with a particular disease before we could start to do well at identifying that disease,” he said. 

Rajpurkar and his colleagues applied similar principles to electrocardiograms.

“We showed that with some ways of applying self-supervised learning, in combination with a bit of physiological insights in the algorithm, we were able to leverage a lot of unlabeled data,” he said.

Since then, he has also applied self-supervised learning to lung and heart sound data.

“What’s been very exciting about deep learning as a whole, but especially in the recent year or two, is that we’ve been able to transfer our methods really well across modalities,” Rajpurkar said. 

Self-supervised learning across modalities

For example, another soon-to-be-published paper showed that even with zero-annotated examples of diseases on chest X-rays, Rajpurkar’s team was actually able to detect diseases on chest X-rays and classify them nearly at the level of radiologists across a variety of pathologies.  

“We basically learned from images paired with radiology reports that were dictated at the time of their interpretation, and combined these two modalities to create a model that could be applied in a zero-shot way – meaning labeled samples were not necessary to be able to classify different diseases,” he said. 
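 
A heavily simplified, hedged sketch of that paired-modality idea is below, in the spirit of CLIP-style contrastive training rather than the paper's actual model: learn a shared embedding for image and report features, then classify a new image by comparing it against embedded text prompts, with no labeled examples. The encoders and data are toy stand-ins.

```python
# Simplified image/report contrastive sketch; encoders and data are toy stand-ins.
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)
n, d_img, d_txt, d_emb = 512, 128, 64, 32

# Pretend features extracted from chest X-rays and their paired report text.
image_feats = torch.randn(n, d_img)
report_feats = 0.5 * image_feats[:, :d_txt] + 0.5 * torch.randn(n, d_txt)

img_proj = nn.Linear(d_img, d_emb)
txt_proj = nn.Linear(d_txt, d_emb)
opt = torch.optim.Adam(list(img_proj.parameters()) + list(txt_proj.parameters()), lr=1e-3)

for step in range(300):
    zi = F.normalize(img_proj(image_feats), dim=1)
    zt = F.normalize(txt_proj(report_feats), dim=1)
    logits = zi @ zt.T / 0.07                      # similarity of every image to every report
    targets = torch.arange(n)                      # the matching report is the "right answer"
    loss = (F.cross_entropy(logits, targets) + F.cross_entropy(logits.T, targets)) / 2
    opt.zero_grad()
    loss.backward()
    opt.step()

# "Zero-shot" use: compare a new image embedding against embedded text prompts
# (e.g., stand-ins for "no finding" vs. "pneumonia") and pick the closest one.
prompt_feats = torch.randn(2, d_txt)               # stand-ins for encoded text prompts
new_image = F.normalize(img_proj(image_feats[:1]), dim=1)
prompts = F.normalize(txt_proj(prompt_feats), dim=1)
print("predicted class index:", (new_image @ prompts.T).argmax(dim=1).item())
```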

Whether you’re working with proteins, images or text, the process now borrows from the same set of frameworks, methods and terminology, in a way that is more unified than it was even two or three years ago.

“That’s exciting for the field because it means that a set of advances on a general set of tools helps everybody working across and on these very specific modalities,” he said. 

In medical image interpretation, which has been Rajpurkar’s research focus for many years, this is “absolutely revolutionary,” he said. “Rather than thinking of solving problems one at a time and iterat[ing] this process 1,000 times, I can solve a much larger set of problems all at once.”

Momentum to apply methods

These possibilities have created momentum towards developing and applying self-supervised learning methods in medicine and healthcare, and likely for other industries that also have the ability to collect data at scale, said Rajpurkar, especially those industries that don’t have the sensitivity associated with medical data. 

Going forward, he adds that he is interested in getting closer to solving the full swath of potential tasks that a medical expert does.

“The goal has always been to enable intelligent systems that can increase the accessibility of medicine and healthcare to a large audience,” he said, adding that what excites him is building solutions that don’t just solve one narrow problem: “We’re working toward a world with models that combine different signals so physicians or patients are able to make intelligent decisions about diagnoses and treatments.” 

Dell, Nvidia and VMware partner to boost data center speed

For much of the history of computer virtualization, workloads have been limited by the power of CPUs and, to a lesser extent, GPUs to handle computation, networking, storage, security and artificial intelligence (AI) requirements. There is, however, another way.

At the VMware Explore conference today, Dell Technologies and Nvidia are officially announcing the launch of a new data center solution that integrates Dell PowerEdge servers and the new VMware vSphere 8 virtualization platform with Nvidia GPUs and, for the first time, Nvidia BlueField-2 DPUs (data processing units).

A DPU is a dedicated piece of silicon hardware designed to handle certain data processing tasks. Those tasks can include security and network routing for data traffic, in an approach intended to offload work from CPUs and GPUs so they can focus on the core computing tasks of a given workload. For the last two years, VMware had been working with Nvidia on an effort known as Project Monterey to enable the vSphere virtualization platform to support the BlueField DPUs, and that effort has finally come to fruition.

“Modern applications such as AI, are continuing to generate massive amounts of data and processing that data is consuming CPU cycles,” Kevin Deierling, senior vice president of networking at Nvidia explained during a press briefing.

Deierling commented that another issue impacting CPU utilization is the way that modern applications are now built. Modern virtualized and container-based applications are no longer single monolithic stacks, but rather are composed with a set of distributed micro-services that consume more CPU cycles.

“CPU capacity is being consumed both with the security aspects of moving data around and the  massive amounts of east-west traffic to allow these distributed applications to communicate with each other and actually share all of the data,” Deierling said. “So that’s the problem that we’re able to solve with this new BlueField DPU data processing unit integrated with vSphere.”

The new era of DPUs for CPU offload is here

The move toward more usage of DPUs as well as infrastructure processing units (IPUs) is part of an emerging industry trend.

Back on June 21, the Linux Foundation launched a new initiative around DPUs called the Open Programmable Infrastructure Project (OPI), which counts Nvidia, Intel, Dell and Marvell among its members. The goal of OPI is to help develop industry open standards around DPUs and IPUs. Marvell has been building out its own Octeon DPU technology, while Intel has been developing its infrastructure process unit (IPU) approach.

Deierling explained that the Nvidia BlueField is a new class of accelerated computing processor that runs the infrastructure software of the data center. The BlueField combines networking accelerators and embedded ARM CPU cores. 

“This combination simplifies infrastructure and management, boosts performance and strengthens security,” Deierling said. “And now this is all fully integrated with VMware vSphere running on the BlueField DPU.”

How Nvidia BlueField will accelerate security and AI for VMware

The new VMware vSphere 8 release will now support Nvidia BlueField-2 DPUs, which will have a significant impact on networking, storage and AI workloads.

Part of VMware’s overall virtualization platform is the company’s NSX software-defined networking technology. NSX in recent years has also played a strong role in VMware’s security strategy, enabling network isolation and firewall capabilities.

“With the NSX security running on the DPU, enterprises can now put a firewall in every server,” Deierling said. 

The benefit of having an NSX-based firewall with the Nvidia BlueField DPU is that it can help organizations to support zero-trust efforts. The basic idea behind zero trust is to have continuous authentication and pervasive encryption to help protect data. Deierling added that having NSX networking security running on the BlueField DPU also provides a new layer of isolation between the application and the infrastructure processing domains. 

Nvidia expects that the integration of its DPU technology with VMware’s vSphere will also accelerate AI workloads. Deierling noted that the DPU optimization is fully integrated with Nvidia AI Enterprise running on VMware vSphere as both virtual machines and containers.

For developers and existing applications, getting the benefit of the new BlueField-2 DPUs will happen automatically.

“One of the things that we strive to do is preserve all of the existing API’s and interfaces and we’ve worked very closely with VMware,” Deierling said. “The API’s and all of the things that you might use, you’ll call those and then instead of it executing on an x86 CPU, it will actually simply be offloaded and accelerated onto the BlueField-2, so really from an application perspective the consumption of those APIs is identical.”

General availability of the new Dell Technologies servers with Nvidia BlueField-2 DPU and VMware vSphere 8 is set for later this year. Nvidia has also set up an environment with its LaunchPad service to let users virtually try out the technology before it is physically available.

VentureBeat’s mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings.


Categories
Game

Microsoft Edge update brings Xbox streaming ‘Clarity Boost’ to everyone

Microsoft is hoping to make Edge the browser of choice for gamers. The company is rolling out a host of gaming-related updates to most users, including perks for game streaming. A new (if long in development) Clarity Boost feature improves the visual quality of console titles when you’re using Xbox Cloud Gaming on a Windows 10 or 11 PC. The spatial upscaling technology won’t make you forget that it’s a stream, but the sample Microsoft offered suggests it will reduce the muddy look that sometimes plagues remote games.

You won’t need an Xbox Game Pass Ultimate subscription for the other improvements. Windows 10 and 11 users will also see a toggle in Efficiency mode that automatically reduces Edge’s resource use when you start a PC game. You might not have to close your browser to wring every last drop of performance out of your system.

Regardless of platform, there’s an optional gaming-oriented homepage that points you to news, livestreams and new releases, and offers quick access to the Xbox Cloud Gaming catalog. You can also visit a dedicated games menu that offers free-to-play arcade and casual titles to keep you entertained during uneventful meetings.

This isn’t the first browser built for gamers. Opera GX launched three years ago with similar features, such as lower resource usage and quick access to livestreams. Microsoft features like Clarity Boost might be more appealing in some cases, though, and Edge’s ubiquity on Windows systems gives it better odds of widespread adoption.



Categories
Computing

Six Easy Ways to Boost Your Mac’s Performance

Is your Mac feeling slow? If so, there’s no need to download extra tools to boost its performance.

Just as with improving your Mac’s security, you can boost your MacBook’s performance using tools built right into MacOS.

Reduce visual effects

At the top of our list is a simple trick. MacOS has a lot of fancy visual effects, but these can take a toll on your system’s RAM and CPU if you have a lower-end machine. The effects can make your Mac feel a bit slow, but you can disable them with ease. Here’s how.

To begin, click the Apple button in the menu bar and choose System Preferences. After that, choose Dock & Menu Bar. You’ll then be able to uncheck the boxes for Animate opening applications and Automatically hide and show the Dock. Finally, set Minimize windows using to Scale effect instead of Genie effect. You might want to turn off magnification in the Dock, too, just to be safe.
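
If you’re comfortable in Terminal, the same Dock tweaks can be applied with the built-in defaults command. Below is a minimal sketch in Python, assuming the standard com.apple.dock preference keys (launchanim, autohide, mineffect, magnification); it restarts the Dock so the changes take effect.

    # reduce_dock_effects.py -- apply the Dock tweaks above from the command line.
    # Assumes the standard com.apple.dock preference keys; run at your own risk.
    import subprocess

    def dock_write(*args):
        # Wrapper around `defaults write com.apple.dock ...`
        subprocess.run(["defaults", "write", "com.apple.dock", *args], check=True)

    dock_write("launchanim", "-bool", "false")     # don't animate opening applications
    dock_write("autohide", "-bool", "false")       # keep the Dock visible (no hide/show)
    dock_write("mineffect", "-string", "scale")    # minimize with Scale instead of Genie
    dock_write("magnification", "-bool", "false")  # turn off Dock magnification

    subprocess.run(["killall", "Dock"], check=True)  # restart the Dock to apply changes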

Remove unused apps


Next up is a simple trick: removing unused apps. Unused apps take up space on your Mac’s hard drive or SSD for no reason, leaving less room for photos and for other apps to store and cache their files. Deleting an app isn’t as simple as it is on Windows, though, so follow our steps below.

First off, click the Apple menu, then choose About this Mac. Click Storage, followed by Manage, and then select Applications. Your apps will show up in a list, and you can click an app, followed by Delete at the bottom, to remove it.

In addition to removing unused apps, we also suggest using Apple’s own storage manager to delete large files. Just click the Apple menu, choose About this Mac, then Storage, followed by Manage. Look for the Reduce Clutter option; click it to review and delete large files and apps from your Mac.
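
If you’d rather survey the biggest space hogs before clicking through the storage manager, a short script can rank them for you. This is only a sketch; the folders it scans (/Applications and your Downloads folder) are examples, and it doesn’t delete anything.

    # list_large_items.py -- rank the largest items in a couple of common folders.
    # Read-only sketch: it prints sizes but never deletes anything.
    from pathlib import Path

    def size_of(path: Path) -> int:
        # Files report their own size; folders are summed recursively.
        if path.is_file():
            return path.stat().st_size
        return sum(f.stat().st_size for f in path.rglob("*") if f.is_file())

    targets = [Path("/Applications"), Path.home() / "Downloads"]  # example folders
    items = []
    for folder in targets:
        for entry in folder.iterdir():
            try:
                items.append((size_of(entry), entry))
            except OSError:
                continue  # skip anything we can't read

    for size, entry in sorted(items, key=lambda pair: pair[0], reverse=True)[:15]:
        print(f"{size / 1_000_000_000:6.2f} GB  {entry}")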

Change or remove startup programs


Is your Mac taking a while to boot? The problem might be startup applications that hold your Mac back as soon as you push the power button and log in. For the best experience, we suggest removing startup apps you don’t need so you always get a clean boot.

To remove startup apps in MacOS, go to System Preferences from the Apple menu, then go to Users & Groups. Next, choose Login Items. From there, click any unwanted app in the list and then click the minus button to remove it.
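
Login items can also be listed and pruned from Terminal by driving System Events with osascript (macOS may ask for automation permission the first time). A minimal sketch follows; the app name at the bottom is a placeholder, not a real recommendation.

    # login_items.py -- list login items, and optionally remove one by name.
    # Uses the built-in osascript tool to talk to System Events.
    import subprocess

    def list_login_items():
        result = subprocess.run(
            ["osascript", "-e",
             'tell application "System Events" to get the name of every login item'],
            capture_output=True, text=True, check=True)
        return [name.strip() for name in result.stdout.split(",") if name.strip()]

    def remove_login_item(name):
        subprocess.run(
            ["osascript", "-e",
             f'tell application "System Events" to delete login item "{name}"'],
            check=True)

    print(list_login_items())
    # remove_login_item("SomeUnwantedApp")  # placeholder name; uncomment to use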

Reindex Spotlight


After you’ve installed a major MacOS update or a security update, your Mac might feel a little slow. Usually, this is because Spotlight is reindexing your hard drive or SSD. In some cases, the process can get stuck, so to speed up your Mac, you may need to trigger a reindex manually.

To reindex Spotlight, open System Preferences > Spotlight and look for the Privacy tab. Click Privacy, add Macintosh HD to the list, and then remove it with the minus button. Spotlight will rebuild its index, and once it finishes, your Mac should feel a bit faster.
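
The same rebuild can be kicked off from Terminal with the built-in mdutil tool, which skips the add-then-remove dance in System Preferences. A minimal sketch (it will prompt for an administrator password):

    # reindex_spotlight.py -- erase and rebuild the Spotlight index for the boot volume.
    import subprocess

    subprocess.run(["sudo", "mdutil", "-E", "/"], check=True)  # erase the index; Spotlight rebuilds it
    subprocess.run(["mdutil", "-s", "/"], check=True)          # report current indexing status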

Update your Mac software


This one is obvious, but we also suggest keeping your Mac up to date to make sure it isn’t running slow. Of course, not every Mac model is still supported by Apple, so depending on which model you have, you might no longer receive firmware and other security updates (which are known to boost Mac performance). You can check for updates at any time by going to the Apple menu and choosing About this Mac > Software Update.
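
You can also check for and install updates from Terminal with Apple’s softwareupdate tool; a minimal sketch:

    # check_updates.py -- list available macOS updates, then optionally install them all.
    import subprocess

    subprocess.run(["softwareupdate", "--list"], check=True)  # show what's available
    # Uncomment to install everything that was listed (needs an admin password):
    # subprocess.run(["sudo", "softwareupdate", "--install", "--all"], check=True)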

If your Mac is no longer getting security or software updates, then you might want to physically update it. By that, we mean buy a new Mac. Newer Macs are more efficient and can be quite affordable. We have a buying guide for every type of Mac — M1 Mac, MacBook Air, MacBook Pro, and the iMac.

Reset NVRAM


Our last tip is a bit more technical, and it isn’t something we recommend for novice users. However, resetting the nonvolatile random access memory (NVRAM) can help when a Mac is behaving oddly. The NVRAM holds display settings, the time zone setting, speaker volume, and the startup volume choice. Again, this is only for advanced users, so try it at your own risk.

According to Apple, you can reset the NVRAM by shutting down your Mac, turning it on, and immediately holding Option, Command, P, and R together for about 20 seconds before releasing the keys. On Mac computers that play a startup sound, you can release the keys after the second startup sound. And on Mac computers that have the Apple T2 Security Chip, you can release the keys after the Apple logo appears and disappears for the second time.
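
If you’re curious what NVRAM actually stores before you reset it, the built-in nvram tool will print the current variables; as used here it is read-only, so it won’t change anything.

    # inspect_nvram.py -- print the variables currently stored in NVRAM (read-only).
    import subprocess

    subprocess.run(["nvram", "-p"], check=True)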

Another trick involves resetting the System Management Controller (SMC). However, that’s reserved for issues with sleep, wake, power, charging your Mac notebook’s battery, or other power-related symptoms. We won’t get into it here and instead point you to Apple’s help article for more.


Categories
AI

Neuro-ID tracks user behavioral data to help companies boost conversion

Neuro-ID, a Montana-based company offering software to combat online fraud and increase conversion rates through behavioral data and analytics, announced that it has raised $35 million.

Neuro-ID says it helps brands in a variety of industries that have significant digital transactions, including Intuit, Square, Affirm, OppFi, and Elephant Insurance. In some cases, the company says it has helped these companies increase conversion by 200% and reduce historical fraud rates by 35%.

Expanding on the company’s technology and the leverage it provides to technical decision-makers, CEO Jack Alton told VentureBeat that Neuro-ID’s “Human Analytics JavaScript” service sits behind digital interactions and, in real time, translates the taps, types, and swipes of digital users into actionable insight. For example, when a new user signs up for a merchant account, Neuro-ID looks at the actual timing of the signup, the frequency of interaction — including changes to personal information — and the user’s timeline of activities.

Alton noted that this provides a valuable new view into the human behind the screen — enabling the company’s clients to better understand the risk and opportunity of onboarding new customers.
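
To make that concrete, here is a deliberately simplified, hypothetical sketch of how raw form events might be turned into signals like those; the event format, field names, and features are illustrative and are not Neuro-ID’s actual product or API.

    # behavioral_features.py -- toy illustration of deriving behavioral signals from form events.
    # The event format, field names, and features are hypothetical, not Neuro-ID's API.
    from dataclasses import dataclass

    @dataclass
    class FormEvent:
        timestamp: float   # seconds since the session started
        field: str         # which form field was touched
        action: str        # "focus", "keystroke", "edit", "paste", "submit"

    def session_features(events: list[FormEvent]) -> dict:
        if not events:
            return {}
        duration = events[-1].timestamp - events[0].timestamp
        edits = sum(1 for e in events if e.action == "edit")    # corrections to entered data
        pastes = sum(1 for e in events if e.action == "paste")  # pasted values
        fields = {e.field for e in events}
        return {
            "session_seconds": duration,
            "edits_to_personal_info": edits,
            "paste_events": pastes,
            "fields_touched": len(fields),
        }

    # Example: a user who pastes their name and repeatedly edits the SSN field.
    events = [
        FormEvent(0.0, "name", "focus"),
        FormEvent(0.4, "name", "paste"),
        FormEvent(2.1, "ssn", "focus"),
        FormEvent(3.0, "ssn", "keystroke"),
        FormEvent(5.6, "ssn", "edit"),
        FormEvent(9.8, "ssn", "edit"),
        FormEvent(12.0, "submit", "submit"),
    ]
    print(session_features(events))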

New visibility dimension into customer analytics

Sharing further on how Neuro-ID is differentiated from its key competitors in the behavioral analytics and security space, Alton said, “Traditional behavioral analytics companies like NuData have used behavioral technology to do things like authenticate existing customers to prevent account takeovers by malicious actors. Neuro-ID is focused on tackling a much larger ‘conversion crisis’ that all digital organizations face.”

“For the past decade, 90% of all digital onboarding journeys have continued to result in frustration or failure. This has frustrated executives and led to endless A/B testing as organizations try to understand why so many people start and end up abandoning their onboarding journey,” he added.

Behavioral strategy plays a key role in today’s corporate strategy and decision-making processes, as highlighted in an article from McKinsey. Neuro-ID says its proprietary “Friction Index” dashboard tries to identify the root cause of friction while screening for fraud attacks.

The funding for the software company came from Canapi Ventures and existing investors Fin VC and TTV Capital. The announcement was made in a recent press release.
