What the growth of AIops solutions means for the enterprise


Without exaggeration, digital transformation is moving at breakneck speed, and the verdict is that it will only move faster. More organizations will migrate to the cloud, adopt edge computing and leverage artificial intelligence (AI) for business processes, according to Gartner.

Fueling this fast, wild ride is data, which is why, for many enterprises, data in its various forms is among their most valuable assets. With more data on hand than ever before, managing and leveraging it efficiently has become a top concern. Chief among those concerns is the inadequacy of traditional data management frameworks to handle the increasing complexities of a digital-forward business climate.

The priorities have changed: Customers are no longer satisfied with immobile traditional data centers and are now migrating to high-powered, on-demand and multicloud ones. According to Forrester’s survey of 1,039 international application development and delivery professionals, 60% of technology practitioners and decision-makers are using multicloud — a number expected to rise to 81% in the next 12 months. But perhaps most important from the survey is that “90% of responding multicloud users say that it’s helping them achieve their business goals.”

Managing the complexities of multicloud data centers

Gartner also reports that enterprise multicloud deployment has become so pervasive that until at least 2023, “the 10 biggest public cloud providers will command more than half of the total public cloud market.”



But that’s not where it ends: customers are also on the hunt for edge, private or hybrid multicloud data centers that offer full visibility of the enterprise-wide technology stack and cross-domain correlation of IT infrastructure components. Justified as these demands are, the functionality comes with great complexity.

Typically, layers upon layers of cross-domain configurations characterize the multicloud environment. However, as newer cloud computing functionalities enter the mainstream, new layers are required, complicating an already complex system.

This is made even more intricate with the rollout of the 5G network and edge data centers to support the increasing cloud-based demands of a global post-pandemic climate. Ushering in what many have called “a new wave of data centers,” this reconstruction creates even greater complexities that place enormous pressure on traditional operational models. 

Change is necessary. But considering that the slightest change in one of the infrastructure, security, networking or application layers could trigger large-scale butterfly effects, enterprise IT teams must come to terms with the fact that they cannot do it alone.

AIops as a solution to multicloud complexity

Andy Thurai, VP and principal analyst at Constellation Research Inc., also confirmed this. For him, the siloed nature of multicloud operations management has resulted in the increasing complexity of IT operations. His solution? AI for IT operations (AIops), an AI industry category coined by tech research firm Gartner in 2016.

Officially defined by Gartner as “the combination of big data and ML [machine learning] in the automation and improvement of IT operation processes,” the detection, monitoring and analytic capabilities of AIops allow it to intelligently comb through countless disparate components of data centers to provide a holistic transformation of its operations. 
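Gartner's definition is abstract, but the detection side can be sketched concretely. The following is an illustrative, simplified example of the kind of statistical check an AIops tool might run on a metric stream (a rolling z-score over a trailing window; actual products apply far more sophisticated ML):

```python
from statistics import mean, stdev

def detect_anomalies(series, window=10, threshold=3.0):
    """Flag points that deviate more than `threshold` standard
    deviations from the trailing window's mean (a rolling z-score)."""
    anomalies = []
    for i in range(window, len(series)):
        baseline = series[i - window:i]
        mu, sigma = mean(baseline), stdev(baseline)
        if sigma == 0:  # flat baseline: no meaningful z-score
            continue
        if abs((series[i] - mu) / sigma) > threshold:
            anomalies.append(i)
    return anomalies

# Steady CPU readings with one spike at index 15
cpu = [40, 41, 39, 40, 42, 41, 40, 39, 41, 40, 40, 41, 39, 40, 41, 95, 40, 41]
print(detect_anomalies(cpu))   # [15]
```

In a real AIops deployment the flagged index would be correlated with events from other layers of the stack rather than surfaced in isolation.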

By 2030, the rise in data volumes and the resulting increase in cloud adoption are projected to drive the global AIops market to $644.96 billion. Enterprises that expect to meet the speed and scale requirements of growing customer expectations will need to turn to AIops; otherwise, they risk poor data management and a consequent fall in business performance.

This need creates a demand for comprehensive and holistic operating models for the deployment of AIops — and that is where Cloudfabrix comes in.

AIops as a composable analytics solution

Inspired to help enterprises ease their adoption of a data-first, AI-first and automate-everywhere strategy, Cloudfabrix today announced the availability of its new AIops operating model. It is equipped with persona-based composable analytics, data and AI/ML observability pipelines and incident-remediation workflow capabilities. The announcement comes on the heels of its recent release of what it describes as “the world-first robotic data automation fabric (RDAF) technology that unifies AIops, automation and observability.”

Identified as key to scaling AI, composable analytics give enterprises the opportunity to organize their IT infrastructure by creating subcomponents that can be accessed and delivered to remote machines at will. Featured in Cloudfabrix’s new AIops operating model is a composable analytics integration with composable dashboards and pipelines.

Offering a 360-degree visualization of disparate data sources and types, Cloudfabrix’s composable dashboards feature field-configurable persona-based dashboards, centralized visibility for platform teams and KPI dashboards for business-development operations. 

Shailesh Manjrekar, VP of AI and marketing at Cloudfabrix, noted in an article published on Forbes that the only way AIops could process all data types to improve their quality and glean unique insights is through real-time observability pipelines. This stance is reiterated in Cloudfabrix’s adoption of not just composable pipelines, but also observability pipeline synthetics in its incident-remediation workflows.

In this synthesis, likely malfunctions are simulated to monitor the behavior of the pipeline and understand the probable causes and their solutions. Also included in the incident-remediation workflow of the model is the recommendation engine, which leverages learned behavior from the operational metastore and NLP analysis to recommend clear remediation actions for prioritized alerts. 
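Cloudfabrix has not published the internals of its synthetic-fault simulation, but the general technique of injecting simulated malfunctions to observe how a pipeline behaves can be sketched as follows (the record format, fault marker and function names are all hypothetical):

```python
import random

def pipeline(record):
    """A toy observability pipeline stage: parse and normalize a metric record."""
    host, value = record.split(",")
    return {"host": host, "value": float(value)}

def with_synthetic_faults(records, fault_rate, seed=42):
    """Randomly corrupt a fraction of records to test how the pipeline
    and its alerting behave before a production rollout."""
    rng = random.Random(seed)
    for r in records:
        yield "corrupted###" if rng.random() < fault_rate else r

def run(records):
    """Process records, counting successes and failures."""
    processed, failed = 0, 0
    for r in records:
        try:
            pipeline(r)
            processed += 1
        except ValueError:
            failed += 1   # in a real system this would raise an alert
    return processed, failed

metrics = [f"host{i},{i}" for i in range(100)]
print(run(with_synthetic_faults(metrics, fault_rate=0.1)))
```

The point of the exercise is that the fault path is executed deliberately, so the team learns how failures surface before customers do.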

To give a sense of the scope, Cloudfabrix’s CEO, Raju Datla, said the launch of its composable analytics is “solely focused on the BizDevOps personas in mind and transforming their user experience and trust in AI operations.”

He added that the launch also “focuses on automation, by seamlessly integrating AIops workflows in your operating model and building trust in data automation and observability pipelines through simulating synthetic errors before launching in production.” Some of the operational personas for whom this model has been designed include CloudOps, BizOps, GitOps, FinOps, DevOps, DevSecOps, executives, ITOps and ServiceOps.

Founded in 2015, Cloudfabrix specializes in enabling businesses to build autonomous enterprises with AI-powered IT solutions. Although the California-based software company markets itself as a foremost data-centric AIops platform vendor, it’s not without competition — especially with contenders like IBM’s Watson AIops, Moogsoft, Splunk and others.




10 top artificial intelligence (AI) solutions in 2022


Among the many drivers of the tech ecosystem’s rapid growth, artificial intelligence (AI) and its subdomains are at the fore. Described by Gartner as the application of “advanced analysis and logic-based techniques” to simulate human intelligence, AI is an all-inclusive system with numerous use cases for individuals and enterprises across industries.

There are many ways of leveraging AI to support, automate and augment human tasks, as seen in the range of solutions available today. These offerings promise to simplify complex tasks with speed and accuracy, and to spur new applications that were impractical or impossible previously. Some question whether the technology will be used for good, or whether it will become more effective than humans for certain business use cases, but its prevalence and popularity cannot be doubted.

What is artificial intelligence (AI) software?

AI software can be defined in several ways. First, a lean description would consider it to be software that is capable of simulating intelligent human behavior. However, a broader perspective sees it as a computer application that learns data patterns and insights to meet specific customer pain points intelligently.

The AI software market includes not just technologies with built-in AI processes, but also the platforms that allow developers to build AI systems from scratch. This could range from chatbots to deep and machine learning software and other platforms with cognitive computing capabilities. 

To get a sense of the scope, AI encompasses the following:

  • Machine learning (ML): Allows a computer to collect data and learn from it to generate insights.
  • Deep learning (DL): A step further in ML used to detect patterns and trends in large volumes of data and learn from them.
  • Neural networks: Interconnecting units that are designed to learn and recognize patterns, much like the human brain. 
  • Natural language processing (NLP): NLP supports AI’s ability to read, understand and process human language.
  • Computer vision: Teaching computers to collect and interpret meaningful data from images and videos. 
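As a minimal, self-contained illustration of the first capability, machine learning, here is a program that "learns" a linear pattern from example data and then generalizes to an unseen input (ordinary least squares in plain Python; real AI software builds on far richer models):

```python
def fit_line(xs, ys):
    """Ordinary least squares for y = a*x + b."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    b = my - a * mx
    return a, b

# Learn the pattern behind the observations (underlying rule: y = 2x + 1)
a, b = fit_line([1, 2, 3, 4], [3, 5, 7, 9])
print(a, b)          # 2.0 1.0
print(a * 10 + b)    # 21.0 -- a prediction for an unseen input
```

The other capabilities on the list (deep learning, NLP, computer vision) generalize this same learn-then-predict loop to vastly larger models and data.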

These capabilities are leveraged to build AI software for different use cases, the top of which are knowledge management, virtual assistance and autonomous vehicles. With the large volumes of data that enterprises must comb through to meet customer demands, there’s an increased need for faster and more accurate software solutions.

As expected, the rise in enterprise-level adoption of AI has accelerated growth of the global AI software market. Gartner places the market at an estimated $62.5 billion in 2022, a 21.3% increase over its 2021 value. By 2025, IDC projects this market to reach $549.9 billion.

4 critical functions of an AI solution

Whether it powers surgical bots in healthcare, detects fraud in financial transactions, strengthens driver assistance technology in the automotive industry or personalizes learning content for students, the overarching purpose of AI solutions can be grouped into four broad functional categories:

1. Automate processes

The automation function of AI applications meets AI’s primary objective of minimizing human intervention in executing tasks, whether mundane and repetitive or complex and challenging. By collecting and interpreting the volumes of data fed into it, an AI solution can determine the next steps in a process and execute them seamlessly. It does this by leveraging ML algorithms to create a knowledge base of structured and unstructured data.

Process automation remains a top enterprise concern, with one survey showing that 80% of companies expect to adopt intelligent automation by 2027.

2. Data analysis and interpretation

A core function of AI solutions, especially for enterprises, is to create knowledge bases of structured and unstructured data and then analyze and interpret such data before making predictions and recommendations from its findings. This is called AI analytics and it uses machine learning to study data and draw patterns. 
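As a toy illustration of AI analytics drawing patterns from data, the sketch below clusters numeric observations, say monthly customer spend, into groups using a tiny one-dimensional k-means (illustrative only; production analytics tools use much richer methods):

```python
def kmeans_1d(values, k=2, iters=20):
    """Tiny 1-D k-means: group numeric observations around k centers."""
    # Spread the initial centers across the sorted values
    centers = sorted(values)[::max(1, len(values) // k)][:k]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for v in values:
            nearest = min(range(k), key=lambda i: abs(v - centers[i]))
            clusters[nearest].append(v)
        centers = [sum(c) / len(c) if c else centers[i]
                   for i, c in enumerate(clusters)]
    return sorted(centers)

# Monthly spend splits into a low-spend and a high-spend segment
print(kmeans_1d([12, 15, 14, 13, 220, 240, 210, 16]))
```

The recovered centers summarize a pattern (two distinct customer segments) that a downstream prediction or recommendation step could act on.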

Whether the analytic tools are predictive, prescriptive, augmented or even descriptive, AI is at the center of determining how the data is prepared, discovering new insights and patterns and predicting business outcomes. Enterprises are also turning to AI for improved data quality.

3. User personalization and engagement 

Building a relationship has become the holy grail of customer acquisition and retention. A study from McKinsey shows that one sure way to do this is through personalization and engagement. AI technologies allow enterprises to make personalized offers to customers and predict and solve their concerns in real-time. This function manifests in programs like conversational chatbots and product recommendations generated from learned customer behavior.
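A product recommender of the kind described can be sketched, at its simplest, as item-to-item co-occurrence counting over past orders (a deliberately minimal stand-in for the learned-behavior models vendors actually ship):

```python
from collections import Counter
from itertools import combinations

def recommend(history, basket, top=2):
    """Suggest items that most often co-occur with the basket's contents
    in past orders (simple item-to-item co-occurrence)."""
    co = Counter()
    for order in history:
        for a, b in combinations(set(order), 2):
            co[(a, b)] += 1
            co[(b, a)] += 1
    scores = Counter()
    for item in basket:
        for (a, b), n in co.items():
            if a == item and b not in basket:
                scores[b] += n
    return [item for item, _ in scores.most_common(top)]

orders = [["phone", "case", "charger"], ["phone", "case"],
          ["phone", "case"], ["phone", "charger"], ["laptop", "mouse"]]
print(recommend(orders, ["phone"], top=1))   # ['case']
```

Real engines replace the raw counts with learned embeddings and real-time behavioral signals, but the shape of the problem is the same.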

Many organizations are still getting up to speed with the technology. Gartner reports that 63% of digital marketers struggle to maximize personalization technology. Their survey of 350 marketing executives revealed that only 17% are actively using AI and ML solutions across the board, although 83% believe in its potency.

4. Business efficacy

Along with greater automation of traditional processes, AI enables new services and capabilities that were not previously feasible. From driverless vehicles and natural language services for consumers to medical breakthroughs that could only have been imagined previously, AI is becoming the base of new products and markets that will continue to unfold.


10 top artificial intelligence (AI) software solutions in 2022

AI software solutions include general platforms that support a range of applications as well as products for narrower, industry-specific use cases. We include a sampling of both in the following representative list. With 56% of organizations adopting AI for at least one business function, there are many options on the market today.

Below are ten examples of AI software solutions available in 2022.

Google Cloud AI

Google’s dominant cloud offering includes assorted tools to support developer, data science and infrastructure use cases. Several speech and language translation tools, vision, audio and video tools and deep learning and machine learning capabilities bring AI functionality to skilled technology practitioners and mass consumer markets. Google was named a leader in Gartner’s Magic Quadrant for Cloud AI Developer Services in 2022.

IBM Watson Studio

Like Google, IBM offers a platform for building and training AI software. The IBM Watson Studio provides a multicloud architecture for developers, data scientists and analysts to “build, run and manage” AI models collaboratively. With capabilities ranging from AutoAI to explainable AI, DL, model drift, modelops and model risk management, the studio gives subject-matter experts the tools they need to either gather and prepare data or create and train AI models.

It also allows these professionals the flexibility to deploy AI models on either public or private cloud (IBM Cloud Pak, Microsoft Azure, Google Cloud, or Amazon Web Services) and on-premises. IT teams can open source these models as they build them with embedded Watson tools like the Natural Language Classifier. Its hybrid environment may also provide developers with more data access and agility.

Salesforce Einstein

Named a leader in Gartner’s Magic Quadrant for CRM Customer Engagement Center thirteen times in a row and the #1 CRM solution for eight consecutive years by the International Data Corporation (IDC), Salesforce provides an advanced kit of sales, marketing and customer experience tools. Salesforce Einstein is an AI product that helps companies identify patterns in customer data. 

This platform has a set of built-in AI technologies supporting the Einstein bots, prediction builder, forecasting, commerce cloud Einstein, service cloud Einstein, marketing cloud Einstein and other functions. Users and developers of new and existing cloud applications can also deploy the platform’s predictive and suggestive capabilities into their models. For example, at Salesforce Einstein’s launch in 2016, John Ball, general manager at Einstein, revealed that by creating Einstein, the company “enables sales professionals to find better prospects and close more deals through predictive lead scoring and automatic data capture to convert leads into opportunities and opportunities into deals.”


Oculeus

Oculeus provides an industry-specific solution. For service providers, network operators and enterprises in the telecom industry that need to protect and defend their communication infrastructure against cyber threats, Oculeus offers a portfolio of software-based solutions that can help them better manage network operations. According to founder and CEO Arnd Baranowski, Oculeus uses AI and automation “to learn about an enterprise’s regular communications traffic and continually monitor it for exceptions to a baseline of expected communications activities. With its AI-driven technologies, suspicious traffic can be identified, investigated and blocked within milliseconds. This is done before any significant financial damage is caused to the enterprise and protects the brand reputation of the telecoms service provider.”

The Communications Fraud Control Association (CFCA)’s 2021 survey of international telecommunication fraud found losses of more than $39.89 billion, a 28% increase over the previous year. Similarly, network operators and security teams are experiencing more fraud threats and attacks.

Among other things, these insights amplify the need for enterprises to switch to a proactive defense approach that outwits adversaries, and this is what Oculeus claims to provide with its AI-powered telecoms fraud protection solutions. In Baranowski’s words, Oculeus’ AI-driven approach to telecoms fraud protection not only “…stop[s] fraudulent telecommunications traffic before any significant financial damage is caused” but also includes extensive automation tools that weed out threats thoroughly.


Edsoma

Edsoma represents another narrow use case. Its AI-based reading application software features real-time, exclusive voice identification and recognition technology designed to uncover the strengths and weaknesses in children’s reading. This follow-along technology identifies users’ spoken words and speaking speed to determine if they are saying the words correctly. A correction program helps put them back on track if they mispronounce something.

As Edsoma founder and CEO Kyle Wallgren explained, once “…the electronic book is read, the child’s voice is transcribed in real-time by the automated speech recognition (ASR) system and immediate results are provided, including pronunciation assessment, phonetics, timing and other facets. These metrics are compiled to help teachers and parents make informed decisions.”
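Edsoma's actual ASR pipeline is proprietary, but the scoring step Wallgren describes, comparing a transcript against the expected passage for accuracy and timing, might look conceptually like this (the function name and scoring scheme are hypothetical simplifications):

```python
def score_reading(expected_text, transcript, seconds):
    """Compare an ASR transcript with the expected passage:
    word accuracy, words per minute and the words that were missed."""
    expected = expected_text.lower().split()
    spoken = transcript.lower().split()
    correct = sum(1 for e, s in zip(expected, spoken) if e == s)
    accuracy = correct / len(expected)
    wpm = len(spoken) / (seconds / 60)
    missed = [e for e, s in zip(expected, spoken) if e != s]
    return accuracy, wpm, missed

acc, wpm, missed = score_reading(
    "the quick brown fox jumps", "the quick crown fox jumps", seconds=10)
print(acc, wpm, missed)   # 0.8 30.0 ['brown']
```

A production system would align words phonetically rather than position-by-position, but the compiled metrics (accuracy, speed, missed words) are the same kind of output teachers and parents would see.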

This technology aims to improve children’s oral reading fluency skills and provide them the necessary support to inculcate a healthy reading culture. Edsoma seeks to establish a share of the $127 billion global edtech market. By leveraging real-time data to provide real-time literacy, Edsoma looks to provide future-focused learning powered by AI.


Appen

Appen has been one of the early leaders as a source for data required throughout the development lifecycle of AI products. This platform provides and improves image and video data, language processing, text and even alphanumeric data.

It follows four steps in preparing data for AI processing. The first is data sourcing, which offers automatic access to more than 250 pre-labeled datasets. The second is data preparation, which provides data annotation, data labeling, knowledge graphs and ontology mapping.

The third stage supports model building and development needs with the help of partners like Amazon Web Services, Microsoft, Nvidia and Google Cloud AI. The final step combines human evaluation and AI system benchmarking, giving developers an understanding of how their models work.

Appen boasts a lingual database of more than 180 languages and a global skill force of over 1 million talents. Of its many features, its AI-assisted data annotation platform is the most popular.


Cognigy

Cognigy is a low-code conversational AI and automation platform recently named a leader in Gartner’s 2022 Magic Quadrant for Enterprise Conversational AI Platforms. As the need for better customer experience (CX) intensifies, more enterprises rely on conversational analytics solutions that dive deep into their customers’ text and voice data to discover insights that inform smarter decisions and automate processes.

This is why Cognigy automates natural communication among employees and customers on multimodal channels and in over 100 languages. In addition, its technology allows enterprises to set up AI-powered voice and chatbots that can address customer concerns with human-like accuracy.

Cognigy also has an analytics feature — Cognigy Insights — that provides enterprises with data-driven insights on the best ways to optimize their virtual agents and contact centers. In addition, the platform allows users to either deploy the technology on the cloud or on-premises. Particularly praised by Gartner for its customer references, flexibility and sustainability, this platform helps businesses create new service experiences for customers.


Synthesis AI

Synthesis AI’s solution generates synthetic data that allows developers to create more capable and ethical AI models. Engineers deploying models on this platform can source well-labeled, photorealistic images and videos, with labels ranging from depth maps and surface normals to segmentation maps and 2D/3D landmarks.

Virtual product prototyping and the chance to build more ethical AI with expanded datasets that provide equal representation of identities and appearances are also among its product offerings. Organizations can deploy this technology across API documentation, teleconferencing, digital humans, identity verification and driver monitoring use cases. With 89% of tech executives believing that synthetic data will transform their industry, Synthesis AI’s technology may be a great fit.


Tealium

Tealium’s data orchestration platform is positioned as a universal data hub for businesses seeking a robust customer data platform (CDP) for marketing engagement. This CDP provider offers a tray of solutions in its customer data integration system that allows businesses to connect better with their customers. Tealium’s offerings include a tag management system for enterprises to track and unify their digital marketing deployments (Tealium iQ), an API hub to facilitate enterprise interconnectedness, an ML-powered data platform (Tealium AudienceStream) and data management solutions.

The company recently sponsored a comprehensive economic impact study from Forrester, calculating ROI on reference customers.


Coro

Coro provides holistic cybersecurity solutions for mid-market and small to medium-sized businesses. The platform leverages AI to identify and remediate malware, ransomware, phishing and bot security threats across all endpoints while reducing the need for a dedicated IT team. In addition, it’s built on the principle of non-disruptive security, allowing it to provide security solutions for organizations with limited security budgets and expertise.

This cybersecurity-as-a-service (CaaS) vendor shows how AI can support higher-level services brought to lower-level business market tiers.

The wave of AI innovation

As AI-powered technologies continue to advance and more organizations adopt them, IT leaders must be sure to ask themselves how the solutions they choose fit into their goals as a business. With so many vendors riding the wave of AI innovation, buyers must select their solutions carefully.

IDC predicts that AI platforms and AI application development and deployment will continue to be the fastest-growing sectors of the AI market. This list provides a starting point for organizations to evaluate the approaches and solutions that best fit their needs.





AI in robotics: Problems and solutions


Robotics is a diverse industry with many variables. Its future is filled with uncertainty: nobody can predict which way it will develop and which directions will lead a few years from now. Robotics is also a growing sector of more than 500 companies working on products that can be divided into four categories:

  • Conventional industrial robots,
  • Stationary professional services (such as medical and agricultural applications),
  • Mobile professional services (construction and underwater activities),
  • Automated guided vehicles (AGVs) for carrying small and large loads in logistics or assembly lines.

According to International Federation of Robotics data, 3 million industrial robots are operating worldwide, a 10% increase over 2021. The global robotics market is estimated at $55.8 billion and is expected to grow to $91.8 billion by 2026, a 10.5% annual growth rate.

Biggest industry challenges

The field of robotics is facing numerous issues based on its hardware and software capabilities. The majority of challenges surround facilitating technologies like artificial intelligence (AI), perception, power sources, etc. From manufacturing procedures to human-robot collaboration, several factors are slowing down the development pace of the robotics industry.

Let’s look at the significant problems facing robotics:


Unpredictable environments

Different real-world environments can be challenging for robots to comprehend and take suitable action in. There is no match for human thinking; thus, robotic solutions are not entirely dependable.


Navigation

There has been considerable progress in robots perceiving and navigating their environments; self-driving vehicles are one example. Navigation solutions will continue to evolve, but future robots need to be able to work in environments that are unmapped and not fully understood.
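For contrast with the unmapped case, the mapped version of the navigation problem is well understood; a classic building block is shortest-path search over an occupancy grid, sketched below (breadth-first search; real planners add continuous motion, sensing and uncertainty):

```python
from collections import deque

def shortest_path(grid, start, goal):
    """Breadth-first search over an occupancy grid (0 = free, 1 = obstacle).
    Returns the number of moves on the shortest path, or -1 if unreachable."""
    rows, cols = len(grid), len(grid[0])
    queue = deque([(start, 0)])
    seen = {start}
    while queue:
        (r, c), dist = queue.popleft()
        if (r, c) == goal:
            return dist
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] == 0 and (nr, nc) not in seen):
                seen.add((nr, nc))
                queue.append(((nr, nc), dist + 1))
    return -1

grid = [[0, 0, 0],
        [1, 1, 0],
        [0, 0, 0]]
print(shortest_path(grid, (0, 0), (2, 0)))   # 6
```

The open problem the article points to is exactly that a robot in an unmapped environment has no such grid to search; it must build one from perception while moving.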


Power and autonomy

Full autonomy is impractical and too distant as of now, but we can reason about energy autonomy. Our brains require lots of energy to function; without evolutionary mechanisms optimizing these processes, they would not have achieved the current levels of human intelligence. The same applies to robotics: the more power a robot requires, the less autonomous it can be.

New materials

Elaborate hardware is crucial to today’s robots. Massive work still needs to be performed with artificial muscles, soft robotics, and other items that will help to develop efficient machines.

The above challenges are not unique, and they are generally expected for any developing technology. The potential value of robotics is immense, attracting tremendous investment that focuses on removing existing issues. Among the solutions is collaborating with artificial intelligence.

Robotics and AI

Robots have the potential to replace about 800 million jobs globally in the future, making about 30% of all positions irrelevant. Unsurprisingly, only 7% of businesses currently do not employ AI-based technology but are looking into it. However, we need to be careful when discussing robots and AI, as the two terms are often assumed to be identical, which has never been the case.

Artificial intelligence is about enabling machines to perform complex tasks autonomously. Tools based on AI can solve complicated problems by analyzing large quantities of information and finding dependencies not visible to humans. We have featured six cases in which improvements in navigation, recognition and energy consumption reached between 48% and 800% after applying AI.

While robotics is also connected to automation, it combines with other fields – mechanical engineering, computer science, and AI. AI-driven robots can perform functions autonomously with machine learning algorithms. AI robots can be described as intelligent automation applications in which robotics provides the body while AI supplies the brain.

AI applications for robotics

The cooperation between robotics and AI naturally serves people. Numerous valuable applications have been developed so far, starting with household use. For example, AI-powered vacuum cleaners have become a part of everyday life for many people.

However, much more elaborate applications are developed for industrial use. Let’s go over a few of them:

  • Agriculture. As in healthcare and other fields, robotics in agriculture will mitigate the impact of labor shortages while offering sustainability. Many applications, such as Agrobot, enable precision weeding, pruning and harvesting. Powered by sophisticated software, these applications allow farmers to analyze distances, surfaces, volumes and many other variables.
  • Aerospace. While NASA is looking to improve its Mars rovers’ AI and working on an automated satellite repair robot, other companies want to enhance space exploration through robotics and AI. Airbus’ CIMON, for example, is developed to assist astronauts with their daily tasks and reduce stress via speech recognition while operating as an early-warning system to detect issues.
  • Autonomous driving. After Tesla, self-driving cars surprise nobody. Nowadays, there are two critical cases: self-driving robo-taxis and autonomous commercial trucking. In the short term, advanced driver-assistance systems (ADAS) technology will be essential as the market gets ready for complete autonomy and seeks to profit from the technology’s capabilities.

With advances in artificial intelligence coming on in leaps and bounds every year, it’s certainly possible that the line between robotics and artificial intelligence will become more blurred over the coming decades, resulting in a rocketing increase in valuable applications.

Main market tendency

The competitive field of artificial intelligence in robotics is becoming more fragmented as the market grows, providing clear opportunities to robot vendors. Companies are ready to seize the first-mover advantage and grab the opportunities created by the different technologies. Vendors also view expansion, in terms of product innovation and global impact, as a path toward gaining maximum market share.

However, there is a clear need for increasing the number of market players. The potential of robotics to substitute routine human work promises to be highly consequential by freeing people’s time for creativity. Therefore, we need many more players to speed up the process. 

Future of AI in robotics

Artificial intelligence and robotics have already become a concrete target for business investment. This technology alliance will undoubtedly change the world, and we can expect to see it happen in the coming decade. AI allows robotic automation to improve and to perform complicated operations without a hint of error: a straightforward path to excellence. Both industries are driving forces of the future, and we will see many astounding technological inventions based on AI in the next decade.

Sergey Alyamkin, Ph.D. is CEO and founder of ENOT.




Microsoft launches cloud-powered marketing, supply chain, and retail solutions

Beyond updates across Azure, Microsoft this week announced at its Ignite 2021 conference additions to its Dynamics 365 business app and industry portfolio, including Microsoft Customer Experience Platform, a marketing solution aimed at helping customers personalize, automate, and orchestrate customer journeys. Another, Microsoft Connected Spaces, is designed to let organizations leverage observational data to produce predictive insights, while Supply Chain Insights aims to proactively mitigate supply chain issues.

“[W]e are announcing innovation across the Microsoft Cloud that will allow every organization to build a hyperconnected business, providing the agility and flexibility for organizations and employees to thrive now and into the future,” Microsoft’s corporate VP of industry, apps, and data marketing Alysa Taylor said in a blog post. “In order to be successful given these trends, every organization must move beyond ‘business as usual’ and toward a new model of hyperconnected business.”

Microsoft Customer Experience

Microsoft pitches its Customer Experience Platform as a service to “deliver personalized and connected experiences from awareness to purchase.” Leveraging assets from the existing Microsoft Customer Insights and Dynamics 365 Marketing products, the goal is to understand and predict intent to deliver the right content on the right channel and at the right moment, the company says.

To the company’s point, Epsilon found that 80% of consumers are more likely to make a purchase from a brand that provides personalized experiences — marketing or otherwise. Moreover, according to a Smart Insights survey, 63% of consumers will stop buying from brands that use poor personalization tactics.

Customer Experience Platform’s Consent-enabled Consumer Data Platform (CDP) feature, currently in preview, enables chief data officers to use consent data to build customer profiles, manage known and pseudonymous aliases, ensure consumer data practices are compliant, and protect the data with privacy and security controls. The complementary business-to-business CDP combines customer data from all sources — including customer relationship management software, email, websites, point-of-sale systems, partner systems, and social networks — and performs identity resolution at the contact and account level to generate profiles for people and companies. The AI-suggested content creation and delivery tool automatically generates a set of “content snippets” to serve as inspiration for customer emails.
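Microsoft hasn't published the matching logic behind this identity resolution, but the general technique is well established: records from different sources that share an identifying key are merged into one profile. A minimal, purely illustrative sketch (field names are invented, not the actual API) might look like:

```python
from collections import defaultdict

def resolve_identities(records):
    """Group raw records into unified profiles when they share an
    identifying key (email or phone). Hypothetical field names."""
    parent = list(range(len(records)))

    def find(i):
        # Union-find with path compression.
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    def union(i, j):
        parent[find(i)] = find(j)

    # Index records by each identifying key they carry.
    by_key = defaultdict(list)
    for i, rec in enumerate(records):
        for field in ("email", "phone"):
            if rec.get(field):
                by_key[(field, rec[field])].append(i)

    # Records sharing any key belong to the same profile.
    for indices in by_key.values():
        for i in indices[1:]:
            union(indices[0], i)

    profiles = defaultdict(dict)
    for i, rec in enumerate(records):
        profiles[find(i)].update(rec)  # later sources overwrite earlier ones
    return list(profiles.values())

records = [
    {"source": "crm", "email": "a@x.com", "name": "Ada"},
    {"source": "pos", "phone": "555-0100", "email": "a@x.com"},
    {"source": "web", "phone": "555-0199"},
]
print(resolve_identities(records))
```

Here the CRM and point-of-sale records merge into one profile via the shared email, while the anonymous web visitor stays separate, which is the "known and pseudonymous aliases" distinction in miniature.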

Taylor says that customers including Home Depot, Chipotle, and agency partners like VMLY&R and Kin+Carta are already using Customer Experience Platform in their organizations. “Many [businesses] struggle to manage, organize, and gain insight from the vast quantity of data available to them, and for marketers that challenge is even more acute,” she added. “[C]ustomer data is the key to personalized, relevant, timely customer experiences.”

Connected Spaces

Another new service joining Dynamics 365, Connected Spaces, lets organizations “gain a new perspective” in the way people move and interact in nearly any space, Microsoft claims. With it, companies can monitor safety in high-risk areas and observe queue management, ranging in environments from retail stores to factory floors.

“With Connected Spaces … organization[s] can harness observational data with ease, [using] AI-powered models to unlock insights about [their] environment and respond in real-time to trends and patterns,” Taylor added.

At a high level, Connected Spaces provides analytics and trend information about people, places, and more. For example, in a store, Connected Spaces can monitor foot traffic patterns, cashier queue lengths, dwell times, and product display engagement. Microsoft says that Mattress Firm has piloted the technology to measure the effectiveness of its in-store promotions.
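Microsoft doesn't detail how Connected Spaces computes these metrics, but a metric like dwell time is typically derived from per-visitor sighting timestamps emitted by an object tracker. A purely illustrative sketch:

```python
from datetime import datetime

def average_dwell_seconds(events):
    """Average time between a visitor's first and last sighting.
    `events` is a time-ordered list of (visitor_id, iso_timestamp)
    tuples, e.g. from a camera-based tracker. Illustrative only."""
    first, last = {}, {}
    for vid, ts in events:
        t = datetime.fromisoformat(ts)
        first.setdefault(vid, t)  # keep earliest sighting
        last[vid] = t             # keep latest sighting
    dwell = [(last[v] - first[v]).total_seconds() for v in first]
    return sum(dwell) / len(dwell)

events = [
    ("v1", "2021-11-02T10:00:00"), ("v1", "2021-11-02T10:04:00"),
    ("v2", "2021-11-02T10:01:00"), ("v2", "2021-11-02T10:02:00"),
]
print(average_dwell_seconds(events))  # 150.0
```

Queue length and foot-traffic counts fall out of the same event stream by grouping sightings by zone rather than by visitor.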

While the purported goal of products like Connected Spaces includes health, safety, and analytics, the technology could be co-opted for other, less humanitarian intents. Many privacy experts worry that they’ll normalize greater levels of surveillance, capturing data about workers’ movements and allowing managers to chastise employees in the name of productivity.

Microsoft did not detail the steps it has taken to prevent potential misuse of Connected Spaces; more details will presumably emerge once the service reaches preview.

Supply Chain Insights

The launch of the aforementioned Supply Chain Insights (in preview) comes as companies face historic supply challenges. According to one source, only 6% of companies report full visibility into their supply chain, and 38.8% of U.S.-based small businesses experienced supply chain delays due to the pandemic.

“Customers like Daimler Trucks North America can gain new visibility into their supply chains across multiple tiers of suppliers. They can get data in near real-time, allowing them to assess risks and mitigate problems before a massive disruption occurs,” Taylor continued. “Supply Chain Insights also enables customers to enrich their own supply chain data with external signals like global weather data to predict its impact on shipments, putting increased intelligence and predictive power at their fingertips.”

With Supply Chain Insights, companies can reconcile data from third-party data providers, logistics partners, customers, and multi-tier suppliers and create a “digital twin” simulation of the supply chain — generating insights powered by AI. Digital twin approaches to simulation have gained currency in many domains, for instance helping SenSat clients in construction, mining, energy, and other industries create models of locations for projects they’re working on.

Supply Chain Insights also enriches signals with external constraints like environmental disasters or geopolitical events that could affect the supply chain. Beyond this, the service can automate and execute actions through existing enterprise resource planning and supply chain execution systems, according to Microsoft.

Microsoft Cloud


Building on the release of Supply Chain Insights, Microsoft is introducing Microsoft Cloud for Manufacturing, the newest entry in the company’s Industry Cloud lineup. Announced in February and now available in preview, Cloud for Manufacturing brings together new and existing capabilities across the Microsoft Cloud portfolio in addition to partner solutions to connect people, assets, workflows, and business processes.

According to a 2020 PricewaterhouseCoopers survey, manufacturers expect digital transformation to drive efficiency gains over the next five years. McKinsey’s research with the World Economic Forum puts the value creation potential of manufacturers implementing “Industry 4.0” — the automation of traditional industrial practices — at $3.7 trillion in 2025.

“Cloud for Manufacturing … connects experiences across the end-to-end product and service lifecycle and lighting up the entire Microsoft Cloud with capabilities specifically tailored to manufacturing,” Taylor said. “Customers including Johnson & Johnson are working with Microsoft on their digital manufacturing transformation with tools like Azure, AI, and Microsoft Cloud for Manufacturing capabilities.”

Nonprofit, Sustainability, Financial Services, and Healthcare

Microsoft Cloud for Nonprofit, a collection of tools, apps, cloud services, and infrastructure geared toward nonprofit scenarios, is now generally available following a preview earlier this year. Built for fundraisers, volunteer managers, and program managers, and other roles unique to the nonprofit segment, it’s designed to help address challenges ranging from constituent and supporter engagement to program design and delivery.

The closely related Microsoft Cloud for Sustainability, previously in private preview and now in public preview, aims to bring greater transparency and data sharing, agreement on common taxonomy and methods of measurement, and standard practices for tracking and reporting data on carbon emissions. The solution uses a common format to connect data from various sources and get a company’s carbon footprint as well as provide insights to understand data, measure progress, meet regulatory and reporting requirements, and identify actions needed to reduce the footprint.

Meanwhile, Microsoft announced that Microsoft Cloud for Financial Services — its collection of solutions for the financial industry — is generally available, with new capabilities across Microsoft Azure, Microsoft 365, Dynamics 365, and Power Platform to help enable retail banks to “enhance customer and employee experiences.” Alongside this, Microsoft Cloud for Healthcare — which launched last year — has received several updates, including:

  • An enhanced patient view that allows providers to associate one patient record with other patient records (in preview)
  • A new waiting room for Microsoft Teams (in preview)
  • Integration of Microsoft Forms with Microsoft Bookings (in preview)
  • A scheduled queue for virtual visits in Microsoft Bookings (in preview)
  • Virtual Visits Manager, a standalone Teams app to facilitate reporting about virtual consults (in preview)


VentureBeat’s mission is to be a digital town square for technical decision-makers to gain knowledge about transformative technology and transact.

Our site delivers essential information on data technologies and strategies to guide you as you lead your organizations. We invite you to become a member of our community, to access:

  • up-to-date information on the subjects of interest to you
  • our newsletters
  • gated thought-leader content and discounted access to our prized events, such as Transform 2021: Learn More
  • networking features, and more

Become a member



AI Weekly: AI model training costs on the rise, highlighting need for new solutions

This week, Microsoft and Nvidia announced that they trained what they claim is one of the largest and most capable AI language models to date: Megatron-Turing Natural Language Generation (MT-NLP). MT-NLP contains 530 billion parameters — the parts of the model learned from historical data — and achieves leading accuracy in a broad set of tasks, including reading comprehension and natural language inferences.

But building it didn’t come cheap. Training took place across 560 Nvidia DGX A100 servers, each containing 8 Nvidia A100 80GB GPUs. Experts peg the cost in the millions of dollars.

Like other large AI systems, MT-NLP raises questions about the accessibility of cutting-edge research approaches in machine learning. AI training costs dropped 100-fold between 2017 and 2019, but the totals still exceed the compute budgets of most startups, governments, nonprofits, and colleges. The inequity favors corporations and world superpowers with extraordinary access to resources at the expense of smaller players, cementing incumbent advantages.

For example, in early October, researchers at Alibaba detailed M6-10T, a language model containing 10 trillion parameters (roughly 57 times the size of OpenAI’s GPT-3) trained across 512 Nvidia V100 GPUs for 10 days. The cheapest V100 plan available through Google Cloud Platform would put the bill at roughly $280,000 ($2.28 per hour × 24 hours × 10 days × 512 GPUs), more than most research teams can stretch.
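The arithmetic behind that estimate is simple enough to sketch, and makes it easy to plug in other hardware counts or rates:

```python
def training_cost_usd(gpu_hourly_rate, num_gpus, days):
    """Back-of-the-envelope cloud training cost: rate x GPUs x hours."""
    return gpu_hourly_rate * num_gpus * days * 24

# M6-10T figures from the article: 512 V100s for 10 days at $2.28/GPU-hour.
print(round(training_cost_usd(2.28, 512, 10)))  # 280166
```

Note this is a lower bound: it ignores storage, networking, failed runs, and the hyperparameter sweeps that usually precede a final training run.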

Google subsidiary DeepMind is estimated to have spent $35 million training a system to learn the Chinese board game Go. And when the company’s researchers designed a model to play StarCraft II, they purposefully didn’t try multiple ways of architecting a key component because the training cost would have been too high. Similarly, OpenAI didn’t fix a mistake when it implemented GPT-3 because the cost of training made retraining the model infeasible.

Paths forward

It’s important to keep in mind that training costs can be inflated by factors other than an algorithm’s technical aspects. As Yoav Shoham, Stanford University professor emeritus and cofounder of AI startup AI21 Labs, recently told Synced, personal and organizational considerations often contribute to a model’s final price tag.

“[A] researcher might be impatient to wait three weeks to do a thorough analysis and their organization may not be able or wish to pay for it,” he said. “So for the same task, one could spend $100,000 or $1 million.”

Still, the increasing cost of training — and storing — algorithms like Huawei’s PanGu-Alpha, Naver’s HyperCLOVA, and the Beijing Academy of Artificial Intelligence’s Wu Dao 2.0 is giving rise to a cottage industry of startups aiming to “optimize” models without degrading accuracy. This week, former Intel exec Naveen Rao launched a new company, Mosaic ML, to offer tools, services, and training methods that improve AI system accuracy while lowering costs and saving time. Mosaic ML — which has raised $37 million in venture capital — competes with Codeplay Software, OctoML, Neural Magic, Deci, CoCoPie, and NeuReality in a market that’s expected to grow exponentially in the coming years.

In a sliver of good news, the cost of basic machine learning operations has been falling over the past few years. A 2020 OpenAI survey found that since 2012, the amount of compute needed to train a model to the same performance on classifying images in a popular benchmark — ImageNet — has been decreasing by a factor of two every 16 months.
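Under that observed halving rate, the implied efficiency gain compounds exponentially; a one-liner makes the scale concrete (assuming the trend holds over the whole period):

```python
def efficiency_gain(months_elapsed, halving_period_months=16):
    """Factor by which the compute needed to hit a fixed benchmark
    shrinks, at the halving rate reported in OpenAI's survey."""
    return 2 ** (months_elapsed / halving_period_months)

# 2012 -> 2020 is 96 months: the compute needed to match a fixed
# ImageNet accuracy falls by roughly this factor.
print(efficiency_gain(96))  # 64.0
```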

Approaches like network pruning prior to training could lead to further gains. Research has shown that parameters pruned after training, a process that decreases the model size, could have been pruned before training without any effect on the network’s ability to learn. Called the “lottery ticket hypothesis,” the idea is that the initial values parameters in a model receive are crucial for determining whether they’re important. Parameters kept after pruning receive “lucky” initial values; the network can train successfully with only those parameters present.

Network pruning is far from a solved science, however. New ways of pruning that work before or in early training will have to be developed, as most current methods apply only retroactively. And when parameters are pruned, the resulting structures aren’t always a fit for the training hardware (e.g., GPUs), meaning that pruning 90% of parameters won’t necessarily reduce the cost of training a model by 90%.
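To illustrate the basic mechanism the lottery ticket work builds on: magnitude pruning simply zeroes the weights closest to zero, and lottery-ticket experiments then rewind the surviving weights to their initial values and retrain. A toy sketch of the pruning step:

```python
def magnitude_prune(weights, fraction):
    """Zero out the smallest-magnitude `fraction` of weights.
    A sketch of post-training magnitude pruning on a flat weight
    list; real implementations prune tensors layer by layer."""
    k = int(len(weights) * fraction)
    # Indices of the k weights closest to zero.
    drop = set(sorted(range(len(weights)), key=lambda i: abs(weights[i]))[:k])
    return [0.0 if i in drop else w for i, w in enumerate(weights)]

print(magnitude_prune([0.9, -0.05, 0.4, 0.01, -0.7], 0.4))
# [0.9, 0.0, 0.4, 0.0, -0.7]
```

The hardware caveat in the paragraph above applies directly: the zeros here land at arbitrary positions (unstructured sparsity), which GPUs cannot generally skip over, so the FLOP savings on paper rarely translate into proportional wall-clock savings.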

Whether through pruning, novel AI accelerator hardware, or techniques like meta-learning and neural architecture search, the need for alternatives to unattainably large models is quickly becoming clear. A University of Massachusetts Amherst study showed that using 2019-era approaches, training an image recognition model with a 5% error rate would cost $100 billion and produce as much carbon emissions as New York City does in a month. As IEEE Spectrum’s editorial team wrote in a recent piece, “we must either adapt how we do deep learning or face a future of much slower progress.”

For AI coverage, send news tips to Kyle Wiggers — and be sure to subscribe to the AI Weekly newsletter and bookmark our AI channel, The Machine.

Thanks for reading,

Kyle Wiggers

AI Staff Writer




AI vendors must offer more solutions for niche use cases

All the sessions from Transform 2021 are available on-demand now. Watch now.

Most AI vendors develop solutions that target broad use cases with large markets, because investors have shown interest only in target markets worth several billion dollars. Smaller markets are left out as a result: AI solutions designed for niche markets often die out, and the companies behind them stall before their products see the light of day.

Another side effect of the limited capital to build niche models is that AI vendors tend to build one model and market it to a large set of disparate users. For example, a company selling a vehicle detection system would normally build a single model to detect all types of vehicles across multiple use cases and geographies. An animal detection model typically would detect many different animals and have lower accuracy than a model designed to detect a single animal. These broad-reach products result in lower model accuracy and erode public trust in AI’s capabilities. They also require that humans remain in the loop for verification, consuming more human resources and increasing the overall cost of the solution for customers.

The reason investors focus on broad-reach solutions is that niche solutions are very costly to produce. In order to make a model for a niche use case, you need data that is very specific to it. And collecting data while addressing all of the relevant regulations and security concerns is a big challenge.

And even if a vendor is able to develop a model for a niche use case, the challenge isn’t over, because an AI model is rarely a standalone solution — it often relies on a number of external components. And the more niche your model, the more niche the components of the solution will be. For example, in the case of computer vision or vision AI, some critical components include:

  1. Camera setup and management
  2. AI model management and updates
  3. Video storage and data retention policies
  4. Alerts and notification rules
  5. Role-based access control for users

Running the AI model at one of the steps in the software stack is a very small piece of the puzzle. The bulk of an AI vendor’s time goes to building the rest of the software stack to handle the devices and other services that make a complete solution.

And then there are compliance and security issues to consider. Any AI solution a vendor sells has to comply with different regulations in different geographies and must be secure. Handling those requirements is a big task for any company, and it gets even more difficult if the company or its developers are in uncharted waters with no prior solution existing in their space. In such cases they may have to work with local, state, or central governments to navigate laws about deploying AI solutions.

Given that most lawmakers are not technology experts, it takes time to get regulations passed and adopted. This can be a slow process, risking the viability or success of such projects if the company does not have deep pockets to wait it out.

Making AI available for niche use cases

So how can the AI vendor community overcome these challenges to bring solutions to the many niches where broad-reach products don’t apply?

1. Build a customer council with friendly customers. In order to handle the data-collection challenge I outlined above, AI vendors should aim to find friendly customers who can help. Such customers can not only provide some of the necessary data; they can also help vendors put the right structure in place for data collection and management. In return, vendors should offer the solution to them at a very low cost. As the initial customer council, they can take pride in building a useful solution for others.

2. Avoid building from scratch. Some vendors decide to build everything themselves using existing software libraries and core infrastructure services from a cloud vendor. This approach provides complete control over the design but can take almost a year. The initial goal should be to build a solution quickly and take it to market for testing; it can always be improved or optimized later, after initial customers and early adopters have been established. Some offerings have emerged to shorten time to market. For example, AWS Panorama and Microsoft's Azure Percept are edge offerings that help build AI solutions using existing or smart cameras, and they are especially useful for deploying AI models at the edge, close to the devices. In general, look for platforms that enable a quick transition from AI model to full solution.

3. Build an AI/ML pipeline. AI vendors can build pipelines that allow them to quickly train models on specific datasets. They should design the pipeline so that the data used to build each model is easily tracked, which makes it easier to add new data from customers as it becomes available. Several solutions already exist in this space, such as Kubeflow, AWS SageMaker, and Google Cloud AI Pipelines, so vendors can avoid building a pipeline from scratch.
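Whatever the tooling, the core of the tracking idea is recording exactly which data produced which model. A minimal, hypothetical sketch of that lineage step (real pipelines delegate this to the platforms named above):

```python
import hashlib
import json

def dataset_fingerprint(rows):
    """Stable hash of the training data, so each trained model can
    record exactly which data produced it."""
    blob = json.dumps(rows, sort_keys=True).encode()
    return hashlib.sha256(blob).hexdigest()[:12]

def run_pipeline(rows, train_fn):
    """Minimal train step that tags the model with its data lineage."""
    model = train_fn(rows)
    return {"model": model, "data_version": dataset_fingerprint(rows)}

# Toy 'training': compute a mean threshold from labeled examples.
rows = [{"x": 1.0, "y": 0}, {"x": 3.0, "y": 1}]
result = run_pipeline(rows, lambda r: sum(d["x"] for d in r) / len(r))
print(result["model"], result["data_version"])
```

When a customer contributes new data, the fingerprint changes, so every deployed model maps unambiguously back to its training set.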

The bottom line

There’s a lot of talk about democratizing artificial intelligence to make it available to more user organizations. Currently we have many broad-reach AI models on the market for things like human detection and voice recognition, but these models are so generic that they run the risk of being inaccurate. To increase the precision and accuracy of AI, and to make it usable to a wider range of organizations, we must enable a long tail of AI models designed for niche use cases. The cost of developing such models and taking them to market is currently too high, but we must find ways to break that barrier.

Ajay Gulati is CTO of vision AI company Kami Vision.




Full Speed Automation raises $3.2M for no-code solutions that accelerate industrial digitization

Join Transform 2021 for the most important themes in enterprise AI & Data. Learn more.

For all the talk about the factory of the future and Industry 4.0, manufacturing systems can still be quite antiquated and fragmented when it comes to achieving real automation. While some areas may be digitized, monitoring these systems and getting them to communicate with each other can still be a daunting, labor-intensive challenge.

After leading the automation engineering team at Tesla for almost 2 years, Luc Leroy believes he knows how to solve this problem. Last summer, he left Tesla to found Full Speed Automation, which has just raised $3.2 million to finish the development of the first version of its platform, called Vitesse (the French word for ‘speed’).

The Vitesse platform will be a middleware layer that automatically detects all the digital devices in a manufacturing operation and allows managers to connect them with a simple point-and-click. This no-code approach uses APIs to accelerate the speed at which manufacturers can enhance their own systems.
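Full Speed Automation hasn't published Vitesse's internals, so the following is a purely hypothetical sketch of the middleware idea: a registry that holds discovered devices and routes messages between them by name, the kind of wiring a point-and-click UI might perform behind the scenes. All names are invented.

```python
class DeviceRegistry:
    """Hypothetical middleware sketch: register factory devices and
    route messages between them by name."""
    def __init__(self):
        self.devices = {}
        self.links = []  # (source_name, target_name) pairs

    def register(self, name, handler):
        # In a real system this would come from network discovery.
        self.devices[name] = handler

    def connect(self, source, target):
        # The 'point-and-click' action: wire one device to another.
        self.links.append((source, target))

    def emit(self, source, message):
        # Deliver a device's output to every connected target.
        return [self.devices[t](message) for s, t in self.links if s == source]

reg = DeviceRegistry()
reg.register("camera_1", lambda m: f"camera got {m}")
reg.register("plc_7", lambda m: f"plc got {m}")
reg.connect("camera_1", "plc_7")
print(reg.emit("camera_1", "defect"))  # ['plc got defect']
```

The point of the sketch is the decoupling: once every device speaks through the registry, rewiring a production line is a data change, not a custom software project.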

For Leroy, the startup is the culmination of a long journey that began in classic software development and unexpectedly took him into manufacturing. The differences in terms of speed and agility were jarring, and he is determined to bridge that divide.

“I started to see a huge gap between the expectations you would get from mobile users and web and tech startups where we had to get something running in 24 hours,” Leroy said. “The world of manufacturing was extremely slow. And I started to get intrigued about that. There was no notion of iteration and fail quickly, the things we all take for granted for the past 10 to 15 years in the world of pure software.”

Leroy founded his first startup more than 20 years ago in Paris, an augmented reality service for television broadcasts. In 2003, he co-founded Okyz, which enabled the creation of 3D models that could be placed in PDF documents. Adobe acquired the company in 2004, and Leroy moved to Silicon Valley where he began interacting more and more with manufacturers who were using the PDF 3D product.

What he learned surprised him.

“I realized that everything we learn at school about lean manufacturing and just-in-time and zero stock, it’s not right,” he said. “When you order a replacement part, it still gets made two months earlier in Japan and then crossed the world on a polluting ship and then sits on a shelf until it’s needed. And we still call that zero stock.”

Part of the issue was that manufacturing systems were still too slow. When it came time to upgrade software, employees were walking around the factory floor with USB sticks, manually updating each portion of the assembly line. To get these different parts of the process to communicate, companies had to spend huge resources writing custom software that could be slow and unresponsive at times as the system evolved.

Following a stint working for a company building an Iron Man-like smart helmet, Leroy was recruited to work at Tesla where automating the assembly line to meet ambitious production goals had become a high priority.

“The mission was to accelerate the transition to sustainable transportation and ultimately, sustainable energy,” Leroy said. “That meant a lot to me.”

Leroy’s team was placed in charge of solving the Tesla Model 3’s “production hell.” Over time, Leroy’s team developed tools to automate those updates on the thousands of robots in Tesla’s factory. After 18 months, Leroy began to think that the insights he’d gained at Tesla could be useful for a far larger base of manufacturers.

“You can have top-notch AI, and computer vision for quality control and safety,” he said. “But the software for each of these exists in its own verticals.”

Democratizing automation

While Tesla’s buzz and market cap allow it to hire many of the best engineers, that’s a tougher haul for incumbent manufacturers, even though giants in verticals like automotive, finance, and insurance have effectively become software companies over the past decade, employing huge armies of developers.

So what Tesla built may not be easy for others to replicate. To solve that problem, Leroy argues that any solution has to be as simple as possible to manage, so it is accessible to anyone. That’s why he is focusing on the no-code approach: the UI will be point-and-click to connect all the devices in a factory.

“We need to find a way to democratize these tools so that experienced technicians can use them and very quickly,” Leroy said.

Vitesse will be horizontal, linking to all of these different assets via an API. In doing so, factory managers will be able to dramatically expand the level of automation in their shops. Once this intermediate layer is in place, factories should be able to iterate rapidly and reset product lines in days instead of weeks or months. In addition, the collection of data will be simpler and neater, allowing for better monitoring in real-time.

Since last summer, Leroy has assembled a team of 6 to build the proof of concept. He’s working with an undisclosed list of manufacturing partners to refine it with the goal of releasing a beta version in Q2 of this year. That also includes partnering with component providers to integrate a large catalog of equipment into Vitesse’s library to ease integration.

To further that goal, the company has raised a $3.2 million seed round led by HCVC, the venture capital arm of Paris-based Hardware Club. The round also included money from Catapult Venture Capital, Seedcamp, Serena Capital, Kima Ventures, and Diaspora Ventures, as well as industry executives such as Bruno Bouygues, CEO of manufacturing supplier GYS.




Google launches suite of AI-powered solutions for retailers

Google today announced the launch of Product Discovery Solutions for Retail, a suite of services designed to enhance retailers’ ecommerce capabilities and help them deliver personalized customer experiences. Product Discovery Solutions for Retail brings together AI algorithms and a search service, Cloud Search for Retail, that leverages Google Search technology to power retailers’ product-finding tools.

The pandemic and corresponding rise in online shopping threaten to push supply chains to the breaking point. Early in the COVID-19 crisis, Amazon was forced to restrict the amount of inventory suppliers could send to its warehouses. Ecommerce order volume has increased by 50% compared with 2019, and shipment times for products like furniture more than doubled in March. Moreover, overall U.S. digital sales have jumped by 30%, expediting the online shopping transition by as much as two years.

Product Discovery Solutions for Retail, which is generally available to all companies as of today, aims to address the challenges with AI and machine learning. To that end, it includes access to Google’s Choice AI, which uses machine learning to dynamically adapt to customer behavior and changes in variables like assortment, pricing, and special offers.

Choice AI, which launched in beta in July and is now generally available, ostensibly excels at handling recommendations in scenarios with long-tail products and cold-start users and items. Thanks to “context-hungry” deep learning models developed in partnership with Google Brain and Google Research, it’s able to draw insights across tens of millions of items and constantly iterate on those insights in real time.

Google Choice AI

From a graphical interface, businesses using Choice AI can integrate, configure, monitor, and launch recommendations while connecting data by using existing integrations with Merchant Center, Google Tag Manager, Google Analytics 360, Cloud Storage, and BigQuery. Choice AI can incorporate unstructured metadata like product name, description, category, images, product longevity, and more while customizing recommendations to deliver desired outcomes, such as engagement, revenue, or conversions. And it lets Google Cloud customers apply rules to fine-tune what shoppers see and diversify which products are shown, filtering by product availability and custom tags.
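Google hasn't published the rule engine's API here; conceptually, though, rule-based filtering over a model-ranked candidate list is a post-processing step, and can be sketched as follows (field names are invented for illustration, not the actual API):

```python
def apply_rules(candidates, in_stock, blocked_tags):
    """Filter model-ranked recommendations by availability and custom
    tags, preserving the model's ranking order. Illustrative only."""
    return [
        item for item in candidates
        if item["id"] in in_stock and not (item["tags"] & blocked_tags)
    ]

# Candidates arrive already ranked by the recommendation model.
candidates = [
    {"id": "sku1", "tags": {"sale"}},
    {"id": "sku2", "tags": {"clearance"}},
    {"id": "sku3", "tags": {"new"}},
]
result = apply_rules(candidates, in_stock={"sku1", "sku3"},
                     blocked_tags={"clearance"})
print([item["id"] for item in result])  # ['sku1', 'sku3']
```

Because the rules run after ranking, merchandisers can tune what shoppers see without retraining or otherwise touching the underlying model.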

Product Discovery Solutions for Retail also includes access to Google’s Vision API Product Search, which allows shoppers to search for products with an image and receive a ranked list of visually and semantically similar items. Google says Vision Product Search taps machine learning-powered object recognition and lookup to provide real-time results of similar, or complementary, items from retailers’ product catalog.

Beyond Choice AI and Vision API Product Search, Product Discovery Solutions for Retail ships with Cloud Search for Retail. Cloud Search for Retail, which is currently in private preview, pulls from Google’s understanding of user intent and context to provide retail product search functionality that can be embedded into websites and mobile apps.

“As the shift to online continues, smarter and more personalized shopping experiences will be even more critical for retailers to rise above their competition,” Google Cloud retail and consumer VP Carrie Tharp said in a statement. “Retailers are in dire need of agile operating models powered by cloud infrastructure and technologies like artificial intelligence and machine learning (AI/ML) to meet today’s industry demands. We’re proud to partner with retailers around the world and bring forward our Product Discovery offerings to help them succeed.”

