How machine identities are the key to successful identity management

Were you unable to attend Transform 2022? Check out all of the summit sessions in our on-demand library now! Watch here.

Securing digital identities is a problem for many organizations. In fact, according to the Identity Defined Security Alliance (IDSA), 84% of organizations have experienced an identity-related breach. 

Part of the challenge of identity management is that the identities organizations need to manage aren’t just human, but also machine-based. 

Today, enterprise identity security provider SailPoint Technologies Holdings, Inc., released a new research report surveying 300 global cybersecurity executives and revealing that machine identities make up 43% of all identities within the average enterprise, followed by customers (31%) and employees (16%). 

This highlights that organizations need to have a solution for managing digital machine identities in real time if they want to secure their environments against the latest threats. 


MetaBeat 2022

MetaBeat will bring together thought leaders to give guidance on how metaverse technology will transform the way all industries communicate and do business on October 4 in San Francisco, CA.

Register Here

The identity threat landscape 

Over the past few years, the identity sprawl created by the adoption of new technologies and cloud-based apps has dramatically increased complexity for security teams. Machine identities now outweigh human identities by a factor of 45 on average, and the average employee has more than 30 digital identities.

SailPoint’s research also anticipates that this sprawl will continue, estimating that the total number of identities will grow by 14% over the next 3-5 years. 

Yet, many organizations still have a long way to go before they’re ready to confront the identity threat landscape. 

“Our report shows that 45% of companies are still at the beginning of their identity journey. This means they have the unique opportunity to take advantage of today’s technology to build a comprehensive, AI-enabled approach to identity security from the ground up,” said Matt Mills, SailPoint president of worldwide field operations. 

“As enterprise identity needs move beyond human capacity, this approach has quickly become table stakes. Not only that, but identity security has risen to the top as business-essential to securing today’s enterprise,” Mills said. 

In practice, an artificial intelligence (AI) and machine learning (ML) centric approach is essential for detecting digital identities in real time. This appears to be recognized by organizations, with 50% of respondents indicating they’ve implemented AI/ML models to boost their capabilities. 

SailPoint’s own identity security cloud platform leverages AI and ML to automatically identify user and machine identities throughout enterprise environments, so that security teams can more effectively manage and secure them. This approach serves to increase visibility over user access risks. 
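As an illustration only (this is not SailPoint’s actual implementation, and the identity names and numbers are made up), the core idea of ML-driven identity monitoring can be sketched as flagging any machine identity whose activity deviates sharply from the fleet’s baseline:

```python
from statistics import mean, stdev

# Hypothetical hourly API-request counts per machine identity
# (illustrative data, not from any real platform).
activity = {
    "build-agent-01": 120, "build-agent-02": 115, "backup-svc": 110,
    "monitor-svc": 125, "ci-runner": 118, "compromised-bot": 940,
}

counts = list(activity.values())
mu, sigma = mean(counts), stdev(counts)

# Flag identities whose activity sits more than two standard deviations
# above the fleet baseline -- a crude stand-in for the ML-driven
# detection described above.
anomalies = [name for name, c in activity.items() if (c - mu) / sigma > 2]
print(anomalies)
```

A real system would use far richer features (access patterns, credentials used, time of day) and a trained model rather than a single z-score, but the shape of the problem is the same.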

The identity management market 

SailPoint falls within the global identity and access management market, which researchers estimate will grow from a value of $13.4 billion in 2021 to $34.5 billion in 2028. 

The vendor remains a significant player in the market, in August reporting annual recurring revenue (ARR) of $429.5 million.  

It’s competing against a number of other key players in the market, including Okta, an identity platform provider that offers a unified IAM solution with lifecycle management to automatically onboard and offboard employees and contractors, as well as a single sign-on solution that IT teams can use to monitor and manage user access. 

Okta recently reported $383 million in total revenue for the fourth quarter of fiscal 2022. 

Another competitor is CyberArk, a vendor that provides an enterprise privileged access management solution for automatically discovering and onboarding credentials and secrets used by human and machine identities. It also offers automated password rotation and the ability to authenticate user access through a single web portal.

Earlier this year, CyberArk reported ARR of $427 million. 

VentureBeat’s mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings.

Repost: Original Source and Author Link


AI-powered personalization: The key to unlocking ecommerce growth


The ecommerce scene has evolved. Seemingly every business is moving online and vying for a share of internet-driven profit. 

While certain enterprise tools can help digital-first businesses, increasing competition means that barely existing online or running half-hearted campaigns isn’t enough for brands to make it big in ecommerce. They need to up their marketing game and reinvent how they deal with their customers. Modern businesses need to predict their customers’ requirements and be proactive with solutions. They need to tailor their messages to be relevant. 

Fortunately, technology has not left them to do all of this alone. 

Artificial intelligence, more popularly known as AI, helps businesses revolutionize how they interact with their customers and allows them to carve out their portion of online success. This revolutionizing technology empowers brands to step up their game by offering what customers demand: personalization. 



Introducing AI-powered personalization

AI grew beyond fantasy writers’ brains and sci-fi movies a long time ago. Yet many brands still fail to leverage its true potential, which restricts their growth. 

We are not living in just the era of AI. The world has successfully entered the era of advanced AI. 

Today, brands use artificial intelligence to tailor their offerings, personalize experiences and increase revenue. With the rapid adoption of AI-powered personalization, many ecommerce businesses are looking forward to shaping a brighter, more successful online future.

AI-driven personalization uses machine learning, deep learning, natural language processing, etc. to personalize a brand’s marketing messages, content, products and services. This technology reshapes how brands interact with their customers and sets them up for more profitable customer journeys. 

Personalization, of all kinds, is inherently rooted in data. 

The point where AI hits home is its ability to mine, read and draw conclusions from huge volumes of data. Brands with deeper access to data can personalize more accurately. And by doing so, they can deliver optimal customer experiences, the crux of all modern business-customer interactions. 

How AI-powered personalization helps ecommerce businesses

AI-powered personalization has multiple use cases within the ecommerce industry. And when used to its full potential, it leads to numerous benefits that we will discuss below. 

Targeted marketing: Lower marketing spend, higher revenue

Forty percent of customers say they prefer targeted ads aligned with what they are looking for, because targeting makes the buying process easier. This could be why behaviorally targeted ads are twice as effective as non-targeted ads. AI helps marketers run more targeted brand campaigns, which eventually helps them decrease their marketing spend and increase their revenue. 

How does targeting increase revenue? With AI working behind the scenes, your ad reaches people who are most likely to take action somewhere down the line. This makes the ad impressions worth the money spent on acquiring them. 

AI digs through significant volumes of data and offers better customer segmentation. It also helps create more impactful ads based on previous ad performances. With these conclusions, marketers can reduce their ad spend by deliberately avoiding the ads that are nothing but money-draining pits. Additionally, AI helps to create and deliver tailored marketing content, like blogs, ad messages, CTAs, videos and much more. 

There are numerous AI-powered reporting tools available that collect, sort and draw valuable insights from audience data, helping you understand your customer better. 

Once you know that your audience is more receptive and responsive to a certain type of content — for example, video tutorials over written blogs — you will spend on creating the former and save your marketing budget from being spent on the latter. 
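As a toy illustration of that kind of behavior-based segmentation (the customers and engagement numbers are made up), grouping an audience by the content type they engage with most takes only a few lines:

```python
# Hypothetical engagement data: minutes spent per content type.
engagement = {
    "alice": {"video": 42, "blog": 5},
    "bob":   {"video": 3,  "blog": 38},
    "carol": {"video": 55, "blog": 12},
    "dave":  {"video": 8,  "blog": 45},
}

# Segment each customer by the content type they engage with most --
# a minimal stand-in for the AI-driven segmentation described above.
segments = {}
for customer, minutes in engagement.items():
    segment = max(minutes, key=minutes.get)
    segments.setdefault(segment, []).append(customer)

print(segments)  # video-first vs. blog-first audiences
```

Real segmentation tools cluster on many behavioral dimensions at once, but the output is the same in spirit: audiences you can target with the content each group actually responds to.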

A SaaS startup deployed AI to tailor its marketing messages. The company personalized the images sent via email to its subscribers, and as a result, the clickthrough rate (CTR) from its opened emails increased fourfold.

That’s why personalization matters. 

Despite all the benefits of targeted marketing, 76% of marketers fail to use behavioral data to run targeted ads campaigns. This statistic points toward the opportunity for brands to adopt AI-powered ad personalization and gain a competitive edge. 

Personalized product recommendations increase order value

When you are an ecommerce merchant, you want your existing customers to keep buying from you, because upselling to existing customers is more profitable than acquiring new ones. 

AI-powered personalized product recommendation helps you do just that. When your customers visit your website, the AI algorithm picks up their buying behavior, previous transactions, demographic, interests and other such data. With this information, the website presents relevant and unique suggestions to the visitor, increasing their chances of buying more than what they came for. 

Put yourself in your customers’ shoes to grasp this concept. 

Say you have been looking for new clothes, and on an apparel ecommerce store you finally find some shirts you like. As you are considering one of these shirts, the web page starts recommending pants to go with it. You suddenly feel these pants are a good match for the shirts you like, and ordering both items from the same store is convenient, so you end up adding the pants to your cart. This is how personalized product recommendations increase order value, which eventually shows up in your revenue. 

A shoe retailer from London experienced an 8.6% increase in add-to-cart rate after serving personalized recommendations that appeared when a customer added one item to their cart. 
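Under the hood, many recommendation engines start from simple co-occurrence statistics: items frequently bought together get recommended together. A minimal sketch, with a made-up order history:

```python
from collections import Counter
from itertools import combinations

# Hypothetical past orders (illustrative catalog and data).
orders = [
    {"shirt", "pants"},
    {"shirt", "pants", "belt"},
    {"shirt", "socks"},
    {"pants", "belt"},
    {"shirt", "pants"},
]

# Count how often each ordered pair of items is bought together.
co_counts = Counter()
for order in orders:
    for a, b in combinations(sorted(order), 2):
        co_counts[(a, b)] += 1
        co_counts[(b, a)] += 1

def recommend(item, k=2):
    """Return the items most often bought alongside `item`."""
    scored = [(other, n) for (i, other), n in co_counts.items() if i == item]
    return [other for other, _ in sorted(scored, key=lambda x: -x[1])[:k]]

print(recommend("shirt"))  # pants co-occur with shirts most often
```

Production systems layer collaborative filtering and per-customer behavioral signals on top of this, but raw co-occurrence is the classic starting point.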

Personal touch: Excellent customer service via chatbots 

Personalization is just one part of building a pleasant customer experience. Offering excellent customer service is the other side of the coin. 

The modern customer is spoiled. Seventy-five percent of customers expect to be taken care of within five minutes. 

AI-powered chatbots come to the rescue here. And before you dismiss the idea because you remember robotic, monotonous responses, know that those are a thing of the past: the chatbots we are talking about today run on advanced AI. These chatbots are programmed to imitate humans as closely as possible and are not limited to generic, scripted, rule-based conversations. 

Modern AI chatbots deploy NLP, sentiment analysis and other AI techniques to understand not just the meaning but the context, emotion and nuance behind each query. By doing so, they can carry a more resounding and contextually-accurate conversation with the customer and solve their problems. 

When a chatbot offers a tailored response to each unique query, a single chatbot can do the work of an army of customer support representatives. Apart from offering a pleasant user experience, these chatbots also let customers enjoy the “self-service” they prefer: their problems get solved without waiting for a human representative, which may also increase lead generation and conversions. 
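A heavily simplified sketch of the routing logic such a chatbot might use; real systems rely on trained sentiment and intent models rather than the keyword lexicon assumed here:

```python
# Toy sentiment-aware routing: escalate frustrated customers to a
# human agent, answer known topics, otherwise ask for clarification.
# The lexicon, topics and answers are all illustrative.
NEGATIVE = {"angry", "terrible", "broken", "refund", "worst", "cancel"}

FAQ = {
    "shipping": "Orders ship within 2 business days.",
    "returns": "You can return items within 30 days.",
}

def respond(message: str) -> str:
    words = set(message.lower().split())
    if words & NEGATIVE:
        # Sentiment signal detected: hand off to a person.
        return "Connecting you to a human agent."
    for topic, answer in FAQ.items():
        if topic in words:
            return answer
    return "Could you tell me more about your question?"

print(respond("when does shipping happen"))
print(respond("this is terrible I want a refund"))
```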

A public transportation company that caters to thousands of customers every day increased its bookings by 25% and saved $1M in customer service within a year of deploying a chatbot designed to be its customer support representative. 

Optimized search leads to better customer experience

Optimized search learns from customers’ buying behavior and previous searches to customize search results in real time. 

Optimized search also deploys NLP to understand the nuances of human language and display accurate results. For example, it understands that “running shoes” and “runner’s shoes” mean the same thing, and hence displays similar results. When customers come across the results they want to see, they are more likely to be happy with your service. And it’s the customer’s experience that we are pursuing, isn’t it?
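A minimal sketch of the synonym normalization described above; the synonym map and catalog are illustrative, and real search engines use learned embeddings rather than a hand-built table:

```python
# Map query-term variants onto a canonical form before matching.
SYNONYMS = {
    "runner's": "running", "runners": "running",
    "sneakers": "shoes", "trainers": "shoes",
}

CATALOG = ["running shoes", "dress shoes", "running shorts"]

def normalize(query: str) -> set:
    """Lowercase and canonicalize each query term."""
    return {SYNONYMS.get(w, w) for w in query.lower().split()}

def search(query: str):
    """Return products whose titles contain every normalized term."""
    terms = normalize(query)
    return [p for p in CATALOG if terms <= set(p.split())]

print(search("running shoes"))
print(search("runner's shoes"))  # same results after normalization
```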

Final words

Customers these days switch loyalties based on the experiences brands offer. AI-driven personalization helps you offer tailored services, serve relevant content to customers and enjoy more profitable business relationships. But along with all its benefits, advanced AI adoption has its complexities. It can be expensive and may demand training. 

However, these complexities might be worth the results that you are likely to enjoy after the successful deployment of AI. 

Atul Jindal is a web design and marketing specialist.




Why composability is key to scaling digital twins


Digital twins enable enterprises to model and simulate buildings, products, manufacturing lines, facilities and processes. This can improve performance, quickly flag quality errors and support better decision-making. Today, most digital twin projects are one-off efforts. A team may create one digital twin for a new gearbox and start all over when modeling a wind turbine that includes this part or the business process that repairs this part. 

Ideally, engineers would like to quickly assemble more complex digital twins to represent turbines, wind farms, power grids and energy businesses. This is complicated by the different components that go into digital twins beyond the physical models, such as data management, semantic labels, security and the user interface (UI). New approaches for composing digital elements into larger assemblies and models could help simplify this process. 

Gartner has predicted that the digital twin market will cross the chasm in 2026 to reach $183 billion by 2031, with composite digital twins presenting the largest opportunity. It recommends that product leaders build ecosystems and libraries of prebuilt functions and vertical market templates to drive competitiveness in the digital twin market. The industry is starting to take note.

The Digital Twin Consortium recently released the Capabilities Periodic Table framework (CPT) to help organizations develop composable digital twins. It organizes the landscape of supporting technologies to help teams create the foundation for integrating individual digital twins. 



A new kind of model

Significant similarities and differences exist in the modeling used to build digital twins compared with other analytics and artificial intelligence (AI) models. All these efforts start with appropriate and timely historical data to inform the model design and calibrate the current state with model results.

However, digital twin simulations are unique compared to traditional statistical learning approaches in that the model structures are not directly learned from the data, Bret Greenstein, data, analytics and AI partner at PwC, told VentureBeat. Instead, a model structure is surfaced by modelers through interviews, research and design sessions with domain experts to align with the strategic or operational questions that are defined upfront.

As a result, domain experts need to be involved in informing and validating the model structure. This time investment can limit the scope of simulations to applications where ongoing scenario analysis is required. Greenstein also finds that developing a digital twin model is an ongoing exercise. Model granularity and systems boundaries must be carefully considered and defined to balance time investment and model appropriateness to the questions they are intended to support. 

“If organizations are not able to effectively draw boundaries around the details that a simulation model captures, ROI will be extremely difficult to achieve,” Greenstein said.

For example, an organization may create a network digital twin at the millisecond timescale to model network resiliency and capacity. It may also have a customer adoption model to understand demand at the scale of months. This exploration of customer demand and usage behavior at a macro level can serve as input into a micro simulation of the network infrastructure. 

Composable digital twins

This is where the DTC’s new CPT framework comes in. Pieter van Schalkwyk, CEO at XMPRO and co-chair of the Natural Resources Work Group at the Digital Twin Consortium, said the CPT provides a common approach for multidisciplinary teams to collaborate earlier in the development cycle. A key element is a reference framework covering six capability categories: data services, integration, intelligence, UX, management and trustworthiness.  

This can help enterprises identify composability gaps they need to address in-house or from external tools. The framework also helps to identify specific integrations at a capabilities level. The result is that organizations can think about building a portfolio of reusable capabilities. This reduces duplication of services and effort.

This approach goes beyond how engineers currently integrate multiple components into larger structures in computer-aided design tools. Schalkwyk said, “Design tools enable engineering teams to combine models such as CAD, 3D and BIM into design assemblies but are not typically suited to instantiating multi use case digital twins and synchronizing data at a required twinning rate.”

Packaging capabilities

In contrast, a composable digital twin draws from six clusters of capabilities that help manage the integrated model and other digital twin instances based on the model. It can also combine IoT and other data services to provide an up-to-date representation of the entity the digital twin represents. The CPT represents these different capabilities as a periodic table to make it agnostic to any particular technology or architecture. 

“The objective is to describe a business requirement or a use case in capability terms only,” Schalkwyk explained. 

Describing the digital twin in terms of capabilities helps match a specific implementation to the technologies that provide the appropriate capability. This mirrors the broader industry trend towards composable business applications. This approach allows different roles, such as engineers, scientists and other subject-matter experts, to compose and recompose digital twins for different business requirements. 

It also creates an opportunity for new packaged business capabilities that could be used across industries. For example, a “leak detection” packaged business capability could combine data integration and engineering analytics into a reusable component applicable to a multitude of digital twin use cases, Schalkwyk explained. It could be used in digital twins for oil and gas, process manufacturing, mining, agriculture and water utilities.
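To make the composability idea concrete, here is a hypothetical sketch in Python. The class names, capability interface and threshold are all illustrative and are not part of the DTC’s CPT; the point is only that independently packaged capabilities can be chained into a larger twin:

```python
from typing import Protocol

class Capability(Protocol):
    """Any packaged capability: takes twin state, returns updated state."""
    def run(self, state: dict) -> dict: ...

class DataIngestion:
    def run(self, state: dict) -> dict:
        # Stand-in for an IoT data service feeding the twin.
        state["pressure_psi"] = [101, 99, 74, 100]
        return state

class LeakDetection:
    """Reusable across oil and gas, mining, water utilities, etc."""
    def run(self, state: dict) -> dict:
        readings = state["pressure_psi"]
        baseline = max(readings)
        # Suspect a leak if any reading drops 20% below baseline.
        state["leak_suspected"] = any(r < 0.8 * baseline for r in readings)
        return state

def compose(*capabilities: Capability):
    """Chain packaged capabilities into one composite digital twin."""
    def twin(state: dict) -> dict:
        for cap in capabilities:
            state = cap.run(state)
        return state
    return twin

pipeline_twin = compose(DataIngestion(), LeakDetection())
print(pipeline_twin({})["leak_suspected"])  # 74 psi dips below threshold
```

Because each capability only depends on the shared state contract, the same `LeakDetection` component could be recomposed into twins for entirely different industries.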

Composability challenges

Alisha Mittal, practice director at Everest Group, said, “Many digital twin projects today are in pilot stages or are focused on very singular assets or processes.”

Everest research has found that only about 15% of enterprises have successfully implemented digital twins across multiple entities. 

“While digital twins offer immense potential for operational efficiency and cost reduction, the key reason for this sluggish scaled adoption is the composability challenges,” Mittal said. 

Engineers struggle to integrate the different ways equipment and sensors collect, process and format data. This complexity gets further compounded due to the lack of common standards and reference frameworks to enable easy data exchange. 

Suseel Menon, senior analyst at Everest Group, said some of the critical challenges they heard from companies trying to scale digital twins include:

  • Nascent data landscape: Polishing data architectures and data flow is often one of the biggest barriers to overcome before fully scaling digital twins to a factory or enterprise scale.
  • System complexity: It is rare for two physical things within a large operation to be similar, complicating integration and scalability. 
  • Talent availability: Enterprises struggle to find talent with the appropriate engineering and IT skills. 
  • Limited verticalization in off-the-shelf platforms and solutions: Solutions that work for assets or processes in one industry may not work in another. 

Threading the pieces together

Schalkwyk said the next step is to develop the composability framework at a second layer with more granular capabilities descriptions. A separate effort on a ‘digital-twin-capabilities-as-a-service’ model will describe how digital twin capabilities could be described and provisioned in a zero-touch approach from a capabilities marketplace. 

Eventually, these efforts could also lay the foundation for digital threads that help connect processes that span multiple digital twins. 

“In the near future, we believe a digital thread-centric approach will take center stage to enable integration both at a data platform silo level as well as the organizational level,” Mittal said. “DataOps-as-a-service for data transformation, harmonization and integration across platforms will be a critical capability to enable composable and scalable digital twin initiatives.”



Artificial intelligence (AI) vs. machine learning (ML): Key comparisons


Within the last decade, the terms artificial intelligence (AI) and machine learning (ML) have become buzzwords that are often used interchangeably. While AI and ML are inextricably linked and share similar characteristics, they are not the same thing. Rather, ML is a major subset of AI.

AI and ML technologies are all around us, from the digital voice assistants in our living rooms to the recommendations you see on Netflix. 

Despite AI and ML penetrating several human domains, there’s still much confusion and ambiguity regarding their similarities, differences and primary applications.

Here’s a more in-depth look into artificial intelligence vs. machine learning, the different types, and how the two revolutionary technologies compare to one another.



What is artificial intelligence (AI)? 

AI is defined as computer technology that imitates a human’s ability to solve problems and make connections based on insight, understanding and intuition.

The field of AI rose to prominence in the 1950s. However, mentions of artificial beings with intelligence can be identified earlier throughout various disciplines like ancient philosophy, Greek mythology and fiction stories.

One notable 20th-century milestone, the Turing Test, is often cited in AI’s history. Alan Turing, also referred to as “the father of AI,” created the test and is best known for building the code-breaking computer that helped the Allies in World War II decipher secret messages sent by the German military. 

The Turing Test is used to determine whether a machine is capable of thinking like a human being. A computer passes the Turing Test only if it responds to questions with answers that are indistinguishable from human responses.

Three key capabilities of a computer system powered by AI include intentionality, intelligence and adaptability. AI systems use mathematics and logic to accomplish tasks, often involving large amounts of data, that otherwise wouldn’t be practical or possible. 

Common AI applications

Modern AI is used by many technology companies and their customers. Some of the most common AI applications today include:

  • Advanced web search engines (Google)
  • Self-driving cars (Tesla)
  • Personalized recommendations (Netflix, YouTube)
  • Personal assistants (Amazon Alexa, Siri)

One example of AI that stole the spotlight came in 2011, when IBM’s Watson, an AI-powered supercomputer, competed on the popular TV game show Jeopardy! Watson shook the tech industry to its core after beating two former champions, Ken Jennings and Brad Rutter.

Outside of game show use, many industries have adopted AI applications to improve their operations, from manufacturers deploying robotics to insurance companies improving their assessment of risk.

Also read: How AI is changing the way we learn languages 

Types of AI

AI is often divided into two categories: narrow AI and general AI. 

  • Narrow AI: Many modern AI applications are considered narrow AI, built to complete defined, specific tasks. For example, a chatbot on a business’s website is an example of narrow AI. Another example is an automatic translation service, such as Google Translate. Self-driving cars are another application of this. 
  • General AI: In contrast, general AI describes a system that could learn and perform any intellectual task a human can, potentially learning faster and performing better than people. General AI remains largely theoretical; no system today has achieved it. 

Regardless of whether an AI is categorized as narrow or general, modern AI is still somewhat limited. It cannot communicate exactly like humans, and while it can mimic emotions, it cannot truly have or “feel” them like a person can.

What is machine learning (ML)?

Machine learning (ML) is considered a subset of AI, whereby a set of algorithms builds models based on sample data, also called training data. 

The main purpose of an ML model is to make accurate predictions or decisions based on historical data. ML solutions use vast amounts of semi-structured and structured data to make forecasts and predictions with a high level of accuracy.

In 1959, Arthur Samuel, a pioneer in AI and computer gaming, defined ML as a field of study that enables computers to continuously learn without being explicitly programmed.

An ML model exposed to new data continuously learns, adapts and develops on its own. Many businesses are investing in ML solutions because they assist them with decision-making, forecasting future trends, learning more about their customers and gaining other valuable insights.

Types of ML

There are three main types of ML: supervised, unsupervised and reinforcement learning. A data scientist or other ML practitioner will use a specific version based on what they want to predict. Here’s what each type of ML entails:

  • Supervised ML: In this type of ML, data scientists will feed an ML model labeled training data. They will also define specific variables they want the algorithm to assess to identify correlations. In supervised learning, the input and output of information are specified.
  • Unsupervised ML: In unsupervised ML, algorithms train on unlabeled data, scanning through it to identify any meaningful connections. Unlike in supervised learning, neither the data labels nor the outputs are predetermined.
  • Reinforcement learning: Reinforcement learning involves data scientists training ML to complete a multistep process with a predefined set of rules to follow. Practitioners program ML algorithms to complete a task and will provide it with positive or negative feedback on its performance. 
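The contrast between the first two types can be sketched on a toy 1-D dataset (the values and labels are made up): supervised learning uses the given labels to fit a decision rule, while unsupervised learning must find structure in the raw values alone.

```python
data = [1.0, 1.2, 0.9, 8.0, 8.3, 7.9]
labels = ["small", "small", "small", "large", "large", "large"]

# Supervised: labels are provided, so learn a decision threshold
# halfway between the two labeled groups.
small = [x for x, y in zip(data, labels) if y == "small"]
large = [x for x, y in zip(data, labels) if y == "large"]
threshold = (max(small) + min(large)) / 2

def classify(x):
    return "large" if x > threshold else "small"

print(classify(6.5))

# Unsupervised: no labels -- split the sorted values at the largest
# gap, letting the data's own structure define the two clusters.
s = sorted(data)
gaps = [(s[i + 1] - s[i], i) for i in range(len(s) - 1)]
_, split = max(gaps)
clusters = (s[: split + 1], s[split + 1:])
print(clusters)
```

Both approaches recover the same two groups here, but only the supervised model can attach the human-meaningful names "small" and "large" to them.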

Common ML applications

Major companies like Netflix, Amazon, Facebook, Google and Uber have made ML a central part of their business operations. ML can be applied in many ways, including via:

  • Email filtering
  • Speech recognition
  • Computer vision (CV)
  • Spam/fraud detection
  • Predictive maintenance
  • Malware threat detection
  • Business process automation (BPA)

Another way ML is used is to power digital navigation systems. For example, Apple and Google Maps apps on a smartphone use ML to inspect traffic, organize user-reported incidents like accidents or construction, and find the driver an optimal route for traveling. ML is becoming so ubiquitous that it even plays a role in determining a user’s social media feeds. 
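Route-finding itself is classically a graph-search problem; navigation apps layer ML-predicted travel times on top of a search like the one below. A minimal sketch using Dijkstra's algorithm over a made-up road network:

```python
import heapq

# Hypothetical road graph: node -> [(neighbor, travel_minutes), ...]
roads = {
    "home":     [("highway", 10), ("backroad", 4)],
    "highway":  [("office", 5)],
    "backroad": [("highway", 3), ("office", 12)],
    "office":   [],
}

def fastest(start, goal):
    """Dijkstra's algorithm over a (travel_time, node, path) heap."""
    queue = [(0, start, [start])]
    seen = set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == goal:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for nxt, t in roads[node]:
            heapq.heappush(queue, (cost + t, nxt, path + [nxt]))
    return float("inf"), []

print(fastest("home", "office"))
```

In a real system, the per-edge travel times would themselves be ML predictions updated from live traffic and user-reported incidents.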

AI vs. ML: 3 key similarities

AI and ML do share similar characteristics and are closely related. ML is a subset of AI, which essentially means it is an advanced technique for realizing AI. ML is sometimes described as the current state of the art in AI.

1. Continuously evolving

AI and ML are both on a path to becoming some of the most disruptive and transformative technologies to date. Some experts say AI and ML developments will have even more of a significant impact on human life than fire or electricity. 

The AI market size is anticipated to reach around $1,394.3 billion by 2029, according to a report from Fortune Business Insights. As more companies and consumers find value in AI-powered solutions and products, the market will grow, and more investments will be made in AI. The same goes for ML — research suggests the market will hit $209.91 billion by 2029. 

2. Offering myriad benefits

Another significant quality AI and ML share is the wide range of benefits they offer to companies and individuals. AI and ML solutions help companies achieve operational excellence, improve employee productivity, overcome labor shortages and accomplish tasks never done before.

There are a few other benefits that are expected to come from AI and ML, including:

  • Improved natural language processing (NLP), another field of AI
  • Developing the Metaverse
  • Enhanced cybersecurity
  • Hyperautomation
  • Low-code or no-code technologies
  • Emerging creativity in machines

AI and ML are already influencing businesses of all sizes and types, and the broader societal expectations are high. Investing in and adopting AI and ML is expected to bolster the economy, lead to fiercer competition, create a more tech-savvy workforce and inspire innovation in future generations.

3. Leveraging Big Data

Without data, AI and ML would not be where they are today. AI systems rely on large datasets, in addition to iterative processing algorithms, to function properly. 

ML models only work when supplied with various types of semi-structured and structured data. Harnessing the power of Big Data lies at the core of both ML and AI more broadly.

Because AI and ML thrive on data, ensuring its quality is a top priority for many companies. For example, if an ML model receives poor-quality information, the outputs will reflect that. 

Consider this scenario: Law enforcement agencies nationwide use ML solutions for predictive policing. However, reports of police forces using biased training data for ML purposes have come to light, which some say is inevitably perpetuating inequalities in the criminal justice system. 

This is only one example, but it shows how much of an impact data quality has on the functioning of AI and ML.

AI vs. ML: 3 key differences

Even with the similarities listed above, AI and ML have differences that suggest they should not be used interchangeably. One way to keep the two straight is to remember that all types of ML are considered AI, but not all kinds of AI are ML.

1. Scope

AI is an all-encompassing term that describes a machine that incorporates some level of human intelligence. It’s considered a broad concept and is sometimes loosely defined, whereas ML is a more specific notion with a limited scope. 

Practitioners in the AI field develop intelligent systems that can perform various complex tasks like a human. On the other hand, ML researchers will spend time teaching machines to accomplish a specific job and provide accurate outputs. 

Due to this primary difference, it’s fair to say that professionals using AI or ML may utilize different elements of data and computer science for their projects.

2. Success vs. accuracy

Another difference between AI and ML solutions is that AI aims to increase the chances of success, whereas ML seeks to boost accuracy and identify patterns. Success is not as relevant in ML as it is in AI applications. 

It’s also understood that AI aims to find the optimal solution for its users. ML is used more often to find a solution, optimal or not. This is a subtle difference, but further illustrates the idea that ML and AI are not the same. 

In ML, there is a concept called the ‘accuracy paradox,’ in which a model can report a high accuracy score yet still mislead practitioners, because the underlying dataset is highly imbalanced.
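A toy illustration (all numbers invented) makes the paradox concrete: on a dataset with 95 negatives and 5 positives, a degenerate "model" that always predicts the majority class still reports 95% accuracy while catching no positives at all.

```python
# Hypothetical imbalanced dataset: 95 negatives, 5 positives.
labels = [0] * 95 + [1] * 5
predictions = [0] * 100  # degenerate "model": always predict the majority class

# Accuracy counts all correct predictions; recall counts only found positives.
accuracy = sum(p == y for p, y in zip(predictions, labels)) / len(labels)
recall = sum(p == 1 and y == 1 for p, y in zip(predictions, labels)) / sum(labels)

print(accuracy)  # 0.95 -- looks impressive
print(recall)    # 0.0  -- not a single positive case identified
```

This is why practitioners look beyond accuracy to metrics like recall and precision whenever classes are imbalanced.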

3. Unique outcomes

AI is a much broader concept than ML and can be applied in ways that will help the user achieve a desired outcome. AI also employs methods of logic, mathematics and reasoning to accomplish its tasks, whereas ML can only learn, adapt or self-correct when it’s introduced to new data. In a sense, ML has more constrained capabilities than AI.

ML models can only reach a predetermined outcome, but AI focuses more on creating an intelligent system to accomplish more than just one result. 

It can be perplexing, and the differences between AI and ML are subtle. Suppose a business trained ML to forecast future sales. It would only be capable of making predictions based on the data used to teach it.
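To make the sales example concrete, here is a minimal sketch (all figures invented) of the kind of model such a business might train: a one-variable least-squares trend fit. It can only extrapolate the pattern present in its training data; a supply shock or a new competitor is invisible to it.

```python
def fit_line(xs, ys):
    # Ordinary least squares for a single feature: y = slope * x + intercept.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
    return slope, my - slope * mx

# Six months of hypothetical sales figures the model is "trained" on.
months = [1, 2, 3, 4, 5, 6]
sales = [100, 110, 120, 130, 140, 150]

slope, intercept = fit_line(months, sales)
forecast_month_7 = slope * 7 + intercept  # 160.0 -- pure extrapolation of the learned trend
```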

However, a business could invest in AI to accomplish various tasks. For example, Google uses AI for several reasons, such as to improve its search engine, incorporate AI into its products and create equal access to AI for the general public. 

Identifying the differences between AI and ML

Much of the progress we’ve seen in recent years regarding AI and ML is expected to continue. ML has helped fuel innovation in the field of AI. 

AI and ML are highly complex topics that some people find difficult to comprehend.

Despite their mystifying natures, AI and ML have quickly become invaluable tools for businesses and consumers, and the latest developments in AI and ML may transform the way we live.

VentureBeat’s mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Learn more about membership.

Repost: Original Source and Author Link


Specialization is key in an exploding AI chatbot market

Amid an exploding market for AI chatbots, companies that target their virtual assistants to specialized enterprise sectors may get a firmer foothold than general chatbots, according to Gartner analysts. That’s not news to Zowie, which claims to be the only AI-powered chatbot technology specifically built for ecommerce companies that use customer support to drive sales – no small feat in an industry in which customer service teams answer tens of thousands of repetitive questions daily. Today, the company announced it has secured $14 million in series A funding led by Tiger Global Management, bringing its total to $20 million.

Chatbots — sometimes referred to as virtual assistants — are built on conversational AI platforms (CAIP) that combine speech-based technology, natural language processing and machine learning tools to develop applications for use across many verticals. 

The CAIP market is extremely diverse, both in vendor strategies and in the enterprise needs that vendors target, says Magnus Revang, a Gartner research vice president covering AI, chatbots and virtual assistants.

The CAIP market comprises about 2,000 vendors worldwide. As a result, companies looking to implement AI chatbot technology often struggle with vendor selection, according to a 2020 Gartner report coauthored by Revang, “Making Sense of the Chatbot and Conversational AI Platform Market.”

The report points out that in a market where no one CAIP vendor is vastly ahead of the pack, companies will need to select the provider that best fits their current short- and midterm needs. 

The secret sauce in the AI chatbot market

That is Zowie’s secret sauce: specialization. Specifically, the company focuses on the needs of ecommerce providers, Maya Schaefer, Zowie’s chief executive officer and cofounder, told VentureBeat. The platform enables brands to improve their customer relationships and start generating revenue from customer service. 

Plenty of other CAIPs provide services for companies that sell products, but their solutions also target other verticals, such as banking, telecom and insurance. Examples include Boost AI, Solvvy and Ada. Some chatbots — Ada is an example — can also be geared for use in the financial technology and software-as-a-service industries to answer questions, for instance, about a non-functioning system.

Zowie is built on the company’s automation technology, ‘Zowie X1’, which analyzes the meaning and sentiment of customer messages to surface repetitive questions and trends. 

Zowie claims to automate 70% of the inquiries ecommerce brands typically receive, such as “Where’s my package?” or “How can I change my shipping address?” The solution also includes a suite of tools that allows agents to provide personalized care and product recommendations, Schaefer said.

For example, if a customer says, “I would like help choosing new shoes,” the system hands the request to a live product expert.

Before implementation, the platform analyzes a customer’s historical chats, frequently asked questions and knowledge base to automatically learn which questions to automate. It uses AI capabilities to analyze new questions and conversations, delivering more automation. 
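Zowie hasn’t published how Zowie X1 works internally, but the first step it describes — mining historical chats for repetitive questions — can be sketched with simple text normalization and counting. All names, data, and thresholds below are invented for illustration:

```python
import re
from collections import Counter

def normalize(question):
    # Collapse case, trailing punctuation, and literal numbers (order IDs etc.)
    # so phrasing variants of the same question group together.
    q = question.lower().strip().rstrip("?!.")
    return re.sub(r"#?\d+", "<num>", q)

def automation_candidates(historical_chats, min_count=2):
    # Questions recurring at least min_count times are candidates for automation.
    counts = Counter(normalize(q) for q in historical_chats)
    return [(q, n) for q, n in counts.most_common() if n >= min_count]

chats = [
    "Where's my package?",
    "where's my package",
    "Can I change order #4412?",
    "Can I change order #9031?",
    "Do you ship to Canada?",
]
candidates = automation_candidates(chats)
print(candidates)  # [("where's my package", 2), ('can i change order <num>', 2)]
```

A production system would use embeddings or intent classifiers rather than string normalization, but the shape of the pipeline — normalize, cluster, rank by frequency — is the same.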

By analyzing patterns, the AI chatbot can tell when something new or unusual is happening and alerts the customer service team, Schaefer said.

Live agents may also have difficulty upselling customers, so Zowie gives agents unique insights about each customer (what product they are looking at, whether they have ordered before and, if so, what they ordered) and has a direct product catalog integration that enables agents to send product suggestions in the form of a carousel, she added.

Time to optimization

In 2019, Schaefer and cofounder Matt Ciolek developed the all-in-one, modular CAIP designed for building highly customizable chatbots. Schaefer estimates that within six weeks of implementation, Zowie answers about 92% of customer inquiries such as “where’s my package?” and “what are the store hours?”

“Managers and agents don’t have to think about how to improve customer experience — our system will detect new trends and propose how to optimize the system automatically,” she said. “In a way, we automate the automation.”


How to use your phone as a two-factor authentication security key

If you want to verify your Google login and make it harder to access by anyone but yourself (always a good idea), one way is to use your iPhone or Android smartphone as a physical security key. While you can set up a third-party 2FA app such as Authy or even use Google’s own Authenticator, these require that you enter both your password and a code generated by the app. Google’s built-in security allows you to access your account by just hitting “Yes” or pressing your volume button after a pop-up appears on your phone. You can also use your phone as a secondary security key.
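For context, the codes that apps like Authy and Google Authenticator generate follow the TOTP algorithm (RFC 6238): an HMAC over the current 30-second interval, truncated to six digits. A minimal standard-library sketch:

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, t=None, digits=6, step=30):
    """Compute an RFC 6238 time-based one-time password (HMAC-SHA1 variant)."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int((time.time() if t is None else t) // step)
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F  # dynamic truncation, per RFC 4226
    code = (struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF) % (10 ** digits)
    return str(code).zfill(digits)

# RFC 6238's test secret ("12345678901234567890" in base32); at T=59 seconds
# the 6-digit code is "287082".
print(totp("GEZDGNBVGY3TQOJQGEZDGNBVGY3TQOJQ", t=59))
```

The phone-as-security-key flow described below skips codes entirely, relying instead on Bluetooth proximity and an on-device prompt.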

Use your phone to sign in

To set this up, your computer should be running a current version of Windows 10, iOS, macOS, or Chrome OS. Before you start, make sure that your phone is running Android 7 or later and that it has Bluetooth turned on.

  • While it’s unlikely you have an Android phone that doesn’t have a Google account associated with it, if you’re one of the few, you can add one by heading into Settings > Passwords & accounts, scrolling down to Add account, and selecting Google
  • Once that’s done, open a Google Chrome browser on your computer
  • Head into your Google account’s security page in Chrome and click on Use your phone to sign in

  • Enter your account password. You’ll be asked to satisfy three steps: choose a phone (if you have more than one), make sure you have either Touch ID (for an iPhone) or a screen lock (for an Android), and add a recovery phone number.

You’ll then be run through a test of the system and invited to turn it on permanently.

Use your phone as a secondary security key

You can also use your phone as a secondary security key to ensure that it is indeed you who are signing into your account. In other words, to get into the account, it will be necessary to be carrying the correct phone with a Bluetooth connection.

  • If you don’t have two-step verification set up yet, go back to your account security page, click on 2-Step Verification and follow the instructions. The TL;DR is that you’ll need to log in, enter a phone number, and select what secondary methods of verification you’d like.
  • Scroll down the list of secondary methods and select Add security key.
  • And again, select Add security key.

You can choose your phone, a USB drive or an NFC key to act as a security key.

  • You’ll be given the choice of adding your phone (or one of your phones, if you have more than one) or a physical USB or NFC key. Select your phone.
  • You’ll get a warning that you need to keep Bluetooth on and that you can only sign in using a supported browser (Google Chrome or Microsoft Edge).

That’s it! You’ve set up your phone as a security key and can now log in to Gmail, Google Cloud, and other Google services and use your phone as the primary or secondary method of verification.

When you sign in to your Google account, your phone will ask you to confirm the sign-in.

Your phone will then confirm your ID with your computer using Bluetooth.

Just make sure your phone is in close proximity to your computer whenever you’re trying to log in. Your computer will then tell you that your phone is displaying a prompt. Follow the directions to verify your login, and you’re all set!

Update March 29th, 2021, 11:20AM ET: This article was originally published on April 12th, 2019, and has been updated to account for changes in the Google interface.


Digitizing Spaces: Why keeping a digital copy of your physical space is key

This article was contributed by Jon Mason, cofounder and CEO of Hotspring

Every industry is currently riding the wave of ‘digital transformation’. Whether to increase efficiency and productivity or enable teams to operate remotely, most companies have been forced to reevaluate their operations in response to the pandemic. According to Flexera, 54% of companies worldwide said they are now prioritizing their digital transformation in 2021. But digitization doesn’t have to be like boiling the ocean. It’s never been quicker and more cost-effective to build out digital doubles which can provide the foundation for further development.

Many industries are in the midst of their digital transformation and are looking for ways to accelerate digitization within their businesses. Over the last decade, the evolution of 3D mapping technology has come on leaps and bounds. In fact, the global 3D mapping market is expected to grow by 15% to $7.6 billion by 2025. What once might have cost thousands of dollars and needed countless hours of work can now be produced on a digital tablet in minutes. For example, digitizing a physical space with a company like Matterport might only cost a couple hundred dollars. It has never been more accessible.

One early adopter of this technology, and arguably one of the most important, is the Architecture, Engineering, and Construction (AEC) industry. As you can imagine, efficiency is key when coordinating a large-scale construction project among a number of different vendors and stakeholders. Building Information Modelling (BIM), an intelligent 3D model-based process for efficiently planning, designing, constructing, and managing buildings and infrastructure, has become an integral part of how this industry operates. In fact, the BIM software market’s global value is expected to almost triple and reach $15 billion by 2027.


As more companies start to realize the true value of investing in virtual assets, transforming large physical spaces into explorable 3D ‘digital twin’ models makes those spaces easily accessible, provides valuable insights into the physical asset and offers opportunities for innovation. Once an exclusive tool of the AEC industry, 3D mapping technology now offers a number of benefits for a wide variety of industries including real estate, event planning, property development, education, pharma, retail, and more.

Digitizing spaces removes physical limitations

When you digitize a space, you unlock a whole host of possibilities and one of the more common advantages is the removal of physical limitations. Picture walking through a virtually built environment that has a near unlimited amount of detail, allowing you to see varied interiors featuring different color patterns and decor, or the ability to view the location at various times of the day. 3D mapping technology has the power to construct a vividly powerful virtual experience based on the intentions of an architect or designer, bringing their vision to life in minute detail and making it accessible to a range of stakeholders.

Using a mix of 360-degree cameras and photogrammetry, 3D mapping technology can stitch together any captured footage of a space to create a singular model that is not only a virtual walkthrough but also a dimensionally accurate model of that space. This can then be brought into Unreal Engine, or a similar 3D visualization tool, to unlock new ways of experiencing the premises without actually having to be physically in that space.

Whether you’re selling a new apartment in New York, renting a holiday home in Cancun, or inviting guests to book hotel rooms in remote locations, potential customers need to be able to view the property. By creating a digital double of the space, you remove all geographical constraints and open up the possibility for people to immerse themselves in the space from anywhere in the world. In a Planet Home Study survey, 75% of buyers agreed that a virtual home tour is a major deciding factor in whether they would buy a property.

One company utilizing this technology is a spatial data company that bridges the physical and the digital: it can take an existing space, scan it and upload it into the digital world. Its 3D mapping technology empowers users to capture and connect rooms to create truly interactive 3D models of spaces. This technology has been hailed as a new platform for helping the real estate business expand, particularly during the COVID-19 pandemic.

Create a unique user experience by digitizing spaces

Another key benefit of having a digital copy of your physical space is the ability to create unique and innovative experiences. In an HBR survey, 40% of respondents said that customer experience was their top priority for digital transformation. Since every experience with a customer impacts their overall perception of a brand, taking an approach that focuses on relationships with customers is a wise move. So how can having a digital space help?

Staging an event or experience in a physical space may be the norm, but if access to the event is limited by location then you’re potentially excluding a vast number of potential customers. However, if the event is held in a virtual capacity, you tap into the freedom to expand not only the audience but also the levels of creativity behind the experience. For example, customers could meet with fashion designers in a virtual store or attend a virtual concert from their favorite artists – like one recently reformed band has planned to do.

From a business perspective, creating a digital copy of your premises offers cost-saving opportunities for enterprises of all kinds, increases productivity, and elevates creativity. For outlets like supermarkets that traditionally rely on their physical locations, the advantages of creating a digitized space may not be immediately obvious. However, by virtually mapping all of your locations, you can craft campaigns that roll out across all stores at once, allowing you to visualize the look, arrange fittings, and implement various installments at the push of a button.

Obstacles and misconceptions

As you can see, there are a ton of advantages for businesses when they embrace the digitization of their spaces, so why aren’t more people opting in? The immediate answer is simply lack of awareness; it’s just not something that people are thinking about. The truth of the matter is, most people don’t realize how good these virtual renderings can look. When people think of digital space, they automatically think of traditional architectural renders, which at times have looked unrealistic. Thanks to the advancements in 3D mapping technology, these digital twins are now incredibly photorealistic. Companies like Cushman & Wakefield, a global leader in the commercial real estate industry, have been working to create highly accurate 3D digital twins of their properties for 24/7 virtual tours.

Traditionally, it’s also been quite expensive to go through this digitization process, and getting funding for a large-scale project like this is difficult. But as mentioned previously, what once used to cost quite a lot can now be done much more cheaply with the right services. In fact, Cushman & Wakefield realized an estimated 53% cost savings by providing an alternative to purchasing cameras and individually training employees to scan such a large volume of digital twins.

However, it should be noted that this technology is still in constant development and can sometimes be limited in its application. At present, some companies only allow users to view these doubles from specific vantage points, which limits how immersive they can truly be. By hiring creatives with the right skill set, businesses are empowered to generate digital copies of their spaces. As the world adapts to a hybrid working model in the wake of the pandemic, there are companies that connect creative talent from around the world with the companies that require their services. By marrying these digital doubles with highly skilled artists, they can be transformed into fully immersive and interactive experiences.

Another common misconception is that VR or AR hardware is required for these spaces to be immersive and engaging, and that this could be a blocker. Yet according to Statista, experts believe that demand for VR/AR headsets will increase almost eight-fold by 2025. And even if you don’t own a VR or AR headset, these digital spaces are still accessible through web browsers, mobile apps, or in-store. While not as immersive, they are still completely photorealistic and allow users to experience a digitized space.

One such business utilizing this VR and AR technology is beauty powerhouse L’Oreal. Their teams have developed an app that allows customers to virtually apply makeup, such as nail polish and lipstick, to themselves with their smartphones. They’ve now incorporated this technology into their stores via “magic mirrors” which allow customers to virtually apply the makeup and order the products they prefer directly from the device. With the beauty industry becoming such a lucrative market, we expect to see more brands taking their lead.

What does the future hold?

While there is plenty of room for growth and expansion in the digitization space, the question remains: where do we go from here? The big buzzword on everyone’s lips at the moment is “the metaverse” — a collection of various online worlds in which physical, augmented, and virtual reality converge. Think of it as an online forum where people can interact with one another, go to work, buy goods and services, or even attend events. And while the metaverse as we’ve described above may not be a reality right now, there are businesses like Facebook that are actively trying to make this concept a reality.

Dubbed the next evolution of the internet, everybody is under the impression that we will be spending more and more of our time in this metaverse soon enough. Where your digitized space may once have been a solitary experience, it may soon grow to become a bustling and interactive marketplace where friends and colleagues can join together to experience what your business has to offer. Creating a digital copy of your physical space can help future-proof your business for what could be coming tomorrow.

Jon Mason is the cofounder and CEO of Hotspring





Yubico key organizer keeps your house keys tidy and your YubiKey security key safe

Yubico has teamed up with Keyport on a new key organizer that’s designed to safely stash your YubiKey security key (a small dongle that can act as an extra layer of security for your logins) alongside your house keys in one compact little enclosure.

It’s a neat idea. My house keys have to put up with a lot of abuse from being carried around in my pockets and stuffed in the bottom of backpacks. And while I’m not too worried about a set of metal keys surviving this kind of treatment, I wouldn’t say no to a little more protection for a USB dongle that I need to access my most secure accounts. YubiKeys are built like tanks, but nothing’s invincible.

The $25 Yubico x Keyport Pivot 2.0 key organizer appears to have been released earlier this month, and it’s functionally a very similar accessory to Keyport’s existing Pivot 2.0 organizer. The differences appear to amount to a small Yubico logo on its outside, and Yubico’s website also notes that this version doesn’t include Keyport’s lost and found service.

Alongside a YubiKey security key, the organizer has space for up to seven other key-sized items. As well as keys, Keyport sells a variety of tools that are designed to sit in its holders, like multi-tools, mini-flashlights, and pens. Compatible YubiKeys include the 5 Series, as well as its new Bio Series, which are activated using a fingerprint.


DDR3 vs. DDR4 RAM: Key Differences Explained

DDR3 RAM had a long run after its 2007 release, powering mainstream laptops and desktop computers for many years. However, in 2014, DDR4 RAM was released to the public and has since become the most common memory type in desktop PCs, laptops, and tablets. With major changes to its physical design, specifications, and features, motherboards with DDR4 slots cannot use DDR3 RAM, and DDR4 RAM can’t be put into a DDR3 slot. Neither is compatible with the newer DDR5 memory.

DDR4 versus DDR3 RAM

DDR4 tends to run at 1.2 volts by default, whereas DDR3 runs at 1.5V. While it may not seem like much, that’s a 20% improvement in efficiency between generations. For most home users, the difference in voltage ultimately results in lower power consumption and heat generation, which can be especially important in laptops where it can impact battery life.

DDR4 is not just more power efficient — it’s a lot faster, too. DDR3 specifications range from 800 to 2,133 MTps (millions of transfers per second). In comparison, DDR4 RAM ranges from 2,133 to 3,200 MTps, not to mention the faster kits available through XMP and overclocking.

When purchasing RAM, you can identify the speed by its name, e.g. Crucial Ballistix 16GB DDR4-3200. This indicates it is DDR4 RAM at 3,200 MTps.

MTps versus MHz

To avoid any confusion, it’s worth clearing up the difference between MTps and MHz. Some companies will advertise RAM in MTps and others in MHz.

MTps (millions of transfers per second) measures how many data transfers the bus completes each second. MHz (megahertz) measures clock frequency in millions of cycles per second.

Generally, DDR RAM is advertised in MTps, whereas single data rate (SDR) RAM is advertised in MHz, since it completes only one transfer per clock cycle. DDR3 RAM operating at 1,066.6MHz will be presented as DDR3 2133: because DDR (double data rate) RAM transfers data on both the rising and falling edges of the clock, the 1,066.6MHz is doubled to give 2,133MTps.
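The arithmetic is straightforward: doubling the I/O clock gives the advertised transfer rate, and multiplying that rate by the 64-bit module bus width gives peak bandwidth, which is where module names like PC3-17000 come from. A quick sketch:

```python
def effective_mtps(io_clock_mhz):
    # DDR moves data on both the rising and falling clock edges.
    return io_clock_mhz * 2

def peak_bandwidth_mb_s(mtps, bus_width_bits=64):
    # Each transfer carries one bus width of data (64 bits = 8 bytes).
    return mtps * bus_width_bits / 8

print(effective_mtps(1066.6))     # 2133.2 -> marketed as DDR3 2133
print(peak_bandwidth_mb_s(2133))  # 17064.0 MB/s, hence the PC3-17000 module name
```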

In practice, manufacturers often use MHz and MTps interchangeably to describe the same effective rate, so it’s important to be aware of how a manufacturer represents their measurements.

Is DDR4 faster than DDR3 RAM?


DDR4 can and does run much faster than even the best DDR3, but there is some crossover.

In terms of transfer rate, DDR4 is capable of far more transfers per second. However, MTps isn’t the only specification to consider when purchasing RAM.

Timings, like Column Access Strobe latency (CL), also play an important part in the overall memory performance. CL determines the number of clock cycles it takes for RAM to deliver data requested by the CPU.

To put it into context, fast clock speeds don’t necessarily mean faster RAM. High latency can drag the results down slightly, so it’s worth comparing all the specifications to find the right type of RAM for your requirements.
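One way to compare fairly is to convert CL into absolute time, using the common formula of CL divided by the I/O clock (which is half the transfer rate):

```python
def cas_latency_ns(mtps, cl):
    # The I/O clock runs at half the transfer rate; CL is counted in those cycles.
    clock_mhz = mtps / 2
    return cl / clock_mhz * 1000  # cycles / (million cycles per second) -> nanoseconds

print(cas_latency_ns(1600, 9))    # DDR3-1600 CL9  -> 11.25 ns
print(cas_latency_ns(3200, 16))   # DDR4-3200 CL16 -> 10.0 ns
```

Even with looser timings (CL16 versus CL9), the much higher clock leaves DDR4 with the lower absolute latency.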

Taking Corsair’s Vengeance DDR3 kits and pitting them against their Vengeance LPX DDR4 kits, there’s a clear difference in performance and speed.

[Chart: comparison of Corsair’s DDR3 vs. DDR4 read and write performance]

While read/write speeds are marginally lower on their DDR4 kits compared to DDR3 at 2,133MHz, it’s important to remember this is the entry-level speed for DDR4.

[Chart: comparison of Corsair’s DDR3 vs. DDR4 RAM in latency tests]

Latency-wise, DDR3-1600 has a higher latency than any DDR4 kit. At 2,133MHz, the DDR4 kit is slightly higher than DDR3-2133, but as the memory clock speed increases, the overall latency decreases, even if the timings are looser.

Ultimately, DDR4 is faster than DDR3 RAM. It also offers better performance per dollar than any DDR3 RAM, except at the 1,600MHz entry level.

High-Frequency DDR4 RAM

High-frequency DDR4 RAM like the Corsair Vengeance LPX 16GB is capable of reaching speeds of up to 4,000MHz. Gamers who want to push their machines to achieve the best performance can overclock their RAM for demanding systems.

However, manufacturers like HyperX have pushed the boundaries even further with their HyperX Predator DDR4 family of RAM, which is available at speeds from 2,666MHz to 5,333MHz.

In fact, in April, they achieved the world overclocking record of 7,200MHz with their HyperX Predator DDR4 memory.

Why DDR3 and DDR4 don’t work together

[Image: high-performance DDR3 ECC computer memory]

One of the main differences between DDR3 and DDR4 RAM is the layout of the physical pins. DDR3 RAM uses a 240-pin connector, whereas DDR4 uses a 288-pin connector. A motherboard with DDR4 compatibility won’t work with DDR3 RAM and vice versa. The connector pins are different so that you can’t accidentally install the wrong type of RAM.

The different voltage demands also mean that a system designed with DDR4 in mind wouldn’t provide the correct voltage for DDR3 at default and may not even be designed with that voltage capability in mind.

The difference between DDR4 and DDR3 pricing

When DDR4 first came to the market, the price gap was significant. However, with more compatible motherboards and CPUs, DDR4 RAM has dropped in price.

But compared to DDR3, DDR4 RAM is often more expensive. It’s not a huge difference, especially if you’re buying a couple of modules. However, if you need a large amount of RAM, the costs can add up pretty quickly.

What about DDR5 RAM?

DDR5 RAM was released in 2020 but has yet to make a significant impact on the market.

With the release of Intel’s Alder Lake CPUs, it would be safe to say DDR5 will become more widely available in late 2021 and 2022.

DDR5 specifications start at 4,800MHz and cap at 6,400MHz, with the potential for faster clock speeds in the future.

Choosing the right RAM

DDR3 versus DDR4 RAM comes down to the hardware in your system. If you’re using an older motherboard and CPU, your option will likely be DDR3.

However, if you already have a compatible motherboard and CPU or are thinking of investing in one, DDR4 RAM is a more future-proof choice.

It’s slightly more expensive, and in very few cases at lower clock speeds it may not be as fast as DDR3 RAM, but it will be supported by the latest hardware.


Virtru launches zero-trust key management for entire Google ecosystem

Virtru, a well-known name in data encryption and privacy, has launched an external zero-trust key-management solution expressly for admins of the Google Cloud Platform (GCP).

Virtru’s cloud-based software protects data throughout its lifecycle as it travels through email and file-sharing platforms, including SaaS solutions, cloud environments, and a diverse range of file ecosystems. It is designed to smooth out a normally thorny and painstaking routine for security admins.

Virtru enables admins to manage encryption keys separately from data. This allows them to mitigate breaches, prevent unauthorized access, and ensure no third party can access the data — Google and Virtru included — without disturbing data stores. Users also have choices on how they want to host their encryption keys: on-premises, in a private cloud, fully hosted, or as optional HSM integrations.
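Virtru hasn’t published its internals, but the pattern it describes — holding the key-encryption key outside the cloud provider’s reach — is classic envelope encryption. The sketch below illustrates the structure only; the XOR "cipher" is a toy stand-in for real AES and must never be used as actual cryptography:

```python
import hashlib
import secrets

def keystream_xor(key, data):
    # Toy SHA-256-in-counter-mode keystream -- for structural illustration only.
    out = bytearray()
    for i in range(0, len(data), 32):
        block = hashlib.sha256(key + i.to_bytes(8, "big")).digest()
        out.extend(b ^ k for b, k in zip(data[i:i + 32], block))
    return bytes(out)

# Envelope encryption: each object is sealed with its own data-encryption key
# (DEK); only the *wrapped* DEK travels with the data. The key-encryption key
# (KEK) never leaves the customer-held key store (on-prem, private cloud, HSM).
kek = secrets.token_bytes(32)          # customer-managed key
dek = secrets.token_bytes(32)          # per-object data key
ciphertext = keystream_xor(dek, b"quarterly revenue: $4.2M")
wrapped_dek = keystream_xor(kek, dek)  # stored alongside the ciphertext

# The cloud provider sees only ciphertext + wrapped_dek; without the KEK,
# neither can be opened. Revoking the KEK renders the data unreadable.
plaintext = keystream_xor(keystream_xor(kek, wrapped_dek), ciphertext)
assert plaintext == b"quarterly revenue: $4.2M"
```

Because only the wrapped DEK is stored with the data, rotating or destroying the externally held KEK changes or revokes access without touching the data stores themselves.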

Advantages of managing encryption keys separately from data

With the help of the external key-management solution, enterprises can securely manage their encryption keys independently of their data across Google Workspace, GCP, and other cloud applications. The solution safeguards information in data lakes, databases, and other containers that pass through Google’s cloud computing services and AI capabilities.

With Virtru, all data moving across the Google ecosystem has a single global framework and policy language, irrespective of where the data originates, whether it is created by users, devices, or systems.

Virtru’s cofounder and CEO John Ackerly said that this solution directly addresses a widespread lack of security in “leveraging big-data cloud computing.”

Zero-trust security for Google portfolio

Because the solution follows a zero-trust data standard, Google users can get more out of GCP by extending data sovereignty from collaboration suites to cloud applications as their use cases dictate.

“Virtru is now the only Google partner bringing data security to the entire Google portfolio, including key management, supporting regulations like ITAR in the U.S. and Schrems II in the EU,” Ackerly said in a media advisory.

Although Virtru has been a longstanding partner of Google for data protection and Google-recommended encryption key management, the new solution extends its jurisdiction to Gmail, Google Drive, Google Docs, Sheets, and Slides. Virtru leverages the open Trusted Data Format (TDF), a well-known encryption technology that is also used by the U.S. National Security Agency, to build these solutions.
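Conceptually, a TDF-style object bundles the encrypted payload with a manifest that binds a wrapped key and an access policy to it, so the data carries its own protection wherever it travels. The sketch below is a simplified, hypothetical illustration of that structure; the field names are invented for this example and are not the real TDF schema.

```python
# Hypothetical sketch of a TDF-style wrapper: encrypted payload plus a
# manifest binding a wrapped key and an access policy. Field names are
# illustrative, not the actual Trusted Data Format schema.
import base64
import json

def build_tdf_like_object(ciphertext: bytes, wrapped_key: bytes,
                          key_access_url: str, allowed_users: list) -> str:
    manifest = {
        "payload": {
            "ciphertext": base64.b64encode(ciphertext).decode(),
            "cipher": "example-cipher",       # placeholder algorithm name
        },
        "keyAccess": {
            "url": key_access_url,            # external key server to contact
            "wrappedKey": base64.b64encode(wrapped_key).decode(),
        },
        "policy": {
            "allowedUsers": allowed_users,    # enforced by the key server
        },
    }
    return json.dumps(manifest, indent=2)

doc = build_tdf_like_object(b"\x01\x02", b"\x03\x04",
                            "https://keys.example.com", ["alice@example.com"])
```

A recipient presents the manifest to the key server named in it; the server checks the policy before releasing the unwrapped key, which is what lets access be granted or revoked after the data has left the building.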

Developers will likely be pleased to know that the solution also integrates with Google Cloud Platform’s Kubernetes Engine, Secret Manager, Compute Engine, BigQuery, Dataflow, Cloud SQL, and Pub/Sub. Virtru’s Data Protection Gateway, working in conjunction with Google Cloud EKM, is also compatible with enterprise applications like Salesforce, Zendesk, and Looker.

The company said its service secures more than 7,000 customers and enhances collaboration across a network of more than 260,000 domains, with Virtru’s platform protecting an average of more than 2 million emails and files per day.

Virtru’s client base includes companies and institutions such as Next Insurance, DNA Worldwide, the state of Maryland, and Brown University.

