Responsible use of machine learning to verify identities at scale 


In today’s highly competitive digital marketplace, consumers are more empowered than ever. They have the freedom to choose which companies they do business with and enough options to change their minds at a moment’s notice. A misstep that diminishes a customer’s experience during sign-up or onboarding can lead them to replace one brand with another, simply by clicking a button. 

Consumers are also increasingly concerned with how companies protect their data, adding another layer of complexity for businesses as they aim to build trust in a digital world. Eighty-six percent of respondents to a KPMG study reported growing concerns about data privacy, while 78% expressed fears related to the amount of data being collected. 

At the same time, surging digital adoption among consumers has led to an astounding increase in fraud. Businesses must build trust and help consumers feel that their data is protected but must also deliver a quick, seamless onboarding experience that truly protects against fraud on the back end.

As such, artificial intelligence (AI) has been hyped as the silver bullet of fraud prevention in recent years for its promise to automate the process of verifying identities. However, despite all of the chatter around its application in digital identity verification, a multitude of misunderstandings about AI remain. 



Machine learning as a silver bullet

As the world stands today, true AI in which a machine can successfully verify identities without human interaction doesn’t exist. When companies talk about leveraging AI for identity verification, they’re really talking about using machine learning (ML), which is an application of AI. In the case of ML, the system is trained by feeding it large amounts of data and allowing it to adjust and improve, or “learn,” over time. 

When applied to the identity verification process, ML can play a game-changing role in building trust, removing friction and fighting fraud. With it, businesses can analyze massive amounts of digital transaction data, create efficiencies and recognize patterns that can improve decision-making. However, getting tangled up in the hype without truly understanding machine learning and how to use it properly can diminish its value and, in many cases, lead to serious problems. When using ML for identity verification, businesses should consider the following.
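As a concrete illustration of the pattern recognition at work, here is a minimal sketch using scikit-learn. The feature names, data and labels are invented for illustration and do not reflect any vendor's production model.

```python
# Minimal sketch: training a fraud classifier on labeled transaction
# features. All features and data here are hypothetical.
from sklearn.ensemble import RandomForestClassifier

# Each row: [transaction_amount, account_age_days, failed_logins_24h]
X_train = [
    [25.0, 900, 0],    # legitimate
    [40.0, 1200, 1],   # legitimate
    [980.0, 2, 9],     # fraudulent
    [1500.0, 1, 12],   # fraudulent
]
y_train = [0, 0, 1, 1]  # 0 = legitimate, 1 = fraud

model = RandomForestClassifier(n_estimators=50, random_state=0)
model.fit(X_train, y_train)

# Score a new sign-up attempt: a brand-new account with many failed logins.
risk = model.predict_proba([[1100.0, 1, 10]])[0][1]
print(f"fraud probability: {risk:.2f}")
```

In production, a model like this would be trained on millions of historical transactions rather than a toy sample, which is exactly why the data-quality and bias concerns discussed below matter so much.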

The potential for bias in machine learning

Bias in machine learning models can lead to exclusion, discrimination and, ultimately, a negative customer experience. Training an ML system using historical data will translate biases of the data into the models, which can be a serious risk. If the training data is biased or subject to unintentional bias by those building the ML systems, decisioning could be based on prejudiced assumptions.

When an ML algorithm makes erroneous assumptions, it can create a domino effect in which the system is consistently learning the wrong thing. Without human expertise from both data and fraud scientists, and oversight to identify and correct the bias, the problem will be repeated, thereby exacerbating the issue.
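One simple form of the human oversight described above is a routine disparity audit on the model's decisions. The sketch below uses hypothetical group labels and the common "80% rule" heuristic; it is a starting point for review, not a complete fairness methodology.

```python
# Hedged sketch: compare approval rates across (hypothetical) groups.
# Large gaps are a signal the training data may encode bias.
from collections import defaultdict

decisions = [  # (group, approved) pairs from a hypothetical audit sample
    ("A", True), ("A", True), ("A", True), ("A", False),
    ("B", True), ("B", False), ("B", False), ("B", False),
]

totals, approved = defaultdict(int), defaultdict(int)
for group, ok in decisions:
    totals[group] += 1
    approved[group] += ok

rates = {g: approved[g] / totals[g] for g in totals}
print(rates)  # {'A': 0.75, 'B': 0.25}

# "80% rule" heuristic: flag any group whose approval rate falls below
# 80% of the best-performing group's rate.
best = max(rates.values())
flagged = [g for g, r in rates.items() if r < 0.8 * best]
print("groups needing review:", flagged)  # ['B']
```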

Novel forms of fraud 

Machines are great at detecting trends that have already been identified as suspicious, but their crucial blind spot is novelty. ML models use patterns of data and therefore assume future activity will follow those same patterns or, at the least, a consistent pace of change. This leaves open the possibility for attacks to be successful, simply because they have not yet been seen by the system during training.

Layering a fraud review team onto machine learning ensures that novel fraud is identified and flagged, and updated data is fed back into the system. Human fraud experts can flag transactions that may have initially passed identity verification controls but are suspected to be fraud and provide that data back to the business for a closer look. In this case, the ML system encodes that knowledge and adjusts its algorithms accordingly.
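The feedback loop described above can be sketched in a few lines. The record structure and field names here are invented; the point is only that analyst overrides take precedence over the model's original labels when the next training set is assembled.

```python
# Sketch of a human-in-the-loop feedback cycle (names are illustrative):
# analysts relabel transactions the model initially passed, and the
# corrected labels are folded back into the training set.
records = [
    {"id": 1, "features": [120.0, 30], "model_label": 0},
    {"id": 2, "features": [999.0, 1], "model_label": 0},  # passed, but suspicious
]

# A fraud analyst reviews and overrides the label for record 2.
analyst_overrides = {2: 1}

training_set = []
for r in records:
    # Analyst judgment wins over the model's original decision.
    label = analyst_overrides.get(r["id"], r["model_label"])
    training_set.append((r["features"], label))

print(training_set)
# Record 2 now carries the corrected label and will inform the next retrain.
```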

Understanding and explaining decisioning

One of the biggest knocks against machine learning is its lack of transparency, which is a basic tenet in identity verification. One needs to be able to explain how and why certain decisions are made, as well as share with regulators information on each stage of the process and customer journey. Lack of transparency can also foster mistrust among users.

Most ML systems provide a simple pass or fail score. Without transparency into the process behind a decision, it can be difficult to justify when regulators come calling. Continuous data feedback from ML systems can help businesses understand and explain why decisions were made and make informed decisions and adjustments to identity verification processes.
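One common way to move beyond a bare pass/fail is to report per-feature contributions from an interpretable model. The sketch below uses a linear model's coefficients for this; the feature names and data are hypothetical, and real systems often use more elaborate attribution methods.

```python
# Sketch: explaining a verification decision by showing which features
# pushed it toward "fail". Feature names and data are invented.
import numpy as np
from sklearn.linear_model import LogisticRegression

feature_names = ["doc_mismatch_score", "ip_risk", "velocity"]
X = np.array([[0.1, 0.2, 0.0], [0.0, 0.1, 0.1],
              [0.9, 0.8, 0.7], [0.8, 0.9, 0.9]])
y = np.array([0, 0, 1, 1])  # 0 = pass, 1 = fail

clf = LogisticRegression().fit(X, y)

applicant = np.array([0.85, 0.2, 0.9])
contributions = clf.coef_[0] * applicant  # per-feature pull toward "fail"
for name, c in sorted(zip(feature_names, contributions),
                      key=lambda t: -abs(t[1])):
    print(f"{name}: {c:+.3f}")
```

An audit trail of these contributions, stored alongside each decision, is the kind of record that makes a regulator's questions answerable.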

There is no doubt that ML plays an important role in identity verification and will continue to do so in the future. However, it’s clear that machines alone aren’t enough to verify identities at scale without adding risk. The power of machine learning is best realized alongside human expertise and with data transparency to make decisions that help businesses build customer loyalty and grow. 

Christina Luttrell is the chief executive officer for GBG Americas, comprised of Acuant and IDology.



Repost: Original Source and Author Link


Liveness tests used by banks to verify ID are ‘extremely vulnerable’ to deepfake attacks

Automated “liveness tests” used by banks and other institutions to help verify users’ identity can be easily fooled by deepfakes, a new report demonstrates.

Security firm Sensity, which specializes in spotting attacks using AI-generated faces, probed the vulnerability of identity tests provided by 10 top vendors. Sensity used deepfakes to copy a target face onto an ID card to be scanned and then copied that same face onto a video stream of a would-be attacker in order to pass vendors’ liveness tests.

Liveness tests generally ask someone to look into a camera on their phone or laptop, sometimes turning their head or smiling, in order to prove both that they’re a real person and to compare their appearance to their ID using facial recognition. In the financial world, such checks are often known as KYC, or “know your customer” tests, and can form part of a wider verification process that includes document and bill checks.

“We tested 10 solutions and we found that nine of them were extremely vulnerable to deepfake attacks,” Sensity’s chief operating officer, Francesco Cavalli, told The Verge.

“There’s a new generation of AI power that can pose serious threats to companies,” says Cavalli. “Imagine what you can do with fake accounts created with these techniques. And no one is able to detect them.”

Sensity shared the identity of the enterprise vendors it tested with The Verge, but it requested that the names not be published for legal reasons. Cavalli says Sensity signed non-disclosure agreements with some of the vendors and, in other cases, fears it may have violated companies’ terms of service by testing their software in this way.

Cavalli also says he was disappointed by the reaction from vendors, who did not seem to consider the attacks significant. “We told them ‘look you’re vulnerable to this kind of attack,’ and they said ‘we do not care,’” he says. “We decided to publish it because we think, at a corporate level and in general, the public should be aware of these threats.”

The vendors Sensity tested sell these liveness checks to a range of clients, including banks, dating apps, and cryptocurrency startups. One vendor was even used to verify the identity of voters in a recent national election in Africa. (Though there’s no suggestion from Sensity’s report that this process was compromised by deepfakes.)

Cavalli says such deepfake identity spoofs are primarily a danger to the banking system where they can be used to facilitate fraud. “I can create an account; I can move illegal money into digital bank accounts of crypto wallets,” says Cavalli. “Or maybe I can ask for a mortgage because today online lending companies are competing with one another to issue loans as fast as possible.”

This is not the first time deepfakes have been identified as a danger to facial recognition systems. They’re primarily a threat when the attacker can hijack the video feed from a phone or camera, a relatively simple task. However, facial recognition systems that use depth sensors — like Apple’s Face ID — cannot be fooled by these sorts of attacks, as they verify identity not only based on visual appearance but also the physical shape of a person’s face.



Using AI to verify renter eligibility and risk

Imagine a software app that creates peace and understanding between landlords and tenants. How much value would that have in this world of constant rental turnover and strife?

This is the challenge taken on by Obligo, a New York-based fintech company that is using AI and machine learning to determine the level of risk of renters so that landlords feel safer about transactions. The company just announced a series B funding of $35 million.

“Our whole idea here is simple: We want to make renting an apartment or single-family home as easy as getting a hotel room,” Omri Dor, cofounder and COO of Obligo, told VentureBeat. “The main barrier to doing this has been the security deposit, which is as much [of] a pain to landlords as it is to tenants. It’s all about trust. If we can establish trust between landlords and tenants, then most of these barriers that cause strife fall away.”

Open banking is an important factor for determining renter eligibility

At move-in time, Obligo’s platform uses open banking data and AI-based underwriting to determine a renter’s eligibility to rent a unit without putting down a deposit.

Open banking is a relatively new approach that requires deposit-taking financial institutions to open up customer and/or payment data to regulated third-party providers, which can then access, use and share it. This breaks up the monopolies of financial services and allows more players to enter the market.

Obligo has done AI- and machine-learning-based software development incorporating open banking in its platform.

“There are a lot of interesting technological challenges,” Dor said. “On the one hand, the unspoken heroes of all these kinds of products are really the integrations and the engineers building the integrations to work with them, the accounting systems that the landlords use — and these are various industry-standard ones that you’ve got to work with very seamlessly.” The more sophisticated landlords actually use Obligo’s API, Dor said.

The more challenging type of technology, certainly, is focused on machine learning and AI. “That’s where I think there’s really incredible progress that we’ve been able to make, because we get all this rich data that I mentioned,” Dor said. “We’ll take a bank account, but I’m not going to look at too much data … we don’t want to know where you go shopping, for example. We take the data and extract (meta-type) features. Then they’re basically aggregated and anonymized, so we don’t know exactly where you’ve been shopping. Here’s an example:

“We’ll look at the average balance in your bank account in the last six months divided by your monthly rent,” Dor said. “Is that number high or low? If that number is low, that means that there isn’t a lot of cash usually floating in your account, and that’s potentially a riskier situation. If there’s a lot of money floating around, usually that may mean that you are a safer renter. So we use these kinds of features.”
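The feature Dor describes reduces to simple arithmetic, sketched below with made-up numbers to show how the ratio separates a cash-rich renter from a cash-poor one.

```python
# The balance-to-rent feature Dor describes, as a small illustrative
# computation. All figures are invented.
def balance_to_rent_ratio(monthly_balances, monthly_rent):
    """Average bank balance over the period divided by monthly rent."""
    avg_balance = sum(monthly_balances) / len(monthly_balances)
    return avg_balance / monthly_rent

# Six months of end-of-month balances against a $2,000 rent.
low_risk = balance_to_rent_ratio([9000, 8500, 9200, 8800, 9100, 9400], 2000)
high_risk = balance_to_rent_ratio([1200, 900, 1500, 800, 1100, 1000], 2000)

print(f"low-risk ratio:  {low_risk:.2f}")   # 4.50
print(f"high-risk ratio: {high_risk:.2f}")  # 0.54
```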

What Obligo’s AI engine produces

The AI engine of Obligo’s platform predicts which renters are most or least risky, in the sense that their lease could result in unpaid debt to the landlord, Dor said. Traditional solutions to predict renter risk had a few drawbacks that Obligo was able to solve.

First, Dor said, the data used for traditional solutions was not very rich, relying on items such as FICO scores, background checks, and total income. In contrast, Dor said, Obligo’s AI engine predominantly relies on very rich open banking data. This means that, with the renter’s consent, Obligo gains access to the renter’s bank account transaction history.

The second drawback of traditional attempts to predict renter risk is that they are usually not aware of the outcome of the lease. Those traditional models are set in stone, relying on old datasets that are not just outdated but typically biased due to the specific property portfolio from which they draw, Dor said. In contrast, since Obligo handles the move-out process, Obligo has visibility into the outcome of every lease, enabling a true machine-learning cycle to take place.

One of the key challenges that Obligo faces on its AI front is that it takes a very long time for leases to end. This means Obligo must wait a long time to observe sufficiently many lease-ends to allow its AI engine to learn, Dor said.

Getting deeper into the Obligo tech

Senior Engineer Ori Zviran, head of Obligo’s Core Technology team, answered a few detailed questions from VB on how this all works.

VentureBeat: What AI and ML tools are you using specifically?

Zviran: “We are researching on Jupyter notebooks with pandas, Scikit-learn, and Statsmodels (Python libraries). We then deploy to production on AWS Sagemaker.”

VentureBeat: Are you using models and algorithms out of a box — for example, from DataRobot or other sources?

Zviran: “We are using Scikit-learn and Statsmodels.”

VentureBeat: What overall cloud solutions are you using? Are you an AWS shop and using a lot of the AI workflow tools there, for example, Sagemaker?

Zviran: “Yes, we use Sagemaker and our entire platform is hosted on AWS. We use AWS-managed Mongo and Postgres.”

VentureBeat: How much do you do yourselves?

Zviran: “We are piecing the model together ourselves on Python, Scikit, and of course relying on our own platform’s backend to get the data and preprocess it. We deploy the model to Sagemaker for production.”
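The stack Zviran outlines (pandas preprocessing feeding a scikit-learn model that is then deployed to SageMaker) can be sketched roughly as below. The columns, data and label are invented; the actual features and lease-outcome definition are, as Zviran says later, Obligo's secret sauce.

```python
# Hedged sketch of a pandas + scikit-learn training step like the one
# Zviran describes. Everything about the data is hypothetical.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

df = pd.DataFrame({
    "balance_to_rent": [4.5, 3.8, 0.5, 0.3],
    "income_stability": [0.9, 0.8, 0.3, 0.2],
    "bad_outcome": [0, 0, 1, 1],  # hypothetical lease-outcome label
})

X = df[["balance_to_rent", "income_stability"]]
y = df["bad_outcome"]

model = make_pipeline(StandardScaler(), LogisticRegression())
model.fit(X, y)

# The fitted pipeline would then be serialized (e.g., with joblib) and
# put behind a SageMaker endpoint for scoring.
pred = model.predict(pd.DataFrame({"balance_to_rent": [4.0],
                                   "income_stability": [0.85]}))
print(pred)  # [0] -- a low-risk applicant
```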

VentureBeat: How are you “labeling” data for the ML and AI workflows?

Zviran: “This is our secret sauce and our domain expertise. We need to define very carefully what is the lease ‘outcome’ that we are optimizing for. I’m afraid I can’t share more about this.”

VentureBeat: Can you talk about how much data you are processing?

Zviran: “Our open banking data is not super high dimensional (no videos, images), and we dimensionally reduce it further. This means our models can be trained in memory pretty quickly. In the future, I’m sure we will need to use more sophisticated solutions to handle the increasing scale.”
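The dimensionality reduction Zviran mentions can be as simple as a PCA projection over tabular features, as in this sketch with random, purely illustrative data.

```python
# Sketch: projecting tabular open-banking-style features down with PCA
# so that models train quickly in memory. Data is random noise.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(seed=0)
X = rng.normal(size=(1000, 40))  # 1,000 accounts x 40 raw features

pca = PCA(n_components=10)
X_reduced = pca.fit_transform(X)

print(X_reduced.shape)  # (1000, 10)
```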

Obligo’s value proposition

Landlords and property managers can use Obligo to simplify their move-in process, comply with the ever-changing regulatory landscape, and make their listings more appealing to renters, Dor said.

Obligo’s product suite provides a streamlined rental process that includes an option for landlords to do away with security deposits, although it’s always available if needed. Renters then proceed to make their move-in payments online.

At move-out, Obligo handles any end-of-lease deductions, refunding the deposit or billing the renter for any open charges. Landlords are off the hook for all of this, and if the prospective tenant qualifies, he or she is off the hook for a security deposit. All the conventional paperwork becomes unnecessary, Dor said.

Partnering with property owners

Obligo has partnered with more than 100 tech-savvy U.S. property owners and managers, including AIR, Beam Living (StuyTown), and Common.

“Obligo has achieved remarkable technological milestones, both in its ability to make predictions about renter risk and in its effective debt recovery process,” Yoram Snir, managing partner of 83North, said in a media advisory. “We believe the product suite that Obligo’s team is building may soon become an irreplaceable industry standard, in the U.S. and beyond.”

The funding round was led by investor 83North. Additional investors participating in the round include Highsage Ventures, 10D, Entree Capital, Alumni Venture Group, and MUFG.

Combined with its recent series A round, Obligo has raised $50 million in the last 12 months. The company said its new funding will be used to expand its product suite, grow market share and bring industry-changing rental solutions to millions of homes across the U.S.





Snowflake integrates Talend to help enterprises verify cloud data


Today, Talend and data warehousing giant Snowflake will announce a major boost to their partnership, specifically the full integration of Talend’s machine learning-based Trust Score into Snowflake’s environment. The arrangement will allow Talend, an open source data integration platform, to reach more customers, while making it easier for Snowflake users, including non-experts, to make better sense of their data and maintain compliance.

“We’ve had all those tools for specialists for a long time. They’re still there in Talend, and if you’re a data quality specialist, we give you all the tools you need to do very complete work,” Christophe Toum, senior director of product management at Talend, told VentureBeat. “What we’ve done [here] is we’ve taken the best practices, looked at all the different dimensions of data quality — validity, completeness, uniqueness, and other dimensions — and we packaged this data quality into a trust score. We do the heavy lifting. We give you something that’s easy to understand.”
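To make the idea concrete, here is an illustrative way to fold quality dimensions like the ones Toum lists into a single number. This is only a sketch of the concept, not Talend's actual Trust Score formula.

```python
# Illustrative only: combining data-quality dimensions (validity,
# completeness, uniqueness) into a single 0-100 score over a sample table.
import pandas as pd

df = pd.DataFrame({
    "email": ["a@x.com", "b@x.com", "b@x.com", None],
    "age": [34, 29, 29, 210],  # 210 falls outside a valid 0-120 range
})

completeness = df.notna().all(axis=1).mean()   # share of rows with no nulls
uniqueness = (~df.duplicated()).mean()         # share of non-duplicate rows
validity = df["age"].between(0, 120).mean()    # share of in-range ages

trust_score = round(100 * (completeness + uniqueness + validity) / 3, 1)
print(trust_score)  # 75.0
```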

The announcement will be made at Snowflake’s Summit 2021. Yesterday at the event, Snowflake previewed its data marketplace, governance tools, and management features, also announcing the general availability of a marketplace that would allow organizations to sell data based on usage.

Partnership 2.0

Prior to this announcement, Snowflake considered Talend an “Elite Partner,” which — among other criteria — means the two companies share over 1,000 customers, and therefore business interests. The distinction made Talend eligible for dedicated Snowflake resources. Snowflake also previously selected Talend for a private preview of Snowpark, which ultimately led to the companies leveling up their arrangement with this full integration.

Now, the integration will make it possible for enterprises to run checks on entire data sets with a simple click, without the use of external applications or moving sample sets. And because Talend’s algorithms will run, compute, and scale entirely natively within Snowflake using Snowpark and Java UDF, the data never leaves the environment. This can reduce risk, complexity, and cost, while also making it easier to meet compliance requirements, according to the companies. With this integration, Talend becomes the first partner to leverage Snowpark and Java UDF to run an application natively inside Snowflake. The capability will be available for all Snowflake customers beginning in Q4 2021.

This isn’t Snowflake’s only recent partnership aimed at easing its customers’ data experiences. In May, the company announced a partnership with ZoomInfo, using a subscription-based B2B intelligence platform to help with the integration of business contact data, specifically.

From diagnosis to prevention

While not part of the announcement, Toum also gave VentureBeat a look into what’s next for Talend’s data offerings: a new concept called “data health.” The idea is that the Trust Score is only one ingredient for ensuring the full health of data and data management, and that this new product would help customers not only act on findings, but do so in a preventative way.

“That’s going to go a lot further than just giving you the diagnosis,” he said. “We actually want to give you the health system for prevention, vaccine, and cure.”

When ready for deployment, the new offering will be rolled into the Snowflake partnership.


VentureBeat’s mission is to be a digital town square for technical decision-makers to gain knowledge about transformative technology and transact.

Our site delivers essential information on data technologies and strategies to guide you as you lead your organizations. We invite you to become a member of our community, to access:

  • up-to-date information on the subjects of interest to you
  • our newsletters
  • gated thought-leader content and discounted access to our prized events, such as Transform 2021: Learn More
  • networking features, and more

Become a member



WhatsApp on Android to make flash calls to verify logins

There are different systems these days for securing accounts beyond fragile and weak passwords. While authenticator apps are often the recommended method, others also use your phone number as a sort of second authentication factor. That’s especially true for services that use your phone number as your account number anyway, like WhatsApp. It seems that Facebook’s messaging service is going to use that number to implement another layer of security, making a flash call to verify that the number you gave at login is a valid one.

This upcoming feature, if it does make it out the door, is for both security and convenience. With the currently existing system, WhatsApp sends OTPs via SMS when users log into their accounts. Users either type in the code manually or, depending on the permissions granted to the WhatsApp Android app, the code is entered automatically by the app itself.

This method, while better than just a password, has also been criticized for offering no real security because of the vulnerability of the SMS protocol. WABetaInfo, which often leaks upcoming or in-development WhatsApp features, reveals that the service is working on yet another method to verify logins. Instead of sending an OTP, WhatsApp will call you and immediately drop the call, then scan your call history to check that your phone’s number and the number it dialed (the one that would otherwise have received the OTP) are one and the same.
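The matching step at the heart of this scheme can be sketched as follows. The call-log structure here is invented for illustration; the real Android call-log API differs.

```python
# Sketch of flash-call verification: compare the number the service
# dialed against the most recent entry in the device's call history.
def verify_flash_call(dialed_number: str, call_log: list) -> bool:
    """Return True if the latest incoming call matches the dialed number."""
    if not call_log:
        return False
    latest = call_log[0]  # assume the log is ordered newest-first
    return latest["type"] == "incoming" and latest["number"] == dialed_number

log = [
    {"number": "+15551234567", "type": "incoming"},
    {"number": "+15550000000", "type": "outgoing"},
]

print(verify_flash_call("+15551234567", log))  # True
print(verify_flash_call("+15559999999", log))  # False
```

This also makes clear why the feature needs call-log read permission, and why it cannot work on iOS, where that log is off-limits to third-party apps.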

The catch is that to perform this action, WhatsApp needs permission to read your phone’s call history log. This is something it will ask Android users once when setting up the app for the first time and WhatsApp promises the data won’t be used for any other purpose. Given the recent scandal the network is under due to its new Facebook-friendly privacy policy, that’s a rather big promise to make.

That requirement is also one reason why this feature will never make it to iOS since Apple’s platform doesn’t give third-party apps access to call history. It is also an optional verification method so those with privacy concerns can keep using the older methods, presuming they still use WhatsApp, of course.



WhatsApp might soon call you to verify your account

When you buy a new phone and restore your WhatsApp account, the app verifies by sending you a six-digit code through SMS. However, the company’s working on another method to verify your account: flash calls.

According to a report by reliable WABetaInfo, the chat app is testing this ability on Android. Here’s how it’ll work: you can opt-in to receive a call for verification instead of an SMS code. WhatsApp will call you for a brief moment and end the call; you don’t need to pick it up.

Even when this feature is rolled out, it’ll be limited to Android, as iOS doesn’t allow apps to read call history. WABetaInfo said that while the app will access your call log to compare the last entry, it won’t use the data for anything else.
