
Why AI is the future of fraud detection

The accelerated growth in ecommerce and online marketplaces has led to a surge in fraudulent behavior online perpetrated by bots and bad actors alike. A strategic and effective approach to online fraud detection will be needed in order to tackle increasingly sophisticated threats to online retailers.

These market shifts come at a time of significant regulatory change. Across the globe, new legislation is coming into force that alters the balance of responsibility in fraud prevention between users, brands, and the platforms that promote them digitally. For example, the EU Digital Services Act and US Shop Safe Act will require online platforms to take greater responsibility for the content on their websites, a responsibility that was traditionally the domain of brands and users to monitor and report.

Can AI find what’s hiding in your data?

In the search for security vulnerabilities, behavioral analytics software provider Pasabi has seen a sharp rise in interest in its AI analytics platform for online fraud detection, with a number of key wins including the online reviews platform Trustpilot. Pasabi maintains its AI models based on anonymized sets of data collected from multiple sources.

Using bespoke models and algorithms, as well as some open source and commercial technology such as TensorFlow and Neo4j, Pasabi’s platform is proving effective at detecting patterns in both text and visual data. Customers provide their data to Pasabi for analysis to identify a range of illegal activities (illegal content, scams, and counterfeits, for example) upon which the customer can then act.
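
Pasabi has not published its model internals, but a graph database such as Neo4j lends itself to the kind of network analysis described here. The sketch below is purely illustrative: the node labels, relationship types, thresholds, and connection details are invented for the example, not taken from Pasabi’s platform.

```python
# Illustrative only: a minimal Neo4j query for spotting coordinated reviewer
# behavior. The schema (Account, Review, Business nodes; WROTE/ABOUT edges)
# and connection settings are hypothetical, not Pasabi's actual data model.
from neo4j import GraphDatabase

driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "password"))

# Find pairs of accounts that reviewed many of the same businesses within a
# short time window -- a common signal of accounts working together.
QUERY = """
MATCH (a1:Account)-[:WROTE]->(r1:Review)-[:ABOUT]->(b:Business),
      (a2:Account)-[:WROTE]->(r2:Review)-[:ABOUT]->(b)
WHERE a1.id < a2.id
  AND abs(duration.inDays(r1.postedAt, r2.postedAt).days) <= 2
WITH a1, a2, count(DISTINCT b) AS shared_targets
WHERE shared_targets >= 5
RETURN a1.id AS account_a, a2.id AS account_b, shared_targets
ORDER BY shared_targets DESC
"""

with driver.session() as session:
    for record in session.run(QUERY):
        print(record["account_a"], record["account_b"], record["shared_targets"])

driver.close()
```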

Chris Downie, Pasabi’s CEO, says: “Pasabi’s technology uses AI-driven, behavioral analytics to identify bad actors across a range of online infringements including counterfeit products, grey market goods, fake reviews, and illegal content. By looking for common behavioral patterns across our customers’ data and cross-referencing this with external data that we collect about the reputation of the sources (individuals and companies), the software is perfectly positioned to help online platforms, marketplaces, and brands tackle these threats.”

The proof is in the data

Pasabi told VentureBeat that its platform is built entirely in-house, with some external services, such as translation, used to enrich its data. The company says this combination of customer (behavioral) and external (reputational) data is what allows it to highlight the biggest threats to its customers.

In the Q&A, Pasabi told VentureBeat that its platform analyzes hundreds of data points, provided by customers and combined with Pasabi’s own data collected from external sources. Offenders are then identified at scale, revealing patterns of behavior in the data and potentially uncovering networks working together to mislead consumers.

Anoop Joshi, senior director of legal at Trustpilot, said: “Pasabi’s technology finds connections between individuals and businesses, highlighting suspicious behavior and content. For example, in the case of Trustpilot, this can help to detect when individuals are working together to write and sell fake reviews. The technology highlights the most prolific offenders, and enables us to use our investigation and enforcement resources more efficiently and effectively to maintain the integrity of the platform.”

Relevant data is held on Google Cloud services, using logical tenant separation and VPCs, and is encrypted both in transit and at rest. Data is retained only for as long as strictly necessary and solely for the purpose of identifying suspicious behavior.


Justt emerges from stealth with $70M to fight chargeback fraud with AI

Justt, a company developing an AI-powered platform to fight fraudulent chargebacks, today emerged from stealth with $70 million raised across three funding rounds, including a series B led by Oak HC/FT and two previously unannounced rounds led by Zeev Ventures and F2 Venture Capital, respectively. CEO Ofir Tahor says that the proceeds will be used to expand Justt’s sales and marketing operations in the U.S. and Europe and to triple the size of the company’s Israel-based R&D team.

False chargebacks, also known as “friendly fraud,” occur when shoppers wrongly dispute credit or debit card charges, costing online sellers an estimated $125 billion or more in lost revenue. As transactions move online, this type of fraud has become more common. Ecommerce merchants stand to lose roughly $20 billion in 2021 due to criminal activity, according to Juniper Research, an 18% increase over 2020.

Justt’s product combines AI with human reviewers to gather evidence refuting illegitimate chargeback claims and work with credit card companies on behalf of merchants. Justt integrates with merchants’ payment providers, kicking off dispute resolutions when the platform’s algorithms detect potentially incorrect chargebacks.

“The pandemic has shifted buying online, driving a boom in online transactions and a resultant increase in fraudulent chargeback activities. That has led many merchants to realize that their existing laissez-faire approach to chargebacks is not scalable or sustainable — and that, in turn, has driven a surge in demand for chargeback mitigation solutions,” Tahor told VentureBeat via email. “Justt has seen business skyrocket during the pandemic, not only because chargeback fraud has increased, but also due to merchants who are simultaneously facing other pressures — economic turbulence, supply chain issues, labor shortages, and more — that make it hard to divert resources to in-house mitigation efforts.”

Fighting chargebacks with AI

Israel-based Justt was cofounded in 2016 by Tahor and Roenen Ben-Ami. Tahor was previously the head of social media at Adobe and Magento, while Ben-Ami served as a business risk manager and fraud analyst at Simplex.com, a cryptocurrency payments startup.

“While spearheading risk management efforts at Simplex, Ben-Ami began developing tailored manual processes to help crypto merchants to fight back against illegitimate chargebacks. He soon realized, however, that delivering customized mitigation support at scale for all industries rather than just crypto would require a new approach, with AI-powered automation to deliver an effective but hands-off experience for merchants,” Tahor said. “To create that solution, Roenen reached out to me. I’d seen merchants struggle with similar challenges — and grow frustrated with the low-tech, labor-intensive chargeback dispute process — while launching ecommerce social-marketing startup Shopial and serving as head of Magento.”

Justt employs AI and machine learning to customize mitigation processes on a merchant-by-merchant basis, according to Tahor. The platform builds evidence for a dispute using various decision-making models trained on data from a merchant’s transaction history, as well as live analysis of ongoing results.
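
Justt has not detailed its models, but the per-merchant approach Tahor describes can be approximated with a standard supervised classifier. The sketch below is a simplified illustration, assuming a hypothetical table of a single merchant’s historical disputes with a won/lost label; the column names, values, and model choice are assumptions, not Justt’s actual implementation.

```python
# Illustrative sketch of a per-merchant dispute model: estimate the likelihood
# that contesting a chargeback will succeed, based on that merchant's past
# dispute outcomes. Columns, values, and model choice are hypothetical.
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier

# Hypothetical history of one merchant's past disputes.
history = pd.DataFrame({
    "amount":                [120.0, 35.5, 980.0, 15.0, 240.0, 60.0],
    "days_since_purchase":   [4, 30, 2, 75, 12, 45],
    "reason_code_fraud":     [1, 0, 1, 0, 1, 0],   # 1 = card-network fraud reason code
    "customer_prior_orders": [0, 7, 1, 12, 0, 3],
    "dispute_won":           [0, 1, 0, 1, 0, 1],   # label: did the merchant win?
})

X = history.drop(columns="dispute_won")
y = history["dispute_won"]
model = GradientBoostingClassifier(random_state=0).fit(X, y)

# A new incoming chargeback: estimate whether it is worth contesting.
new_dispute = pd.DataFrame([{
    "amount": 85.0, "days_since_purchase": 9,
    "reason_code_fraud": 1, "customer_prior_orders": 0,
}])
print("Estimated win probability:", model.predict_proba(new_dispute)[0, 1])
```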

Justt is currently processing millions of data points. By next year, that number is expected to grow to billions.

“Every transaction is indexed in relation to scores of chargeback reason codes used by credit card networks, providing key insights into the kinds of transactions that trigger chargebacks, and enabling us to rapidly identify and remedy potential fraud across a wide range of industries,” Tahor explained. “Every merchant’s Justt implementation is unique, but every merchant benefits from our ability to leverage big data from the vast number of chargebacks and disputes that we now scrutinize, spot patterns and signals amid the noise, and optimize our dispute tools in real time.”
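
As a rough illustration of the “indexed in relation to … reason codes” idea, the snippet below tallies disputes by card-network reason code and computes a win rate for each, one simple way to surface which kinds of transactions trigger chargebacks and which disputes are worth contesting. The reason codes and figures are invented for the example.

```python
# Illustrative: aggregate dispute outcomes by chargeback reason code to see
# which kinds of transactions are worth contesting. Data is hypothetical.
from collections import defaultdict

disputes = [
    {"reason_code": "10.4", "won": True},   # card-absent fraud claim
    {"reason_code": "10.4", "won": False},
    {"reason_code": "13.1", "won": True},   # merchandise not received
    {"reason_code": "13.1", "won": True},
    {"reason_code": "12.6", "won": False},  # duplicate processing
]

stats = defaultdict(lambda: {"total": 0, "won": 0})
for d in disputes:
    stats[d["reason_code"]]["total"] += 1
    stats[d["reason_code"]]["won"] += d["won"]

for code, s in sorted(stats.items()):
    print(f"reason {code}: {s['won']}/{s['total']} disputes won "
          f"({s['won'] / s['total']:.0%} win rate)")
```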

Competition

Eighty-six percent of all chargebacks are probable cases of friendly fraud, according to one source, and merchants lose $2.40 for every dollar of chargeback fraud. At the current rate, the cost in revenue, merchandise, shipping costs, and fees due to chargebacks could approach the $30 billion mark by the end of the decade.

Justt competes with Chargehound, Chargeback.com, Midigator, Chargebacks911, and ChargebackGuru in a fraud management solutions market that is anticipated to be worth $10.4 billion by 2023. Still, the company claims it’s already processing 10,000 chargebacks per month for “very large” enterprise clients. Tahor attributes the success in part to Justt’s “contingency-style” business model, which only charges merchants when Justt recovers funds from a chargeback dispute.

“[Our competitors] either offer low-tech ‘hands-on’ manual products that require significant time and energy from merchants, or products that rely on non-expert outsourced teams to manage chargebacks for merchants,” Tahor said. “Justt’s solution delivers a simple but transformative core benefit — it’s a genuinely hands-off mitigation solution that delivers a radically higher success rate when contesting chargebacks.”


Russian ‘King of Fraud’ sentenced to 10 years in prison for Methbot digital ad scheme

A Russian man convicted on wire fraud and money laundering charges for his role in the Methbot digital advertising scheme was sentenced to 10 years in prison on Wednesday.

The Department of Justice said between September 2014 and December 2016, Aleksandr Zhukov, 41, and several co-conspirators made deals with ad networks to place online ads but used a bot farm and rented servers to simulate users visiting spoofed versions of websites like the New York Times and the New York Daily News. The ads were never shown to human users, but Zhukov raked in $7 million running the fake traffic scam (which became known as “Methbot,” after the name of his phony ad network Media Methane), according to the DOJ.

“Sitting at his computer keyboard in Bulgaria and Russia, Zhukov boldly devised and carried out an elaborate multi-million-dollar fraud against the digital advertising industry, and victimized thousands of companies across the United States,” US Attorney Breon Peace said in a statement.

As part of the elaborate plan, Zhukov recruited programmers and others to help build the infrastructure that made the scheme possible. Authorities said he referred to the recruits as his developers and to himself as “the king of fraud.”

In May, a jury convicted Zhukov of wire fraud conspiracy, wire fraud, money laundering conspiracy, and money laundering. In addition to the 10-year prison term, Zhukov was ordered to pay $3.8 million in forfeiture.


Georgia man used $57k in COVID fraud cash to buy this Pokemon card

A Georgia man is accused in court documents of defrauding the government of tens of thousands of dollars in COVID-19 relief money. Part of that money was placed in a bank account, and part was spent on a single Pokemon card. As you may have guessed, that Pokemon card was a 1st-edition Shadowless Charizard from the first US-released Pokemon TCG set with a 9.5 “gem mint” rating, purchased for $57,789.

The court documents do not disclose the exact card purchased, but DO give the exact amount of cash spent on said card. As Polygon sleuthed out, a card auction with an ending bid matching the price quoted in the court documents was, indeed, for a Charizard. As shown on PWCC Marketplace, this Charizard was graded by Beckett as 9.5 gem mint, with grading code 0011532499. That’ll be handy if we ever want to trace the pathway of ownership for this card. Not that the card needed any more infamy, as several similar cards have sold quite recently for massive amounts of cash.

This card’s auction ended with its winning bid on December 28, 2020. That lines up with the suggested purchase date of “on or about January 8, 2021.” This purchase, along with the rest of the fraud outlined in court documents this week, was “all done in violation of Title 18, United States Code, Section 1343.”

The underlying crime is that the Georgia man, Vinath Oudomsine, allegedly presented false information in an Economic Injury Disaster Loan application, a program intended for small businesses experiencing financial disruption due to the COVID-19 pandemic. The false and fraudulent representations are alleged to have been made to the United States Small Business Administration.

Documents filed in case 3:21-cr-00013-DHB-BKE in the United States District Court for the Southern District of Georgia, Dublin Division, can be found online via Law and Crime this week. If convicted of wire fraud, the defendant could face up to 20 years in prison, a fine of up to $250,000, and up to three years of supervised release.


How voice biometrics can protect your customers from fraud

Voice identity verification is catching on, especially in finance. Talking is convenient, particularly for users already familiar with voice technologies like Siri and Alexa. Voice identification offers a level of security that PIN codes and passwords can’t, according to experts from two leading companies innovating in the voice biometrics space.

In a conversation at VentureBeat’s Transform 2021 virtual conference, Daniel Thornhill, senior VP at cybersecurity solutions company Validsoft, and Paul Magee, president of voice biometrics company Auraya, discussed the emerging field with Richard Dumas, Five9 VP of marketing.

Passive vs. active voice biometrics

Just like a fingerprint, an iris, or a face, voice biometrics are unique to an individual. To create a voiceprint, a speaker provides a sample of their voice.

“When you want to verify your identity, you use another sample of your voice to compare it to that initial sample,” Magee explained. “It’s as simple as that.”
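
Neither company has published implementation details, but the enroll-then-compare flow Magee describes typically comes down to comparing fixed-length speaker embeddings. The sketch below is illustrative only: embed() is a stand-in for a real pretrained speaker-verification model, and the threshold and function names are assumptions, not Validsoft’s or Auraya’s actual API.

```python
# Illustrative sketch of voiceprint verification: enroll a reference sample,
# then accept or reject a new sample by comparing speaker embeddings.
# embed() is a placeholder for a real pretrained speaker-embedding model.
import numpy as np

def embed(audio: np.ndarray) -> np.ndarray:
    """Placeholder: a real system would run a speaker-verification network
    (e.g., an x-vector or d-vector model) and return a fixed-length vector."""
    rng = np.random.default_rng(abs(hash(audio.tobytes())) % (2**32))
    return rng.standard_normal(192)

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Enrollment: store an embedding of the user's reference sample (the "voiceprint").
enrollment_audio = np.zeros(16000)           # 1 second of placeholder audio @ 16 kHz
voiceprint = embed(enrollment_audio)

# Verification: embed the new sample and compare against the stored voiceprint.
candidate_audio = np.zeros(16000)
score = cosine_similarity(embed(candidate_audio), voiceprint)

THRESHOLD = 0.7                              # tuned on labeled genuine/impostor pairs
print("match" if score >= THRESHOLD else "no match", f"(score={score:.2f})")
```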

What sets voice apart from other biometrics, Magee said, is that each prompted sample is freshly spoken, so there is no fixed credential to steal. “Nobody can steal my voice because you can’t steal what I’m going to say next.”

When users are prompted to say their phone or account numbers or digits displayed on the screen, that’s active biometrics.

“Passive is more in the background,” Magee said. “So while I’m talking with the call center agent, my voice is being sampled and the agent is being provided with a confirmation that it really is me.”

Voice identity biometrics security

An organization responsible for voiceprints can store them with a trusted service provider, Magee said. “The last thing that we advocate is for the voiceprints to be flying around into some unknown place with limited security,” he added. “We think they should be locked up securely behind the clients’ firewall, like [companies] protect the rest of their clients’ information.”

Cheating the voice identification system

Thornhill described how the system can be cheated: Someone can record a user and replay that audio, or someone can use a computer to generate synthetic versions of people’s voices, also known as deep fakes.

But there are ways to prevent such fraud. “You can apply some kind of [live element], so maybe a random element of the phrase, or use passive voice biometrics so the user is continuously speaking,” Thornhill explained.

There’s also technology that looks at anomalies in speech. “Does this look like it’s being recorded and replayed? Does it look like it’s been synthetically produced or modified by a machine?” Thornhill said. “So there are ways that fraudsters can potentially try to subvert the system, but we do have measures in place that detect those and prevent them.”
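
The random-phrase defense Thornhill mentions can be sketched as two checks layered on top of the voiceprint comparison above: the caller must say freshly generated digits (defeating simple replay) and the voice must still match the enrolled speaker. The transcribe() and verify_voiceprint() helpers below are placeholders for real speech-to-text and speaker-verification components; none of this is Validsoft’s actual pipeline.

```python
# Illustrative: combine a random spoken challenge (liveness) with a voiceprint
# match. transcribe() and verify_voiceprint() are placeholders for real
# speech-to-text and speaker-verification components.
import secrets

def generate_challenge(n_digits: int = 6) -> str:
    """Fresh digits for every attempt, so a pre-recorded clip cannot match."""
    return "".join(str(secrets.randbelow(10)) for _ in range(n_digits))

def transcribe(audio) -> str:
    raise NotImplementedError("stand-in for a speech-to-text service")

def verify_voiceprint(audio) -> bool:
    raise NotImplementedError("stand-in for the embedding comparison above")

def verify_caller(audio, challenge: str) -> bool:
    said_the_right_thing = transcribe(audio).strip() == challenge   # replay check
    sounds_like_the_user = verify_voiceprint(audio)                 # identity check
    return said_the_right_thing and sounds_like_the_user

challenge = generate_challenge()
print(f"Please say: {challenge}")
```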

Industry-wide voice identification adoption

The greatest barrier to a successful biometric deployment is getting people to enroll their voice, Magee said. That’s why companies should avoid a one-size-fits-all approach.

If a customer often contacts a call center for their needs, that’s the best way to enroll them, Magee said. If they usually use an app, present them with the invitation there. A great time to capture a voiceprint is while customers enter their account details during onboarding.

Thornhill agreed. “It’s about understanding your client’s needs, their interactions with their customers, to help them get those enrollments up and help them achieve return on investment,” he said. “They’ll benefit from it, whether it’s from fraud reduction or customer experience.”


Capital One uses NLP to discuss potential fraud with customers over SMS

Capital One has a 99% success rate when it comes to understanding customer responses to an SMS fraud alert, according to Ken Dodelin, the company’s VP of mobile, web, conversational AI, and messaging products. Dodelin was speaking today about how the bank harnesses the power of personalization and automation in a conversation with VentureBeat senior reporter Sage Lazzaro at VentureBeat’s Transform 2021 virtual conference.

When Capital One notices an anomaly in a customer’s transactions, it reaches out over SMS and asks the customer to verify those transaction details. If the customer doesn’t recognize the transaction, Capital One can proceed to treat it as fraudulent.

With the addition of a third-party natural language processing and understanding solution, Capital One’s AI assistant, Eno, is able to understand customers’ written responses, such as “that was me shopping in Philadelphia,” which are not easy for machines to parse, Dodelin said.
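
Capital One has not disclosed which NLU solution it uses for this step. As a very rough stand-in, the sketch below trains a tiny text classifier to sort free-text replies into “recognized” versus “not recognized” buckets; the training phrases, labels, and model choice are invented for illustration and bear no relation to Eno’s actual models.

```python
# Illustrative only: classify a customer's free-text reply to a fraud alert as
# confirming or denying the transaction. Training data is invented; a real
# system would use a far richer NLU model and much more data.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

replies = [
    "yes that was me", "that was me shopping in philadelphia", "i made that purchase",
    "yep, recognized", "all good, that's mine",
    "no i didn't make that", "that wasn't me", "i don't recognize this charge",
    "not mine, please block my card", "never bought that",
]
labels = ["recognized"] * 5 + ["not_recognized"] * 5

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(replies, labels)

print(model.predict(["that was me, i was traveling"])[0])   # likely "recognized"
print(model.predict(["no clue what this charge is"])[0])    # likely "not_recognized"
```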

Capital One first considered using AI for customer service in 2016, when the company was among the early recipients of Amazon Echo devices. Back then, Amazon was searching for partners across industries to see how they might create a conversational experience. Capital One said it became the first bank to develop a skill — a special program that enabled customers to accomplish tasks on Amazon platforms. In the following years, Capital One started to incorporate natural language understanding into its SMS alerts, as well as its website and mobile apps.

The current AI assistant has evolved a lot from the initial version, Dodelin said. First, the assistant is available to chat with customers in more places. Whether the customer is inquiring about the bank or a car loan, they have an opportunity to ask questions about their account. Second, the company said it hasn’t restricted chats to conversations initiated by the customer. The company relies on its advanced data infrastructure to anticipate customer needs and reach out proactively, either through push notifications or email. The interaction would include important information and actions customers would expect from a human assistant.

One challenge Capital One had to address was what to do if the customer wanted something not included in the options displayed on the screen. “Now we have to not just design experiences for the things we expect them to get, but continuously learn about all the other things that are coming in and the different ways they are coming in,” Dodelin said.

Context matters when applying AI technologies to customer service. In many cases, scripts are relatively consistent, regardless of who the customer is or their specific circumstances. But when creating an experience, it is important to remember that customers are being contacted under very different circumstances. Levity may not be appropriate during a moment of emotional and financial stress, for example.

Capital One has continued to enhance the service so it will proactively anticipate where a customer might need help and respond in an appropriate tone, Dodelin said.

Another challenge is anticipating the breadth of questions customers have. Customers who encounter issues often lack an outlet to express their frustration beyond having a human assistant pick up the phone, he said. Learning more about those experiences helps the AI assistant provide better answers and lets the company adjust which options are included in the user interface.

“As we learn more, we got better and expanded the audience that [the AI assistant] is available to,” Dodelin said. Capital One did not make the service available to all customers but started with a small segment of its credit card business. Over time, the company has opened the service to more customers.

“It’s a lot of work done by some very talented people here at Capital One to try to make it successful in all these different circumstances,” Dodelin concluded.


This is what a deepfake voice clone used in a failed fraud attempt sounds like

One of the stranger applications of deepfakes — AI technology used to manipulate audiovisual content — is the audio deepfake scam. Hackers use machine learning to clone someone’s voice and then combine that voice clone with social engineering techniques to convince people to move money where it shouldn’t be. Such scams have been successful in the past, but how good are the voice clones being used in these attacks? We’ve never actually heard the audio from a deepfake scam — until now.

Security consulting firm NISOS has released a report analyzing one such attempted fraud, and shared the audio with Motherboard. The recording is part of a voicemail sent to an employee at an unnamed tech firm, in which a voice that sounds like the company’s CEO asks the employee for “immediate assistance to finalize an urgent business deal.”

The quality is certainly not great. Even under the cover of a bad phone signal, the voice is a little robotic. But it’s passable. And if you were a junior employee, worried after receiving a supposedly urgent message from your boss, you might not be thinking too hard about audio quality. “It definitely sounds human. They checked that box as far as: does it sound more robotic or more human? I would say more human,” Rob Volkert, a researcher at NISOS, told Motherboard. “But it doesn’t sound like the CEO enough.”

The attack was ultimately unsuccessful, as the employee who received the voicemail “immediately thought it suspicious” and flagged it to the firm’s legal department. But such attacks will be more common as deepfake tools become increasingly accessible.

All you need to create a voice clone is access to lots of recordings of your target. The more data you have and the better quality the audio, the better the resulting voice clone will be. And for many executives at large firms, such recordings can be easily collected from earnings calls, interviews, and speeches. With enough time and data, the highest-quality audio deepfakes are much more convincing than the example above.

The best known and first reported example of an audio deepfake scam took place in 2019, when the chief executive of a UK energy firm was tricked into sending €220,000 ($240,000) to a Hungarian supplier after receiving a phone call supposedly from the CEO of his company’s parent firm in Germany. The executive was told that the transfer was urgent and the funds had to be sent within the hour. He did so. The attackers were never caught.

Earlier this year, the FTC warned about the rise of such scams, but experts say there’s one easy way to beat them. As Patrick Traynor of the Herbert Wertheim College of Engineering told The Verge in January, all you need to do is hang up the phone and call the person back. In many scams, including the one reported by NISOS, the attackers are using a burner VOIP account to contact their targets.

“Hang up and call them back,” says Traynor. “Unless it’s a state actor who can reroute phone calls or a very, very sophisticated hacking group, chances are that’s the best way to figure out if you were talking to who you thought you were.”
