Categories
Security

Cloudflare just stopped one of the largest DDoS attacks ever

Cloudflare, a company that specializes in web security and distributed denial of service (DDoS) attack mitigation, just reported that it managed to stop an attack of an unprecedented scale.

The HTTPS DDoS attack was one of the largest such attacks ever recorded, and it came from unusual sources — data centers.


The attack was detected and mitigated automatically by Cloudflare’s defense systems, which were set up for one of its customers using the paid Professional plan. At its peak, the attack reached a massive 15.3 million requests-per-second (rps). This makes it the largest HTTPS DDoS attack ever mitigated by Cloudflare.

Cloudflare has previously seen attacks on a larger scale targeting unencrypted HTTP, but as Cloudflare mentions in its announcement, targeting HTTPS is a much more expensive and difficult venture. Such attacks typically require extra computational resources due to the need to establish a transport layer security (TLS) encrypted connection. The increase in costs is twofold: It costs more for the attacker to establish the attack, and it costs more for the targeted server to mitigate it.

The attack lasted less than 15 seconds, and its target was a cryptocurrency launchpad. Crypto launchpads are platforms that startups within the crypto space can use to raise early-stage funding while leveraging the reach of the launchpad. Cloudflare mitigated the attack without any additional actions being taken by the customer.
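For scale, a rough back-of-the-envelope calculation (assuming, generously, that the reported peak rate held for the full window) puts an upper bound on the request volume:

```python
# Figures reported above: 15.3M rps peak, under 15 seconds total.
peak_rps = 15_300_000
duration_s = 15

# Upper bound on total requests if the peak rate were sustained.
max_requests = peak_rps * duration_s
print(f"at most ~{max_requests / 1e6:.1f} million requests")  # ~229.5 million
```

In practice the rate ramps up and down, so the real total is lower, but even the bound shows how much traffic a mitigation system has to absorb in a quarter of a minute.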

The source of the attack was not unfamiliar to Cloudflare — it said that it has seen attacks hitting up to 10 million rps from sources that match the same attack fingerprint. However, the devices that carried out the attack were something new, seeing as they came mostly from data centers. Cloudflare notes that this marks a shift that it has already been noticing as of late, with larger attacks moving from residential network internet service providers (ISPs) to huge networks of cloud compute ISPs.

Cloudflare DDoS attack sources.
Cloudflare

Approximately 6,000 unique bots across over 1,300 networks carried out the DDoS attack that Cloudflare managed to mitigate automatically, without any human intervention. Perhaps more impressive is the number of locations involved, adding up to a total of 112 countries all around the globe. The largest share of it (15%) came from Indonesia, followed by Russia, Brazil, India, Colombia, and the U.S.

While this wasn’t the largest DDoS attack ever mitigated by Cloudflare (in 2021, the service stopped a 17.2 million rps HTTP DDoS attack), it’s definitely up there in terms of volume and severity. Earlier this year, the company reported a staggering 175% quarter-over-quarter increase in the number of DDoS attacks, based on data from the fourth quarter of 2021.


Repost: Original Source and Author Link

Categories
Security

Go read this report about the horrifying leaks coming from school ransomware attacks

Ransomware has been a hot-button topic in 2021 due to its impact on critical infrastructure, hospitals, and computer manufacturers. However, a recent report from NBC News may be one of the more heartbreaking accounts of the effects hackers can have: it details how data leaks from attacks on schools can put students’ most sensitive information out onto the internet, available to anyone who knows how to find it and is willing to pay. It’s a story that’s well worth a read for all the details it goes into and edge cases it explores.

According to NBC’s report, one school district had an Excel sheet called “Basic student information” posted to the dark web after it refused to pay a ransom, in line with the FBI’s guidance. The article’s author, Kevin Collier, breaks down the shocking information it contains:

It lists students by name and includes entries for their date of birth, race, Social Security number and gender, as well as whether they’re an immigrant, homeless, marked as economically disadvantaged and if they’ve been flagged as potentially dyslexic.

The school knew about the attack and informed parents about it — making it potentially one of the better scenarios. Insurance covered identity theft protection for staff, but it’s unclear whether that benefit extends to students even after getting lawyers involved. In other cases, when NBC News asked some schools about their leaks, they seemed “unaware of the problem.”

It’s hard even to comprehend how it could affect a student’s social life if their grades, medical info, or free or reduced-price lunch benefit status leaked online. What’s easier to understand is the impact of having their SSNs, birthdays, and names sold to unscrupulous people: NBC tells the story of a student whose info was used in attempts to get a credit card and car loan.

I know firsthand the hell that can come from having your credit wrecked before you even get out of high school, and I wouldn’t wish it on anyone. The report cites Eva Velasquez from the Identity Theft Resource Center, who tells parents to freeze their kid’s credit to keep them safe from identity theft. Parents already have enough concerns — dealing with kids who are learning remotely or figuring out how to get kids physically to school, all the while worrying that they could catch COVID while they’re there. It’s hard to accept that parents should also become the data security and privacy experts that school systems are missing.

As an expert at a nonprofit for protecting schools’ IT systems told NBC, “it is a solemn responsibility that schools have to care for kids, so they collect a lot of data with that.” Clearly, many schools (the report mentions that 1,200 schools’ info had been published by ransomware attackers this year) aren’t up to the task of keeping that data safe — though doing so is easier said than done, especially while working with budgets that don’t allow for the level of corporate security attackers are bypassing daily.

It’s incredibly sad to imagine students having to simultaneously worry about their school using FBI-grade tech to collect personal data and hackers stealing information from their school and selling it to criminals. While it may be hard to think about, it’s even more difficult to push for change if we don’t know what’s happening, which makes reports like NBC’s so essential and worth the read.




Categories
Security

Microsoft says it mitigated one of the largest DDoS attacks ever recorded

Microsoft says it was able to mitigate a 2.4Tbps distributed denial-of-service (DDoS) attack in August. The attack targeted an Azure customer in Europe and was 140 percent higher than the highest attack bandwidth volume Microsoft recorded in 2020. It also exceeds the peak traffic volume of the 2.3Tbps attack directed at Amazon Web Services last year, though it was smaller than the 2.54Tbps attack Google mitigated in 2017.

Microsoft says the attack lasted more than 10 minutes, with short-lived bursts of traffic that peaked at 2.4Tbps, 0.55Tbps, and finally 1.7Tbps. DDoS attacks are typically used to force websites or services offline by flooding them with more traffic than their web host can handle. They’re usually performed through a botnet — a network of machines that have been compromised using malware and can be controlled remotely. Azure was able to stay online throughout the attack thanks to its ability to absorb tens of terabits per second of DDoS traffic.

The attack on Azure lasted more than 10 minutes.
Image: Microsoft
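The flood-and-absorb dynamic described above can be illustrated with a toy token-bucket rate limiter (a generic technique, not Azure’s actual mitigation mechanism): the service accepts requests only as fast as tokens refill, and sheds everything beyond its burst capacity.

```python
import time

class TokenBucket:
    """Shed excess requests once the allowed rate is exceeded.

    rate: tokens added per second; capacity: maximum burst size.
    """
    def __init__(self, rate, capacity):
        self.rate = rate
        self.capacity = capacity
        self.tokens = capacity
        self.last = time.monotonic()

    def allow(self, now=None):
        now = time.monotonic() if now is None else now
        # Refill proportionally to elapsed time, capped at capacity.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

# Simulate a burst of 100 requests arriving in the same instant
# against a limiter that allows 10 rps with a burst capacity of 20.
bucket = TokenBucket(rate=10, capacity=20)
t0 = time.monotonic()
served = sum(bucket.allow(t0) for _ in range(100))
print(served)  # 20 served, 80 shed
```

Real DDoS mitigation layers many such controls (and traffic scrubbing) across a global network, but the core idea is the same: bound the rate any one source or flow can impose on the origin.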

“The attack traffic originated from approximately 70,000 sources and from multiple countries in the Asia-Pacific region, such as Malaysia, Vietnam, Taiwan, Japan, and China, as well as from the United States,” explains Amir Dahan, a senior program manager for Microsoft’s Azure networking team.

While the number of DDoS attacks on Azure has increased in 2021, the maximum attack throughput had declined to 625Mbps before this 2.4Tbps attack in the last week of August. Microsoft doesn’t name the Azure customer in Europe that was targeted, but such attacks can also be used as cover for secondary attacks that attempt to spread malware and infiltrate company systems.

The attack is one of the biggest in recent memory. Last year, Google detailed a 2.54Tbps DDoS attack it mitigated in 2017, and Amazon Web Services (AWS) mitigated a 2.3Tbps attack. In 2018, NetScout Arbor fended off a 1.7Tbps attack.

Correction October 12th, 3:17PM ET: We originally reported that Microsoft had mitigated the largest DDoS attack ever recorded, but Google mitigated a larger one in 2017. We have changed the headline and the article to reflect this. We regret the error.


Categories
AI

ReversingLabs raises $56M to combat software supply chain attacks

All the sessions from Transform 2021 are available on-demand now. Watch now.


ReversingLabs, a Cambridge, Massachusetts-based cybersecurity company developing threat detection and analysis solutions, has raised $56 million in series B funding led by Crosspoint Capital Partners with participation from ForgePoint Capital and Prelude. Cofounder and CEO Mario Vuksan says the proceeds, which bring the company’s total raised to $81 million, will be put toward scaling its sales and marketing efforts as ReversingLabs looks to expand its global reach.

Over the past year, there have been several high-profile incidents in which attackers attempted to compromise enterprises through the software supply chain. According to a recent Anchore survey, 64% of companies were affected by a supply chain attack in 2021, and 60% have made securing the software supply chain a top 2022 priority. The attacks highlight the need for controls that can help validate the integrity of software and its components through the development, deployment, and adoption lifecycle.
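One of the most basic integrity controls of that kind is checking each artifact you consume against a digest published by its producer. A minimal sketch (the function name and sample digest here are illustrative, not part of any ReversingLabs API):

```python
import hashlib

def verify_artifact(data: bytes, expected_sha256: str) -> bool:
    """Return True only if the artifact matches the published digest."""
    return hashlib.sha256(data).hexdigest() == expected_sha256

# Example: an artifact whose publisher announced its SHA-256 ahead of time.
artifact = b"hello"
published = "2cf24dba5fb0a30e26e83b2ac5b9e29e1b161e5c1fa7425e73043362938b9824"

print(verify_artifact(artifact, published))          # True: matches
print(verify_artifact(artifact + b"!", published))   # False: tampered
```

Digest checks only catch tampering after the fact; platforms like ReversingLabs go further by analyzing what the file actually contains.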

ReversingLabs, which was founded in 2009 by Mario Vuksan and Tomislav Pericin, aims to combat the growing threat with static analysis and file reputation services that provide visibility into malware and its location. The platform analyzes file and binary-based threats emerging from the web, mobile, email, cloud, and app development across industry verticals like software, financial services, defense, retail, and insurance.

“The level of sophistication and complexity in today’s cybersecurity attacks means that enterprises can no longer assume that software products from their providers are safe,” CrossPoint managing partner Dr. Hugh Thompson said in a press release. “ReversingLabs provides a proactive and transparent approach to understanding the threats that exist within software even in cases where you don’t have access to source code.”

AI engine

At the core of the ReversingLabs platform is the “Titanium” engine, an AI system that harvests thousands of file types and continuously monitors an index of over 10 billion files for future threats. The system unpacks files into the underlying object structure — down to embedded executables, libraries, documents, resources, and icons — and maps “human-readable” indicators to classifications. Security analysts get threat intelligence that they can use to prioritize threats, while threat intelligence and hunting teams get a workbench for deep file analysis, ostensibly enabling them to accelerate investigations.

“Every organization, whether an integrated software vendor developing software or an enterprise procuring or using software, needs controls to manage the software supply chain attack surface,” Crosspoint managing partner Greg Clark said in a statement. “This attack surface is nuanced, and traditional approaches like source code scanning are insufficient. Every part of the code, compile, build and deploy cycle needs to be checked. ReversingLabs is a great ally in the fight against these threats. Their solution is unique, very hard to replicate and immensely valuable.”

ReversingLabs competes in a cybersecurity market anticipated to be worth $170.4 billion in 2022, according to Gartner. But the company claims to have made inroads, nabbing customers including four of the top six software companies and two of the top five defense and aerospace firms. It also counts SolarWinds, the IT monitoring and management firm at the center of the widespread U.S. federal government hack earlier this year, as a partner.

“As an element of our Secure By Design initiatives, we’ve applied maximum attention to protecting the integrity of our software development and deployment pipeline from even the most determined and sophisticated attackers,” SolarWinds president and CEO Sudhakar Ramakrishna said in a statement. “We are working to help establish new standards for secure software development in the industry and ReversingLabs has since become an important part of our overall efforts.”

VentureBeat

VentureBeat’s mission is to be a digital town square for technical decision-makers to gain knowledge about transformative technology and transact.

Our site delivers essential information on data technologies and strategies to guide you as you lead your organizations. We invite you to become a member of our community, to access:

  • up-to-date information on the subjects of interest to you
  • our newsletters
  • gated thought-leader content and discounted access to our prized events, such as Transform 2021: Learn More
  • networking features, and more

Become a member


Categories
AI

Egress: 73% of orgs were victims of phishing attacks in the last year

73% of organizations were victims of successful phishing attacks in the last year, according to the Egress 2021 Insider Data Breach Survey. IT leaders indicate that the remote and hybrid future of work will make it harder to prevent phishing incidents. Remote work has already increased the risk of a data breach, with over half (53%) of IT leaders reporting an increase in incidents caused by phishing. In addition, the research has revealed concerns over future hybrid working, with 50% of IT leaders saying it will make it harder to prevent breaches caused by malicious email attacks.

The survey, independently conducted by Arlington Research on behalf of Egress, polled 500 IT leaders and 3,000 employees across the US and UK in numerous vertical markets including financial services, healthcare and legal.

Phishing attacks are still very prevalent, and employees continue to fall victim to them, with 43% not following security protocols and 36% rushing and making mistakes. The results also highlight the human cost of phishing: in almost one quarter (23%) of organizations, employees who were hacked via a phishing email left the organization, either voluntarily or involuntarily. IT leaders need to gain a firm grasp on phishing risk and put an effective strategy in place to mitigate it.

Read the full report by Egress.



Categories
AI

Mimecast’s new AI tools protect against the sneakiest phishing attacks

Email security provider Mimecast this week launched Mimecast CyberGraph, an AI-driven add-on to Mimecast Secure Email Gateway (SEG) that sniffs out sophisticated and hard-to-detect phishing and impersonation threats, the company said.

Mimecast CyberGraph uses machine-learning technology to detect and prevent phishing attacks delivered via email, the Lexington, Massachusetts-based company said in a statement. The addition to Mimecast’s flagship email security product creates an identity graph — a chart of relationships between email senders — and uses an AI engine to identify potential threat actors and warn employees of possible cyber threats.

“Phishing and impersonation attacks are getting more sophisticated, personalized, and harder to stop. If not prevented, these attacks can have devastating results for an enterprise organization,” Mimecast VP of product management for threat intelligence Josh Douglas said.

“Security controls need to be constantly updated and improved to outsmart threat actors. CyberGraph leverages our AI and machine-learning technologies to help keep employees one step ahead with real-time warnings, directly at the point of risk.”

Adding email protection without a hitch

Mimecast SEG customers can integrate and activate the add-on without disrupting their email security operations, Douglas said. He also noted that the addition of CyberGraph’s capabilities means enterprise SEG customers no longer need to find a third-party point product to provide high-level protection against email threats.

In addition to the identity graph, CyberGraph includes other capabilities to prevent cyber attacks, like blocking embedded trackers and warning users of potential threats with color-coded banners.

Douglas said the release is timely because email threats have never been more pervasive or sophisticated than they are in the COVID-19 era, which greatly increased the exposure of remote workforces to threats. He cited Mimecast research published in the company’s State of Email Security Report, which found that both the number of threats and the number of employees falling for threats dramatically increased during the pandemic.

CyberGraph is available now to Mimecast SEG customers in the United States and United Kingdom, with availability in more regions coming soon, the company said.



Categories
Tech News

NYC launches cyberdefense center amid major ransomware attacks

Amid growing ransomware attacks, New York City is the first major metro region in the US to launch a cyberdefense center, one that is, in this case, located in a Manhattan skyscraper. A mix of private and government entities are working together to help prevent similar future cyberattacks, including everything from Amazon to the NYPD.

The new cyberdefense center represents an evolution of the fully virtual New York City Cyber Critical Services and Infrastructure initiative, according to the Wall Street Journal. It has 282 partners that will work together to illuminate possible cybersecurity threats, helping protect the city and its critical infrastructure.

The new center is the result of years of talks and effort, according to the report — an effort that first led to the aforementioned online project launched in 2019. The growing number of ransomware attacks prompted the evolution in this initiative, better positioning the major metropolitan region to prevent and address cyberattacks that may threaten major businesses, financial hubs, and city infrastructure.

The report reveals that the cyberdefense center has already conducted its own version of “war games” at an IBM cyber range, practicing with various systems for addressing cyberattacks. Likewise, the collective shares data amongst its members whenever a cyberattack, such as ransomware, occurs anywhere in the US in order to ensure it doesn’t spread into the city.

The announcement comes only weeks after the Colonial Pipeline ransomware attack, which resulted in gas shortages in parts of the US. Though the cyberattack was eventually resolved, it required the company to pay a substantial ransom, only part of which was later recovered by the federal government.

Hospitals have likewise been hit with ransomware attacks that lock down their systems — in one particularly large case in Southern California, hospital officials were forced to switch to paper-based records and communication, severely limiting their ability to treat patients. Such attacks have the potential to shut down large sectors of the US, representing a major threat to the nation.


Categories
AI

How AI is helping enterprises turn the tables on malicious attacks

Presented by Huawei Technologies


Malicious attackers have turned to AI to invade enterprise networks. To combat attacks, organizations need to embrace AI in turn. Join this VB Live event to learn more about the powerful, proactive AI security solutions that are enabling intelligent threat detection and response, security operations and maintenance, and more.

Register here for free.


Check off another consequence of COVID: It’s directly responsible for the uptick in security risks for organizations. Many companies were forced to accelerate digital transformation, adopting brand-new technologies and policies to meet pandemic challenges. Now more intelligent devices are connected to the network than ever before, which expands the company’s threat surface exponentially. From the unsecured laptops of remote employees to devices with no security policies in place (who’d expect an air conditioner to be online?), attackers have their pick of vulnerable new blind spots in the network.

Then there’s the increased competence of threat agents. Sophisticated IT expertise isn’t necessary to compromise a network anymore — ransomware, botnets as a service, and crypto miners are easy to obtain and easy to use. With sufficient start-up capital and a basic understanding of IT, any bad actor can outsource bad intentions as a service.

“As these technologies keep evolving, these threats are going to evolve with it,” says Yair Kler, head of solution security at Huawei Technologies. “But AI is now playing a major part in how enterprises can successfully meet these security risks in return.”

Powerful AI security solutions in the wild

The major benefit of AI security tools is how they can address the needle in the haystack problem, Kler says. Humans cannot handle the proliferation of data points and the massive amounts of data pouring into the system, but AI is very good at identifying, filtering, and prioritizing threat warnings.

“It replaces the two overwhelmed SIEM guys trying to filter the millions of alerts in your SOC center,” Kler says. “AI can prioritize and correlate alerts, then direct your attention to the next urgent task.” In the future, AI will also help us in threat hunting in the network, uncovering fine correlations and statistical anomalies to highlight them for security teams.
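As a toy illustration of that prioritize-and-correlate step (the scoring rule here is invented for illustration, not Huawei’s), alerts can be grouped by fingerprint and ranked so a single critical alert outranks a flood of low-severity ones:

```python
from collections import Counter

SEVERITY = {"critical": 3, "high": 2, "medium": 1, "low": 0}

def prioritize(alerts):
    """Group alerts by (source, rule, severity) and rank the groups."""
    groups = Counter((a["source"], a["rule"], a["severity"]) for a in alerts)
    scored = [
        {"source": s, "rule": r, "severity": sev, "count": n,
         # Severity dominates the score; repeat volume only breaks ties.
         "score": SEVERITY[sev] * 10 + min(n, 9)}
        for (s, r, sev), n in groups.items()
    ]
    return sorted(scored, key=lambda g: g["score"], reverse=True)

# 500 identical low-severity scans vs. one critical exfiltration alert.
alerts = (
    [{"source": "fw-1", "rule": "port-scan", "severity": "low"}] * 500
    + [{"source": "db-7", "rule": "exfil", "severity": "critical"}]
)
ranked = prioritize(alerts)
print(ranked[0]["rule"])  # 'exfil' rises to the top despite the flood
```

Production systems learn these weights and correlations from data rather than hard-coding them, but the goal is the same: surface the next urgent task instead of a raw alert stream.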

AI can also be used for overall threat intelligence, predicting when, where, and what kind of attacks your organization might be facing next — predictive maintenance, in other words, to determine what’s going to go wrong next. For instance, if attacks on medical facilities ramp up, it can warn you that your own medical facility is now at increased risk.

But remember that AI is not a silver bullet that’s going to solve every security issue, Kler says.

“If a marketing guy tells you that AI is going to solve all your cybersecurity problems, gracefully show him the door and tell him to come up with another pitch,” he says. “Like any other tool, it’s powerful if properly used, but it’s just one part of an overall security arsenal.”

Striking the human/AI balance

A lot of research is being done right now to try to find the right balance of AI usage and human oversight. It comes down to risk management. Any location where AI might potentially cause physical, psychological, or reputational damage requires strong oversight.

The other requirement is determining the degree of tolerance an enterprise has for the AI to misbehave or to fail, along with the time and costs to recover from failure. In critical domains where a misbehaving AI can irreversibly bring down the business, enterprises must leverage AI very carefully with strict policies and stringent security controls.

On the other hand, if you’re using AI as part of your security monitoring system to deliver meaningful security insights, you still maintain access to the underlying data, so if a problem occurs in the AI’s recommendations, the impact is lower. An oversight process can be used to identify and correct such issues with minimal to no damage to the network.

“Businesses should introduce graceful failures as part of their AI cybersecurity strategy,” Kler says. “Enterprises can allow AI to make decisions and take actions if they know and can control the blast radius in case of an AI failure.”

Implementing AI successfully

The cost-benefit analysis is step zero in a successful AI security implementation — and in getting essential stakeholders on board. Security leaders must first identify and demonstrate how AI can reduce costs, whether financial, reputational, or tied to any other vulnerable facet of the organization, and show how it’s going to help reduce the number of successful incidents or cut CAPEX or OPEX.

For most companies, the easiest place to introduce AI into their cybersecurity architecture, with the biggest gain, is probably the event monitoring domain. Integrating AI into the monitoring platform can vastly improve a team’s ability to identify and address the most urgent events, reduce attackers’ dwell time, and improve overall detection and response metrics. AI can also help analyze security events in post-processing, delivering insights and helping companies continuously improve their security posture.

After you identify where integrating AI into the security architecture would provide the biggest gain, the next step is to focus on policies, education, and management. First, policies help drive and shape business processes and justify your security decisions. Next, employees need to be adequately trained to properly use AI tools in order to maximize the business benefits. And finally, you need to monitor and measure the impact of AI on your security solution and overall security posture, and optimize accordingly.

Learn more about how AI security tools are helping secure enterprise networks, strategies for successful risk identification and management, how to strike the right balance between AI automation and human control, and more.


Don’t miss out.

Register here for free.


You’ll learn:

  • How AI is changing the game for network security solutions
  • How to mitigate attacks with proactive AI network security and keep your company out of the headlines
  • How to take data and analytics one step further to level up your network security game

Speakers:

  • Yair Kler, Head of Solution Security, Huawei Technologies
  • Andy Purdy, Chief Security Officer, Huawei Technologies USA (moderator)

More speakers to be announced soon!


Categories
Security

REvil ransomware attacks systems using Kaseya’s remote IT management software

Just in time to ruin the holiday weekend, ransomware attackers have apparently used Kaseya — a software platform designed to help manage IT services remotely — to deliver their payload. Sophos director and ethical hacker Mark Loman tweeted about the attack earlier today, and now reports that affected systems will demand $44,999 to be unlocked. A note on Kaseya’s website implores customers to shut off their VSA servers for now “because one of the first things the attacker does is shutoff administrative access to the VSA.”

According to a report from Bleeping Computer, the attack targeted six large MSPs and has encrypted data for as many as 200 companies.

At DoublePulsar, Kevin Beaumont has posted more details about how the attack seems to work, with REvil ransomware arriving via a Kaseya update and using the platform’s administrative privileges to infect systems. Once the managed service providers are infected, their systems can attack the clients they provide remote IT services for (network management, system updates, and backups, among other things).

In a statement, Kaseya told The Verge that “We are investigating a potential attack against the VSA that indicates to have been limited to a small number of our on-premises customers only.” A notice claims that all of its cloud servers are now in “maintenance mode,” a move the spokesperson said is being taken out of an “abundance of caution.” Later on Friday evening, Kaseya CEO Fred Voccola issued a statement saying the company estimates that fewer than 40 MSPs were affected and that it is preparing a patch to mitigate the vulnerability.

Today’s attack has been linked to the notorious REvil ransomware gang (already tied to attacks on Acer and meat supplier JBS earlier this year), and The Record notes that, counting incidents under more than one name, this may be the third time Kaseya software has been a vector for their exploits.

Beginning around mid-day (EST/US) on Friday July 2, 2021, Kaseya’s Incident Response team learned of a potential security incident involving our VSA software.

We took swift actions to protect our customers:

Immediately shut down our SaaS servers as a precautionary measure, even though we had not received any reports of compromise from any SaaS or hosted customers;

Immediately notified our on-premises customers via email, in-product notices, and phone to shut down their VSA servers to prevent them from being compromised.

We then followed our established incident response process to determine the scope of the incident and the extent that our customers were affected.

We engaged our internal incident response team and leading industry experts in forensic investigations to help us determine the root cause of the issue;

We notified law enforcement and government cybersecurity agencies, including the FBI and CISA.

While our early indicators suggested that only a very small number of on-premises customers were affected, we took a conservative approach in shutting down the SaaS servers to ensure we protected our more than 36,000 customers to the best of our ability. We have received positive feedback from our customers on our rapid and proactive response.

While our investigation is ongoing, to date we believe that:

Our SaaS customers were never at-risk. We expect to restore service to those customers once we have confirmed that they are not at risk, which we expect will be within the next 24 hours;

Only a very small percentage of our customers were affected – currently estimated at fewer than 40 worldwide.

We believe that we have identified the source of the vulnerability and are preparing a patch to mitigate it for our on-premises customers that will be tested thoroughly. We will release that patch as quickly as possible to get our customers back up and running.

I am proud to report that our team had a plan in place to jump into action and executed that plan perfectly today. We’ve heard from the vast majority of our customers that they experienced no issues at all, and I am grateful to our internal teams, outside experts, and industry partners who worked alongside of us to quickly bring this to a successful outcome.

Today’s actions are a testament to Kaseya’s unwavering commitment to put our customers first and provide the highest level of support for our products.

— Fred Voccola, CEO of Kaseya

Update July 2nd, 10:40PM ET: Added statement from Kaseya CEO.




Categories
AI

Adversarial attacks in machine learning: What they are and how to stop them



Adversarial machine learning, a technique that attempts to fool models with deceptive data, is a growing threat in the AI and machine learning research community. The most common goal is to cause a malfunction in a machine learning model. An adversarial attack might entail presenting a model with inaccurate or misrepresentative data as it trains, or introducing maliciously designed data to deceive an already trained model.

As the U.S. National Security Commission on Artificial Intelligence’s 2019 interim report notes, a very small percentage of current AI research goes toward defending AI systems against adversarial efforts. Some systems already used in production could be vulnerable to attack. For example, by placing a few small stickers on the ground, researchers showed that they could cause a self-driving car to move into the opposite lane of traffic. Other studies have shown that making imperceptible changes to an image can trick a medical analysis system into classifying a benign mole as malignant, and that pieces of tape can deceive a computer vision system into wrongly classifying a stop sign as a speed limit sign.

The increasing adoption of AI is likely to correlate with a rise in adversarial attacks. It’s a never-ending arms race, but fortunately, effective approaches exist today to mitigate the worst of the attacks.

Types of adversarial attacks

Attacks against AI models are often categorized along three primary axes — influence on the classifier, the security violation, and their specificity — and can be further subcategorized as “white box” or “black box.” In white box attacks, the attacker has access to the model’s parameters, while in black box attacks, the attacker has no access to these parameters.

An attack can influence the classifier — i.e., the model — by disrupting the model as it makes predictions, while a security violation involves supplying malicious data that gets classified as legitimate. On the specificity axis, a targeted attack attempts to enable a particular intrusion or disruption, while an indiscriminate attack simply aims to create general mayhem.

Evasion attacks are the most prevalent type of attack: data are modified at inference time to evade detection or to be classified as legitimate. Evasion doesn’t involve influence over the data used to train a model, but it is comparable to the way spammers and hackers obfuscate the content of spam emails and malware. An example of evasion is image-based spam, in which the spam content is embedded within an attached image to evade analysis by anti-spam models. Another example is spoofing attacks against AI-powered biometric verification systems.
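To make the evasion idea concrete, here is a minimal, self-contained sketch — not any real system — of the classic fast gradient sign method (FGSM) applied to a toy logistic-regression "spam filter" whose weights are invented for illustration. Because the model is linear, the gradient sign with respect to each input feature is just the sign of its weight, so a small per-feature nudge in the opposite direction lowers the spam score:

```python
import math

W = [2.0, -1.0, 0.5]   # weights of a toy spam classifier (invented values)
B = -0.25

def predict(x):
    # Logistic regression: probability that the input is spam.
    z = sum(w * xi for w, xi in zip(W, x)) + B
    return 1 / (1 + math.exp(-z))

def evade(x, eps):
    # FGSM for a linear model: nudge each feature by eps against the
    # sign of its weight -- the direction that lowers the score fastest.
    return [xi - eps * (1 if w > 0 else -1) for xi, w in zip(x, W)]

x = [1.0, 0.2, 0.8]        # features of a message the model flags as spam
adv = evade(x, eps=0.6)    # slightly perturbed features, same "message"

print(round(predict(x), 3))    # 0.875 -- confidently spam
print(round(predict(adv), 3))  # 0.463 -- now classified as legitimate
```

The same principle scales up to deep networks, where the gradient is computed by backpropagation rather than read off the weights, but the attack surface is identical: small, targeted input changes that cross the decision boundary.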

Poisoning, another attack type, is “adversarial contamination” of data. Machine learning systems are often retrained using data collected while they’re in operation, and an attacker can poison this data by injecting malicious samples that subsequently disrupt the retraining process. An adversary might input data during the training phase that’s falsely labeled as harmless when it’s actually malicious. For example, large language models like OpenAI’s GPT-3 can reveal sensitive, private information when fed certain words and phrases, research has shown.
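The retraining loop described above is where poisoning bites. The following hypothetical sketch — a nearest-centroid detector over a single made-up anomaly score, with invented numbers — shows how a handful of falsely labeled samples injected into the collected data can open a hole in the retrained model:

```python
import statistics

def train(samples):
    # Nearest-centroid classifier over one score feature, "retrained"
    # on (value, label) pairs collected while the system is in operation.
    benign = [v for v, label in samples if label == 0]
    malicious = [v for v, label in samples if label == 1]
    return statistics.mean(benign), statistics.mean(malicious)

def classify(v, centroids):
    c_benign, c_malicious = centroids
    return 0 if abs(v - c_benign) < abs(v - c_malicious) else 1

clean = [(0.1, 0), (0.2, 0), (0.3, 0), (0.8, 1), (0.9, 1), (1.0, 1)]
model = train(clean)
print(classify(0.75, model))  # 1 -- correctly flagged as malicious

# Adversarial contamination: inject malicious-looking samples falsely
# labeled benign, dragging the benign centroid toward the attack region.
poison = [(0.8, 0)] * 10
model = train(clean + poison)
print(classify(0.75, model))  # 0 -- the same input now slips through
```

Real poisoning attacks are subtler — the injected points are usually crafted to look plausible to human reviewers — but the mechanism is the same: the attacker controls part of the training distribution, so they control part of the decision boundary.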

Meanwhile, model stealing, also called model extraction, involves an adversary probing a “black box” machine learning system in order to either reconstruct the model or extract the data that it was trained on. This can cause issues when either the training data or the model itself is sensitive and confidential. For example, model stealing could be used to extract a proprietary stock-trading model, which the adversary could then use for their own financial gain.
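For the simplest model class, extraction is not even approximate. This illustrative sketch — the "proprietary" weights are invented, and real targets are of course far more complex — shows that a linear scoring model exposed only through a query interface can be reconstructed exactly with n + 1 well-chosen queries:

```python
# The victim: a "proprietary" linear scoring model the adversary can only
# query as a black box. The hidden parameters are invented for illustration.
_SECRET_W = [1.5, -2.0, 0.75]
_SECRET_B = 0.25

def query(x):
    # The only access the attacker has: submit an input, read the score.
    return sum(w * xi for w, xi in zip(_SECRET_W, x)) + _SECRET_B

def steal(n_features):
    # Model extraction for a linear model: the bias from querying the
    # origin, then each weight from one unit-vector query.
    b = query([0.0] * n_features)
    weights = []
    for i in range(n_features):
        e = [0.0] * n_features
        e[i] = 1.0
        weights.append(query(e) - b)
    return weights, b

w, b = steal(3)
print(w, b)  # [1.5, -2.0, 0.75] 0.25 -- an exact copy of the model
```

Nonlinear models resist this closed-form trick, but the published attacks follow the same pattern at scale: query the black box many times and fit a surrogate model to the input-output pairs.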

Attacks in the wild

Plenty of examples of adversarial attacks have been documented to date. One showed it’s possible to 3D-print a toy turtle with a texture that causes Google’s object detection AI to classify it as a rifle, regardless of the angle from which the turtle is photographed. In another attack, a machine-tweaked image of a dog was shown to look like a cat to both computers and humans. So-called “adversarial patterns” on glasses or clothing have been designed to deceive facial recognition systems and license plate readers. And researchers have created adversarial audio inputs to disguise commands to intelligent assistants in benign-sounding audio.


In a paper published in April, researchers from Google and the University of California at Berkeley demonstrated that even the best forensic classifiers — AI systems trained to distinguish between real and synthetic content — are susceptible to adversarial attacks. It’s a troubling, if not necessarily new, development for organizations attempting to productize fake media detectors, particularly considering the meteoric rise in deepfake content online.

One of the most infamous recent examples is Microsoft’s Tay, a Twitter chatbot programmed to learn to participate in conversation through interactions with other users. While Microsoft’s intention was that Tay would engage in “casual and playful conversation,” internet trolls noticed the system had insufficient filters and began feeding Tay profane and offensive tweets. The more these users engaged, the more offensive Tay’s tweets became, forcing Microsoft to shut the bot down just 16 hours after its launch.

As VentureBeat contributor Ben Dickson notes, recent years have seen a surge in the amount of research on adversarial attacks. In 2014, there were zero papers on adversarial machine learning submitted to the preprint server Arxiv.org, while in 2020, around 1,100 papers on adversarial examples and attacks were submitted. Adversarial attacks and defense methods have also become a highlight of prominent conferences, including NeurIPS, ICLR, DEF CON, Black Hat, and Usenix.

Defenses

With the rise in interest in adversarial attacks and techniques to combat them, startups like Resistant AI are coming to the fore with products that ostensibly “harden” algorithms against adversaries. Beyond these new commercial solutions, emerging research holds promise for enterprises looking to invest in defenses against adversarial attacks.

One way to test machine learning models for robustness is with what’s called a trojan attack, which involves modifying a model to respond to input triggers that cause it to infer an incorrect response. In an attempt to make these tests more repeatable and scalable, researchers at Johns Hopkins University developed a framework dubbed TrojAI, a set of tools that generate triggered data sets and associated models with trojans. They say that it’ll enable researchers to understand the effects of various data set configurations on the generated “trojaned” models and help to comprehensively test new trojan detection methods to harden models.

The Johns Hopkins team is far from the only one tackling the challenge of adversarial attacks in machine learning. In February, Google researchers released a paper describing a framework that either detects attacks or pressures the attackers to produce images that resemble the target class of images. Baidu, Microsoft, IBM, and Salesforce offer toolboxes — Advbox, Counterfit, Adversarial Robustness Toolbox, and Robustness Gym — for generating adversarial examples that can fool models in frameworks like MxNet, Keras, Facebook’s PyTorch and Caffe2, Google’s TensorFlow, and Baidu’s PaddlePaddle. And MIT’s Computer Science and Artificial Intelligence Laboratory recently released a tool called TextFooler that generates adversarial text to strengthen natural language models.

More recently, Microsoft, the nonprofit Mitre Corporation, and 11 organizations including IBM, Nvidia, Airbus, and Bosch released the Adversarial ML Threat Matrix, an industry-focused open framework designed to help security analysts to detect, respond to, and remediate threats against machine learning systems. Microsoft says it worked with Mitre to build a schema that organizes the approaches malicious actors employ in subverting machine learning models, bolstering monitoring strategies around organizations’ mission-critical systems.

The future might bring outside-the-box approaches, including several inspired by neuroscience. For example, researchers at MIT and MIT-IBM Watson AI Lab have found that directly mapping the features of the mammalian visual cortex onto deep neural networks creates AI systems that are more robust to adversarial attacks. While adversarial AI is likely to become a never-ending arms race, these sorts of solutions instill hope that attackers won’t always have the upper hand — and that biological intelligence still has a lot of untapped potential.

