
Cloudflare just stopped one of the largest DDoS attacks ever

Cloudflare, a company that specializes in web security and distributed denial of service (DDoS) attack mitigation, just reported that it managed to stop an attack of unprecedented scale.

The HTTPS DDoS attack was one of the largest such attacks ever recorded, and it came from unusual sources — data centers.


The attack was detected and mitigated automatically by Cloudflare’s defense systems, which were set up for one of its customers on the paid Professional plan. At its peak, the attack reached a massive 15.3 million requests per second (rps), making it the largest HTTPS DDoS attack Cloudflare has ever mitigated.

Cloudflare has previously seen larger attacks targeting unencrypted HTTP, but as the company notes in its announcement, targeting HTTPS is a much more expensive and difficult venture. Such attacks typically require extra computational resources because of the need to establish a transport layer security (TLS) encrypted connection. The increase in costs is twofold: it costs more for the attacker to launch the attack, and it costs more for the targeted server to mitigate it.
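To make that cost difference concrete, here is a minimal sketch (in Python, not anything from Cloudflare’s tooling, and with example.com standing in as a placeholder host) that times a bare TCP connection against a connection that also completes a TLS handshake. The gap between the two measurements is roughly the extra work attached to every request in an HTTPS flood, paid on both the attacking and the defending side.

```python
# Minimal sketch: compare the cost of a plain TCP connection with the cost of
# a TCP connection plus a TLS handshake. example.com is only a placeholder.
import socket
import ssl
import time

HOST = "example.com"  # placeholder host, assumed to serve both HTTP and HTTPS


def tcp_connect_time(host: str, port: int = 80) -> float:
    """Time a bare TCP connection (three-way handshake only)."""
    start = time.perf_counter()
    with socket.create_connection((host, port), timeout=5):
        pass
    return time.perf_counter() - start


def tls_connect_time(host: str, port: int = 443) -> float:
    """Time a TCP connection plus a full TLS handshake
    (key exchange and certificate validation)."""
    context = ssl.create_default_context()
    start = time.perf_counter()
    with socket.create_connection((host, port), timeout=5) as sock:
        with context.wrap_socket(sock, server_hostname=host):
            pass
    return time.perf_counter() - start


if __name__ == "__main__":
    print(f"TCP only : {tcp_connect_time(HOST) * 1000:.1f} ms")
    print(f"TCP + TLS: {tls_connect_time(HOST) * 1000:.1f} ms")
```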

The attack lasted less than 15 seconds, and its target was a cryptocurrency launchpad. Crypto launchpads are platforms that startups within the crypto space can use to raise early-stage funding while leveraging the reach of the launchpad. Cloudflare mitigated the attack without any additional actions being taken by the customer.

The source of the attack was not unfamiliar to Cloudflare: the company says it has seen attacks of up to 10 million rps coming from sources that match the same attack fingerprint. However, the devices that carried out this attack were something new, since they came mostly from data centers. Cloudflare notes that this marks a shift it has been observing lately, with larger attacks moving from residential internet service provider (ISP) networks to large networks of cloud compute providers.

Image: Cloudflare DDoS attack sources (Cloudflare)

Approximately 6,000 unique bots across more than 1,300 networks carried out the DDoS attack that Cloudflare managed to mitigate automatically, without any human intervention. Perhaps more impressive is the number of locations involved, adding up to a total of 112 countries around the globe. The largest share of the attack traffic (15%) came from Indonesia, followed by Russia, Brazil, India, Colombia, and the U.S.

While this wasn’t the largest DDoS attack ever mitigated by Cloudflare, it’s definitely up there in terms of volume and severity. In 2021, the service stopped a 17.2 million rps HTTP DDoS attack. Earlier this year, the company reported a staggering 175% quarter-over-quarter increase in the number of DDoS attacks, based on data from the fourth quarter of 2021.



Go read this story on how Facebook’s focus on growth stopped its AI team from fighting misinformation

Facebook has always been a company focused on growth above all else. More users and more engagement equals more revenue. The cost of that single-mindedness is spelled out clearly in this brilliant story from MIT Technology Review. It details how the company’s AI team’s attempts to tackle misinformation with machine learning were apparently stymied by Facebook’s unwillingness to limit user engagement.

“If a model reduces engagement too much, it’s discarded. Otherwise, it’s deployed and continually monitored,” writes author Karen Hao of Facebook’s machine learning models. “But this approach soon caused issues. The models that maximize engagement also favor controversy, misinformation, and extremism: put simply, people just like outrageous stuff.”

On Twitter, Hao noted that the article is not about “corrupt people [doing] corrupt things.” Instead, she says, “It’s about good people genuinely trying to do the right thing. But they’re trapped in a rotten system, trying their best to push the status quo that won’t budge.”

The story also adds more evidence to the accusation that Facebook’s desire to placate conservatives during Donald Trump’s presidency led to it turning a blind eye to right-wing misinformation. This seems to have happened at least in part due to the influence of Joel Kaplan, a former member of George W. Bush’s administration who is now Facebook’s vice president of global public policy and “its highest-ranking Republican.” As Hao writes:

All Facebook users have some 200 “traits” attached to their profile. These include various dimensions submitted by users or estimated by machine-learning models, such as race, political and religious leanings, socioeconomic class, and level of education. Kaplan’s team began using the traits to assemble custom user segments that reflected largely conservative interests: users who engaged with conservative content, groups, and pages, for example. Then they’d run special analyses to see how content-moderation decisions would affect posts from those segments, according to a former researcher whose work was subject to those reviews.

The Fairness Flow documentation, which the Responsible AI team wrote later, includes a case study on how to use the tool in such a situation. When deciding whether a misinformation model is fair with respect to political ideology, the team wrote, “fairness” does not mean the model should affect conservative and liberal users equally. If conservatives are posting a greater fraction of misinformation, as judged by public consensus, then the model should flag a greater fraction of conservative content. If liberals are posting more misinformation, it should flag their content more often too.

But members of Kaplan’s team followed exactly the opposite approach: they took “fairness” to mean that these models should not affect conservatives more than liberals. When a model did so, they would stop its deployment and demand a change. Once, they blocked a medical-misinformation detector that had noticeably reduced the reach of anti-vaccine campaigns, the former researcher told me. They told the researchers that the model could not be deployed until the team fixed this discrepancy. But that effectively made the model meaningless. “There’s no point, then,” the researcher says. A model modified in that way “would have literally no impact on the actual problem” of misinformation.
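As a rough illustration of the disagreement described in the two passages above, here is a small hypothetical sketch (the group names, post counts, and misinformation rates are invented, and this is not Facebook’s Fairness Flow code). Under the definition in the documentation, an accurate model’s flag counts should mirror each group’s actual misinformation rate, so unequal flag counts can still be “fair”; forcing them to be equal is what hollows the model out.

```python
# Hypothetical sketch only; all figures are invented and this is not
# Facebook's Fairness Flow. It illustrates the fairness definition quoted
# above: flagging should track each group's actual misinformation rate.

def expected_flags(posts: int, misinfo_rate: float) -> float:
    """Flags an accurate, unbiased model would raise for a group:
    proportional to how much misinformation that group really posts."""
    return posts * misinfo_rate


# Two made-up groups posting the same volume but with different base rates.
groups = {
    "group_a": {"posts": 10_000, "misinfo_rate": 0.06},
    "group_b": {"posts": 10_000, "misinfo_rate": 0.03},
}

for name, g in groups.items():
    print(f"{name}: ~{expected_flags(g['posts'], g['misinfo_rate']):.0f} flagged posts")

# Under the quoted definition, the unequal counts above are "fair" because they
# mirror unequal base rates. Forcing the counts to be equal (the opposite
# approach described in the story) means under-flagging the group that posts
# more misinformation, which blunts the model.
```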

The story also says that the work by Facebook’s AI researchers on the problem of algorithmic bias, in which machine learning models unintentionally discriminate against certain groups of users, has been undertaken at least in part to preempt these same accusations of anti-conservative sentiment and to forestall potential regulation by the US government. But pouring more resources into bias work has meant ignoring problems involving misinformation and hate speech. Despite the company’s lip service to AI fairness, the guiding principle, says Hao, is still the same as ever: growth, growth, growth.

[T]esting algorithms for fairness is still largely optional at Facebook. None of the teams that work directly on Facebook’s news feed, ad service, or other products are required to do it. Pay incentives are still tied to engagement and growth metrics. And while there are guidelines about which fairness definition to use in any given situation, they aren’t enforced.

You can read Hao’s full story at MIT Technology Review here.


