
Facebook disputes report that its AI can’t detect hate speech or violence consistently

Facebook vice president of integrity Guy Rosen wrote in a blog post Sunday that the prevalence of hate speech on the platform had dropped by 50 percent over the past three years, and that “a narrative that the technology we use to fight hate speech is inadequate and that we deliberately misrepresent our progress” was false.

“We don’t want to see hate on our platform, nor do our users or advertisers, and we are transparent about our work to remove it,” Rosen wrote. “What these documents demonstrate is that our integrity work is a multi-year journey. While we will never be perfect, our teams continually work to develop our systems, identify issues and build solutions.”

The post appeared to be in response to a Sunday article in the Wall Street Journal, which said the Facebook employees tasked with keeping offensive content off the platform don’t believe the company is able to reliably screen for it.

The WSJ report states that internal documents show that two years ago, Facebook reduced the time that human reviewers focused on hate speech complaints, and made other adjustments that reduced the number of complaints. That in turn helped create the appearance that Facebook’s artificial intelligence had been more successful in enforcing the company’s rules than it actually was, according to the WSJ.

A team of Facebook employees found in March that the company’s automated systems were removing posts which generated between 3 and 5 percent of the views of hate speech on the social platform, and less than 1 percent of all content that was in violation of its rules against violence and incitement, the WSJ reported.

But Rosen argued that focusing on content removals alone was “the wrong way to look at how we fight hate speech.” He said the technology to remove hate speech is just one method Facebook uses to fight it. “We need to be confident that something is hate speech before we remove it,” Rosen said.

Instead, he said, the company believes a more important measure is the prevalence of hate speech people actually see on the platform and how the company reduces it using various tools. He claimed that for every 10,000 views of a piece of content on Facebook, there were five views of hate speech. “Prevalence tells us what violating content people see because we missed it,” Rosen wrote. “It’s how we most objectively evaluate our progress, as it provides the most complete picture.”
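As a rough illustration of the arithmetic behind a prevalence metric (this sketch and its function name are ours, not Facebook’s), prevalence divides views of violating content by total content views:

```python
# Illustrative sketch of a prevalence metric (not Facebook's actual code).
def prevalence_per_10k(violating_views: int, total_views: int) -> float:
    """Views of violating content per 10,000 total content views."""
    return 10_000 * violating_views / total_views

# Rosen's figure: about 5 views of hate speech per 10,000 content views,
# i.e. 0.05 percent of all views.
print(prevalence_per_10k(violating_views=5, total_views=10_000))  # 5.0
```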

But the internal documents obtained by the WSJ showed some significant pieces of content were able to evade Facebook’s detection, including videos of car crashes that showed people with graphic injuries, and violent threats against trans children.

The WSJ has produced a series of reports about Facebook based on internal documents provided by whistleblower Frances Haugen. She testified before Congress that the company was aware of the negative impact its Instagram platform could have on teenagers. Facebook has disputed the reporting based on the internal documents.



Google has a secret blocklist that hides YouTube hate videos from advertisers — but it’s full of holes

This story is the first of two parts.

If you want to find YouTube videos related to “KKK” to advertise on, Google Ads will block you. But the company failed to block dozens of other hate and White nationalist terms and slogans, an investigation by The Markup has found.

Using a list of 86 hate-related terms we compiled with the help of experts, we discovered that Google uses a blocklist to try to stop advertisers from building YouTube ad campaigns around hate terms. But less than a third of the terms on our list were blocked when we conducted our investigation.
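The underlying check is simple to express. Here is a minimal sketch of the tally involved (our illustration, not The Markup’s published code; `is_blocked_for_ad_placement` is a hypothetical stand-in for whatever query the investigation ran against the Google Ads placement tool):

```python
# Illustrative sketch: tally what fraction of a term list is blocked from
# ad-placement search. `is_blocked_for_ad_placement` is a hypothetical
# placeholder, not a real Google Ads API call.
from typing import Callable, Iterable

def blocked_fraction(terms: Iterable[str],
                     is_blocked_for_ad_placement: Callable[[str], bool]) -> float:
    terms = list(terms)
    blocked = sum(1 for term in terms if is_blocked_for_ad_placement(term))
    return blocked / len(terms)

# With the 86-term list, "less than a third blocked" means fewer than
# roughly 29 terms returned a blocked response during the investigation.
```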

Google Ads suggested millions upon millions of YouTube videos to advertisers purchasing ads related to the terms “White power,” the fascist slogan “blood and soil,” and the far-right call to violence “racial holy war.”

The company even suggested videos for campaigns with terms that it clearly finds problematic, such as “great replacement.” YouTube slaps Wikipedia boxes on videos about “the great replacement,” noting that it’s “a white nationalist far-right conspiracy theory.”

Some of the hundreds of millions of videos that the company suggested for ad placements related to these hate terms contained overt racism and bigotry, including multiple videos featuring re-posted content from the neo-Nazi podcast The Daily Shoah, whose official channel was suspended by YouTube in 2019 for hate speech. Google’s top video suggestions for these hate terms returned many news videos and some anti-hate content—but also dozens of videos from channels that researchers labeled as espousing hate or White nationalist views.

“The idea that they sell is that they’re guiding advertisers and content creators toward less controversial content,” said Nandini Jammi, who co-founded the advocacy group Sleeping Giants, which uses social media to pressure companies to stop advertising on right-wing media websites and now runs the digital marketing consulting firm Check My Ads.

“But the reality on the ground is that it’s not being implemented that way,” she added. “If you’re using keyword technology and you’re not keeping track of the keywords that the bad guys are using, then you’re not going to find the bad stuff.”

‘Offensive and harmful’

When we approached Google with our findings, the company blocked another 44 of the hate terms on our list.

“We fully acknowledge that the functionality for finding ad placements in Google Ads did not work as intended,” company spokesperson Christopher Lawton wrote in an email. “These terms are offensive and harmful and should not have been searchable. Our teams have addressed the issue and blocked terms that violate our enforcement policies.”

“We take the issue of hate and harassment very seriously,” he added, “and condemn it in the strongest terms possible.”

Even after Lawton made that statement, 14 of the hate terms on our list—about one in six of them—remained available to search for videos for ad placements on Google Ads, including the anti-Black meme “we wuz kangz”; the neo-Nazi appropriated symbol “black sun”; “red ice tv,” a White nationalist media outlet that YouTube banned from its platform in 2019; and the White nationalist slogans “you will not replace us” and “diversity is a code word for anti-white.”

We again emailed Lawton asking why these terms remained available. He did not respond, but Google quietly removed 11 more hate terms, leaving only the White nationalist slogan “you will not replace us,” “American Renaissance” (the name of a publication the Anti-Defamation League describes as White supremacist), and the anti-Semitic meme “open borders for Israel.”

Blocking future investigations

Google also responded by shutting the door to future similar investigations into keyword blocking on Google Ads. The newly blocked terms are indistinguishable in Google’s code from searches for which there are no related videos, such as a string of gibberish. This was not the case when we conducted our investigation.

YouTube has faced repeated criticism for years over its handling of hate content, including boycotts by advertisers who were angry about their ads running next to offensive videos. The company responded by promising reforms, including taking down hate content. Most of the advertisers have returned, and the company reports that advertising on YouTube generates nearly $20 billion in annual revenues for Google.

In addition to overlooking common hate terms, we discovered that almost all the blocks Google had implemented were weak. They did not account for simple workarounds, such as pluralizing a singular word, changing a suffix, or removing spaces between words. “Aryan nation,” “globalist Jews,” “White pride,” “White pill,” and “White genocide” were all blocked from advertisers as two words but together resulted in hundreds of thousands of video recommendations once we removed the spaces between the words.
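The workarounds described above are mechanical enough to enumerate automatically. A rough sketch of what we mean by trivial variants (our illustration; the naive last-word pluralization is an assumption, not a full inflection engine):

```python
# Illustrative sketch: generate trivial variants of a term to test whether a
# keyword blocklist also covers them (spaces removed, crude pluralization).
def term_variants(term: str) -> set[str]:
    variants = {term}
    variants.add(term.replace(" ", ""))  # "white pride" -> "whitepride"
    words = term.split()
    variants.add(" ".join(words[:-1] + [words[-1] + "s"]))  # crude plural of last word
    return variants

print(term_variants("aryan nation"))
# e.g. {'aryan nation', 'aryannation', 'aryan nations'}
```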

Credit: The Markup

Researchers find that debiasing doesn’t eliminate racism from hate speech detection models

Current AI hate speech and toxic language detection systems exhibit problematic and discriminatory behavior, research has shown. At the core of the issue are training data biases, which often arise during the dataset creation process. When trained on biased datasets, models acquire and exacerbate biases, for example flagging text by Black authors as more toxic than text by white authors.

Toxicity detection systems are employed by a range of online platforms, including Facebook, Twitter, YouTube, and various publications. While one of the premier providers of these systems, Alphabet-owned Jigsaw, claims it has taken pains to remove bias from its models following a study showing they fared poorly on Black-authored speech, it’s unclear to what extent the same is true of other AI-powered solutions.

To see whether current model debiasing approaches can mitigate biases in toxic language detection, researchers at the Allen Institute investigated techniques to address lexical and dialectal imbalances in datasets. Lexical biases associate toxicity with the presence of certain words, like profanities, while dialectal biases correlate toxicity with “markers” of language variants like African-American English (AAE).


In the course of their work, the researchers looked at one debiasing method designed to tackle “predefined biases” (e.g., lexical and dialectal). They also explored a process that filters “easy” training examples with correlations that might mislead a hate speech detection model.
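As a simplified sketch of the general idea behind that kind of filtering (our approximation, not the researchers’ exact procedure): train a weak model on shallow lexical features, then drop the training examples it already gets right with high confidence, on the theory that those examples are carried by spurious surface correlations.

```python
# Simplified sketch of "easy example" filtering (not the paper's exact method;
# a real procedure would use held-out predictions rather than in-sample fits).
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

def filter_easy_examples(texts, labels, confidence=0.9):
    """Drop examples a weak bag-of-words model classifies correctly with
    high confidence. `labels` are assumed to be 0/1 integers."""
    vec = CountVectorizer(max_features=5000)
    X = vec.fit_transform(texts)
    weak = LogisticRegression(max_iter=1000).fit(X, labels)
    probs = weak.predict_proba(X)
    return [
        (t, y) for t, y, p in zip(texts, labels, probs)
        if not (p.argmax() == y and p.max() >= confidence)  # keep only "hard" examples
    ]
```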

According to the researchers, both approaches struggle to mitigate biases in a model trained on a biased dataset for toxic language detection. In their experiments, while filtering reduced bias in the data, models trained on filtered datasets still picked up lexical and dialectal biases. Even “debiased” models disproportionately flagged certain snippets of text as toxic. Perhaps more discouragingly, mitigating dialectal bias didn’t appear to change a model’s propensity to label text by Black authors as more toxic than text by white authors.

In the interest of thoroughness, the researchers embarked on a proof-of-concept study involving relabeling examples of supposedly toxic text whose translations from AAE to “white-aligned English” were deemed nontoxic. They used OpenAI’s GPT-3 to perform the translations and create a synthetic dataset — a dataset, they say, that resulted in a model less prone to dialectal and racial biases.


“Overall, our findings indicate that debiasing a model already trained on biased toxic language data can be challenging,” wrote the researchers, who caution against deploying their proof-of-concept approach because of its limitations and ethical implications. “Translating” the language a Black person might use into the language a white person might use both robs the original language of its richness and makes potentially racist assumptions about both parties. Moreover, the researchers note that GPT-3 likely wasn’t exposed to many African American English varieties during training, making it ill-suited for this purpose.

“Our findings suggest that instead of solely relying on development of automatic debiasing for existing, imperfect datasets, future work [should] focus primarily on the quality of the underlying data for hate speech detection, such as accounting for speaker identity and dialect,” the researchers wrote. “Indeed, such efforts could act as an important step towards making systems less discriminatory, and hence safe and usable.”
