Facebook disputes report that its AI can’t detect hate speech or violence consistently

Facebook vice president of integrity Guy Rosen wrote in a blog post Sunday that the prevalence of hate speech on the platform had dropped by 50 percent over the past three years, and that “a narrative that the technology we use to fight hate speech is inadequate and that we deliberately misrepresent our progress” was false.

“We don’t want to see hate on our platform, nor do our users or advertisers, and we are transparent about our work to remove it,” Rosen wrote. “What these documents demonstrate is that our integrity work is a multi-year journey. While we will never be perfect, our teams continually work to develop our systems, identify issues and build solutions.”

The post appeared to be in response to a Sunday article in the Wall Street Journal, which said the Facebook employees tasked with keeping offensive content off the platform don’t believe the company is able to reliably screen for it.

The WSJ report states that internal documents show that two years ago, Facebook reduced the time that human reviewers focused on hate speech complaints, and made other adjustments that reduced the number of complaints. That in turn helped create the appearance that Facebook’s artificial intelligence had been more successful in enforcing the company’s rules than it actually was, according to the WSJ.

A team of Facebook employees found in March that the company’s automated systems were removing posts which generated between 3 and 5 percent of the views of hate speech on the social platform, and less than 1 percent of all content that was in violation of its rules against violence and incitement, the WSJ reported.

But Rosen argued that focusing on content removals alone was “the wrong way to look at how we fight hate speech.” He said the technology to remove hate speech is just one method Facebook uses to fight it. “We need to be confident that something is hate speech before we remove it,” Rosen said.

Instead, he said, the company believes a more important measure is the prevalence of hate speech people actually see on the platform, and how the company reduces it using a range of tools. He claimed that for every 10,000 views of content on Facebook, there were five views of hate speech. “Prevalence tells us what violating content people see because we missed it,” Rosen wrote. “It’s how we most objectively evaluate our progress, as it provides the most complete picture.”

But the internal documents obtained by the WSJ showed some significant pieces of content were able to evade Facebook’s detection, including videos of car crashes that showed people with graphic injuries, and violent threats against trans children.

The WSJ has produced a series of reports about Facebook based on internal documents provided by whistleblower Frances Haugen. She testified before Congress that the company was aware of the negative impact its Instagram platform could have on teenagers. Facebook has disputed the reporting based on the internal documents.

Consent apps don’t stop sexual violence, so quit trying to make them

Yesterday, New South Wales Police Commissioner Mick Fuller suggested that technology should be part of the solution to growing concerns about sexual assault. He encouraged serious discussion about using a digital app to record positive sexual consent.

In our research, we have studied a wide range of mobile applications and artificial intelligence (AI) chatbots used over the past decade in attempts to counter sexual violence. We found these apps have many limitations and unexpected consequences.

How apps are being used to address sexual abuse

Apps aimed at responding to sexual harassment and assault have circulated for at least a decade. With support from government initiatives, such as the Obama administration’s 2011 Apps Against Abuse challenge, and global organisations, such as UN Women, they have been implemented in corporate environments, universities and mental health services.

These apps are not limited to documenting consent. Many are designed to offer emergency assistance, information and a means for survivors of sexual violence to report and build evidence against perpetrators. Proponents often frame these technologies as empowering tools that support women through the accessible and anonymous processing of data.

In the case of the proposed consent app, critics have noted that efforts to time-stamp consent fail to recognise that consent can always be withdrawn. In addition, a person may consent under pressure, out of fear of repercussions, or while intoxicated.

If a person does indicate consent at some point but circumstances change, the record could be used to discredit their claims.

How digital apps fail to address sexual violence

The use of apps will not address many longstanding problems with common responses to sexual violence. Research indicates safety apps often reinforce rape myths, such as the idea that sexual assault is most often perpetrated by strangers. In reality, the vast majority of rapes are committed by people the victims already know.

Usually marketed to women, these apps collect data from users through surveillance mechanisms such as persistent cookies and geolocation tracking. Even “anonymised” data can often be re-identified.

Digital tools can also enable violence. Abusive partners can use them for cyberstalking, giving them constant access to victims. Apps designed to encourage survivors to report violence raise similar concerns, because they fail to address the power imbalances that lead to authorities discrediting survivors’ accounts of violence.

Apps don’t change the bigger picture

The introduction of an app does not itself change the wider landscape in which sexual violence cases are handled.

The high-profile sex abuse scandal involving Larry Nassar, a former USA Gymnastics and Michigan State University doctor convicted of a range of sex offences after being accused by more than 350 young women and girls, led to reforms that included the SafeSport app.

This resulted in 1,800 reports of sexual misconduct or abuse within a year of the app’s introduction. However, a lack of funding meant the reports could not be properly investigated, undermining organisational promises to enforce sanctions for sexual misconduct.


Read more: Anti-rape devices may have their uses, but they don’t address the ultimate problem


Poor implementation and cost-saving measures compromise users’ safety. In Canada and the United States, the hospitality industry is rolling out smart panic buttons to 1.2 million hotel and casino staff. This is a response to widespread sexual violence: a union survey found that 58% of employees had been sexually harassed by a guest and 65% of casino workers had experienced unwanted touching.

Employers are now required by law to provide panic buttons, but they are turning to cheap and inferior devices, raising security concerns. Legislation does not prevent them from using these devices to monitor the movements of their employees.

Who owns the data?

Even if implemented as intended, apps raise questions about data protection. They collect vast amounts of sensitive data, which is stored on digital databases and cloud servers that are vulnerable to cyberattacks.


Read more: The ugly truth: tech companies are tracking and misusing our data, and there’s little we can do


The data may be owned by private companies that can sell it on to other organisations, allowing authorities to circumvent privacy laws. Last month, it was revealed that US Immigration and Customs Enforcement had purchased access to the Thomson Reuters CLEAR database, which contains information about 400 million people whose data the agency could not legally collect on its own.

In short, apps don’t protect victims or their data.

Why we need to take this ‘bad idea’ seriously

Fuller, the NSW police commissioner, admitted his recommendation might be a bad idea. His suggestion rests on the premise that the most important issue to address is making sure consent is clearly communicated. That framing misunderstands the nature of sexual violence, which is grounded in unequal power relations.

In practice, a consent app would be unlikely to protect victims. Research shows that data collected through new forms of investigation often produces evidence that is used against victims’ wishes.

There are other reasons why the consent app is a bad idea. It perpetuates misguided assumptions about technology’s ability to “fix” societal harms. Consent, violence and accountability are not data problems. These complex issues require strong cultural and structural responses, not simply quantifiable and time-stamped data.

This article by Kathryn Henne, Professor and Director, School of Regulation and Global Governance, Australian National University; Jenna Imad Harb, PhD Scholar, Australian National University, and Renee M. Shelby, Postdoctoral fellow, Sexualities Project, Northwestern University, is republished from The Conversation under a Creative Commons license. Read the original article.


Published March 22, 2021 — 12:18 UTC


