Categories: AI

Google is releasing an open source harassment filter for journalists

Google’s Jigsaw unit is releasing the code for an open source anti-harassment tool called Harassment Manager. The tool, intended for journalists and other public figures, employs Jigsaw’s Perspective API to let users sort through potentially abusive comments on social media platforms, starting with Twitter. It’s debuting as source code for developers to build on, before launching as a functional application for Thomson Reuters Foundation journalists in June.

Harassment Manager currently works with Twitter’s API to combine moderation options — like hiding tweet replies and muting or blocking accounts — with a bulk filtering and reporting system. Perspective checks messages’ language for levels of “toxicity” based on elements like threats, insults, and profanity, and the tool sorts them into queues on a dashboard, where users can address them in batches rather than individually through Twitter’s default moderation tools. Users can choose to blur the text of the messages as they work, so they don’t have to read each one, and they can search for keywords in addition to using the automatically generated queues.
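For a sense of the scoring pattern described above, here is a minimal Python sketch against Perspective’s public comments:analyze endpoint. The endpoint and TOXICITY attribute come from Perspective’s documented API; the API key placeholder, the 0.8 cutoff, and the two-queue split are illustrative assumptions, not Harassment Manager’s actual logic.

```python
# Minimal sketch: score messages with the Perspective API and bucket them
# into review queues, in the spirit of Harassment Manager's dashboard.
import requests

PERSPECTIVE_URL = "https://commentanalyzer.googleapis.com/v1alpha1/comments:analyze"
API_KEY = "YOUR_API_KEY"  # a Google Cloud API key with Perspective enabled


def toxicity_score(text: str) -> float:
    """Return Perspective's TOXICITY summary score, between 0.0 and 1.0."""
    body = {
        "comment": {"text": text},
        "requestedAttributes": {"TOXICITY": {}},
    }
    resp = requests.post(PERSPECTIVE_URL, params={"key": API_KEY}, json=body)
    resp.raise_for_status()
    return resp.json()["attributeScores"]["TOXICITY"]["summaryScore"]["value"]


def sort_into_queues(messages: list[str], threshold: float = 0.8) -> dict:
    """Split messages into a 'review' queue and an 'ok' queue by score.

    The 0.8 threshold is an arbitrary illustration, not the tool's cutoff.
    """
    queues = {"review": [], "ok": []}
    for msg in messages:
        bucket = "review" if toxicity_score(msg) >= threshold else "ok"
        queues[bucket].append(msg)
    return queues
```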

[Image: the Harassment Manager dashboard. Credit: Google]

Harassment Manager also lets users download a standalone report containing abusive messages, creating a paper trail for their employer or, in the case of illegal content like direct threats, law enforcement. For now, however, there’s no standalone application that users can download. Instead, developers can freely build apps that incorporate its functionality, and services built on it will be launched by partners like the Thomson Reuters Foundation.

Jigsaw announced Harassment Manager on International Women’s Day, and it framed the tool as particularly relevant to female journalists who face gender-based abuse, highlighting input from “journalists and activists with large Twitter presences” as well as nonprofits like the International Women’s Media Foundation and the Committee to Protect Journalists. In a Medium post, the team says it’s hoping developers can tailor it for other at-risk social media users. “Our hope is that this technology provides a resource for people who are facing harassment online, especially female journalists, activists, politicians and other public figures, who deal with disproportionately high toxicity online,” the post reads.

[Image: the reporting option in Jigsaw’s Harassment Manager]

Google has harnessed Perspective for automated moderation before. In 2019, it released a browser extension called Tune that let social media users avoid seeing messages with a high chance of being toxic, and the API has been used by many commenting platforms (including Vox Media’s Coral) to supplement human moderation. But as we noted around the release of Perspective and Tune, the language analysis model has historically been far from perfect. It sometimes misclassifies satirical content or fails to detect abusive messages, and Jigsaw-style AI can inadvertently associate terms like “blind” or “deaf” — which aren’t necessarily negative — with toxicity. Jigsaw itself has also been criticized for a toxic workplace culture, although Google has disputed the claims.

Unlike AI-powered moderation on services like Twitter and Instagram, however, Harassment Manager isn’t a platform-side moderation feature. It’s apparently a sorting tool for helping manage the sometimes overwhelming scale of social media feedback, something that could be relevant for people far outside the realm of journalism — even if they can’t use it for now.


Categories: Security

Pegasus spyware used to target phones of journalists and activists, investigation finds

A sweeping investigation by 17 media outlets found that NSO Group’s Pegasus software was used in hacking attempts on 37 smartphones belonging to human rights activists and journalists, The Washington Post reported. The phones were on a leaked list of numbers discovered by Paris-based journalism nonprofit Forbidden Stories and human rights group Amnesty International, according to the Post. The numbers on the list were singled out for possible surveillance by countries that are clients of NSO, which markets its spyware to governments for tracking potential terrorists and criminals, the report states.

Pegasus can extract all of a mobile device’s data and activate the device’s microphone to listen in on conversations surreptitiously, as The Guardian notes. The list of journalists dates back to 2016, the Post reports, and includes reporters from the Post, CNN, the Associated Press, Voice of America, the New York Times, the Wall Street Journal, Bloomberg News, Le Monde, the Financial Times, and Al Jazeera.

In a statement emailed to The Verge on Sunday, an NSO spokesperson denied the claims in the report, saying it was “full of wrong assumptions and uncorroborated theories that raise serious doubts about the reliability and interests of the sources.”

“After checking their claims, we firmly deny the false allegations made in their report,” the statement continues. According to the statement, the company is considering a defamation lawsuit because “these allegations are so outrageous and far from reality.”

It’s not the first time NSO’s Pegasus spyware has been accused of being part of a larger surveillance campaign. Research organization Citizen Lab found that 36 phones belonging to Al Jazeera journalists were hacked between July and August 2020 using Pegasus technology, possibly by hackers working for governments in the Middle East. In 2019, WhatsApp sued NSO, claiming Pegasus was used to hack users of WhatsApp’s encrypted chat service.


Categories: AI

An online propaganda campaign used AI-generated headshots to create fake journalists

A network of fictional journalists, analysts, and political consultants has been used to place opinion pieces favorable to certain Gulf states in a range of media outlets, an investigation from The Daily Beast has revealed. At least 19 fake personas were used to author op-eds published in dozens of mainly conservative publications, with AI-generated headshots of would-be authors used to trick targets into believing the writers were real people.

It’s not the first time AI has been used in this way, though it’s still unusual to see machine learning deployed for online misinformation in the wild. Last year, a report from The Associated Press found a fake LinkedIn profile that also used an AI-generated headshot, part of a network of likely spies trying to make connections with professional targets.

AI-generated profile pictures created by sites like ThisPersonDoesNotExist.com have some unique advantages when it comes to building fake online personas. The most important characteristic is that each image is uniquely generated, meaning it can’t be traced back to a source picture (and thus quickly exposed as fake) using a reverse image search.
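Reverse image search works, roughly, by comparing a compact fingerprint of the query image against an index of previously seen pictures, which is why a uniquely generated face has nothing to match. Here is a small Python sketch of that idea using the third-party Pillow and imagehash packages; the indexed filenames and distance threshold are stand-ins, not any real search engine’s parameters.

```python
# Sketch of why reverse image search exposes stolen avatars but not
# GAN-generated ones: matching relies on an index of known pictures.
from PIL import Image
import imagehash

# Perceptual hashes of images our toy "search engine" has indexed.
INDEXED_PATHS = ["indexed_photo_1.jpg", "indexed_photo_2.jpg"]  # placeholders
known = [(imagehash.phash(Image.open(p)), p) for p in INDEXED_PATHS]


def reverse_search(query_path: str, max_distance: int = 8) -> list[str]:
    """Return indexed images whose fingerprint is close to the query's."""
    query_hash = imagehash.phash(Image.open(query_path))
    # Subtracting two perceptual hashes yields their Hamming distance.
    return [path for h, path in known if query_hash - h <= max_distance]

# A stolen avatar is near-identical to its indexed source, so it matches
# and the fake is exposed. A freshly generated face corresponds to no
# indexed photo, so the search returns nothing to trace it back to.
```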

However, the current generation of AI headshots isn’t flawless. They share a number of common tells, including odd-looking teeth, asymmetrical features, hair that blurs into nothing, earlobes that are strangely melted, and indistinct background imagery.

Some of these features can be seen in a number of the headshots used by the fake writers uncovered by The Daily Beast’s investigation. Other personas, though, simply used stolen avatars. The personas share a number of attributes that suggest they’re part of a single, coordinated campaign:

The personas identified by The Daily Beast:

- were generally contributors to two linked sites, The Arab Eye and Persia Now;
- had Twitter accounts created in March or April 2020;
- presented themselves as political consultants and freelance journalists, mostly based in European capitals;
- lied about their academic or professional credentials in phony LinkedIn accounts;
- used fake or stolen avatars manipulated to defeat reverse image searches;
- and linked to or amplified each other’s work.

Although it’s not clear who created the network, op-eds published by the fake writers do share certain editorial values. They argue for more sanctions against Iran, praise certain Gulf states like the United Arab Emirates, and criticize Qatar (currently the subject of a diplomatic and economic embargo from the UAE and other states in the Middle East because of the country’s alleged support for terrorism).

The network placed op-eds in US outlets like the Washington Examiner and the American Thinker, in Middle Eastern papers like The Jerusalem Post and Al Arabiya, and even in the English-language, Hong Kong-based South China Morning Post. As a result of The Daily Beast’s investigation, Twitter has suspended 15 accounts belonging to the fake writers.




Categories: AI

Microsoft’s AI journalists confuse mixed-race Little Mix singers

Microsoft’s decision to replace human journalists with AI to run its news and search site MSN.com has been criticized after the automated system confused two mixed-race members of British pop group Little Mix.

As first reported by The Guardian, the newly instated robot editors of MSN.com selected a story about Little Mix singer Jade Thirlwall’s experience with racism to appear on the homepage, but used a picture of Thirlwall’s bandmate Leigh-Anne Pinnock to illustrate it.

Thirlwall drew attention to the mistake on her Instagram story, writing: “@MSN If you’re going to copy and paste articles from other accurate media outlets, you might want to make sure you’re using an image of the correct mixed race member of the group.”

She added: “This shit happens to @leighannepinnock and I ALL THE TIME that it’s become a running joke … It offends me that you couldn’t differentiate the two women of colour out of four members of a group … DO BETTER!”

[Image: Thirlwall’s Instagram story calling out MSN. Screenshot via The Guardian]

According to The Guardian, the mistake was made by Microsoft’s new automated systems. The tech giant laid off the editorial staff of MSN late last month. These journalists were not tasked with writing stories but with selecting articles from other outlets to spotlight on the MSN homepage. Around 50 journalists were reportedly let go in the US and 27 in the UK.

It’s not clear exactly what caused this error, but in an updated statement, Microsoft said it was not a result of algorithmic bias but of an experimental feature in the automated system.

A spokesperson told The Verge: “Whilst removing bias and improving accuracy remain an area of focus for AI research, this mistake was not a result of these issues. In testing a new feature to select an alternate image, rather than defaulting to the first photo, a different image on the page of the original article was paired with the headline of the piece. This made it erroneously appear as though the headline was a caption for the picture.”
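Microsoft hasn’t published the system involved, but the statement describes a recognizable failure mode. The hypothetical sketch below reconstructs it under stated assumptions: an experimental path picks a non-lead image from the source page, with nothing verifying that the image depicts the subject of the headline. The data structure and selection rule are invented for illustration.

```python
# Hypothetical reconstruction of the failure Microsoft described: an
# experimental feature selects an alternate image from the article page
# instead of defaulting to the lead photo. Everything here is invented
# for illustration; it is not Microsoft's actual code.
def pick_thumbnail(article: dict, use_alternate: bool = False) -> str:
    images = article["images"]  # every image scraped from the source page
    if use_alternate and len(images) > 1:
        # Experimental path: choose a non-lead image, with no check that
        # it actually depicts the person the headline is about.
        return images[1]
    return images[0]  # default path: the article's lead photo


article = {
    "headline": "Singer opens up about her experience with racism",
    "images": ["lead_photo_of_subject.jpg", "inline_photo_of_bandmate.jpg"],
}
# The alternate image gets paired with the headline, so the page reads as
# though the headline captions the wrong person's photo.
print(pick_thumbnail(article, use_alternate=True))  # inline_photo_of_bandmate.jpg
```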

However, this is exactly the sort of mistake human editors are supposed to spot. And as Thirlwall’s comments make clear, it’s far from the first time such an error has been made. Earlier this year, for example, the BBC was forced to apologize after using footage of basketball player LeBron James to illustrate news of the death of Kobe Bryant; both played for the Lakers, though at different times.

Notably, The Guardian reports that the remaining human staff at MSN have been warned that news outlets are publishing critical coverage of the site’s automated systems, and that the AI may select those very stories as interesting and place them on the MSN homepage. If that happens, staff have been instructed to remove the stories manually.

Update Jun 10th, 5:59AM ET: The story has been updated with a new statement from Microsoft.

Correction: The story previously described Kobe Bryant and LeBron James as “teammates.” Although they played for the same team it was not at the same time.


Categories: Security

Dozens of Al Jazeera journalists targeted in apparent iOS spyware attack

Thirty-six personal phones belonging to Al Jazeera journalists, producers, anchors, and executives were hacked in a spyware campaign between July and August 2020, a new report from Citizen Lab alleges. The attacks reportedly used Pegasus technology provided by the Israeli firm NSO Group and are thought to be the work of four operators. Citizen Lab says it has “medium confidence” that one operator was working on behalf of the UAE government and another for the Saudi government.

The attacks are worrying not just because they appear to show politically motivated targeting of journalists, but also because they’re part of a trend toward increasingly advanced methods that are harder to detect. According to Citizen Lab, the attacks seem to have used a zero-click exploit to compromise iPhones via iMessage, meaning they happened without the victims needing to do anything and left much less of a trace once a device was infected. At the time of the attacks, in July 2020, the exploit chain was a zero-day.

Citizen Lab’s report says “almost all iPhone devices” that haven’t been updated to iOS 14 appear to be vulnerable to the hack, meaning the infections it found are likely a “miniscule fraction” of the total number. It has disclosed its findings to Apple, and the company is looking into the issue. Citizen Lab’s analysis suggests the spyware can record audio from a phone (including ambient noise and audio from phone calls), take photos, track location, and access passwords. Devices updated to iOS 14 don’t appear to be affected.

Citizen Lab discovered one of the hacks after Al Jazeera journalist Tamer Almisshal allowed the organization to install a VPN on his device because he was worried it might have been compromised. Using this software, Citizen Lab noticed that his phone visited a suspected installation server for NSO Group’s spyware. Seconds later, his phone uploaded over 200MB of data to three IP addresses it had never connected to before.
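Citizen Lab hasn’t published the tooling behind that observation, but the signal itself is simple: a large outbound transfer to hosts the phone had never contacted. Here is a minimal Python sketch of that kind of check over VPN flow records; the record format, the known-hosts set, and the 50MB threshold are all illustrative assumptions, not Citizen Lab’s actual methodology.

```python
# Minimal sketch of the anomaly described above: a phone suddenly pushing
# a large volume of data to previously unseen hosts.
from collections import defaultdict
from typing import Iterable


def flag_suspicious_uploads(
    flows: Iterable[tuple[str, int]],   # (dest_ip, bytes_sent) records
    known_hosts: set[str],              # destinations seen in past traffic
    threshold_bytes: int = 50_000_000,  # arbitrary 50MB alert threshold
) -> list[tuple[str, int]]:
    """Flag first-time destinations that received unusually large uploads."""
    sent = defaultdict(int)
    for dest_ip, bytes_sent in flows:
        sent[dest_ip] += bytes_sent
    return [
        (ip, total)
        for ip, total in sent.items()
        if ip not in known_hosts and total >= threshold_bytes
    ]

# Roughly 200MB split across three never-before-seen addresses, as in the
# Almisshal case, would trip this check, while routine traffic to familiar
# services would not.
```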

As well as the Al Jazeera employees, Citizen Lab reports that Rania Dridi, a journalist with Al Araby TV, was also a victim of hacks using NSO Group’s spyware. These attacks date back to October 2019 and appear to have included two zero-day exploits.

This is not the first time allegations have emerged that spyware from NSO Group has been used to target journalists. The Guardian reports that the software has allegedly been used to target journalists in Morocco, as well as political dissidents from Rwanda and Spanish politicians.

When contacted for comment, a spokesperson for NSO Group told The Verge that Citizen Lab’s report was based on “speculation” and “lacks any evidence supporting a connection to NSO.”

“NSO provides products that enable governmental law enforcement agencies to tackle serious organized crime and counterterrorism only, and as stated in the past we do not operate them,” the spokesperson said. “However, when we receive credible evidence of misuse with enough information which can enable us to assess such credibility, we take all necessary steps in accordance with our investigation procedure in order to review the allegations.”

As a result of its investigation, Citizen Lab is calling for more regulation of surveillance technology, and for a global moratorium on its sale and transfer until safeguards are put in place to guard against misuse.
