Categories
AI

Deepfake satellite imagery poses a not-so-distant threat, warn geographers

When we think of deepfakes, we tend to imagine AI-generated people. This might be lighthearted, like a deepfake Tom Cruise, or malicious, like nonconsensual pornography. What we don’t imagine is deepfake geography: AI-generated images of cityscapes and countryside. But that’s exactly what some researchers are worried about.

Specifically, geographers are concerned about the spread of fake, AI-generated satellite imagery. Such pictures could mislead in a variety of ways. They could be used to create hoaxes about wildfires or floods, or to discredit stories based on real satellite imagery. (Think about reports on China’s Uyghur detention camps that gained credence from satellite evidence. As geographic deepfakes become widespread, the Chinese government can claim those images are fake, too.) Deepfake geography might even be a national security issue, as geopolitical adversaries use fake satellite imagery to mislead foes.

The US military warned about this very prospect in 2019. Todd Myers, an analyst at the National Geospatial-Intelligence Agency, imagined a scenario in which military planning software is fooled by fake data that shows a bridge in an incorrect location. “So from a tactical perspective or mission planning, you train your forces to go a certain route, toward a bridge, but it’s not there. Then there’s a big surprise waiting for you,” said Myers.

The first step to tackling these issues is to make people aware there’s a problem in the first place, says Bo Zhao, an assistant professor of geography at the University of Washington. Zhao and his colleagues recently published a paper on the subject of “deep fake geography,” which includes their own experiments generating and detecting this imagery.

Bo Zhao and his colleagues at the University of Washington were able to create their own AI-generated satellite imagery (above).
Image: ‘Deep fake geography? When geospatial data encounter Artificial Intelligence,’ Zhao et al

The aim, Zhao tells The Verge over email, “is to demystify the function of absolute reliability of satellite images and to raise public awareness of the potential influence of deep fake geography.” He says that although deepfakes are widely discussed in other fields, his paper is likely the first to touch upon the topic in geography.

“While many GIS [geographic information system] practitioners have been celebrating the technical merits of deep learning and other types of AI for geographical problem solving, few have publicly recognized or criticized the potential threats of deep fake to the field of geography or beyond,” write the authors.

Far from presenting deepfakes as a novel challenge, Zhao and his colleagues locate the technology in a long history of fake geography that dates back millennia. Humans have been lying with maps for pretty much as long as maps have existed, they say, from mythological geographies devised by ancient civilizations like the Babylonians, to modern propaganda maps distributed during wartime “to shake the enemy’s morale.”

One particularly curious example comes from so-called “paper towns” and “trap streets.” These are fake settlements and roads inserted by cartographers into maps in order to catch rivals stealing their work. If anyone produces a map which includes your very own Fakesville, Ohio, you know — and can prove — that they’re copying your cartography.

“It is a centuries-old phenomenon,” says Zhao of fake geography, though new technology produces new challenges. “It is novel partially because the deepfaked satellite images are so uncannily realistic. The untrained eyes would easily consider they are authentic.”

It’s certainly easier to produce fake satellite imagery than fake video of humans: lower resolutions can be just as convincing, and satellite imagery as a medium is inherently believable. This may be due to what we know about the expense and origin of these pictures, says Zhao. “Since most satellite images are generated by professionals or governments, the public would usually prefer to believe they are authentic.”

As part of their study, Zhao and his colleagues created software to generate deepfake satellite images, using the same basic AI method (a technique known as generative adversarial networks, or GANs) used in well-known programs like ThisPersonDoesNotExist.com. They then created detection software that was able to spot the fakes based on characteristics like texture, contrast, and color. But as experts have warned for years regarding deepfakes of people, any detection tool needs constant updates to keep up with improvements in deepfake generation.
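The detection step leans on hand-crafted cues rather than anything exotic. As a rough, minimal sketch of that idea (not the authors’ actual pipeline), the snippet below scores image tiles on simple texture, contrast, and color statistics and fits an off-the-shelf classifier; the specific features and the random-forest choice are assumptions made for illustration.

```python
# Illustrative sketch only: classify satellite tiles as real vs. fake using
# simple texture, contrast, and color statistics, in the spirit of the cues
# described by Zhao et al. The features and classifier here are assumptions,
# not the paper's actual pipeline.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def tile_features(rgb_tile: np.ndarray) -> np.ndarray:
    """Extract crude texture/contrast/color features from an HxWx3 uint8 tile."""
    tile = rgb_tile.astype(np.float32) / 255.0
    gray = tile.mean(axis=2)

    # Texture: mean magnitude of horizontal and vertical intensity gradients.
    gx = np.abs(np.diff(gray, axis=1)).mean()
    gy = np.abs(np.diff(gray, axis=0)).mean()

    # Contrast: standard deviation of intensity.
    contrast = gray.std()

    # Color: per-channel means and standard deviations.
    color_stats = np.concatenate([tile.mean(axis=(0, 1)), tile.std(axis=(0, 1))])

    return np.concatenate([[gx, gy, contrast], color_stats])

def train_detector(real_tiles, fake_tiles):
    """Fit a small classifier on labeled real (0) and fake (1) tiles."""
    X = np.stack([tile_features(t) for t in list(real_tiles) + list(fake_tiles)])
    y = np.array([0] * len(real_tiles) + [1] * len(fake_tiles))
    clf = RandomForestClassifier(n_estimators=200, random_state=0)
    clf.fit(X, y)
    return clf
```

As the researchers note, a detector like this only holds up until generators improve, which is why any such tool needs continual retraining.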

For Zhao, though, the most important thing is to raise awareness so geographers aren’t caught off-guard. As he and his colleagues write: “If we continue being unaware of and unprepared for deep fake, we run the risk of entering a ‘fake geography’ dystopia.”


Categories
Tech News

Google will now warn you if your search results are probably crap

Your Google searches for breaking news stories may now produce a surprising outcome: a warning that your results could be unreliable.

The company has started showing notifications for searches on emerging topics, which suggest that users return later when more information is available.

The notice is Google’s latest effort to mitigate misinformation in search results for breaking news. In a blog post, Danny Sullivan, public liaison for search at Google, said that sometimes reliable information isn’t online at the time that users search:

To help with this, we’ve trained our systems to detect when a topic is rapidly evolving and a range of sources hasn’t yet weighed in. We’ll now show a notice indicating that it may be best to check back later when more information from a wider range of sources might be available.
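Google hasn’t described how this detection works under the hood. Purely to illustrate the idea in Sullivan’s description, a toy heuristic might flag a topic when search interest is spiking while only a handful of distinct sources have covered it; the inputs and thresholds below are invented for the sketch and bear no relation to Google’s actual systems.

```python
# Toy heuristic only, not Google's system. Flags a topic as "rapidly evolving"
# when search interest is spiking but few distinct sources have published
# about it yet. Thresholds are invented for illustration.
def should_show_notice(hourly_query_counts: list[int],
                       distinct_source_count: int,
                       spike_ratio: float = 3.0,
                       min_sources: int = 5) -> bool:
    if len(hourly_query_counts) < 2:
        return False
    baseline = sum(hourly_query_counts[:-1]) / (len(hourly_query_counts) - 1)
    latest = hourly_query_counts[-1]
    is_spiking = baseline > 0 and latest / baseline >= spike_ratio
    few_sources = distinct_source_count < min_sources
    return is_spiking and few_sources

# Example: interest jumps from ~10 queries an hour to 80, with only 2 outlets covering it.
print(should_show_notice([12, 9, 11, 80], distinct_source_count=2))  # True
```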

The feature was first spotted by Stanford Internet Observatory researcher Renee DiResta, who described it as a “positive step.”

Google has long been criticized for letting unreliable sources and conspiracy theories reach the top of search results for rapidly evolving stories.


Twitter and Facebook have faced similar accusations. Karen North, an expert in social media at the University of Southern California, told the New York Times in 2018 that users can game ranking algorithms in these situations:

Before reliable sources put up stories, it’s a bit of a free-for-all. People who are in the business of posting sensationalized opinions about the news have learned that the sooner they put up their materials, the more likely their content will be found by an audience.

The warnings may help stem the tide of misinformation, but they could also exacerbate concerns about Google censoring alternative media outlets.





Categories
AI

AI experts warn Facebook’s anti-bias tool is ‘completely insufficient’



Facebook today published a blog post detailing Fairness Flow, an internal toolkit the company claims enables its teams to analyze how some types of AI models perform across different groups. Developed in 2018 by Facebook’s Interdisciplinary Responsible AI (RAI) team in consultation with Stanford University, the Center for Social Media Responsibility, the Brookings Institution, and the Better Business Bureau Institute for Marketplace Trust, Fairness Flow is designed to help engineers determine how the models powering Facebook’s products perform across groups of people.

The post pushes back against the notion that the RAI team is “essentially irrelevant to fixing the bigger problems of misinformation, extremism, and political polarization [on Facebook’s platform],” as MIT Tech Review’s Karen Hao wrote in an investigative report earlier this month. Hao alleges that the RAI team’s work — mitigating bias in AI — helps Facebook avoid proposed regulation that might hamper its growth. The piece also claims that the company’s leadership has repeatedly weakened or halted initiatives meant to clean up misinformation on the platform because doing so would undermine that growth.

According to Facebook, Fairness Flow works by detecting forms of statistical bias in some models and data labels commonly used at Facebook. Here, Facebook defines “bias” as systematically applying different standards to different groups of people, like when Facebook-owned Instagram’s system disabled the accounts of U.S.-based Black users 50% more often than accounts of those who were white.

Given a dataset of predictions, labels, group membership (e.g., gender or age), and other information, Fairness Flow can divide the data a model uses into subsets and estimate its performance. The tool can determine whether a model accurately ranks content for people from a specific group, for example, or whether a model under-predicts for some groups relative to others. Fairness Flow can also be used to compare annotator-provided labels with expert labels, which yields metrics showing the difficulty in labeling content from groups and the criteria used by the original labelers.
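Facebook hasn’t released Fairness Flow’s code, but the subgroup analysis described above maps onto a familiar pattern: split the data by group, then compare per-group accuracy, base rates, and mean predictions. The sketch below is a minimal, assumed version of that pattern for a binary classifier, not the tool itself; the column names and metrics are chosen for illustration.

```python
# Minimal sketch of the kind of subgroup analysis described above, not
# Fairness Flow itself. Given binary predictions, labels, and a group column,
# it reports per-group accuracy, base rate, and mean prediction, so systematic
# under-prediction for a group shows up as mean_pred < base_rate.
import pandas as pd

def per_group_report(df: pd.DataFrame,
                     pred_col: str = "pred",
                     label_col: str = "label",
                     group_col: str = "group") -> pd.DataFrame:
    rows = []
    for group, sub in df.groupby(group_col):
        rows.append({
            "group": group,
            "n": len(sub),
            "accuracy": (sub[pred_col] == sub[label_col]).mean(),
            "base_rate": sub[label_col].mean(),  # share of positive labels in the data
            "mean_pred": sub[pred_col].mean(),   # share the model predicts positive
        })
    return pd.DataFrame(rows)

# Example with hypothetical data: group A is under-predicted relative to its base rate.
df = pd.DataFrame({
    "pred":  [1, 0, 1, 0, 0, 0, 1, 1],
    "label": [1, 0, 1, 1, 0, 1, 1, 0],
    "group": ["A", "A", "A", "A", "B", "B", "B", "B"],
})
print(per_group_report(df))
```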

Facebook says its Equity Team, a product group within Instagram focused on addressing bias, uses “model cards” that leverage Fairness Flow to provide information potentially preventing models from being used “inappropriately.”  The cards include a bias assessment that could be applied to all Instagram models by the end of next year, although Facebook notes the use of Fairness Flow is currently optional.

Mike Cook, an AI researcher at the Queen Mary University of London, told VentureBeat via email that Facebook’s blog post contains “very little information” about what Fairness Flow actually does. “While it seems that the main aim of the tool is to connect the Facebook engineers’ expectations with the model’s output, … the old adage ‘garbage in, garbage out’ still holds. This tool just confirms that the garbage you’ve gotten out is consistent with the garbage you’ve put in,” he said. “In order to fix these bigger problems, Facebook needs to address the garbage part.”

Cook pointed to language in the post suggesting that because groups might have different positive rates in factual (or “ground truth”) data, bias isn’t necessarily present. In machine learning, a false positive is a case the model incorrectly labels as positive, while the true positive rate is the share of genuinely positive cases the model correctly identifies.

“One interpretation of this is that Facebook is fine with bias or prejudice, as long as it’s sufficiently systemic,” Cook said. “For example, perhaps it’s reasonable to advertise technology jobs primarily to men, if Facebook finds that mostly men click on them? That’s consistent with the standards of fairness set here, to my mind, as the system doesn’t need to take into account who wrote the advert, what the tone or message of the advert is, what the state of the company it’s advertising is, or what the inherent problems in the industry the company is based in are. It’s simply reacting to the ‘ground truth’ observable in the world.”

Indeed, a Carnegie Mellon University study published last August found evidence that Facebook’s ad platform discriminates against certain demographic groups. The company claims its written policies ban discrimination and that it uses automated controls — introduced as part of the 2019 settlement — to limit when and how advertisers target ads based on age, gender, and other attributes. But many previous studies have established that Facebook’s ad practices are at best problematic.

Facebook says Fairness Flow is available to all product teams at the company and can be applied to models even after they’re deployed in production. But Facebook admits that Fairness Flow, the use of which is optional, can only analyze certain types of models — particularly supervised models that learn from a “sufficient volume” of labeled data. Facebook chief scientist Yann LeCun recently said in an interview that removing biases from self-supervised systems, which learn from unlabeled data, might require training the model with an additional dataset curated to unteach specific biases. “It’s a complicated issue,” he told Fortune.

University of Washington AI researcher Os Keyes characterized Fairness Flow as “a very standard process,” as opposed to a novel way to address bias in models. They pointed out that Facebook’s post indicates the tool compares accuracy to a single version of “real truth” rather than assessing what “accuracy” might mean to, for instance, labelers in Dubai versus in Germany or Kosovo.

“In other words, it’s nice that [Facebook is] assessing the accuracy of their ground truths … [but] I’m curious about where their ‘subject matter experts’ are from, or on what grounds they’re subject matter experts,” Keyes told VentureBeat via email. “It’s noticeable that [the company’s] solution to the fundamental flaws in the design of monolithic technologies is a new monolithic technology. To fix code, write more code. Any awareness of the fundamentally limited nature of fairness … It’s even unclear as to whether their system can recognise the intersecting nature of multiple group identities.”

Exposés about Facebook’s approaches to fairness haven’t done much to engender trust within the AI community. A New York University study published in July 2020 estimated that Facebook’s machine learning systems make about 300,000 content moderation mistakes per day, and problematic posts continue to slip through Facebook’s filters. In one Facebook group that was created last November and rapidly grew to nearly 400,000 people, members calling for a nationwide recount of the 2020 U.S. presidential election swapped unfounded accusations about alleged election fraud and state vote counts every few seconds.

Separately, a May 2020 Wall Street Journal article brought to light an internal Facebook study that found the majority of people who join extremist groups do so because of the company’s recommendation algorithms. And in an audit of the human rights impact assessments (HRIAs) Facebook performed regarding its product and presence in Myanmar following a genocide of the Rohingya people in that country, Carr Center at Harvard University coauthors concluded that the third-party HRIA largely omitted mention of the Rohingya and failed to assess whether algorithms played a role.

Accusations of fueling political polarization and social division prompted Facebook to create a “playbook” to help its employees rebut criticism, BuzzFeed News reported in early March. In one example, Facebook CEO Mark Zuckerberg and COO Sheryl Sandberg sought to deflect blame for the Capitol Hill riot in the U.S., with Sandberg noting the role of smaller, right-leaning platforms despite the circulation of hashtags on Facebook promoting the pro-Trump rally in the days and weeks beforehand.

Facebook doesn’t perform systematic audits of its algorithms today, even though the step was recommended by a civil rights audit of Facebook completed last summer.

“The whole [Fairness Flow] toolkit can basically be summarised as, ‘We did that thing people were suggesting three years ago, we don’t even make everyone do the thing, and the whole world knows the thing is completely insufficient,’” Keyes said. “If [the blog post] is an attempt to respond to [recent criticism], it reads as more of an effort to pretend it never happened than actually address it.”



Categories
AI

Amazon’s AI-powered ‘distance assistants’ will warn workers when they get too close

Amazon, which is currently being sued for allegedly failing to protect workers from COVID-19, has unveiled a new AI tool it says will help employees follow social distancing rules.

The company’s “Distance Assistant” combines a TV screen, depth sensors, and AI-enabled camera to track employees’ movements and give them feedback in real time. When workers come closer than six feet to one another, circles around their feet flash red on the TV, indicating to employees that they should move to a safe distance apart. The devices are self-contained, meaning they can be deployed quickly where needed and moved about.
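Amazon hasn’t yet published the system’s code (an open-sourcing pledge is mentioned below), but the core check it describes reduces to a pairwise distance test over people’s positions on the floor. The sketch below assumes the camera and depth pipeline already yields each person’s (x, y) position in meters; everything else is illustrative.

```python
# Illustrative sketch, not Amazon's implementation: given each detected
# person's floor position in meters (assumed to come from the depth camera
# pipeline), flag any pair standing closer than six feet (~1.83 m) so their
# on-screen circles can be drawn red instead of green.
import math

SIX_FEET_M = 1.83

def too_close_pairs(positions: dict[str, tuple[float, float]],
                    threshold_m: float = SIX_FEET_M) -> list[tuple[str, str]]:
    ids = list(positions)
    flagged = []
    for i in range(len(ids)):
        for j in range(i + 1, len(ids)):
            if math.dist(positions[ids[i]], positions[ids[j]]) < threshold_m:
                flagged.append((ids[i], ids[j]))
    return flagged

# Example: worker "b" is within six feet of worker "a".
print(too_close_pairs({"a": (0.0, 0.0), "b": (1.2, 0.5), "c": (4.0, 4.0)}))
# [('a', 'b')]
```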

Amazon compares the system to radar speed checks which give drivers instant feedback on their driving. The assistants have been tested at a “handful” of the company’s buildings, said Brad Porter, vice president of Amazon Robotics, in a blog post, and the firm plans to roll out “hundreds” more to new locations in the coming weeks.

Importantly, Amazon also says it will be open-sourcing the technology, allowing other companies to quickly replicate and deploy these devices in a range of locations.

Amazon isn’t the only company using machine learning in this way. A large number of firms offering AI video analytics and surveillance have created similar social-distancing tools since the coronavirus outbreak began. Some startups have also turned to physical solutions, like bracelets and pendants which use Bluetooth signals to sense proximity and then buzz or beep to remind workers when they break social distancing guidelines.
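Those wearables generally infer distance from Bluetooth received signal strength (RSSI). As a rough sketch, the widely used log-distance path-loss model converts an RSSI reading into an approximate distance that can be compared against a buzz threshold; the calibration constants below are assumed, and real devices vary considerably.

```python
# Rough illustration of how a Bluetooth proximity wearable might estimate
# distance from signal strength (RSSI) using the log-distance path-loss model.
# Calibration constants differ by device and are assumed here.
def estimated_distance_m(rssi_dbm: float,
                         tx_power_dbm: float = -59.0,   # assumed RSSI at 1 m
                         path_loss_exponent: float = 2.0) -> float:
    return 10 ** ((tx_power_dbm - rssi_dbm) / (10 * path_loss_exponent))

def should_buzz(rssi_dbm: float, threshold_m: float = 1.83) -> bool:
    """Buzz when another tag appears to be within roughly six feet."""
    return estimated_distance_m(rssi_dbm) < threshold_m

# Example: -65 dBm implies roughly 2 m (no buzz); -60 dBm implies about 1.1 m (buzz).
print(should_buzz(-65.0), should_buzz(-60.0))  # False True
```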

Although these solutions may be necessary for workers to return to busy facilities like warehouses, many privacy experts worry their introduction will normalize greater levels of surveillance. Many of these systems will produce detailed data on workers’ movements throughout the day, allowing managers to hound employees in the name of productivity. Workers will also have no choice but to be tracked in this way if they want to keep their jobs.

Amazon’s involvement in this sort of technology will raise suspicions, as the company is often criticized for the grueling working conditions in its facilities. In 2018, it even patented a wristband that would track workers’ movements in real time, not just directing which task they should do next but also detecting whether their hands are moving toward the wrong shelf or bin.

The company’s description of the Distance Assistant as a “standalone unit” that only requires power suggests it’s not storing any data about workers’ movements, but we’ve contacted the company to confirm what information, if any, might be retained.


Categories
Security

A cybercrime group is targeting US hospitals, federal agencies warn

Federal agencies warned hospitals, health care providers, and public health groups Wednesday that they were at risk of an “increased and imminent cybercrime threat” from ransomware, which could paralyze their computer systems and make it hard for them to deliver care. At least four hospitals have reported cyberattacks this week, and hundreds more could be at risk.

This could be “the biggest attack we’ve ever seen,” Allan Liska, an intelligence analyst for the firm Recorded Future, told CNN.

The attacks come as hospitals across the country are struggling to handle spikes in COVID-19 cases. Ransomware attacks shut down hospital computer systems, often forcing them to turn to pen and paper charts and sometimes locking them out of systems they need to run tests or scans on patients. If surges in coronavirus patients are already slowing down hospital operations and forcing some places to send patients away, a cyberattack could only make things worse.

These types of attacks have steadily increased over the past few years, and experts consistently warn that the systems health care organizations use are vulnerable.

Security experts believe a Russian-speaking group known as UNC1878 is behind the current attack. They’re financially motivated, and “one of the most brazen, heartless, and disruptive threat actors I’ve observed over my career,” Charles Carmakal, chief technical officer of the cybersecurity firm Mandiant, told Reuters.

Despite pledges from some cybercrime groups to avoid hospitals during the COVID-19 pandemic, attacks have continued. Universal Health Services, a chain of hundreds of hospitals across the US, was struck by a cyberattack last month. In Germany, a woman died in what is believed to be the first fatality directly attributed to a hospital cyberattack.


