
Facebook removes ‘deepfake’ of Ukrainian President Zelenskyy

On Wednesday, Facebook’s parent company, Meta, removed a deepfake video of Ukrainian President Volodymyr Zelenskyy issuing a statement that he never made, asking Ukrainians to “lay down arms.”

The deepfake appears to have first been broadcast on the website of Ukrainian news outlet TV24 after an alleged hack, as first reported by Sky News on Wednesday. The video shows an edited Zelenskyy speaking from behind a podium, declaring that Ukraine has “decided to return Donbas” to Russia and that his nation’s war effort has failed.

In the video, Zelenskyy’s head is comically larger than in real life and more pixelated than the body beneath it. The fake voice is also much deeper than his real one.

Meta’s head of security policy, Nathaniel Gleicher, put out a tweet thread on Wednesday announcing that the video had been removed from the company’s platforms. “Earlier today, our teams identified and removed a deepfake video claiming to show President Zelensky issuing a statement he never did. It appeared on a reportedly compromised website and then started showing across the internet,” Gleicher said.

Earlier this month, the Ukrainian government issued a statement warning soldiers and civilians to be wary of videos of Zelenskyy they encounter online, especially any in which he announces a surrender to the Russian invasion. In the statement, the Ukrainian Center for Strategic Communications said that the Russian government would likely use deepfakes to convince Ukrainians to surrender.

“Videos made through such technologies are almost impossible to distinguish from the real ones. Be aware – this is a fake! His goal is to disorient, sow panic, disbelieve citizens and incite our troops to retreat,” the statement said. “Rest assured – Ukraine will not capitulate!”

After the deepfake started to circulate across the internet, Zelenskyy posted a video to his official Instagram account debunking the video. “As for the latest childish provocation with advice to lay down arms, I only advise that the troops of the Russian Federation lay down their arms and return home,” he said. “We are at home and defending Ukraine.”

Facebook banned deepfakes and other manipulated videos from its platforms in 2020 ahead of the US presidential election. The policy includes content created by artificial intelligence or machine learning algorithms that could “likely mislead” users.




Liveness tests used by banks to verify ID are ‘extremely vulnerable’ to deepfake attacks

Automated “liveness tests” used by banks and other institutions to help verify users’ identity can be easily fooled by deepfakes, a new report demonstrates.

Security firm Sensity, which specializes in spotting attacks that use AI-generated faces, probed the vulnerability of identity tests provided by 10 top vendors. Sensity used deepfakes to copy a target’s face onto an ID card for scanning, then mapped that same face onto a would-be attacker’s video stream in order to pass the vendors’ liveness tests.

Liveness tests generally ask someone to look into a camera on their phone or laptop, sometimes turning their head or smiling, both to prove that they’re a real person and to compare their appearance to their ID using facial recognition. In the financial world, such checks are often known as KYC, or “know your customer” tests, and can form part of a wider verification process that includes document and bill checks.
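
Sensity’s report doesn’t describe any vendor’s internals, but the general shape of a purely visual check can be sketched. Below is a minimal, hypothetical illustration in Python (the face_embedding stub stands in for a real face-recognition model, and the thresholds are arbitrary). It shows why such a check is exposed: both the liveness and the identity decisions rest entirely on the frames the client supplies, so an attacker who can inject a deepfaked video stream can satisfy both.

```python
# Hypothetical sketch of a visual-only KYC/liveness check. Real vendor systems
# are more elaborate; the point here is that every decision depends on the
# pixels the client sends, which is why an injected deepfake stream can pass.
from dataclasses import dataclass
from typing import Sequence

import numpy as np


@dataclass
class KycResult:
    is_live: bool      # did the face move as requested (head turn, smile)?
    matches_id: bool   # does the selfie video match the photo on the ID?


def face_embedding(image: np.ndarray) -> np.ndarray:
    """Stand-in for a real face-recognition model: flatten and L2-normalize."""
    vec = image.astype(np.float32).ravel()
    return vec / (np.linalg.norm(vec) + 1e-9)


def run_kyc_check(id_photo: np.ndarray,
                  video_frames: Sequence[np.ndarray],
                  motion_threshold: float = 1.0,
                  match_threshold: float = 0.8) -> KycResult:
    # "Liveness": crude check that the face actually moves between frames.
    # A deepfake pasted onto a live attacker's video moves too, so this passes.
    deltas = [float(np.mean(np.abs(b.astype(np.float32) - a.astype(np.float32))))
              for a, b in zip(video_frames, video_frames[1:])]
    is_live = max(deltas, default=0.0) > motion_threshold

    # Identity match: cosine similarity between the ID photo and a video frame.
    # (The toy embedding assumes the ID crop and frames share one resolution.)
    sim = float(np.dot(face_embedding(id_photo), face_embedding(video_frames[0])))
    return KycResult(is_live=is_live, matches_id=sim >= match_threshold)
```

A depth-based system such as Apple’s Face ID, mentioned later in this piece, avoids that failure mode, because an attacker cannot inject the physical shape of a face along with the pixels.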

“We tested 10 solutions and we found that nine of them were extremely vulnerable to deepfake attacks,” Sensity’s chief operating officer, Francesco Cavalli, told The Verge.

“There’s a new generation of AI power that can pose serious threats to companies,” says Cavalli. “Imagine what you can do with fake accounts created with these techniques. And no one is able to detect them.”

Sensity shared the identity of the enterprise vendors it tested with The Verge, but it requested that the names not be published for legal reasons. Cavalli says Sensity signed non-disclosure agreements with some of the vendors and, in other cases, fears it may have violated companies’ terms of service by testing their software in this way.

Cavalli also says he was disappointed by the reaction from vendors, who did not seem to consider the attacks significant. “We told them ‘look you’re vulnerable to this kind of attack,’ and they said ‘we do not care,’” he says. “We decided to publish it because we think, at a corporate level and in general, the public should be aware of these threats.”

The vendors Sensity tested sell these liveness checks to a range of clients, including banks, dating apps, and cryptocurrency startups. One vendor was even used to verify the identity of voters in a recent national election in Africa. (Though there’s no suggestion from Sensity’s report that this process was compromised by deepfakes.)

Cavalli says such deepfake identity spoofs are primarily a danger to the banking system where they can be used to facilitate fraud. “I can create an account; I can move illegal money into digital bank accounts of crypto wallets,” says Cavalli. “Or maybe I can ask for a mortgage because today online lending companies are competing with one another to issue loans as fast as possible.”

This is not the first time deepfakes have been identified as a danger to facial recognition systems. They’re primarily a threat when the attacker can hijack the video feed from a phone or camera, a relatively simple task. However, facial recognition systems that use depth sensors — like Apple’s Face ID — cannot be fooled by these sorts of attacks, as they verify identity not only based on visual appearance but also the physical shape of a person’s face.


Adobe has built a deepfake tool, but it doesn’t know what to do with it

Deepfakes have made a huge impact on the world of image, audio, and video editing, so why isn’t Adobe, corporate behemoth of the content world, getting more involved? Well, the short answer is that it is — but slowly and carefully. At the company’s annual Max conference today, it unveiled a prototype tool named Project Morpheus that demonstrates both the potential and problems of integrating deepfake techniques into its products.

Project Morpheus is basically a video version of the company’s Neural Filters, introduced in Photoshop last year. These filters use machine learning to adjust a subject’s appearance, tweaking things like their age, hair color, and facial expression (to change a look of surprise into one of anger, for example). Morpheus brings all those same adjustments to video content while adding a few new filters, like the ability to change facial hair and glasses. Think of it as a character creation screen for humans.

The results are definitely not flawless and are very limited in scope in relation to the wider world of deepfakes. You can only make small, pre-ordained tweaks to the appearance of people facing the camera, and can’t do things like face swaps, for example. But the quality will improve fast, and while the feature is just a prototype for now with no guarantee it will appear in Adobe software, it’s clearly something the company is investigating seriously.

What Project Morpheus also is, though, is a deepfake tool — which is potentially a problem. A big one. Because deepfakes and all that’s associated with them — from nonconsensual pornography to political propaganda — aren’t exactly good for business.

Now, given the looseness with which we define deepfakes these days, Adobe has arguably been making such tools for years. These include the aforementioned Neural Filters, as well as more functional tools like AI-assisted masking and segmentation. But Project Morpheus is obviously much more deepfakey than the company’s earlier efforts. It’s all about editing video footage of humans — in ways that many will likely find uncanny or manipulative.

Changing someone’s facial expression in a video, for example, might be used by a director to punch up a bad take, but it could also be used to create political propaganda — e.g. making a jailed dissident appear relaxed in court footage when they’re really being starved to death. It’s what policy wonks refer to as a “dual-use technology,” which is a snappy way of saying that the tech is “sometimes maybe good, sometimes maybe shit.”

This, no doubt, is why Adobe didn’t once use the word “deepfake” to describe the technology in any of the briefing materials it sent to The Verge. And when we asked why this was, the company didn’t answer directly but instead gave a long answer about how seriously it takes the threats posed by deepfakes and what it’s doing about them.

Adobe’s efforts in these areas seem involved and sincere (they’re mostly focused on content authentication schemes), but they don’t mitigate a commercial problem facing the company: that the same deepfake tools that would be most useful to its customer base are those that are also potentially most destructive.

Take, for example, the ability to paste someone’s face onto someone else’s body — arguably the ur-deepfake application that started all this bother. You might want such a face swap for legitimate reasons, like licensing Bruce Willis’ likeness for a series of mobile ads in Russia. But you might also be creating nonconsensual pornography to harass, intimidate, or blackmail someone (by far the most common malicious application of this technology).

Regardless of your intent, if you want to create this sort of deepfake, you have plenty of options, none of which come from Adobe. You can hire a boutique deepfake content studio, wrangle with some open-source software, or, if you don’t mind your face swaps being limited to preapproved memes and gifs, you can download an app. What you can’t do is fire up Adobe Premiere or After Effects. So will that change in the future?

It’s impossible to say for sure, but I think it’s definitely a possibility. After all, Adobe survived the advent of “Photoshopped” becoming shorthand for digitally edited images in general, and often with negative connotations. And for better or worse, deepfakes are slowly losing their own negative associations as they’re adopted in more mainstream projects. Project Morpheus is a deepfake tool with some serious guardrails (you can only make prescribed changes and there’s no face-swapping, for example), but it shows that Adobe is determined to explore this territory, presumably while gauging reactions from the industry and public.

It’s fitting that as “deepfake” has replaced “Photoshopped” as the go-to accusation of fakery in the public sphere, Adobe is perhaps feeling left out. Project Morpheus suggests it may well catch up soon.


Deepfake dubs could help translate film and TV without losing an actor’s original performance

What exactly is lost in translation when TV shows and films are subbed or dubbed into a new language? It’s a hard question to answer, but for the team at AI startup Flawless, it may be one we don’t have to think about in the future. The company claims it has a solution to this particular language barrier: deepfake dubs, a technical innovation that could help TV shows and films effortlessly reach new markets around the world.

We often think of deepfakes as manipulating the entire image of a person or scene, but Flawless’ technology focuses on just a single element: the mouth. Customers feed the company’s software with video from a film or TV show along with dubbed dialogue recorded by humans. Flawless’ machine learning models then create new lip movements that match the translated speech and paste them automatically onto the actor’s head.

“When someone’s watching this dubbed footage, they’re not jolted out of the performance by a jarring word or a mistimed mouth movement,” Flawless’ co-founder Nick Lynes tells The Verge. “It’s all about retaining the performance and retaining the original style.”

The results — despite the company’s name — aren’t 100 percent flawless, but they are pretty good. You can see and hear how they look in the demo reel below, which features a French dub of the classic 1992 legal drama A Few Good Men, starring Jack Nicholson and Tom Cruise. We asked a native French speaker what they made of the footage, and they said it was off in a few places but still a lot smoother than traditional dubbing.

What makes Flawless’ technology particularly interesting is its potential to scale. Flawless’ pitch is that deepfake dubs offer tremendous value for money: they’re cheap and quick to create, especially when compared to the cost of full remakes. And, with the advent of global streaming platforms like Netflix, Disney Plus, and Amazon Prime Video, it’s easier than ever for such dubbed content to reach international markets.

As a recent report in The Wall Street Journal highlighted, the US market for streaming services is saturated and companies are now looking abroad for future growth. In the first quarter of 2021, for example, 89 percent of new Netflix users came from outside the US and Canada, while the service’s most watched show, Lupin, is a Parisian thriller.

“What you’re seeing is more and more streamers come online realizing the vast majority of their consumers are going to be outside the US, over time,” Erik Barmack, a former Netflix executive responsible for the company’s international productions, told the WSJ. “The question is how international does your content need to be to be successful.”

As Barmack suggests, there are different ways to answer this demand. You can create shows with local flavor that still entertain domestic viewers. You can do remakes of local hits for new audiences. And you can roll out the subs and dubs. But Flawless is betting that its technology provides a new option that will be particularly enticing for filmmakers.

This is because the company’s deepfake dubs preserve, to some degree, the performance of the original actor, says Lynes. Flawless’ technology is based on research from the Max Planck Institute for Informatics first published in 2019. As you can see in a showcase video below, the dubs it produces are somewhat sensitive to the facial expressions of the performers, retaining their emotion and line delivery.

Flawless has developed these techniques over the past three years, says Lynes, speeding up production time and reducing the amount of input footage. The end results are still a balance of automated dubbing and manual retouching (about 85 percent to 15 percent) but speedy to edit. “If something comes out we don’t particularly like we’ll do a few iterations; resubmit the training data in different forms and get another result,” says Lynes.

The company hopes that preserving the original performance will be appealing to filmmakers who want to retain the magic of their original casting. Lynes gives the example of Another Round, the Oscar-winning 2020 Danish film that stars Mads Mikkelsen as one of a group of teachers who experiment with low-level alcoholism to see if it improves their lives. After its success at home and on the international award circuit, the film is set to be remade for English-language audiences with Leonardo DiCaprio in the main role.

The news sparked discussion about the value of such remakes. Is the Danish drinking culture that forms the film’s backbone really so alien to American audiences that a remake is required? Is Mikkelsen, an actor who’s appeared in such mainstream fare as Hannibal, Doctor Strange, and Rogue One, such an unknown that he can’t attract viewers in the US? And is the “one-inch barrier” of subtitles (to quote Parasite director Bong Joon Ho) simply too much for audiences to overcome?

From Lynes’ point of view, a deepfake dub would at least be a cheaper way to bring Another Round to English-language audiences while retaining its original flavor. “If we’re offering something that’s two percent the cost of the remake, we only need to be half as appealing to offer 10 times better value,” he says.

Those in charge of the remake will have concerns other than money, of course. No matter how beloved Mikkelsen is, he’s not as bankable as DiCaprio. But Lynes hopes that as deepfake dubs become common, they’ll change the calculations for such remakes in the future. More than that, he says, the technology could even reshape the international film landscape, allowing actors and directors to reach new audiences with minimal effort.

“I think the pulling power of actors will change globally as a consequence of this technology,” he says. “Different people’s performances and directors’ choices will be better recognized, because a wider audience will be able to see them.”

Perhaps so, but for the moment, Flawless needs to prove that audiences actually want its technology. The company, which launched earlier this month, says it’s already got a first contract with a client it can’t name, but there’s no timeline for when we might see its wares in a commercial TV show or film, and that will be the real test. The proof is in the dubbing.


Deepfake satellite imagery poses a not-so-distant threat, warn geographers

When we think of deepfakes, we tend to imagine AI-generated people. This might be lighthearted, like a deepfake Tom Cruise, or malicious, like nonconsensual pornography. What we don’t imagine is deepfake geography: AI-generated images of cityscapes and countryside. But that’s exactly what some researchers are worried about.

Specifically, geographers are concerned about the spread of fake, AI-generated satellite imagery. Such pictures could mislead in a variety of ways. They could be used to create hoaxes about wildfires or floods, or to discredit stories based on real satellite imagery. (Think of reports on China’s Uyghur detention camps that gained credence from satellite evidence. If geographic deepfakes become widespread, the Chinese government could claim those images are fake, too.) Deepfake geography might even become a national security issue, as geopolitical adversaries use fake satellite imagery to mislead foes.

The US military warned about this very prospect in 2019. Todd Myers, an analyst at the National Geospatial-Intelligence Agency, imagined a scenario in which military planning software is fooled by fake data that shows a bridge in an incorrect location. “So from a tactical perspective or mission planning, you train your forces to go a certain route, toward a bridge, but it’s not there. Then there’s a big surprise waiting for you,” said Myers.

The first step to tackling these issues is to make people aware there’s a problem in the first place, says Bo Zhao, an assistant professor of geography at the University of Washington. Zhao and his colleagues recently published a paper on the subject of “deep fake geography,” which includes their own experiments generating and detecting this imagery.

Bo Zhao and his colleagues at the University of Washington were able to create their own AI-generated satellite imagery.
Image: ‘Deep fake geography? When geospatial data encounter Artificial Intelligence,’ Zhao et al

The aim, Zhao tells The Verge over email, “is to demystify the function of absolute reliability of satellite images and to raise public awareness of the potential influence of deep fake geography.” He says that although deepfakes are widely discussed in other fields, his paper is likely the first to touch upon the topic in geography.

“While many GIS [geographic information system] practitioners have been celebrating the technical merits of deep learning and other types of AI for geographical problem solving, few have publicly recognized or criticized the potential threats of deep fake to the field of geography or beyond,” write the authors.

Far from presenting deepfakes as a novel challenge, Zhao and his colleagues locate the technology in a long history of fake geography that dates back millennia. Humans have been lying with maps for pretty much as long as maps have existed, they say, from mythological geographies devised by ancient civilizations like the Babylonians, to modern propaganda maps distributed during wartime “to shake the enemy’s morale.”

One particularly curious example comes from so-called “paper towns” and “trap streets.” These are fake settlements and roads inserted by cartographers into maps in order to catch rivals stealing their work. If anyone produces a map which includes your very own Fakesville, Ohio, you know — and can prove — that they’re copying your cartography.

“It is a centuries-old phenomenon,” says Zhao of fake geography, though new technology produces new challenges. “It is novel partially because the deepfaked satellite images are so uncannily realistic. The untrained eyes would easily consider they are authentic.”

It’s certainly easier to produce fake satellite imagery than fake videos of humans. Lower resolutions can be just as convincing, and satellite imagery as a medium is inherently believable. This may be due to what we know about the expense and origin of these pictures, says Zhao. “Since most satellite images are generated by professionals or governments, the public would usually prefer to believe they are authentic.”

As part of their study, Zhao and his colleagues created software to generate deepfake satellite images, using the same basic AI method (a technique known as generative adversarial networks, or GANs) used in well-known programs like ThisPersonDoesNotExist.com. They then created detection software that was able to spot the fakes based on characteristics like texture, contrast, and color. But as experts have warned for years regarding deepfakes of people, any detection tool needs constant updates to keep up with improvements in deepfake generation.
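
The paper’s detection code isn’t reproduced here, but the idea of classifying imagery on simple texture, contrast, and color statistics can be illustrated. The following sketch is hypothetical and greatly simplified: it extracts a small hand-crafted feature vector per tile and trains a logistic-regression classifier, with random arrays standing in for labelled real and GAN-generated tiles.

```python
# Simplified, hypothetical illustration of detecting fake satellite tiles from
# texture, contrast, and color statistics. Not the authors' detector: their
# system is more sophisticated, and real labelled imagery would be required.
import numpy as np
from sklearn.linear_model import LogisticRegression


def tile_features(img: np.ndarray) -> np.ndarray:
    """img: H x W x 3 uint8 satellite tile. Returns a small feature vector."""
    gray = img.mean(axis=2)
    gy, gx = np.gradient(gray)
    texture = np.mean(np.hypot(gx, gy))        # mean local gradient magnitude
    contrast = gray.std()                      # luminance spread
    channels = img.reshape(-1, 3).astype(np.float32)
    color = np.concatenate([channels.mean(axis=0), channels.std(axis=0)])
    return np.concatenate([[texture, contrast], color])


def train_detector(tiles, labels):
    """tiles: list of H x W x 3 arrays; labels: 1 = fake, 0 = real."""
    X = np.stack([tile_features(t) for t in tiles])
    return LogisticRegression(max_iter=1000).fit(X, labels)


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Random arrays used purely as placeholders for labelled imagery.
    tiles = [rng.integers(0, 256, (64, 64, 3), dtype=np.uint8) for _ in range(40)]
    labels = [i % 2 for i in range(40)]
    clf = train_detector(tiles, labels)
    print(clf.predict(tile_features(tiles[0]).reshape(1, -1)))
```

As the researchers warn, any such detector needs constant retraining as generation techniques improve; hand-crafted features like these are quickly outpaced.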

For Zhao, though, the most important thing is to raise awareness so geographers aren’t caught off guard. As he and his colleagues write: “If we continue being unaware of and unprepared for deep fake, we run the risk of entering a ‘fake geography’ dystopia.”


TikTok Tom Cruise deepfake creator: public shouldn’t worry about ‘one-click fakes’

When a series of spookily convincing Tom Cruise deepfakes went viral on TikTok, some suggested it was a chilling sign of things to come — a harbinger of an era in which AI will let anyone make fake videos of anyone else. The videos’ creator, Belgian VFX specialist Chris Ume, says this is far from the case, though. Speaking to The Verge about his viral clips, Ume stresses the amount of time and effort that went into making each deepfake, as well as the importance of working with a top-flight Tom Cruise impersonator, Miles Fisher.

“You can’t do it by just pressing a button,” says Ume. “That’s important, that’s a message I want to tell people.” Each clip took weeks of work, he says, using the open-source DeepFaceLab algorithm as well as established video editing tools. “By combining traditional CGI and VFX with deepfakes, it makes it better. I make sure you don’t see any of the glitches.”

Ume has been working with deepfakes for years, including creating the effects for the “Sassy Justice” series made by South Park’s Trey Parker and Matt Stone. He started working on Cruise when he saw a video by Fisher announcing a fictitious run for president by the Hollywood star. The pair then worked together on a follow-up and decided to put a series of “harmless” clips up on TikTok. Their account, @deeptomcruise, quickly racked up tens of thousands of followers and likes. Ume pulled the videos briefly but then restored them.

“It’s fulfilled its purpose,” he says of the account. “We had fun. I created awareness. I showed my skills. We made people smile. And that’s it, the project is done.” A spokesperson from TikTok told The Verge that the account was well within its rules for parody uses of deepfakes, and Ume notes that Cruise — the real Tom Cruise — has since made his own official account, perhaps as a result of seeing his AI doppelgänger go viral.

Deepfake technology has been developing for years now, and there’s no doubt that the results are getting more realistic and easier to make. Although there has been much speculation about the potential harm such technology could cause in politics, so far these effects have been relatively nonexistent. Where the technology is definitely causing damage is in the creation of revenge porn or nonconsensual pornography of women. In those cases, the fake videos or images don’t have to be realistic to create tremendous damage. Simply threatening someone with the release of fake imagery, or creating rumors about the existence of such content, can be enough to ruin reputations and careers.

The Tom Cruise fakes, though, show a much more beneficial use of the technology: as another part of the CGI toolkit. Ume says there are so many uses for deepfakes, from dubbing actors in film and TV, to restoring old footage, to animating CGI characters. What he stresses, though, is the incompleteness of the technology operating by itself.

Creating the fakes took two months of training the base AI models (on a pair of NVIDIA RTX 8000 GPUs) on footage of Cruise, plus days of further processing for each clip. After that, Ume had to go through each video, frame by frame, making small adjustments to sell the overall effect: smoothing a line here and covering up a glitch there. “The most difficult thing is making it look alive,” he says. “You can see it in the eyes when it’s not right.”
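
Ume’s exact cleanup process isn’t public. As a rough illustration of the kind of per-frame compositing work involved, the sketch below (a generic OpenCV operation, not his pipeline) pastes a generated face crop back onto an original frame with a feathered mask so that hard seams, one common source of visible glitches, are softened.

```python
# Generic per-frame compositing step: blend a generated face crop onto the
# original frame using a feathered (blurred) mask to hide hard seams.
# Illustrative only; this is standard VFX plumbing, not Ume's actual tooling.
import cv2
import numpy as np


def feathered_paste(frame: np.ndarray, face: np.ndarray, mask: np.ndarray,
                    x: int, y: int, feather_px: int = 15) -> np.ndarray:
    """frame: full frame (H x W x 3 uint8). face: generated crop (h x w x 3).
    mask: h x w uint8, 255 where the generated face is valid.
    (x, y): top-left position of the crop within the frame."""
    h, w = face.shape[:2]
    k = feather_px * 2 + 1                       # Gaussian kernel must be odd
    alpha = cv2.GaussianBlur(mask.astype(np.float32) / 255.0, (k, k), 0)[..., None]
    out = frame.astype(np.float32).copy()
    region = out[y:y + h, x:x + w]
    out[y:y + h, x:x + w] = alpha * face.astype(np.float32) + (1.0 - alpha) * region
    return np.clip(out, 0, 255).astype(np.uint8)
```

In a real workflow a blend like this would be applied frame by frame, with manual mask tweaks wherever the model’s output breaks down, which is where the weeks of work go.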

Ume says a huge amount of credit goes to Fisher, a TV and film actor who captured the exaggerated mannerisms of Cruise, from his manic laugh to his intense delivery. “He’s a really talented actor,” says Ume. “I just do the visual stuff.” Even then, if you look closely, you can still see moments where the illusion fails, as in the clip below, where Fisher’s eyes and mouth glitch for a second as he puts the sunglasses on.

Blink and you’ll miss it: look closely and you can see Fisher’s mouth and eye glitch.
GIF: The Verge

Although Ume’s point is that his deepfakes take a lot of work and a professional impersonator, it’s also clear that the technology will improve over time. Exactly how easy it will be to make seamless fakes in the future is difficult to predict, and experts are busy developing tools that can automatically identify fakes or verify unedited footage.

Ume, though, says he isn’t too worried about the future. We’ve developed such technology before and society’s conception of truth has more or less survived. “It’s like Photoshop 20 years ago, people didn’t know what photo editing was, and now they know about these fakes,” he says. As deepfakes become more and more of a staple in TV and movies, people’s expectations will change, as they did for imagery in the age of Photoshop. One thing’s for certain, says Ume, and it’s that the genie can’t be put back in the bottle. “Deepfakes are here to stay,” he says. “Everyone believes in it.”

Update March 5th, 12:11PM ET: Updated to note that Ume and Fisher have now restored the videos to the @deeptomcruise TikTok account.


Project Gucciberg offers classic audiobooks read by an AI deepfake of Gucci Mane

Ever wanted to have Leo Tolstoy’s Anna Karenina or Franz Kafka’s Metamorphosis read to you by trap god Gucci Mane, creator of such hits as “Lemonade” and “Wasted”? Well, a) that’s an awfully specific desire, and b) it’s your lucky day.

Project Gucciberg is the latest drop from viral factory MSCHF, and it does exactly that. Using machine learning, MSCHF created an audio deepfake of Gucci Mane reading a selection of classic texts from Little Women to Beowulf. They’re all free to listen to and come with book covers that blend in perfectly with the artwork of Gucci Mane’s prolific discography.

The what of Project Gucciberg is luridly straightforward, but the why is harder to answer. If you’re not familiar with MSCHF, I recommend our profile of the outfit from last year. Essentially, they’re a group of VC-funded creators who make weird things designed to go viral online, like squeaky chicken bongs and Air Max 97 sneakers filled with water from the River Jordan, some of which are sold for a nominal fee. Then they ??? and profit (presumably by selling their services to companies who want things they made to go viral online).

Speaking to The Verge, MSCHF’s Dan Greenberg didn’t go into the motivation behind Project Gucciberg but was more than happy to talk about the mechanics. Audio deepfakes are now pretty common (listen to this clone of Joe Rogan for a good example), to the point where they’ve been used to commit fraud. To make one, you just need a lot of sample data of your target speaking and the right neural networks to learn and copy their mannerisms.

Greenberg says MSCHF collected around six hours of audio of Gucci Mane talking from podcasts, interviews, and the like. They then created transcriptions of the clips to help with the text-to-speech (TTS) process. This required creating a “Gucci pronunciation key/dictionary to better capture the idiosyncrasies of Gucci Mane’s particular argot.”

The redesigned book covers of Project Gucciberg are a delight to behold.

“Gucci’s pronunciation follows a very particular cadence — he uses a much greater variety of vowel sounds, for instance, than your average TTS reader would,” says Greenberg. “The dictionary breaks words up into phonemes (discrete vocal gestures) that our model then uses as building blocks … So for a simple example, we need our model to know what syllables to elide, or flow into each other across words: it needs to know to say “talm ‘bout” not ‘talking about,’ and the Gucci dictionary { T AH1 L M B AW1 T } gets us there where the written words ‘talking about’ do not.”
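
MSCHF hasn’t published its dictionary or TTS front end, but the mechanism Greenberg describes can be sketched. The snippet below is a hypothetical illustration: a small override dictionary maps written phrases to ARPAbet-style phoneme sequences. Only the “talking about” entry comes from the article; the fallback function is a trivial stand-in for a real grapheme-to-phoneme library.

```python
# Hypothetical sketch of a custom pronunciation dictionary applied before
# text-to-speech. Only the "talking about" entry is taken from the article;
# everything else is invented for illustration.
from typing import Callable, Dict, List

# Phrase -> ARPAbet-style phoneme sequence.
GUCCI_DICT: Dict[str, List[str]] = {
    "talking about": ["T", "AH1", "L", "M", "B", "AW1", "T"],  # "talm 'bout"
}


def to_phonemes(text: str,
                overrides: Dict[str, List[str]],
                g2p: Callable[[str], List[str]]) -> List[str]:
    """Replace any phrase found in the override dictionary with its custom
    phonemes; hand the rest of the text to a standard grapheme-to-phoneme
    function (g2p), e.g. one backed by a pronunciation lexicon."""
    lowered = text.lower()
    for phrase, phones in overrides.items():
        if phrase in lowered:
            before, _, after = lowered.partition(phrase)
            return g2p(before) + phones + g2p(after)
    return g2p(lowered)


if __name__ == "__main__":
    # Trivial stand-in for a real g2p backend: one fake "phoneme" per word.
    naive_g2p = lambda s: [w.upper() for w in s.split()]
    print(to_phonemes("He was talking about the new album", GUCCI_DICT, naive_g2p))
    # -> ['HE', 'WAS', 'T', 'AH1', 'L', 'M', 'B', 'AW1', 'T', 'THE', 'NEW', 'ALBUM']
```

The phoneme sequence is then what the TTS model actually voices, which is how “talking about” comes out as “talm ’bout” rather than the dictionary pronunciation.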

The results are impressive: the deepfake certainly sounds like the man himself, though the results are not always totally coherent or of the greatest quality. “Our fake Gucci Mane often sounds like he’s speaking through a bad mic, or over a low-quality internet stream, and part of this is because in the training data he often is doing exactly that,” says Greenberg.

Exactly why Gucci was chosen for this project came down to two factors, says Greenberg: one, the rapper has a distinctive voice, and two, the Project Gucciberg pun was too delicious to ignore.

Greenberg adds that MSCHF didn’t approach Gucci to ask for permission to use his voice. As a disclaimer on the site slyly points out, the whole project raises interesting questions about copyright in the age of AI fakes. “We didn’t write the books, and we deepfaked the voice,” it says. “Is this copyright infringement? Is it identity theft? All of the training data (recordings) used to make Project Gucciberg were publicly available on the web. Gucciberg lives in that lovely grey area where everything’s new and anything goes.” It certainly is! The Verge has attempted to reach out to Gucci Mane via his record label for a response, and we’ll update this story if we hear back.

Is Project Gucciberg anything more than a quick click and a lol? Well, not really. But that’s MSCHF’s business, and they’re very good at it. While listening to more than a few minutes of the resulting audio is a little disorientating, Greenberg suggests there may be unique benefits to the coming world of on-demand deepfake celebrity audiobooks.

“Every once in a while … the extreme casualness of Gucci Mane’s narration really does put the text in a new light,” he says, speaking about the benefits of listening to the deepfake version of Kafka’s Metamorphosis. “Gregor Samsa really comes across as just another guy who doesn’t want to get out of bed, you know?”


Deepfake detectors and datasets exhibit racial and gender bias, USC study shows



Some experts have expressed concern that machine learning tools could be used to create deepfakes, or videos that take a person in an existing video and replace them with someone else’s likeness. The fear is that these fakes might be used to do things like sway opinion during an election or implicate a person in a crime. Already, deepfakes have been abused to generate pornographic material of actors and defraud a major energy producer.

Fortunately, efforts are underway to develop automated methods to detect deepfakes. Facebook — along with Amazon and Microsoft, among others — spearheaded the Deepfake Detection Challenge, which ended last June. The challenge’s launch came after the release of a large corpus of visual deepfakes produced in collaboration with Jigsaw, Google’s internal technology incubator, which was incorporated into a benchmark made freely available to researchers for synthetic video detection system development. More recently, Microsoft launched its own deepfake-combating solution in Video Authenticator, a system that can analyze a still photo or video to provide a score for its level of confidence that the media hasn’t been artificially manipulated.

But according to researchers at the University of Southern California, some of the datasets used to train deepfake detection systems might underrepresent people of a certain gender or with specific skin colors. This bias can be amplified in deepfake detectors, the coauthors say, with some detectors showing up to a 10.7% difference in error rate depending on the racial group.

Biased deepfake detectors

The results, while surprising, are in line with previous research showing that computer vision models are susceptible to harmful, pervasive prejudice. A paper last fall by University of Colorado, Boulder researchers demonstrated that AI from Amazon, Clarifai, Microsoft, and others maintained accuracy rates above 95% for cisgender men and women but misidentified trans men as women 38% of the time. Independent benchmarks of major vendors’ systems by the Gender Shades project and the National Institute of Standards and Technology (NIST) have demonstrated that facial recognition technology exhibits racial and gender bias and have suggested that current facial recognition programs can be wildly inaccurate, misclassifying people upwards of 96% of the time.

The University of Southern California group looked at three deepfake detection models with “proven success in detecting deepfake videos.” All were trained on the FaceForensics++ dataset, which is commonly used for deepfake detectors, as well as corpora including Google’s DeepfakeDetection, CelebDF, and DeeperForensics-1.0.

In a benchmark test, the researchers found that all of the detectors performed worst on videos with darker Black faces, especially male Black faces. Videos with female Asian faces had the highest accuracy, but depending on the dataset, the detectors also performed well on Caucasian (particularly male) and Indian faces.

According to the researchers, the deepfake detection datasets were “strongly” imbalanced in terms of gender and racial groups, with FaceForensics++ sample videos showing over 58% (mostly white) women compared with 41.7% men. Less than 5% of the real videos showed Black or Indian people, and the datasets contained “irregular swaps,” where a person’s face was swapped onto another person of a different race or gender.

These irregular swaps, while intended to mitigate bias, are in fact to blame for at least a portion of the bias in the detectors, the coauthors hypothesize. Trained on the datasets, the detectors learned correlations between fakeness and, for example, Asian facial features. One corpus used Asian faces as foreground faces swapped onto female Caucasian faces and female Hispanic faces.

“In a real-world scenario, facial profiles of female Asian or female African are 1.5 to 3 times more likely to be mistakenly labeled as fake than profiles of the male Caucasian … The proportion of real subjects mistakenly identified as fake can be much larger for female subjects than male subjects,” the researchers wrote.

Real-world risks

The findings are a stark reminder that even the “best” AI systems aren’t necessarily flawless. As the coauthors note, at least one deepfake detector in the study achieved 90.1% accuracy on a test dataset, a metric that conceals the biases within.

“[U]sing a single performance metrics such as … detection accuracy over the entire dataset is not enough to justify massive commercial rollouts of deepfake detectors,” the researchers wrote. “As deepfakes become more pervasive, there is a growing reliance on automated systems to combat deepfakes. We argue that practitioners should investigate all societal aspects and consequences of these high impact systems.”
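
One way to make the researchers’ point concrete is to report error rates per demographic group alongside the headline accuracy. The sketch below uses entirely synthetic numbers, not the study’s data, to show how roughly 90 percent overall accuracy can coexist with one group’s real videos being flagged as fake three times as often as another’s.

```python
# Synthetic illustration of disaggregated evaluation: a single overall accuracy
# figure can hide large per-group differences in error rate. Numbers invented.
from collections import defaultdict
from typing import Dict, Iterable, Tuple


def per_group_error_rates(records: Iterable[Tuple[str, int, int]]) -> Dict[str, float]:
    """records: (group, y_true, y_pred) triples, where 1 = fake and 0 = real."""
    counts = defaultdict(lambda: [0, 0])          # group -> [errors, total]
    for group, y_true, y_pred in records:
        counts[group][1] += 1
        counts[group][0] += int(y_true != y_pred)
    return {g: errors / total for g, (errors, total) in counts.items()}


if __name__ == "__main__":
    # All videos here are real (y_true = 0); an error is a real video called fake.
    records = ([("male Caucasian", 0, 0)] * 95 + [("male Caucasian", 0, 1)] * 5 +
               [("female Asian", 0, 0)] * 85 + [("female Asian", 0, 1)] * 15)
    overall = sum(t == p for _, t, p in records) / len(records)
    print(f"overall accuracy: {overall:.1%}")     # 90.0%
    print(per_group_error_rates(records))         # {'male Caucasian': 0.05, 'female Asian': 0.15}
```

Reported this way, a gap of the “1.5 to 3 times” kind the researchers describe is immediately visible, where a single aggregate score would obscure it.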

The research is especially timely in light of growth in the commercial deepfake video detection market. Amsterdam-based Deeptrace Labs offers a suite of monitoring products that purport to classify deepfakes uploaded on social media, video hosting platforms, and disinformation networks. Dessa has proposed techniques for improving deepfake detectors trained on data sets of manipulated videos. And Truepic raised an $8 million funding round in July 2018 for its video and photo deepfake detection services. In December 2018, the company acquired another deepfake “detection-as-a-service” startup — Fourandsix — whose fake image detector was licensed by DARPA.



This is what a deepfake voice clone used in a failed fraud attempt sounds like

One of the stranger applications of deepfakes — AI technology used to manipulate audiovisual content — is the audio deepfake scam. Hackers use machine learning to clone someone’s voice and then combine that voice clone with social engineering techniques to convince people to move money where it shouldn’t be. Such scams have been successful in the past, but how good are the voice clones being used in these attacks? We’ve never actually heard the audio from a deepfake scam — until now.

Security consulting firm NISOS has released a report analyzing one such attempted fraud, and shared the audio with Motherboard. The clip below is part of a voicemail sent to an employee at an unnamed tech firm, in which a voice that sounds like the company’s CEO asks the employee for “immediate assistance to finalize an urgent business deal.”

The quality is certainly not great. Even under the cover of a bad phone signal, the voice is a little robotic. But it’s passable. And if you were a junior employee, worried after receiving a supposedly urgent message from your boss, you might not be thinking too hard about audio quality. “It definitely sounds human. They checked that box as far as: does it sound more robotic or more human? I would say more human,” Rob Volkert, a researcher at NISOS, told Motherboard. “But it doesn’t sound like the CEO enough.”

The attack was ultimately unsuccessful, as the employee who received the voicemail “immediately thought it suspicious” and flagged it to the firm’s legal department. But such attacks will be more common as deepfake tools become increasingly accessible.

All you need to create a voice clone is access to lots of recordings of your target. The more data you have and the better quality the audio, the better the resulting voice clone will be. And for many executives at large firms, such recordings can be easily collected from earnings calls, interviews, and speeches. With enough time and data, the highest-quality audio deepfakes are much more convincing than the example above.

The best-known and first reported example of an audio deepfake scam took place in 2019, when the chief executive of a UK energy firm was tricked into sending €220,000 ($240,000) to a Hungarian supplier after receiving a phone call supposedly from the CEO of his company’s parent firm in Germany. The executive was told that the transfer was urgent and the funds had to be sent within the hour. He did so. The attackers were never caught.

Earlier this year, the FTC warned about the rise of such scams, but experts say there’s one easy way to beat them. As Patrick Traynor of the Herbert Wertheim College of Engineering told The Verge in January, all you need to do is hang up the phone and call the person back. In many scams, including the one reported by NISOS, the attackers are using a burner VOIP account to contact their targets.

“Hang up and call them back,” says Traynor. “Unless it’s a state actor who can reroute phone calls or a very, very sophisticated hacking group, chances are that’s the best way to figure out if you were talking to who you thought you were.”


Deepfake bots on Telegram make the work of creating fake nudes dangerously easy

Researchers have discovered a “deepfake ecosystem” on the messaging app Telegram centered around bots that generate fake nudes on request. Users interacting with these bots say they’re mainly creating nudes of women they know using images taken from social media, which they then share and trade with one another in various Telegram channels.

The investigation comes from security firm Sensity, which focuses on what it calls “visual threat intelligence,” particularly the spread of deepfakes. Sensity’s researchers found more than 100,000 images have been generated and shared in public Telegram channels up to July 2020 (meaning the total number of generated images, including those never shared and those made since July, is much higher). Most of the users in these channels, roughly 70 percent, come from Russia and neighboring countries, says Sensity. The Verge was able to confirm that many of the channels investigated by the company are still active.

The bots are free to use, but they generate fake nudes with watermarks or only partial nudity. Users can then pay a fee equal to just a few cents to “uncover” the pictures completely. One “beginner rate” charges users 100 rubles (around $1.28) to generate 100 fake nudes without watermarks over a seven-day period. Sensity says “a limited number” of the bot-generated images feature targets “who appeared to be underage.”

Both The Verge and Sensity have contacted Telegram to ask why it permits this content on its app but have yet to receive replies. Sensity says it’s also contacted the relevant law enforcement authorities.

In a poll in one of the main channels for sharing deepfake nudes (originally posted in both Russian and English), most users said they wanted to generate images of women they knew in “real life.”
Image: Sensity

The software being used to generate these images is known as DeepNude. It first appeared on the web last June, but its creator took down its website hours after it received mainstream press coverage, saying “the probability that people will misuse it is too high.” However, the software has continued to spread over backchannels, and Sensity says DeepNude “has since been reverse engineered and can be found in enhanced forms on open source repositories and torrenting websites.” It’s now being used to power Telegram bots, which handle payments automatically to generate revenue for their creators.

DeepNude uses an AI technique known as generative adversarial networks, or GANs, to generate fake nudes, with the resulting images varying in quality. Most are obviously fake, with smeared or pixelated flesh, but some can easily be mistaken for real pictures.

Since before the arrival of Photoshop, people have created nonconsensual fake nudes of women. There are many forums and websites currently dedicated to this activity using non-AI tools, with users sharing nudes of both celebrities and people they know. But deepfakes have led to the faster generation of more realistic images. Now, automating this process via Telegram bots makes generating fake nudes as easy as sending and receiving pictures.

“The key difference is accessibility of this technology,” Sensity’s CEO and co-author of the report, Giorgio Patrini, told The Verge. “It’s important to notice that other versions of the AI core of this bot, the image processing and synthesis, are freely available on code repositories online. But you need to be a programmer and have some understanding of computer vision to get them to work, other than powerful hardware. Right now, all of this is irrelevant as it is taken care of by the bot embedded into a messaging app.”

Sensity’s report says it’s “reasonable to assume” that most of the people using these bots “are primarily interested in consuming deepfake pornography” (which remains a popular category on porn sites). But these images and videos can also be used for extortion, blackmail, harassment, and more. There have been a number of documented cases of women being targeted using AI-generated nudes, and it’s possible some of those creating nudes using the bots on Telegram are doing so with these motives in mind.

Patrini told The Verge that Sensity’s researchers had not seen direct evidence of the bots’ creations being used for these purposes, but said the company believed this was happening. He added that while the political threat of deepfakes had been “miscalculated” (“from the point of view of perpetrators, it is easier and cheaper to resort to photoshopping images and obtain a similar impact for spreading disinformation, with less effort”), it’s clear the technology poses “a serious threat for personal reputation and security.”
