
Nvidia Just Proved How Easy it is to Fool Everyone with CG

Deepfakes aren’t all video sleuths need to worry about in the future. This week, Nvidia revealed that its GPU Technology Conference (GTC) keynote was made almost entirely with its own Omniverse CG platform, even though the event took place back in April. For months, Nvidia fooled everyone into believing what they saw at GTC 2021 was real, and we’ll see a lot more of that in the coming years.

Since the start of the pandemic, Nvidia CEO Jensen Huang has delivered keynotes from his kitchen. GTC 2021 still featured a kitchen keynote, but this time with an entirely virtual kitchen made in Omniverse. Even more impressive, the Nvidia team managed to create a CG model of Huang that delivered part of the keynote.

Of course, the conference wasn’t entirely fake. Huang still spoke, and the CG model was only on screen for a brief time. “To be sure, you can’t have a keynote without a flesh and blood person at the center. Through all but 14 seconds of the hour and 48-minute presentation — from 1:02:41 to 1:02:55 — Huang himself spoke in the keynote,” Nvidia wrote in a blog post.

Omniverse is Nvidia’s platform for creating and animating 3D models in a virtual space. It uses simulations, material assets, and lighting like other 3D programs, but accelerates them with Nvidia RTX graphics cards. That gives designers a chance to view ray-traced lighting in real time and adjust the scene accordingly.

As the name implies, Omniverse connects artists and the tools they use. The platform itself supports real-time collaboration, and it brings together assets from multiple 3D applications. In its Connecting in the Metaverse documentary, Nvidia specifically calls out Unreal Engine and Autodesk Maya, which some designers used alongside Omniverse to build the GTC keynote.
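Omniverse is built on Pixar’s Universal Scene Description (USD), and that layering model is what lets exports from different tools land in one shared scene. Below is a minimal sketch of the idea in Python using the standard pxr API; the file names are hypothetical stand-ins for assets exported from Maya and Unreal Engine, not the actual GTC project files.

```python
# Minimal USD sketch: compose hypothetical Maya and Unreal exports as sublayers
# on a single stage, the kind of non-destructive layering Omniverse relies on.
from pxr import Usd, UsdGeom

stage = Usd.Stage.CreateNew("gtc_kitchen.usda")
UsdGeom.Xform.Define(stage, "/World")                # root transform for the shared scene
stage.SetDefaultPrim(stage.GetPrimAtPath("/World"))

root_layer = stage.GetRootLayer()
root_layer.subLayerPaths.append("kitchen_set_maya.usd")   # hypothetical Maya export
root_layer.subLayerPaths.append("props_unreal.usd")       # hypothetical Unreal Engine export

stage.Save()   # each application keeps editing its own layer; the stage composes them
```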

Virtual conferences are the new normal for many tech companies, and although some boil down to nothing more than a PowerPoint presentation and a speaker, Nvidia showed that they can be much more. What’s surprising about the GTC 2021 keynote isn’t that it was virtual, but that Nvidia was able to hide that fact for months.

It underscores just how easy it is to trick a large audience into believing graphics are real, and it’s something that we’ll continue to see for years to come. “If we do this right, we’ll be working in Omniverse 20 years from now,” Rev Lebaredian, vice president of Omniverse engineering and simulation at Nvidia, said.

Still, the technology isn’t perfect. During the brief time CG Huang is on screen, it’s easy to see that CG is at work thanks to some stiff animation and a slightly out-of-sync voiceover. The kitchen is a different story. Even after rewatching GTC 2021 knowing that the kitchen is fake, it’s almost impossible to spot the difference between the Omniverse model and the real thing.

And now, tools for developing these kinds of models are easier than ever to access. In addition to free programs like Blender, there are tools like Unreal Engine’s MetaHuman, which can generate a realistic character model in less than an hour.

That’s exciting for the world of CG, but it carries a worry. The rise of deepfakes over the past few years has made it more difficult to tell real from fake, and as Nvidia proved with its GTC 2021 conference, you can trick a large audience into believing something rendered by a computer is real.

Hopefully, those tools will mostly be used for good, or at least for harmless stunts like Nvidia’s months-long bit of keeping quiet about a virtual keynote that everyone thought was real.



Study warns deepfakes can fool facial recognition



Deepfakes, or AI-generated videos that take a person in an existing video and replace them with someone else’s likeness, are multiplying at an accelerating rate. According to startup Deeptrace, the number of deepfakes on the web increased 330% from October 2019 to June 2020, reaching over 50,000 at their peak. That’s troubling not only because these fakes might be used to sway opinion during an election or implicate a person in a crime, but because they’ve already been abused to generate pornographic material of actors and defraud a major energy producer.

Open source tools make it possible for anyone with images of a victim to create a convincing deepfake, and a new study suggests that deepfake-generating techniques have reached the point where they can reliably fool commercial facial recognition services. In a paper published on the preprint server arXiv.org, researchers at Sungkyunkwan University in Suwon, South Korea, demonstrate that APIs from Microsoft and Amazon can be fooled with commonly used deepfake-generating methods. In one case, one of the APIs — Microsoft’s Azure Cognitive Services — was fooled by up to 78% of the deepfakes the coauthors fed it.

“From experiments, we find that some deepfake generation methods are of greater threat to recognition systems than others and that each system reacts to deepfake impersonation attacks differently,” the researchers wrote. “We believe our research findings can shed light on better designing robust web-based APIs, as well as appropriate defense mechanisms, which are urgently needed to fight against malicious use of deepfakes.”

The researchers chose to benchmark facial recognition APIs from Microsoft and Amazon because both companies offer services to recognize celebrity faces. The APIs return a face similarity scoring metric that makes it possible to compare their performance. And because celebrity face images are plentiful compared with those of the average person, the researchers were able to generate deepfakes from them relatively easily. Google offers celebrity recognition via its Cloud Vision API, but the researchers say the company denied their formal request to use it.

To see the extent to which commercial facial recognition APIs could be fooled by deepfakes, the researchers used AI models trained on five different datasets — three publicly available and two that they created themselves — containing the faces of Hollywood movie stars, singers, athletes, and politicians. They created 8,119 deepfakes from the datasets in total. Then they extracted faces from the deepfakes’ video frames and had the services attempt to predict which celebrity was pictured.
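For a sense of what such a probe looks like in practice, here is a hedged sketch along the lines the paper describes: pull frames out of a deepfake clip and ask Amazon Rekognition’s celebrity endpoint who it thinks is pictured. The video path, frame interval, and AWS setup are illustrative assumptions; recognize_celebrities is a real Rekognition API call, but this is not the researchers’ code.

```python
# Sketch: sample frames from a (hypothetical) deepfake clip and ask Rekognition
# which celebrity it believes is pictured, along with its confidence score.
import cv2     # OpenCV for frame extraction
import boto3   # AWS SDK; assumes credentials and a default region are configured

rekognition = boto3.client("rekognition")

def probe_deepfake(video_path, every_n_frames=30):
    capture = cv2.VideoCapture(video_path)
    frame_idx, results = 0, []
    while True:
        ok, frame = capture.read()
        if not ok:
            break
        if frame_idx % every_n_frames == 0:
            encoded, jpeg = cv2.imencode(".jpg", frame)
            if encoded:
                response = rekognition.recognize_celebrities(Image={"Bytes": jpeg.tobytes()})
                for celeb in response.get("CelebrityFaces", []):
                    results.append((frame_idx, celeb["Name"], celeb["MatchConfidence"]))
        frame_idx += 1
    capture.release()
    return results

# e.g. probe_deepfake("deepfake_clip.mp4") -> [(0, "Some Celebrity", 98.7), ...]
```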

The researchers found that all of the APIs were susceptible to being fooled by the deepfakes. Azure Cognitive Services mistook a deepfake for a target celebrity 78% of the time, while Amazon’s Rekognition mistook it 68.7% of the time. Rekognition misclassified deepfakes of a celebrity as another real celebrity 40% of the time and gave 902 out of 3,200 deepfakes higher confidence scores than the same celebrity’s real image. And in an experiment with Azure Cognitive Services, the researchers successfully impersonated 94 out of 100 celebrities in one of the open source datasets.

Deepfake detection

The coauthors attribute the high success rate of their attacks to the fact that deepfakes tend to preserve the same identity as the target video. As a result, when the Microsoft and Amazon services made mistakes, they tended to do so with high confidence, with Amazon’s exhibiting a “considerably” higher susceptibility to being fooled by deepfakes.

“Assuming the underlying face recognition API cannot distinguish the deepfake impersonator from the genuine user, it can cause many privacy, security, and repudiation risks, as well as numerous fraud cases,” the researchers warn. “Voice and video deepfake technologies can be combined to create multimodal deepfakes and used to carry out more powerful and realistic phishing attacks … [And] if the commercial APIs fail to filter the deepfakes on social media, it will allow the propagation of false information and harm innocent individuals.”

Microsoft and Amazon declined to comment.

The study’s findings show that the fight against deepfakes is likely to remain challenging, especially as media generation techniques continue to improve. Just this week, deepfake footage of Tom Cruise posted to an unverified TikTok account racked up 11 million views on the app and millions more on other platforms. And when run through several of the best publicly available deepfake detection tools, the clips escaped detection, according to Vice.

In an attempt to fight the spread of deepfakes, Facebook — along with Amazon and Microsoft, among others — spearheaded the Deepfake Detection Challenge, which ended last June. The challenge’s launch followed the release of a large corpus of visual deepfakes, produced in collaboration with Jigsaw, Google’s internal technology incubator, that was incorporated into a benchmark made freely available to researchers for developing synthetic video detection systems.

More recently, Microsoft launched its own deepfake-combating solution in Video Authenticator, a tool that can analyze a still photo or video and provide a confidence score indicating the likelihood that the media has been artificially manipulated. The company also developed a technology built into Microsoft Azure that lets a content producer add metadata to a piece of content, along with a reader that checks that metadata to let people know the content is authentic.
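Microsoft hasn’t published that pipeline’s internals, but the general pattern it describes is familiar: hash the media, sign the hash, and ship the signature as metadata a reader can verify. The sketch below illustrates that pattern with Python’s hashlib and the cryptography library; the key handling and metadata format are assumptions for illustration, not Microsoft’s implementation.

```python
# Generic content-provenance sketch: a producer signs a hash of the media,
# and a reader checks both the hash and the signature before trusting it.
import hashlib
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric import ed25519

def sign_content(data: bytes, private_key: ed25519.Ed25519PrivateKey) -> dict:
    digest = hashlib.sha256(data).digest()
    return {"sha256": digest.hex(), "signature": private_key.sign(digest).hex()}

def verify_content(data: bytes, metadata: dict, public_key: ed25519.Ed25519PublicKey) -> bool:
    digest = hashlib.sha256(data).digest()
    if digest.hex() != metadata["sha256"]:
        return False  # the media was altered after it was signed
    try:
        public_key.verify(bytes.fromhex(metadata["signature"]), digest)
        return True
    except InvalidSignature:
        return False  # signature does not match the claimed producer

# Illustrative round trip with hypothetical video bytes:
producer_key = ed25519.Ed25519PrivateKey.generate()
media = b"...raw video bytes..."
metadata = sign_content(media, producer_key)
assert verify_content(media, metadata, producer_key.public_key())
```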

“We expect that methods for generating synthetic media will continue to grow in sophistication. As all AI detection methods have rates of failure, we have to understand and be ready to respond to deepfakes that slip through detection methods,” Microsoft CVP of customer security and trust Tom Burt wrote in a blog post last September. “Thus, in the longer term, we must seek stronger methods for maintaining and certifying the authenticity of news articles and other media.”



Cloak your photos with this AI privacy tool to fool facial recognition

Ubiquitous facial recognition is a serious threat to privacy. The idea that the photos we share are being collected by companies to train algorithms that are sold commercially is worrying. Anyone can buy these tools, snap a photo of a stranger, and find out who they are in seconds. But researchers have come up with a clever way to help combat this problem.

The solution is a tool named Fawkes, created by scientists at the University of Chicago’s SAND Lab. Named after the Guy Fawkes masks donned by revolutionaries in the V for Vendetta comic book and film, Fawkes uses artificial intelligence to subtly and almost imperceptibly alter your photos in order to trick facial recognition systems.

The way the software works is a little complex. Running your photos through Fawkes doesn’t make you invisible to facial recognition exactly. Instead, the software makes subtle changes to your photos so that any algorithm scanning those images in future sees you as a different person altogether. Essentially, running Fawkes on your photos is like adding an invisible mask to your selfies.
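Conceptually, the cloak is an adversarial perturbation computed against a face-feature extractor. The sketch below shows that idea in PyTorch under simplifying assumptions: torchvision’s ResNet-18 stands in for the face-specific extractors Fawkes uses, and the perturbation is simply pushed away from the original embedding under a pixel budget, whereas Fawkes optimizes toward a dissimilar target identity under a perceptual constraint. It is not the Fawkes code.

```python
# Conceptual cloaking sketch: nudge pixels so a feature extractor no longer maps
# the photo near the original identity, while keeping the change nearly invisible.
import torch
import torch.nn.functional as F
from torchvision import models

extractor = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
extractor.fc = torch.nn.Identity()   # use penultimate features as a stand-in "face embedding"
extractor.eval()

def cloak(image, steps=50, lr=0.01, budget=0.03):
    """image: 1x3xHxW tensor in [0, 1]. Returns a perturbed copy whose embedding
    is pushed away from the original's, with per-pixel changes capped at `budget`."""
    with torch.no_grad():
        original = extractor(image)
    delta = torch.zeros_like(image, requires_grad=True)
    optimizer = torch.optim.Adam([delta], lr=lr)
    for _ in range(steps):
        embedding = extractor((image + delta).clamp(0, 1))
        loss = -F.mse_loss(embedding, original)    # maximize distance from the original embedding
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        with torch.no_grad():
            delta.clamp_(-budget, budget)          # keep the perturbation imperceptible
    return (image + delta).detach().clamp(0, 1)

# e.g. cloaked = cloak(torch.rand(1, 3, 224, 224))
```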

Scientists call this process “cloaking” and it’s intended to corrupt the resource facial recognition systems need to function: databases of faces scraped from social media. Facial recognition firm Clearview AI, for example, claims to have collected some three billion images of faces from sites like Facebook, YouTube, and Venmo, which it uses to identify strangers. But if the photos you share online have been run through Fawkes, say the researchers, then the face the algorithms know won’t actually be your own.

According to the team from the University of Chicago, Fawkes is 100 percent successful against state-of-the-art facial recognition services from Microsoft (Azure Face), Amazon (Rekognition), and Face++ by Chinese tech giant Megvii.

“What we are doing is using the cloaked photo in essence like a Trojan Horse, to corrupt unauthorized models to learn the wrong thing about what makes you look like you and not someone else,” Ben Zhao, a professor of computer science at the University of Chicago who helped create the Fawkes software, told The Verge. “Once the corruption happens, you are continuously protected no matter where you go or are seen.”

You’d hardly recognize her. Photos of Queen Elizabeth II before (left) and after (right) being run through Fawkes cloaking software.
Image: The Verge

The group behind the work — Shawn Shan, Emily Wenger, Jiayun Zhang, Huiying Li, Haitao Zheng, and Ben Y. Zhao — published a paper on the algorithm earlier this year. But late last month they also released Fawkes as free software for Windows and Macs that anyone can download and use. To date they say it’s been downloaded more than 100,000 times.

In our own tests we found that Fawkes is sparse in its design but easy enough to apply. It takes a couple of minutes to process each image, and the changes it makes are mostly imperceptible. Earlier this week, The New York Times published a story on Fawkes in which it noted that the cloaking effect was quite obvious, often making gendered changes to images like giving women mustaches. But the Fawkes team says the updated algorithm is much more subtle, and The Verge’s own tests agree with this.

But is Fawkes a silver bullet for privacy? It’s doubtful. For a start, there’s the problem of adoption. If you read this article and decide to use Fawkes to cloak any photos you upload to social media in future, you’ll certainly be in the minority. Facial recognition is worrying because it’s a society-wide trend and so the solution needs to be society-wide, too. If only the tech-savvy shield their selfies, it just creates inequality and discrimination.

Secondly, many firms that sell facial recognition algorithms created their databases of faces a long time ago, and you can’t retroactively take that information back. The CEO of Clearview, Hoan Ton-That, told the Times as much. “There are billions of unmodified photos on the internet, all on different domain names,” said Ton-That. “In practice, it’s almost certainly too late to perfect a technology like Fawkes and deploy it at scale.”

Comparisons of uncloaked and cloaked faces using Fawkes.
Image: SAND Lab, University of Chicago

Naturally, though, the team behind Fawkes disagree with this assessment. They note that although companies like Clearview claim to have billions of photos, that doesn’t mean much when you consider they’re supposed to identify hundreds of millions of users. “Chances are, for many people, Clearview only has a very small number of publicly accessible photos,” says Zhao. And if people release more cloaked photos in the future, he says, sooner or later the amount of cloaked images will outnumber the uncloaked ones.

On the adoption front, however, the Fawkes team admits that for their software to make a real difference it has to be released more widely. They have no plans to make a web or mobile app due to security concerns, but are hopeful that companies like Facebook might integrate similar tech into their own platform in future.

Integrating this tech would be in these companies’ interest, says Zhao. After all, firms like Facebook don’t want people to stop sharing photos, and these companies would still be able to collect the data they need from images (for features like photo tagging) before cloaking them on the public web. And while integrating this tech now might only have a small effect for current users, it could help convince future, privacy-conscious generations to sign up to these platforms.

“Adoption by larger platforms, e.g. Facebook or others, could in time have a crippling effect on Clearview by basically making [their technology] so ineffective that it will no longer be useful or financially viable as a service,” says Zhao. “Clearview.ai going out of business because it’s no longer relevant or accurate is something that we would be satisfied [with] as an outcome of our work.”
