Meta has started notifying users of its standalone Facebook Gaming app that it will soon no longer be available. In an in-app notification (shared by social media consultant Matt Navarra, among others), the company announced that both the iOS and Android versions of the application will stop working on October 28th. Meta is also giving users the chance to download their search data and reminding them that Facebook Gaming isn’t going away entirely. Users will merely have to go to the Gaming tab in the main Facebook app to watch their favorite creators’ livestreams.
The company released the dedicated Gaming app in 2020 to better compete with Twitch and YouTube. Meta (still known as Facebook back then) designed the app to highlight content from streamers and to provide users with a group chat and other community features. It didn’t say why it decided to shut down the standalone app, but it could be part of its cost-cutting efforts meant to help it weather what Mark Zuckerberg calls “one of the worst downturns [the company has seen] in recent history.”
Over the past year, streaming tool providers such as StreamElements have reported that Facebook Gaming is second only to Twitch in hours watched on a game streaming platform. However, we examined data from CrowdTangle, Meta’s analytics service, and found that the platform is flooded with spam and pirated content masquerading as gaming livestreams. At the time, a spokesperson told Engadget that Meta was “working to improve [its] tools to identify violating content” so that users can have “the best experience.”
As of today, Facebook and IG creators have six new features they can use for their Reels content. But of the six, the most intriguing is support for a sticker prompt that was first popularized in Instagram Stories.
Meta announced via a Facebook video post that, in addition to all of its other new Reels-focused features, it would now offer support for its Add Yours sticker prompt in Reels for both Instagram and Facebook.
Yes, that’s right, IG and Facebook users are now able to add the Add Yours sticker to their Reels to create an interactive element for their posts. The sticker essentially allows viewers of a given Reel to add their own response to a prompt given by the Add Yours sticker.
Once other users begin adding their responses to your Add Yours sticker prompt, both Facebook and IG will generate a page filled with those responses, with the person who started the prompt shown at the top.
We’ve tested the new support for the Add Yours sticker on both the Instagram and Facebook mobile apps (on an Android device) and it appears the new Reels functionality is already live. On Instagram, you should be able to use it by creating a Reel as usual and then selecting Preview > Sticker icon > Add Yours sticker icon. You can then write your own prompt or select the Dice icon to generate a random prompt.
On Facebook, you can use the feature by creating a Reel as you normally would and then choosing the Stickers icon > Add Yours sticker icon. From there, you can write your own prompt. We didn’t see the Dice icon for the random prompt generator functionality in the Facebook mobile app.
As for the other five new Reels features Meta announced today, you can expect to see: Creator Studio analytics for Facebook Reels, the ability for creators to earn money on Facebook Reels via being sent Stars, support for remixing Reels, support for sharing Instagram Reels to Facebook, and the ability to “auto-create” Facebook Reels from existing Stories or Memories.
After a high-profile incident in which subpoenaed Facebook messages led to felony charges for a 17-year-old girl and her mother in a Nebraska abortion case, Meta said Thursday that it would expand testing of end-to-end encryption in Messenger ahead of a planned global rollout.
This week, the company will automatically begin to add end-to-end encryption in Messenger chats for more people. In the coming weeks, it will also increase the number of people who can begin using end-to-end encryption on direct messages in Instagram.
Meanwhile, the company has begun to test a feature called “secure storage” that will allow users to restore their chat history when they install Messenger on a new device. Backups can be locked by a PIN, and the feature is designed to prevent the company or anyone else from being able to read their contents.
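Meta hasn’t published the design of “secure storage,” but the general technique for PIN-locked backups a provider can’t read is well established: stretch the PIN into an encryption key on the user’s device, so the server only ever stores ciphertext. Here’s a minimal, hedged sketch of that idea in Python — the key-derivation step (PBKDF2) is real, but the toy keystream cipher is purely illustrative, not production cryptography and not Meta’s actual scheme:

```python
import hashlib
import hmac
import os

def derive_backup_key(pin: str, salt: bytes) -> bytes:
    # Stretch a short PIN into a 256-bit key with a slow KDF.
    # The provider stores only the salt and ciphertext, so without
    # the PIN it cannot reconstruct the key or read the backup.
    return hashlib.pbkdf2_hmac("sha256", pin.encode(), salt, 600_000)

def xor_stream(key: bytes, data: bytes) -> bytes:
    # Toy keystream cipher (NOT real crypto): HMAC in counter mode.
    # Applying it twice with the same key decrypts.
    out = bytearray()
    for offset in range(0, len(data), 32):
        block = hmac.new(key, offset.to_bytes(8, "big"), hashlib.sha256).digest()
        chunk = data[offset:offset + 32]
        out.extend(b ^ k for b, k in zip(chunk, block))
    return bytes(out)

salt = os.urandom(16)                       # stored alongside the backup
key = derive_backup_key("123456", salt)     # derived only on the device
ciphertext = xor_stream(key, b"chat history goes here")
assert xor_stream(key, ciphertext) == b"chat history goes here"
```

The design tradeoff Meta’s researchers are wrestling with is visible even in this sketch: forget the PIN and the backup is unrecoverable, which is exactly the friction that makes encrypted backups a hard sell for average users.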
The global rollout is expected to be completed next year.
Meta told Wired that it had long planned to make these announcements, and that the fact that they came so soon after the abortion case came to light was a coincidence. I’m less interested in the timing, though, than the practical challenges of making encrypted messaging the default for hundreds of millions of people. In recent conversations with Meta employees, I’ve come to understand more about what’s taking so long — and how consumer apathy toward encryption has created challenges for the company as it works to create a secure messaging app that its user base will actually use.
It has now been three years since Mark Zuckerberg announced, amid an ongoing shift away from public feeds toward private chats, that going forward the company’s products would embrace encryption and privacy. At the time, WhatsApp was already encrypted end to end; the next step was to bring the same level of protection to Messenger and Instagram. Doing so required that the apps be rebuilt almost from scratch — and teams have encountered a number of roadblocks along the way.
The first is that end-to-end encryption can be a pain to use. This is often the tradeoff we make in exchange for more security, of course. But average people may be less inclined to use a messaging app that requires them to set a PIN to restore old messages, or displays information about the security of their messages that they find confusing or off-putting.
The second, related challenge is that most people don’t know what end-to-end encryption is. Or, if they’ve heard of it, they might not be able to distinguish it from other, less secure forms of encryption. Gmail, among many other platforms, encrypts messages only while a message is in transit between Google’s servers and your device. This is known as transport layer security, and it offers most users good protection, but Google — or law enforcement — can still read the contents of your messages.
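The distinction comes down to who holds the keys. A toy model makes it concrete — no real cryptography here, just labeled envelopes that show what the server can see in each architecture:

```python
# Toy model of transport encryption vs. end-to-end encryption.
# "Sealing" is just a labeled tuple, not real crypto -- the point is
# to show who holds the key that opens each envelope.

def seal(key: str, plaintext: str) -> tuple:
    return (key, plaintext)                  # pretend ciphertext

def unseal(key: str, envelope: tuple) -> str:
    sealed_key, plaintext = envelope
    assert key == sealed_key, "wrong key"
    return plaintext

def send_transport(msg: str, server_log: list) -> str:
    # Transport encryption (e.g., TLS): the hop is sealed with a key
    # the server also holds, so the server decrypts and sees plaintext.
    envelope = seal("client<->server", msg)
    at_server = unseal("client<->server", envelope)
    server_log.append(at_server)             # readable by the server
    return unseal("client<->server", seal("client<->server", at_server))

def send_e2ee(msg: str, server_log: list) -> str:
    # End-to-end: sealed with a key only the two endpoints hold.
    envelope = seal("alice<->bob", msg)
    server_log.append(envelope)              # server relays ciphertext only
    return unseal("alice<->bob", envelope)   # only the recipient can open it

transport_log, e2ee_log = [], []
assert send_transport("meet at 6", transport_log) == "meet at 6"
assert send_e2ee("meet at 6", e2ee_log) == "meet at 6"
assert transport_log == ["meet at 6"]        # server saw the message
assert e2ee_log != ["meet at 6"]             # server saw only an envelope
```

In both cases the message arrives intact; the difference is whether the server’s log — the thing law enforcement can subpoena — contains plaintext or an envelope it cannot open.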
Meta’s user research has shown that people grow concerned when you tell them you’re adding end-to-end encryption, one employee told me, because it scares them that the company might have been reading their messages before now. Users also sometimes assume new features are added for Meta’s benefit, rather than their own — that’s one reason the company labeled the stored-message feature “secure storage,” rather than “automatic backups,” so as to emphasize security in the branding.
When the company surveyed users earlier this year, only a minority identified as being significantly concerned about their privacy, I’m told.
On the contrary, I’m told, access to old messages is a high priority for many Messenger users. Messing with that too much could send users scrambling for communications apps like the ones they’re used to — the kind that keep your chat history stored on a server, where law enforcement may be able to request and read it.
A third challenge is that end-to-end encryption can be difficult to maintain even within Facebook, I’m told. Messenger is integrated into the product in ways that can break encryption — Watch Together, for example, lets people message each other while watching live video. But that inserts a third person into the chat, making encryption much more difficult.
There’s more. Encryption won’t work unless everyone is using an up-to-date version of Messenger; lots of people don’t update their apps. It’s also tough to pack encryption into a sister app like Messenger Lite, which is designed to have a small file size so it can be used by users with older phones or limited data access. End-to-end encryption technology takes up a lot of megabytes.
I bring all this up not to excuse Meta for failing to roll out end-to-end encryption up to now. The company has been working on the project steadily for three years, and while I wish it were moving faster, I’m sympathetic to some of the concerns that employees raised with me over the past few days.
At the same time, I think Meta’s challenges in bringing encryption to the masses in its messaging app raise real questions about the appetite for security in these products. Activists and journalists take it for granted that they should be using encrypted messaging apps already, ideally one with no server-side storage of messages, such as Signal.
But Meta’s research shows that average people still haven’t gotten — well, the message. And it’s an open question how the events of 2022, as well as whatever we’re in for in the next few years, may change that.
For all the attention the Nebraska case got, it had almost nothing to do with the overturning of Roe v. Wade: Nebraska already banned abortion after 20 weeks, and the medical abortion at the heart of this case — which took place at 28 weeks — would have been illegal under state law even had Roe been upheld.
Yes, Meta turned over the suspects’ messages upon being subpoenaed, but there’s nothing surprising about that, either: the company got 214,777 requests in the second half of last year, covering some 364,642 different accounts, and it produced at least some data 72.8 percent of the time. Facebook cooperating with law enforcement is the rule, not the exception.
In another way, though, this has everything to do with Roe. Untold numbers of women will now be seeking abortion care out of state, possibly violating state law to do so, and they’ll need to communicate about it with their partners, family, and friends. The coming months and years will bring many more stories like the Nebraska case, drawing fresh attention each time to how useful tech platforms are to law enforcement in gathering evidence.
It’s possible the general apathy toward encryption of most Facebook users will survive the coming storm of privacy invasions. But it strikes me as much more likely that the culture will shift to demand that companies collect and store less data, and do a better job educating people about how to use their products safely.
If there’s a silver lining in any of this, it’s that the rise in criminal prosecutions for abortion could create a massive new constituency organized to defend encryption. From India to the European Union to the United States, lawmakers and regulators have been working to undermine secure messaging for many years now. To date, encryption has been preserved thanks in part to a loose coalition of activists, academics, civil society groups, tech platforms, and journalists: in short, some of the people who rely upon it most.
But with Roe overturned, the number of people for whom encrypted messaging is now a necessity has grown markedly. A cultural shift toward encryption could help preserve and expand access to secure messaging, both in the United States and around the world.
That shift will take time. But there’s much that tech platforms can do now, and here’s hoping they will.
If you’ve ever sent a message to a friend on Facebook Messenger, you’ve probably noticed the little check mark icons next to the messages you send.
They’re nothing to worry about, but these check mark icons do offer up a little information on the status of the Messenger messages you send. Want to know what each of these check mark icons means? Keep reading to find out.
What does a check mark mean on Messenger?
Those check mark icons that you see after you’ve sent a message on Messenger aren’t just random symbols. They are icons that are part of a larger series of message indicators that Messenger uses to tell you what the current status of your sent message is. These statuses could be one of the following:
Your message is still sending.
Your message has successfully sent.
The message has been successfully delivered to your intended recipient.
The message has been viewed by your intended recipient.
But not all of these statuses are denoted by check mark icons in Facebook Messenger. Only two of them are. In the next section, we’ll go over all of the icons used to indicate the above statuses, including the check mark icons.
What does each Messenger message status icon mean?
Note: The colors of the icons listed below may vary beyond blue and white; you may also see purple and white, or even gray and white. We use blue below because that’s the color currently shown in Facebook’s own Help Center guide on the matter.
If you see an empty blue circle …
Your message is still sending. Messenger hasn’t received it yet.
If you see a blue circle with a white middle and a blue check mark …
The message is officially considered “sent” but not delivered. And there is a difference between the two. For Messenger, “sent” just means Messenger has gotten your message and will deliver it to your friend. And “delivered” means that your friend has received the message, but hasn’t viewed it yet.
If you see a blue circle with a blue middle and white check mark …
This is the icon that means your message has been successfully delivered to your friend. Your message is now sitting in your friend’s Messenger inbox waiting to be read.
If you see your friend’s tiny profile picture …
Then that means your friend has viewed your message. Congrats! Your Facebook Messenger message was sent, delivered, and viewed successfully!
Are you interested in cultivating an online community about one of your hobbies? Do you just need a way to organize a family event or book club? If so, you may want to consider creating a Facebook group. Facebook groups can provide a central, online location for gathering and communicating with your friends and family or for meeting new people to discuss your shared interests or plan an event together. It can be a great way to cultivate a sense of community online.
Plus, creating a Facebook group is incredibly easy.
What is the difference between a page and a group on Facebook?
The main difference between a Facebook page and a group comes down to privacy and visibility.
A Facebook page is essentially designed to maximize visibility, and you can’t really make one private. That’s because pages are more for businesses and public figures — use cases in which a person, brand, or company wants to be seen and wants to attract as many customers and fans as possible. Usually, anyone can like or follow a page to keep up with the goings-on of the brand or person that page represents.
Facebook groups are different. Groups actually offer the option to be made private and/or given limited visibility because not all groups want to attract lots of attention from everyone on Facebook. Some groups are for small, specific interests and some groups may want to limit membership to select people. Groups are more about cultivating community rather than promoting a brand, and sometimes setting privacy limits can help keep those communities safe.
How do I start a Facebook group: on desktop web
Starting a Facebook group is actually a fairly easy process. Here’s how to do it on a PC via the desktop website version of Facebook.
Step 1: Go to the Facebook website and log in to your account.
Step 2: Select the Menu icon in the top right. This icon looks like a series of nine dots arranged in a square.
Step 3: From the menu that appears, under the Create header, choose Group.
Step 4: On the Create group screen, add your group’s name, choose a privacy level, and invite your friends (if you’d like).
For privacy levels, you can choose between Public and Private. Public means anyone on Facebook can view the posts in your group and see who is in your group. Private means that only members of that group can view the posts in it and see who the other members are.
If you choose Private, you’ll then have to select the level of visibility of the group: Visible or Hidden. Visible means anyone on Facebook can find this group, and Hidden means only group members can find it.
Step 5: Then choose the Create button at the bottom of the Create group screen. That’s it! You’ve now created a Facebook group!
How do I start a Facebook group: on the mobile app
Alternatively, if you’d rather use the Facebook mobile app to create a Facebook group, you can do that too. It’s pretty similar to the desktop web method. These instructions should work for both Android and iOS devices. Here are the basics of creating a Facebook group via the mobile app:
Open the Facebook mobile app on your device and then select the Menu icon (three lines) > Groups > Plus sign icon > Create group.
Then, on the Create group screen, you’ll add a group name, choose your privacy level, and choose your visibility level if needed. Select the Create group button at the bottom of the screen. At this point, your group will have been created, and you’ll be prompted to invite people to join and start setting up your group page.
Hacking group LAPSUS$ has revealed its latest target: Globant, an IT and software development company whose clientele includes the likes of technology giant Facebook.
In a Telegram update in which the hackers affirmed they’re “back from a vacation” — potentially a reference to alleged members of the group being arrested in London — LAPSUS$ stated that it has acquired 70GB of data from the cybersecurity breach.
Not only has the group seemingly obtained sensitive information belonging to several large organizations, it has also released the entire 70GB via a torrent link.
As reported by Computing, the group shared evidence of the hack via an image displaying folders that are named after Facebook, DHL, Stifel, and C-Span, to name but a few.
Although there is a folder titled “apple-health-app,” it is not directly related to the iPhone maker.
Instead, The Verge highlights how the data it contains is actually associated with Globant’s BeHealthy app, which was developed in partnership with Apple due to its use of the Apple Watch.
Meanwhile, LAPSUS$ posted an additional message on its Telegram group listing all of the passwords of Globant’s system admins and the company’s DevOps platforms. Vx-underground, which has conveniently documented all of the group’s recent hacks, confirmed the passwords are extremely weak.
LAPSUS$ also threw their System Admins under the bus exposing their passwords to confluence (among other things). We have censored the passwords they displayed. However, it should be noted these passwords are very easily guessable and used multiple times… pic.twitter.com/gT7skg9mDw
Notably, login credentials for one of those platforms seemingly offered access to “3,000 spaces of customer documents.”
Following the Telegram message and subsequent leak on March 30, Globant itself confirmed it was compromised in a press release.
“We have recently detected that a limited section of our company’s code repository has been subject to unauthorized access. We have activated our security protocols and are conducting an exhaustive investigation.
According to our current analysis, the information that was accessed was limited to certain source code and project-related documentation for a very limited number of clients. To date, we have not found any evidence that other areas of our infrastructure systems or those of our clients were affected.
We are taking strict measures to prevent further incidents.”
Earlier in March, seven alleged members of the group, reportedly aged 16 to 21, were arrested in London, before being released pending further investigations. According to reports, the alleged ringleader of the group, a 16-year-old from Oxford, U.K., has also apparently been outed by rival hackers and researchers. “Our inquiries remain ongoing,” City of London police stated.
Security researchers have suggested other members of LAPSUS$ could be based out of South America.
The hacking scene’s newcomer is causing a lot of noise
LAPSUS$ has gained a reputation by injecting activity into the hacking scene in an extremely short span of time.
It’s understandable when an average user from home is subjected to a hack due to weak passwords, but we’re not talking about individuals here. LAPSUS$ has successfully infiltrated some of the largest corporations in history without the apparent need to resort to complicated and sophisticated hacking methods.
Moreover, hackers are now even exploiting weak passwords that make your PC’s own power supply vulnerable to a potential attack, which could lead to threat actors causing it to burn up and start a fire. With this in mind, be sure to strengthen your passwords.
LAPSUS$ has already leaked the source code for Microsoft’s Cortana and Bing search engine. That incident was preceded by a massive 1TB Nvidia hack. Other victims include Ubisoft, as well as the more recent cybersecurity breach of Okta, which prompted the latter to issue a statement acknowledging a mistake in how it reported the situation.
Facebook is pouring a lot of time and money into augmented reality, including building its own AR glasses with Ray-Ban. Right now, these gadgets can only record and share imagery, but what does the company think such devices will be used for in the future?
A new research project led by Facebook’s AI team suggests the scope of the company’s ambitions. It imagines AI systems that are constantly analyzing people’s lives using first-person video: recording what they see, do, and hear in order to help them with everyday tasks. Facebook’s researchers have outlined a series of skills it wants these systems to develop, including “episodic memory” (answering questions like “where did I leave my keys?”) and “audio-visual diarization” (remembering who said what when).
Right now, the tasks outlined above cannot be achieved reliably by any AI system, and Facebook stresses that this is a research project rather than a commercial development. However, it’s clear that the company sees functionality like this as the future of AR computing. “Definitely, thinking about augmented reality and what we’d like to be able to do with it, there’s possibilities down the road that we’d be leveraging this kind of research,” Facebook AI research scientist Kristen Grauman told The Verge.
Such ambitions have huge privacy implications. Privacy experts are already worried about how Facebook’s AR glasses allow wearers to covertly record members of the public. Such concerns will only be exacerbated if future versions of the hardware not only record footage, but analyze and transcribe it, turning wearers into walking surveillance machines.
The name of Facebook’s research project is Ego4D, which refers to the analysis of first-person, or “egocentric,” video. It consists of two major components: an open dataset of egocentric video and a series of benchmarks that Facebook thinks AI systems should be able to tackle in the future.
The dataset is the biggest of its kind ever created, and Facebook partnered with 13 universities around the world to collect the data. In total, some 3,205 hours of footage were recorded by 855 participants living in nine different countries. The universities, rather than Facebook, were responsible for collecting the data. Participants, some of whom were paid, wore GoPro cameras and AR glasses to record video of unscripted activity. This ranges from construction work to baking to playing with pets and socializing with friends. All footage was de-identified by the universities, which included blurring the faces of bystanders and removing any personally identifiable information.
Grauman says the dataset is the “first of its kind in both scale and diversity.” The nearest comparable project, she says, contains 100 hours of first-person footage shot entirely in kitchens. “We’ve opened up the eyes of these AI systems to more than just kitchens in the UK and Sicily, but [to footage from] Saudi Arabia, Tokyo, Los Angeles, and Colombia.”
The second component of Ego4D is a series of benchmarks, or tasks, that Facebook wants researchers around the world to try and solve using AI systems trained on its dataset. The company describes these as:
Episodic memory: What happened when? (e.g., “Where did I leave my keys?”)
Forecasting: What am I likely to do next? (e.g., “Wait, you’ve already added salt to this recipe”)
Hand and object manipulation: What am I doing? (e.g., “Teach me how to play the drums”)
Audio-visual diarization: Who said what when? (e.g., “What was the main topic during class?”)
Social interaction: Who is interacting with whom? (e.g., “Help me better hear the person talking to me at this noisy restaurant”)
Right now, AI systems would find tackling any of these problems incredibly difficult, but creating datasets and benchmarks are tried-and-tested methods to spur development in the field of AI.
Indeed, the creation of one particular dataset and an associated annual competition, known as ImageNet, is often credited with kickstarting the recent AI boom. The ImageNet dataset consists of pictures of a huge variety of objects which researchers trained AI systems to identify. In 2012, the winning entry in the competition used a particular method of deep learning to blast past rivals, inaugurating the current era of research.
Facebook is hoping its Ego4D project will have similar effects for the world of augmented reality. The company says systems trained on Ego4D might one day be used not only in wearable cameras but also in home assistant robots, which also rely on first-person cameras to navigate the world around them.
“The project has the chance to really catalyze work in this field in a way that hasn’t really been possible yet,” says Grauman. “To move our field from the ability to analyze piles of photos and videos that were human-taken with a very special purpose, to this fluid, ongoing first-person visual stream that AR systems, robots, need to understand in the context of ongoing activity.”
Although the tasks that Facebook outlines certainly seem practical, the company’s interest in this area will worry many. Facebook’s record on privacy is abysmal, spanning data leaks and a $5 billion fine from the FTC. It’s also been shown repeatedly that the company values growth and engagement above users’ well-being in many domains. With this in mind, it’s worrying that the benchmarks in this Ego4D project do not include prominent privacy safeguards. For example, the “audio-visual diarization” task (transcribing what different people say) never mentions removing data about people who don’t want to be recorded.
When asked about these issues, a spokesperson for Facebook told The Verge that it expected that privacy safeguards would be introduced further down the line. “We expect that to the extent companies use this dataset and benchmark to develop commercial applications, they will develop safeguards for such applications,” said the spokesperson. “For example, before AR glasses can enhance someone’s voice, there could be a protocol in place that they follow to ask someone else’s glasses for permission, or they could limit the range of the device so it can only pick up sounds from the people with whom I am already having a conversation or who are in my immediate vicinity.”
Facebook is testing a way to give its users more profiles per account, ostensibly to give users more opportunities for sharing posts and keeping up with the platform’s content.
On Thursday, Bloomberg reported that Meta (Facebook’s parent company) would begin experimenting with letting some Facebook users generate up to four other profiles in addition to their main account’s profile.
That’s right: The experimental functionality will let users have multiple profiles linked to a main account and those additional profiles aren’t required to have real names. Plus, each profile is expected to have its own separate feed. However, though each profile has its own feed, Bloomberg says that only one profile will “be able to comment or like another post.”
Interestingly, Bloomberg also reports that since all of a user’s profiles are still subject to Facebook’s rules and they’re all connected to a main Facebook account, then “rule violations on one profile will affect the others.” And TechCrunch reports that repeat violations committed with an additional profile can lead to the removal of the offending additional profile as well as all other connected profiles, which can include main accounts. So it seems that while this functionality may offer more ways to connect on Facebook, it could also come with more opportunities to lose access to those profiles.
But besides setting up a Finsta-like Facebook profile, what could these additional profiles be used for? TechCrunch’s reporting suggests that these extra profiles could be curated to focus on specific interests or hobbies. As in, each individual profile would be created to focus on a specific topic like music or food.
Digital Trends also contacted Meta regarding this experimental feature and received a response in which a company spokesperson confirmed that the feature exists and is being tested:
“To help people tailor their experience based on interests and relationships, we’re testing a way for people to have more than one profile tied to a single Facebook account. Anyone who uses Facebook must continue to follow our rules.”
The multiple-profiles functionality is expected to allow users to switch between profiles pretty easily. It’s also worth noting that those who elect to use this feature are still not allowed to use their extra profiles to impersonate others. Even though you can pick out any name you want for these profiles, those names are still subject to Facebook’s rules, which means they can’t contain any special characters or numbers, and they have to be unique names.
Lastly, if you plan on doing things like running a Facebook Page or using Facebook Dating, just know that you can’t use your additional profiles for these. You’ll need your main account’s profile for those features.
If you applied for financial aid through the Free Application for Federal Student Aid (FAFSA) in the US in early 2022, there’s a very good chance some personal information was provided to a platform that’s completely irrelevant to the process: Facebook.
A report from The Markup revealed that, as early as January 2022, the US Department of Education sent data from website visitors to Facebook, potentially including information submitted on forms like first and last name, country, phone number, and email address, via the “Meta Pixel” tracking pixel — even if the person didn’t have a Facebook account. The Markup also notes that this data collection began “even before the user logged in to studentaid.gov.”
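The mechanics behind this kind of leak are mundane: a page embeds a third-party “pixel” script, and when a visitor interacts with a form, the script fires off a request whose query string carries the captured fields. Here’s a hedged sketch of that pattern in Python — the endpoint, event name, and parameter names are all hypothetical stand-ins, not Meta’s actual Pixel API:

```python
from urllib.parse import urlencode, parse_qs, urlparse

def build_pixel_url(endpoint: str, pixel_id: str, form_fields: dict) -> str:
    # A tracking pixel typically encodes captured data as query
    # parameters on an innocuous-looking image/beacon request.
    # Parameter names here ("id", "ev", "fn", ...) are illustrative.
    params = {"id": pixel_id, "ev": "SubmitApplication", **form_fields}
    return endpoint + "?" + urlencode(params)

url = build_pixel_url(
    "https://tracker.example.com/tr",      # stand-in for a pixel endpoint
    "1234567890",
    {"fn": "Jane", "ln": "Doe", "email": "jane@example.com"},
)

# From the tracker's side, the visitor's data arrives in plain sight:
leaked = parse_qs(urlparse(url).query)
assert leaked["email"] == ["jane@example.com"]
```

Note that this happens in the visitor’s browser, before any login — which is why The Markup could observe data flowing even for visitors without Facebook accounts.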
When asked about this tracking, a spokesperson for the Department of Education initially denied that it was taking place, despite The Markup finding code that clearly indicates otherwise. Federal Student Aid COO Richard Cordray then fessed up, telling the publication that the data gathering was “part of a March 22 advertising campaign,” which had “inadvertently” sent the personal data to Facebook. The data-sharing feature was then turned off. Cordray also said the data “was automatically anonymized and neither FSA nor Facebook used any of it for any purpose,” without explaining how they were able to verify that.
On Wednesday, Facebook’s parent company, Meta, removed a deepfake video of Ukrainian President Volodymyr Zelenskyy issuing a statement that he never made, asking Ukrainians to “lay down arms.”
The deepfake appears to have been first broadcast on a Ukrainian news website for TV24 after an alleged hack, as first reported by Sky News on Wednesday. The video shows an edited Zelenskyy speaking behind a podium declaring that Ukraine has “decided to return Donbas” to Russia and that his nation’s war efforts had failed.
In the video, Zelenskyy’s head is comically larger than in real life and is more pixelated than his surrounding body. The fake voice is much deeper than his real voice as well.
Meta’s head of security policy, Nathaniel Gleicher, put out a tweet thread on Wednesday announcing that the video had been removed from the company’s platforms. “Earlier today, our teams identified and removed a deepfake video claiming to show President Zelensky issuing a statement he never did. It appeared on a reportedly compromised website and then started showing across the internet,” Gleicher said.
Earlier this month, the Ukrainian government issued a statement warning soldiers and civilians to take pause when they encounter videos of Zelenskyy online, especially if he announces a surrender to Russian invasion. In the statement, the Ukrainian Center for Strategic Communications said that the Russian government would likely use deepfakes to convince Ukrainians to surrender.
“Videos made through such technologies are almost impossible to distinguish from the real ones. Be aware – this is a fake! His goal is to disorient, sow panic, disbelieve citizens and incite our troops to retreat,” the statement said. “Rest assured – Ukraine will not capitulate!”
After the deepfake started to circulate across the internet, Zelenskyy posted a video to his official Instagram account debunking the video. “As for the latest childish provocation with advice to lay down arms, I only advise that the troops of the Russian Federation lay down their arms and return home,” he said. “We are at home and defending Ukraine.”
Facebook banned deepfakes and other manipulated videos from its platforms in 2020 ahead of the US presidential election. The policy includes content created by artificial intelligence or machine learning algorithms that could “likely mislead” users.