Bungie sues ‘Destiny 2’ YouTuber who issued almost 100 fake DMCA claims

In December of last year, a YouTuber by the name of Lord Nazo received copyright takedown notices from CSC Global — the brand protection vendor contracted by game creator Bungie — for uploading tracks from the original soundtrack of its game Destiny 2. While some content creators might remove the offending material or appeal the copyright notice, Nazo, whose real name is Nicholas Minor, allegedly made the ill-fated decision to impersonate CSC Global and issue dozens of fake DMCA notices to his fellow creators. As first spotted by The Game Post, Bungie is now suing him for a whopping $7.6 million.

“Ninety-six times, Minor sent DMCA takedown notices purportedly on behalf of Bungie, identifying himself as Bungie’s ‘Brand Protection’ vendor in order to have YouTube instruct innocent creators to delete their Destiny 2 videos or face copyright strikes,” the lawsuit claims, “disrupting Bungie’s community of players, streamers, and fans. And all the while, ‘Lord Nazo’ was taking part in the community discussion of ‘Bungie’s’ takedowns.” Bungie is seeking “damages and injunctive relief” that include $150,000 for each fraudulent copyright claim: a total penalty of $7,650,000, not including attorney’s fees.

The game developer is also accusing Minor of using one of his fake email aliases to send harassing emails to the actual CSC Global with subject lines such as “You’re in for it now” and “Better start running. The clock is ticking.” Minor also allegedly authored a “manifesto” that he sent to other members of the Destiny 2 community — again, under an email alias — in which he “took credit” for some of his activities. The recipients promptly forwarded the email to Bungie.

As detailed in the lawsuit, Minor appears to have done the bare minimum to cover his tracks: the first batch of fake DMCA notices used the same residential IP address he used to log in to both his Destiny and Destiny 2 accounts, the latter of which shared the same Lord Nazo username as his YouTube, Twitter and Reddit accounts. He only switched to a VPN on March 27th — following media coverage of the fake DMCA notices. Meanwhile, Minor allegedly continued to log in to his Destiny account from his original IP address until May.


Repost: Original Source and Author Link


Hacking group posted fake Ukrainian surrender messages, says Meta in new report

A Belarus-aligned hacking group has attempted to compromise the Facebook accounts of Ukrainian military personnel and posted videos from hacked accounts calling on the Ukrainian army to surrender, according to a new security report from Meta (the parent company of Facebook).

The hacking campaign, previously labeled “Ghostwriter” by security researchers, was carried out by a group known as UNC1151, which has been linked to the Belarusian government in research conducted by Mandiant. A February security update from Meta flagged activity from the Ghostwriter operation, but since that update, the company said that the group had attempted to compromise “dozens” more accounts, although it had only been successful in a handful of cases.

Where successful, the hackers behind Ghostwriter had been able to post videos that appeared to come from the compromised accounts, but Meta said that it had blocked these videos from being shared further.

The spreading of fake surrender messages has already been a tactic of hackers who compromised television networks in Ukraine and planted false reports of a Ukrainian surrender into the chyrons of live broadcast news. Though such statements can quickly be disproved, experts have suggested that their purpose is to erode Ukrainians’ trust in media overall.

The details of the latest Ghostwriter hacks were published in the first installment of Meta’s quarterly Adversarial Threat Report, a new offering from the company that builds on a similar report from December 2021 that detailed threats faced throughout that year. While Meta has previously published regular reports on coordinated inauthentic behavior on the platform, the scope of the new threat report is wider and encompasses espionage operations and other emerging threats like mass content reporting campaigns.

Besides the hacks against military personnel, the latest report also details a range of other actions conducted by pro-Russian threat actors, including covert influence campaigns against a variety of Ukrainian targets. In one case from the report, Meta alleges that a group linked to the Belarusian KGB attempted to organize a protest event against the Polish government in Warsaw, although the event and the account that created it were quickly taken offline.

Although foreign influence operations like these make up some of the most dramatic details of the report, Meta says that it has also seen an uptick in influence campaigns conducted domestically by repressive governments against their own citizens. In a conference call with reporters Wednesday, Meta’s president of global affairs, Nick Clegg, said that attacks on internet freedom had intensified sharply.

“While much of the public attention in recent years has been focused on foreign interference, domestic threats are on the rise globally,” Clegg said. “Just as in 2021, more than half the operations we disrupted in the first three months of this year targeted people in their own countries, including by hacking people’s accounts, running deceptive campaigns and falsely reporting content to Facebook to silence critics.”

Authoritarian regimes generally looked to control access to information in two ways, Clegg said: firstly by pushing propaganda through state-run media and influence campaigns, and secondly by trying to shut down the flow of credible alternative sources of information.

Per Meta’s report, the latter approach has also been used to restrict information about the Ukraine conflict, with the company removing a network of around 200 Russian-operated accounts that engaged in coordinated reporting of other users for fictitious violations, including hate speech, bullying, and inauthenticity, in an attempt to have them and their posts removed from Facebook.

Echoing an argument taken from Meta’s lobbying efforts, Clegg said that the threats outlined in the report showed “why we need to protect the open internet, not just against authoritarian regimes, but also against fragmentation from the lack of clear rules.”



Hackers hijacked the OpenSea Discord with a fake YouTube NFT scam

Around 4:30AM ET on Friday, the official Discord channel for OpenSea, the world’s largest NFT marketplace, joined the growing list of NFT communities that have exposed participants to phishing attacks.

In this case, a bot made a fake announcement that OpenSea was partnering with YouTube, followed by a few follow-up messages, enticing users to click a “YouTube Genesis Mint Pass” link to snag one of 100 free NFTs with “insane utility” before they were gone forever. Blockchain security tracking company PeckShield tagged the URL the attackers linked, “youtubenft[.]art,” as a phishing site, which is now unavailable.

While the messages and phishing site are already gone, one person who said they lost NFTs in the incident pointed to this address on the blockchain as belonging to the attacker, so we can see more information about what happened next. While that identity has been blocked on OpenSea’s site, viewing it on a competing NFT marketplace, Rarible, shows 13 NFTs were transferred to it from five sources around the time of the attack. They have since been reported on OpenSea for “suspicious activity” and, based on their prices when last sold, appear to be worth a little over $18,000.

The phishing message, as seen on Discord.
Image: Richard Lawler / Discord

A screenshot of the thief’s haul as seen on Rarible.
Image: Richard Lawler

This kind of intermediary attack, in which scammers exploit NFT traders looking to capitalize on “airdrops,” has become common for prominent Web3 organizations. It’s common for announcements to appear out of the blue, and the nature of the blockchain may give some users reasons to click first and consider the consequences later.

Beyond the desire to snag rare items, there’s the knowledge that waiting can make minting an NFT amid a rush much slower, more expensive, or even impossible (if you run out of funds during the process). And if a user has left any items or cryptocurrency in a hot wallet (one connected to the internet), then coughing up login details to a phisher could give them away in seconds.

In a statement to The Verge, OpenSea spokesperson Allie Mack confirmed the incident, saying, “Last night, an attacker was able to post malicious links in several of our Discord channels. We noticed the malicious links soon after they were posted and took immediate steps to remedy the situation, including removing the malicious bots and accounts. We also alerted our community via our Twitter support channel to not click any links in our Discord. We have not seen any new malicious posts since 4:30am ET.”

“We continue to actively investigate this attack, and will keep our community apprised of any relevant new information. Our preliminary analysis indicates that the attack had limited impact. We are currently aware of fewer than 10 impacted wallets and stolen items amounting to less than 10 ETH,” says Mack.

OpenSea has not made a statement about how the channel was hacked, but as we explained in December, one entry point for this style of attack is the webhook feature that organizations often use to let the bots in their channels make posts. If a hacker obtains a webhook URL or compromises the account of someone authorized to use it, they can send a message and/or URL that appears to come from an official source.
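The webhook vector is easy to sketch. A Discord webhook is just an HTTPS endpoint that accepts a JSON payload, and that payload can override the posting bot’s display name, which is why a leaked webhook URL alone is enough to publish official-looking announcements. A minimal illustration in Python, with a hypothetical URL and message (nothing here is specific to the OpenSea incident):

```python
import json
import urllib.request

def post_via_webhook(webhook_url, content, username):
    """Build the POST request Discord's webhook API expects.

    Anyone who holds the webhook URL can set `username` to any display
    name, which is why a leaked webhook lets an attacker post messages
    that appear to come from an official source.
    """
    payload = json.dumps({"content": content, "username": username}).encode("utf-8")
    return urllib.request.Request(
        webhook_url,
        data=payload,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

# Hypothetical URL -- a real one embeds a channel ID and a secret token.
req = post_via_webhook(
    "https://discord.com/api/webhooks/<id>/<token>",
    "Big announcement! Mint now: https://example.com",
    "Official Announcements",
)
```

Because the display name comes from the request body rather than from any authenticated account, rotating a leaked webhook URL is the only real remedy once it has been exposed.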

Recent attacks have included one that stole $800k worth of the blockchain trinkets from the “Rare Bears” Discord, and the Bored Ape Yacht Club announced its channel had been compromised on April 1st. On April 25th, the BAYC Instagram served as a conduit for a similar heist that snagged more than $1 million worth of NFTs just by sending out a phishing link.



Facebook took down a fake Swiss scientist account that was part of an international misinfo campaign

Buried deep within Facebook’s November report on Coordinated Inauthentic Behavior is a tale of international intrigue that seems more like a Netflix drama than an attempted disinformation campaign (although the way Netflix mines social media for ideas these days, maybe stay tuned). On July 24th, a Swiss biologist named Wilson Edwards claimed on Facebook and Twitter that the US was pressuring World Health Organization (WHO) scientists studying the origins of COVID-19.

His claims spread quickly on social media, as such claims are wont to do, and within a week’s time, the Global Times and People’s Daily, two state-run Chinese media outlets, were denouncing Wilson Edwards’ claims as “intimidation” by the US. Wilson Edwards created his Facebook account two days after China refused to accept a plan by the WHO for a second phase study into the origins of the coronavirus.

Have you guessed the plot twist yet? Turns out, according to the Swiss Embassy in Beijing, that there is no such Swiss citizen by the name Wilson Edwards. “If you exist, we would like to meet you! But it is more likely that this is a fake news, and we call on the Chinese press and netizens to take down the posts,” the embassy tweeted from its official account on August 10th.

Facebook investigated and removed the Wilson Edwards account the same day the Swiss embassy tweeted. Ben Nimmo, global IO threat intel lead (excellent title for our drama) at Facebook parent company Meta, writes that the Wilson Edwards account was part of a misinformation campaign that originated in China.

Faked profile picture of one of the fake accounts Meta says liked the post by “Wilson Edwards”
Photo: Meta

“In essence, this campaign was a hall of mirrors, endlessly reflecting a single fake persona,” Nimmo says. Meta’s investigation found that nearly the entire initial spread of the Wilson Edwards story on Facebook was inauthentic: “the work of a multi-pronged, largely unsuccessful influence operation,” which brought together hundreds of fake accounts as well as some authentic accounts that belonged to employees of “Chinese state infrastructure companies across four continents.”

Only a handful of real people engaged with Wilson Edwards, Meta says, despite the 524 Facebook accounts, 20 Facebook pages, four Facebook groups, and 86 Instagram accounts that the company has removed as part of its investigation. The scammers spent less than $5,000 on Facebook and Instagram ads as part of the campaign and used VPNs to conceal the accounts’ origins.

“This is consistent with what we’ve seen in our research of covert influence operations over the past four years: we haven’t seen successful IO campaigns built on fake engagement tactics,” Nimmo says. “Unlike elaborate fictitious personas that put work into building authentic communities to influence them, the content liked by these crude fake accounts would typically be only seen by their ‘fake friends.’” (And we all know what happens to sham friends.)

The cluster of fake accounts that Meta connected to the Wilson Edwards scheme, along with some people associated with the China-based information security firm Silence, has apparently made other attempts at influence operations, all unsuccessful, Meta says, and “typically small-scale and of negligible impact.”

It’s not the most exciting end to our story, but at least Wilson Edwards won’t try to catfish any other international health organizations. Now, if we could just get someone to rein in the tenacious people who keep calling about the car warranty I didn’t know I had…



The FBI’s email system was hacked to send out fake cybersecurity warnings

Hackers targeted the Federal Bureau of Investigation’s (FBI) email servers, sending out thousands of phony messages telling recipients they have become the victims of a “sophisticated chain attack,” as first reported by Bleeping Computer. The emails were initially uncovered by The Spamhaus Project, a nonprofit organization that investigates email spammers.

The emails claim that Vinny Troia was behind the fake attacks and also falsely state that Troia is associated with the infamous hacking group The Dark Overlord — the same bad actors who leaked the fifth season of Orange Is the New Black. In reality, Troia is a prominent cybersecurity researcher who runs two dark web security companies, NightLion and Shadowbyte.

As noted by Bleeping Computer, the hackers managed to send out emails to over 100,000 addresses, all of which were scraped from the American Registry for Internet Numbers (ARIN) database. A report by Bloomberg says that the hackers used the FBI’s public-facing email system, making the emails seem all the more legitimate. Cybersecurity researcher Kevin Beaumont also attests to the emails’ legitimate appearance, stating that the headers are authenticated as coming from FBI servers under the DomainKeys Identified Mail (DKIM) process, part of the system Gmail uses to stick brand logos on verified corporate emails.
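The DKIM detail is worth unpacking: a signed message carries a DKIM-Signature header whose d= tag names the domain taking responsibility for it. A minimal sketch of reading that tag with Python’s standard email library (the raw message below is invented for illustration; real verification also checks the cryptographic b= signature against a key published in DNS):

```python
import email
from email import policy

# A simplified, invented raw message; header values are illustrative.
RAW = (
    "From: no-reply@fbi.gov\n"
    "Subject: Urgent security notification\n"
    "DKIM-Signature: v=1; a=rsa-sha256; d=fbi.gov; s=news; h=from:subject; b=abc123\n"
    "\n"
    "This is the message body.\n"
)

msg = email.message_from_string(RAW, policy=policy.default)

def signing_domain(message):
    """Return the d= tag of the first DKIM-Signature header, or None.

    Full DKIM verification also fetches the signer's public key over DNS
    and validates the b= signature; this only reports which domain
    claims responsibility for the message.
    """
    sig = message.get("DKIM-Signature")
    if sig is None:
        return None
    for tag in str(sig).split(";"):
        name, _, value = tag.strip().partition("=")
        if name == "d":
            return value.strip()
    return None

print(signing_domain(msg))  # fbi.gov
```

A d= value pointing at FBI infrastructure is what made these messages pass authentication checks at receiving providers: the signature genuinely came from the FBI’s own servers.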

The FBI responded to the incident in a press release, noting that it’s an “ongoing situation” and that “the impacted hardware was taken offline.” Aside from that, the FBI says it doesn’t have any more information it can share at this time.

According to Bleeping Computer, the spam campaign was likely carried out as an attempt to defame Troia. In a tweet, Troia speculates that an individual who goes by the name “Pompompurin” may have launched the attack. As Bleeping Computer notes, that same person has allegedly tried damaging Troia’s reputation in similar ways in the past.

A report by computer security reporter Brian Krebs also connects Pompompurin to the incident — the individual allegedly messaged him from an FBI email address when the attacks were launched, stating, “Hi its pompompurin. Check headers of this email it’s actually coming from FBI server.” KrebsOnSecurity even got a chance to speak with Pompompurin, who claims that the hack was meant to highlight the security vulnerabilities within the FBI’s email systems.

“I could’ve 1000 percent used this to send more legit looking emails, trick companies into handing over data etc.,” Pompompurin said in a statement to KrebsOnSecurity. The individual also told the outlet that they exploited a security gap on the FBI’s Law Enforcement Enterprise (LEEP) portal and managed to sign up for an account using a one-time password embedded in the page’s HTML. From there, Pompompurin claims they were able to manipulate the sender’s address and email body, executing the massive spam campaign.
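The flaw Pompompurin describes, a one-time passcode delivered to the browser inside the page’s HTML, is a classic mistake: anything sent in the markup can be read by whoever views the page source, so it can never serve as a secret. A hypothetical sketch (the markup and field names are invented, not taken from the actual LEEP portal):

```python
from html.parser import HTMLParser

# Invented page source; the real LEEP markup was not published.
PAGE = '''
<form action="/confirm" method="post">
  <input type="text" name="otp_entry">
  <!-- The secret the user is asked to retype is already here: -->
  <input type="hidden" name="expected_otp" value="849267">
</form>
'''

class HiddenFieldFinder(HTMLParser):
    """Collect hidden <input> values. Anything collected here is visible
    to anyone who views the page source, so it cannot act as a secret."""
    def __init__(self):
        super().__init__()
        self.hidden = {}

    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        if tag == "input" and a.get("type") == "hidden":
            self.hidden[a.get("name")] = a.get("value")

finder = HiddenFieldFinder()
finder.feed(PAGE)
print(finder.hidden)  # the "one-time" code, readable without logging in
```

The fix is equally classic: generate and check the code server-side, and send the client only the challenge, never the expected answer.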

With that kind of access, the attack could’ve been much worse than a false alert that put system administrators on high alert. Earlier this month, the Biden administration issued a directive requiring civilian federal agencies to patch known, actively exploited vulnerabilities. In May, President Joe Biden signed an executive order that aims to improve the nation’s cyber defenses in the wake of detrimental attacks on the Colonial Pipeline and SolarWinds.



Scammers use Google Ads to siphon off hundreds of thousands of dollars from fake crypto wallets

The crypto world is full of dangers, with scammers lying in wait for newbies and novices. A recent report from security outfit Check Point Research highlights a potent form of attack: using Google Ads to direct users to fake crypto wallets. In its report, CPR said it has seen roughly half a million dollars siphoned off through these methods in just the last few days.

Here’s how the scam works. An attacker buys Google ads that appear in response to searches for popular crypto wallets (the software used to store cryptocurrency, NFTs, and the like). CPR says it’s noticed scams targeting Phantom and MetaMask, the most popular wallets for the Solana and Ethereum ecosystems.

When an unsuspecting user Googles “phantom,” the ad (which appears above the actual search results) directs them to a phishing website that looks like the real thing. Then one of two things happens: either the user enters their credentials, which the attacker keeps, or, much weirder, if they try to create a new wallet, they’re told to use a recovery password that actually logs them into a wallet controlled by the attacker, not their own. “This means if they transfer any funds, the attacker will get that immediately,” says CPR.

The attackers use fake URLs to trick users into thinking they’re logging into their crypto wallets.
Image: CPR

As with other phishing scams, the fake sites are designed to look as similar as possible to the real ones.
Image: CPR

As with phishing scams more generally, the attackers rely on making their fake log-in pages look as much as possible like the real thing. CPR notes that attackers also register lookalike URLs, slight misspellings of the legitimate wallet’s domain, to trick users. The group has also seen similar phishing scams used to direct users to fake versions of cryptocurrency exchanges, including PancakeSwap and UniSwap.

CPR’s researchers say they started noticing these scams after seeing crypto users complain about their losses on Reddit and other forums. They estimate that “at least half a million dollars” have been stolen over the past few days.

“I believe we’re at the advent of a new cyber crime trend, where scammers will use Google Search as a primary attack vector to reach crypto wallets, instead of traditionally phishing through email,” said CPR’s Oded Vanunu in a press statement. “The phishing websites where victims were directed to reflected meticulous copying and imitation of wallet brand messaging. And what’s most alarming is that multiple scammer groups are bidding for keywords on Google Ads, which is likely a signal of the success of these new phishing campaigns that are geared to heist crypto wallets.”

The group offers a few words of wisdom for users hoping to avoid these pitfalls: never click on Google Ads results, instead relying on the organic search results below them, and always check the URL of the site you’re visiting.
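That last tip can be made mechanical: rather than eyeballing an address, compare its exact hostname against an allowlist, since lookalike domains typically differ by a single character or a swapped TLD. A small sketch (the domain names are illustrative, not taken from CPR’s report):

```python
from urllib.parse import urlsplit

# Exact allowlist; hostname comparison must be exact, not "looks similar".
TRUSTED_HOSTS = {"phantom.app", "metamask.io"}

def is_trusted(url):
    """True only if the URL's hostname exactly matches a trusted domain.

    Lookalike phishing domains usually differ by one character or a
    swapped TLD, so substring or visual checks are not enough.
    """
    host = urlsplit(url).hostname or ""
    return host.lower() in TRUSTED_HOSTS

print(is_trusted("https://phantom.app/download"))       # True
print(is_trusted("https://phanton.app/download"))       # False: lookalike
print(is_trusted("https://phantom.app.evil.example/"))  # False: subdomain trick
```

Note the third case: a trusted name appearing as a *prefix* of the hostname proves nothing, which is why exact matching beats any “does it contain the brand name” heuristic.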



Amazon boots another tech brand to the curb seemingly over fake reviews

Anyone who frequently shops on Amazon knows that a wide range of companies slip little cards into the package promising a gift card, sometimes worth more than the product itself, to users who leave five-star reviews. To say it’s disconcerting to buy a product based on a horde of glowing reviews, only to discover they were paid for, would be an understatement. As a result, Amazon has been relentlessly stomping out brands, many of them from China, that pay for reviews, and another major company has now been delisted.

The latest tech company to get the boot from Amazon is Choetech, a Chinese tech accessory brand that appears to have been completely delisted from the platform. While the exact reason for the company’s removal is unknown, it has likely been caught in the crackdown on paid reviews. Other major tech firms, including Aukey, Ravpower and Mpow, were removed from Amazon in recent months.

Amazon has very strict guidelines for product reviews, prohibiting sellers from posting reviews of their own products, paying for reviews, or offering money or gift cards to incentivize users to post positive reviews. Amazon has always been clear about its zero-tolerance policy for violations of those guidelines.

Amazon’s guidelines state that companies caught running afoul of them will see their products immediately and permanently removed from the platform. Removing the products of companies that pay for fake reviews is a clear win for consumers.

It’s good to see Amazon cracking down hard on companies that aren’t playing by the rules. I frequently shop from Amazon, and the number of products that come with cards offering gift certificates for 5-star reviews on Amazon is staggering. The fake reviews certainly influence my purchase decisions.



Fake science is getting faker — thanks, AI

The practice of science involves trying to find things out about the world by using rigorous logic and testing every assumption. Researchers then write up any important findings in papers and submit them for possible publication. After a peer-review process, in which other scientists check that the research is sound, journals publish papers for public consumption.

You might therefore reasonably believe that published papers are quite reliable and meet high-quality standards. You might expect small mistakes that got overlooked during peer review, but no major blunders. It’s science, after all!

You’d be wrong in expecting this, though. Real and good science does exist, but there’s a worrying amount of bogus research out there, too. And in the last few years, it has increased in volume at lightning speed, as evidenced by the skyrocketing number of paper retractions.

Fake science

A number of practices currently threaten to undermine the legitimacy of scientific research. They include made-up authors, the addition of scientists who had nothing to do with a paper as co-authors, and even more nefarious practices like swamping journals with low-quality, AI-written submissions.

Retraction works much like a recall at the grocery store. If a previously sold product turns out to be bad or dangerous, the store might recall it and ask all customers not to use it. Similarly, a journal can retract a published paper that, in hindsight, turned out to be bogus.

Of course, sometimes papers get retracted because the authors made an honest mistake in their research. In more than half the cases, however, it’s because of academic misconduct or fraud. Up until a decade ago, this sort of behavior was more or less limited to researchers falsifying experimental data or skewing results to favor their theory. As technology has grown more sophisticated, however, things have gotten a lot more complicated.

One simple solution would be to just ignore bogus papers. The problem, though, is that they’re often hard to identify. Moreover, every retraction tarnishes the journal that published the paper a bit, and if this happens often enough, the public’s confidence in science as a whole goes down. The scientific community therefore needs to take this problem seriously.

Camille Noûs

Some of the problem is analog. Camille Noûs doesn’t have much to do with AI, but it deserves a mention nevertheless. Born in March 2020, Noûs has already co-authored more than 180 papers in fields as diverse as astrophysics, computer science and biology.

I’m saying “it” because Noûs is not a real person; rather, it’s an artifact invented by the French research advocacy group RogueESR. It carries the gender-neutral French first name Camille, while its surname conflates the ancient Greek word “νοῦς,” meaning reason or cognition, with the French word “nous,” meaning “us.”

Noûs was created in response to a heavily criticized new law (source in French) reorganizing academic research in France. Although the law’s objective was to make research better, critics argue that it leaves scientists’ jobs unfairly precarious and dependent on external funding. In particular, the funding a scientist receives depends on their own previous achievements, even though research is often a community effort.

To make this concern visible, many researchers chose to add Noûs as a co-author. The journals and peer reviewers who were in charge of checking those papers weren’t always informed, however, that Noûs isn’t a real person.

Although the research portion of all these papers so far seems legitimate, it’s cause for concern that one can so easily add a co-author that doesn’t even have an ID card. Although highlighting communal efforts with authors like Noûs is an honorable goal, the idea that scientists can be invented out of thin air in this day and age is quite alarming.
