In December of last year, a YouTuber by the name of Lord Nazo received copyright takedown notices from CSC Global — the brand protection vendor contracted by game creator Bungie — for uploading tracks from the Destiny 2 original soundtrack. While some content creators might remove the offending material or appeal the notice, Nazo, whose real name is Nicholas Minor, allegedly made the ill-fated decision to impersonate CSC Global and issue dozens of fake DMCA notices to his fellow creators. As first spotted by The Game Post, Bungie is now suing him for a whopping $7.6 million.
“Ninety-six times, Minor sent DMCA takedown notices purportedly on behalf of Bungie, identifying himself as Bungie’s ‘Brand Protection’ vendor in order to have YouTube instruct innocent creators to delete their Destiny 2 videos or face copyright strikes,” the lawsuit claims, “disrupting Bungie’s community of players, streamers, and fans. And all the while, ‘Lord Nazo’ was taking part in the community discussion of ‘Bungie’s’ takedowns.” Bungie is seeking “damages and injunctive relief” that include $150,000 for each fraudulent copyright claim: a total penalty of $7,650,000, not including attorney’s fees.
The game developer is also accusing Minor of using one of his fake email aliases to send harassing emails to the actual CSC Global with the subject lines such as “You’re in for it now” and “Better start running. The clock is ticking.” Minor also allegedly authored a “manifesto” that he sent to other members of the Destiny 2 community — again, under an email alias — in which he “took credit” for some of his activities. The recipients promptly forwarded the email to Bungie.
As detailed in the lawsuit, Minor appears to have done the bare minimum to cover his tracks: the first batch of fake DMCA notices came from the same residential IP address he used to log in to both his Destiny and Destiny 2 accounts, the latter of which shared the same Lord Nazo username as his YouTube, Twitter, and Reddit accounts. He only switched to a VPN on March 27th, following media coverage of the fake DMCA notices. Meanwhile, Minor allegedly continued to log in to his Destiny account from his original IP address until May.
A Belarus-aligned hacking group has attempted to compromise the Facebook accounts of Ukrainian military personnel and posted videos from hacked accounts calling on the Ukrainian army to surrender, according to a new security report from Meta (the parent company of Facebook).
The hacking campaign, previously labeled “Ghostwriter” by security researchers, was carried out by a group known as UNC1151, which has been linked to the Belarusian government in research conducted by Mandiant. A February security update from Meta flagged activity from the Ghostwriter operation, but since that update, the company said that the group had attempted to compromise “dozens” more accounts, although it had only been successful in a handful of cases.
Where successful, the hackers behind Ghostwriter had been able to post videos that appeared to come from the compromised accounts, but Meta said that it had blocked these videos from being shared further.
The spreading of fake surrender messages has already been a tactic of hackers who compromised television networks in Ukraine and planted false reports of a Ukrainian surrender into the chyrons of live broadcast news. Though such statements can quickly be disproved, experts have suggested that their purpose is to erode Ukrainians’ trust in media overall.
The details of the latest Ghostwriter hacks were published in the first installment of Meta’s quarterly Adversarial Threat Report, a new offering from the company that builds on a similar report from December 2021 that detailed threats faced throughout that year. While Meta has previously published regular reports on coordinated inauthentic behavior on the platform, the scope of the new threat report is wider and encompasses espionage operations and other emerging threats like mass content reporting campaigns.
Besides the hacks against military personnel, the latest report also details a range of other actions conducted by pro-Russian threat actors, including covert influence campaigns against a variety of Ukrainian targets. In one case from the report, Meta alleges that a group linked to the Belarusian KGB attempted to organize a protest event against the Polish government in Warsaw, although the event and the account that created it were quickly taken offline.
Although foreign influence operations like these make up some of the most dramatic details of the report, Meta says it has also seen an uptick in influence campaigns conducted domestically by repressive governments against their own citizens. In a conference call with reporters Wednesday, Meta’s president of global affairs, Nick Clegg, said that attacks on internet freedom had intensified sharply.
“While much of the public attention in recent years has been focused on foreign interference, domestic threats are on the rise globally,” Clegg said. “Just as in 2021, more than half the operations we disrupted in the first three months of this year targeted people in their own countries, including by hacking people’s accounts, running deceptive campaigns and falsely reporting content to Facebook to silence critics.”
Authoritarian regimes generally look to control access to information in two ways, Clegg said: first by pushing propaganda through state-run media and influence campaigns, and second by trying to shut down the flow of credible alternative sources of information.
Per Meta’s report, the latter approach has also been used to restrict information about the Ukraine conflict, with the company removing a network of around 200 Russian-operated accounts that engaged in coordinated reporting of other users for fictitious violations, including hate speech, bullying, and inauthenticity, in an attempt to have them and their posts removed from Facebook.
Echoing an argument taken from Meta’s lobbying efforts, Clegg said that the threats outlined in the report showed “why we need to protect the open internet, not just against authoritarian regimes, but also against fragmentation from the lack of clear rules.”
Around 4:30AM ET on Friday, the official Discord channel for OpenSea, the world’s largest NFT marketplace, joined the growing list of NFT communities that have exposed participants to phishing attacks.
In this case, a bot made a fake announcement that OpenSea was partnering with YouTube, enticing users to click a “YouTube Genesis Mint Pass” link to snag one of 100 free NFTs with “insane utility” before they’d be gone forever, then followed up with a few more messages. Blockchain security tracking company PeckShield tagged the linked URL, “youtubenft[.]art”, as a phishing site; it is now unavailable.
While the messages and phishing site are already gone, one person who said they lost NFTs in the incident pointed to this address on the blockchain as belonging to the attacker, so we can see more information about what happened next. While that identity has been blocked on OpenSea’s site, viewing it via Etherscan.io or a competing NFT marketplace, Rarible, shows 13 NFTs were transferred to it from five sources around the time of the attack. They’re now also reported on OpenSea for “suspicious activity” and, based on their prices when last sold, appear to be worth a little over $18,000.
This kind of intermediary attack, in which scammers exploit NFT traders looking to capitalize on “airdrops,” has become common for prominent Web3 organizations. Such announcements often appear out of the blue, and the nature of the blockchain may give some users reason to click first and consider the consequences later.
Beyond the desire to snag rare items, there’s the knowledge that waiting can make minting an NFT amid a rush much slower, more expensive, or even impossible (if you run out of funds during the process). And if users have left any items or cryptocurrency in a hot wallet that’s connected to the internet, coughing up login details to a phisher could give those assets away in seconds.
In a statement to The Verge, OpenSea spokesperson Allie Mack confirmed the incident, saying, “Last night, an attacker was able to post malicious links in several of our Discord channels. We noticed the malicious links soon after they were posted and took immediate steps to remedy the situation, including removing the malicious bots and accounts. We also alerted our community via our Twitter support channel to not click any links in our Discord. We have not seen any new malicious posts since 4:30am ET.”
“We continue to actively investigate this attack, and will keep our community apprised of any relevant new information. Our preliminary analysis indicates that the attack had limited impact. We are currently aware of fewer than 10 impacted wallets and stolen items amounting to less than 10 ETH,” says Mack.
OpenSea has not said how the channel was hacked, but as we explained in December, one entry point for this style of attack is the webhooks feature that organizations often use to let bots post in their channels. If a hacker obtains a webhook URL or compromises the account of someone authorized to use it, they can send a message and/or URL that appears to come from an official source.
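To illustrate why a leaked webhook is such an effective entry point, here’s a minimal sketch. The webhook URL below is a made-up placeholder (not a real endpoint), and the actual network send is left commented out; the point is that anyone holding the URL can post a message that renders under the server’s bot identity, with no account login required.

```python
import json
import urllib.request

# Placeholder, not a real webhook -- Discord webhook URLs embed an ID
# and a secret token, and the token alone authorizes posting.
WEBHOOK_URL = "https://discord.com/api/webhooks/<id>/<token>"

def build_payload(content, username=None):
    """Build the JSON body a Discord-style webhook endpoint accepts."""
    body = {"content": content}
    if username:
        # Webhooks can override the display name per message,
        # which is part of what makes impersonation so easy.
        body["username"] = username
    return json.dumps(body).encode("utf-8")

def post_to_webhook(url, payload):
    """Prepare (but don't send) the POST request, for illustration."""
    return urllib.request.Request(
        url,
        data=payload,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = post_to_webhook(WEBHOOK_URL, build_payload("Big announcement!", "OpenSea"))
# urllib.request.urlopen(req)  # intentionally not executed here
```

This is why webhook URLs are effectively credentials: rotating a leaked one is the standard remediation, much like resetting a compromised password.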
Recent attacks have included one that stole $800K worth of blockchain trinkets from the “Rare Bears” Discord, and the Bored Ape Yacht Club announced its channel had been compromised on April 1st. On April 25th, the BAYC Instagram served as a conduit for a similar heist that snagged more than $1 million worth of NFTs simply by sending out a phishing link.
Buried deep within Facebook’s November report on Coordinated Inauthentic Behavior is a tale of international intrigue that seems more like a Netflix drama than an attempted disinformation campaign (although the way Netflix mines social media for ideas these days, maybe stay tuned). On July 24th, a Swiss biologist named Wilson Edwards claimed on Facebook and Twitter that the US was pressuring World Health Organization (WHO) scientists studying the origins of COVID-19.
His claims spread quickly on social media, as such claims are wont to do, and within a week’s time, the Global Times and People’s Daily, two state-run Chinese media outlets, were denouncing Wilson Edwards’ claims as “intimidation” by the US. Wilson Edwards created his Facebook account two days after China refused to accept a plan by the WHO for a second phase study into the origins of the coronavirus.
Have you guessed the plot twist yet? Turns out, according to the Swiss Embassy in Beijing, that there is no such Swiss citizen by the name Wilson Edwards. “If you exist, we would like to meet you! But it is more likely that this is a fake news, and we call on the Chinese press and netizens to take down the posts,” the embassy tweeted from its official account on August 10th.
Facebook investigated and removed the Wilson Edwards account the same day the Swiss embassy tweeted. Ben Nimmo, global IO threat intel lead (excellent title for our drama) at Facebook parent company Meta, writes that the Wilson Edwards account was part of a misinformation campaign that originated in China.
“In essence, this campaign was a hall of mirrors, endlessly reflecting a single fake persona,” Nimmo says. Meta’s investigation found that nearly the entire initial spread of the Wilson Edwards story on Facebook was inauthentic: “the work of a multi-pronged, largely unsuccessful influence operation,” which brought together hundreds of fake accounts as well as some authentic accounts that belonged to employees of “Chinese state infrastructure companies across four continents.”
Only a handful of real people engaged with Wilson Edwards, Meta says, despite the 524 Facebook accounts, 20 Facebook pages, four Facebook groups, and 86 Instagram accounts that the company has removed as part of its investigation. The scammers spent less than $5,000 on Facebook and Instagram ads as part of the campaign and used VPNs to conceal the accounts’ origins.
“This is consistent with what we’ve seen in our research of covert influence operations over the past four years: we haven’t seen successful IO campaigns built on fake engagement tactics,” Nimmo says. “Unlike elaborate fictitious personas that put work into building authentic communities to influence them, the content liked by these crude fake accounts would typically be only seen by their ‘fake friends.’” (And we all know what happens to sham friends.)
The cluster of fake accounts that Meta connected to the Wilson Edwards scheme, along with some people associated with the Chinese information security firm Silence, has apparently made other attempts at influence operations, which Meta says were unsuccessful and “typically small-scale and of negligible impact.”
It’s not the most exciting end to our story, but at least Wilson Edwards won’t try to catfish any other international health organizations. Now, if we could just get someone to rein in the tenacious people who keep calling about the car warranty I didn’t know I had…
Hackers targeted the Federal Bureau of Investigation’s (FBI) email servers, sending out thousands of phony messages telling recipients they had become the victims of a “sophisticated chain attack,” as first reported by Bleeping Computer. The emails were initially uncovered by The Spamhaus Project, a nonprofit organization that investigates email spammers.
The emails claim that Vinny Troia was behind the fake attacks and also falsely state that Troia is associated with the infamous hacking group, The Dark Overlord — the same bad actors who leaked the fifth season of Orange Is the New Black. In reality, Troia is a prominent cybersecurity researcher who runs two dark web security companies, NightLion and Shadowbyte.
As noted by Bleeping Computer, the hackers managed to send out emails to over 100,000 addresses, all of which were scraped from the American Registry for Internet Numbers (ARIN) database. A report by Bloomberg says the hackers used the FBI’s public-facing email system, making the emails seem all the more legitimate. Cybersecurity researcher Kevin Beaumont also attests to the emails’ legitimate appearance, stating that the headers are authenticated as coming from FBI servers via DomainKeys Identified Mail (DKIM), the same mechanism Gmail relies on as part of its system for sticking brand logos on verified corporate emails.
The email was sent from these FBI internal servers, per the headers (which validate with DKIM).
dap00025.str0.eims.cjis – 10.67.35.50
dap00040.str0.eims.cjis – 10.66.2.72
Before anybody runs off the Russia cliff, I would check webapps.
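The header check Beaumont describes can be approximated in a few lines. The sketch below parses a fabricated sample message (not one of the actual FBI emails) and pulls the `d=` tag out of its DKIM-Signature header, which names the domain whose key signed the mail. Note this only reads the claimed signing domain; actually verifying the signature requires fetching the selector’s public key over DNS, which a library such as the third-party dkimpy package handles.

```python
import email
from email import policy

# Fabricated sample message for illustration only.
RAW_MESSAGE = """\
From: eims@ic.fbi.gov
To: sysadmin@example.com
Subject: Urgent: Threat actor in systems
DKIM-Signature: v=1; a=rsa-sha256; d=fbi.gov; s=selector1;
 h=from:to:subject; bh=abc123=; b=def456=

Fake alert text goes here.
"""

def dkim_signing_domain(raw):
    """Return the d= (signing domain) tag of the first DKIM-Signature
    header, or None if the message carries no signature. This does NOT
    validate the signature -- it only reports who claims to have signed."""
    msg = email.message_from_string(raw, policy=policy.default)
    sig = msg.get("DKIM-Signature")
    if sig is None:
        return None
    # DKIM tags are semicolon-separated key=value pairs.
    tags = dict(
        part.strip().split("=", 1)
        for part in str(sig).replace("\n", "").split(";")
        if "=" in part
    )
    return tags.get("d")

print(dkim_signing_domain(RAW_MESSAGE))  # prints: fbi.gov
```

A `d=fbi.gov` tag that validates against fbi.gov’s published key is exactly why these spam emails sailed past filters: the signature was genuine, even though the content was not.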
The FBI responded to the incident in a press release, noting that it’s an “ongoing situation” and that “the impacted hardware was taken offline.” Aside from that, the FBI says it doesn’t have any more information it can share at this time.
According to Bleeping Computer, the spam campaign was likely carried out as an attempt to defame Troia. In a tweet, Troia speculated that an individual who goes by the name “Pompompurin” may have launched the attack. As Bleeping Computer notes, that same person has allegedly tried to damage Troia’s reputation in similar ways in the past.
A report by computer security reporter Brian Krebs also connects Pompompurin to the incident — the individual allegedly messaged him from an FBI email address when the attacks were launched, stating, “Hi its pompompurin. Check headers of this email it’s actually coming from FBI server.” KrebsOnSecurity even got a chance to speak with Pompompurin, who claims that the hack was meant to highlight the security vulnerabilities within the FBI’s email systems.
“I could’ve 1000 percent used this to send more legit looking emails, trick companies into handing over data etc.,” Pompompurin said in a statement to KrebsOnSecurity. The individual also told the outlet that they exploited a security gap in the FBI’s Law Enforcement Enterprise Portal (LEEP) and managed to sign up for an account using a one-time password embedded in the page’s HTML. From there, Pompompurin claims, they were able to manipulate the sender’s address and email body, executing the massive spam campaign.
With that kind of access, the attack could’ve been much worse than a false alert that put system administrators on edge. Earlier this month, the Biden administration issued a directive requiring civilian federal agencies to patch known exploited vulnerabilities. In May, President Joe Biden signed an executive order that aims to improve the nation’s cyber defenses in the wake of the damaging attacks on Colonial Pipeline and SolarWinds.
The crypto world is full of dangers, with scammers lying in wait for newbies and novices. A recent report from security outfit Check Point Research highlights a potent form of attack: using Google Ads to direct users to fake crypto wallets. In its report, CPR said it has seen roughly half a million dollars siphoned off through these methods in just the last few days.
Here’s how the scam works. An attacker buys Google ads targeting searches for popular crypto wallets (the software used to store cryptocurrency, NFTs, and the like). CPR says it has noticed scams targeting the Phantom and MetaMask wallets, the most popular wallets for the Solana and Ethereum ecosystems, respectively.
When an unsuspecting user Googles “phantom,” the ad (which appears above the actual search results) directs them to a phishing website that looks like the real thing. Then one of two things happens: either the user enters their credentials, which the attacker keeps, or, much weirder, if they try to create a new wallet, they’re told to use a recovery password that actually logs them into a wallet controlled by the attacker rather than their own. “This means if they transfer any funds, the attacker will get that immediately,” says CPR.
As with phishing scams more generally, the attackers rely on making their fake login pages look as much as possible like the real thing. CPR notes that attackers use lookalike URLs to trick users, directing them to phanton.app or phantonn.app, for example, instead of the correct phantom.app. The group has also seen similar phishing scams direct users to fake cryptocurrency exchange pages, including ones imitating PancakeSwap and UniSwap.
CPR’s researchers say they started noticing these scams after seeing crypto users complain about their losses on Reddit and other forums. They estimate that “at least half a million dollars” have been stolen over the past few days.
“I believe we’re at the advent of a new cyber crime trend, where scammers will use Google Search as a primary attack vector to reach crypto wallets, instead of traditionally phishing through email,” said CPR’s Oded Vanunu in a press statement. “The phishing websites where victims were directed to reflected meticulous copying and imitation of wallet brand messaging. And what’s most alarming is that multiple scammer groups are bidding for keywords on Google Ads, which is likely a signal of the success of these new phishing campaigns that are geared to heist crypto wallets.”
The group offers a few words of wisdom for users hoping to avoid these pitfalls: skip the Google Ads results and look at the organic search results below them instead, and always check the URL of the site you’re visiting.
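CPR’s advice to check the URL can be partly automated. Here’s a rough sketch (the allowlist of wallet domains is illustrative, and the 0.85 similarity threshold is an arbitrary choice) that flags domains suspiciously close to, but not equal to, a known wallet domain — the exact pattern of phanton.app versus phantom.app:

```python
from difflib import SequenceMatcher

# Illustrative allowlist; in practice you'd maintain the real list
# of domains for the wallets you actually use.
KNOWN_WALLET_DOMAINS = {"phantom.app", "metamask.io"}

def lookalike_of(domain, known=KNOWN_WALLET_DOMAINS, threshold=0.85):
    """Return the known domain this one suspiciously resembles,
    or None if it's an exact match or clearly unrelated."""
    if domain in known:
        return None  # exact match: the real site
    for real in known:
        # Ratio near 1.0 means "almost identical string" -- the
        # signature of a typosquat like phanton.app.
        if SequenceMatcher(None, domain, real).ratio() >= threshold:
            return real
    return None

print(lookalike_of("phanton.app"))   # flags resemblance to phantom.app
print(lookalike_of("phantom.app"))   # exact match, so no warning
```

A simple character-similarity check like this won’t catch every trick (homoglyphs in internationalized domains, for instance, need their own handling), but it catches the one-letter swaps this campaign relied on.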
Anyone who frequently shops on Amazon knows that a wide range of companies slip little cards into the package promising a gift card, sometimes worth more than the product itself, to users who leave five-star reviews. To say it’s disconcerting to purchase a product based on a horde of high reviews and then find out they were paid for would be an understatement. As a result, Amazon has been relentlessly stomping out brands, many of them from China, that pay for reviews, and another major company has now been delisted.
The latest tech company to get the boot from Amazon is Choetech, a Chinese tech accessory brand that appears to have been completely delisted from the platform. While the exact reason for the removal is unknown, the company has likely been caught up in the crackdown on paid reviews. Other major tech firms, including Aukey, Ravpower, and Mpow, were removed from Amazon in the last few months.
Amazon has very strict guidelines for product reviews, prohibiting sellers from posting reviews of their own products, paying for reviews, or offering money and gift cards to incentivize users to post positive reviews. Amazon has always been clear about its zero-tolerance policy for violations of those guidelines.
Amazon’s guidelines state that companies caught running afoul of those guidelines will see their products immediately and permanently removed from the platform. Removing products from the store offered by companies paying for fake reviews is a clear win for consumers.
It’s good to see Amazon cracking down hard on companies that aren’t playing by the rules. I frequently shop from Amazon, and the number of products that come with cards offering gift certificates for 5-star reviews on Amazon is staggering. The fake reviews certainly influence my purchase decisions.
The practice of science involves trying to find things out about the world using rigorous logic and testing every assumption. Researchers then write up any important findings in papers and submit them for possible publication. After a peer-review process, in which other scientists check that the research is sound, journals publish the papers for public consumption.
You might therefore reasonably believe that published papers are quite reliable and meet high-quality standards. You might expect small mistakes that got overlooked during peer review, but no major blunders. It’s science, after all!
You’d be wrong in expecting this, though. Real and good science does exist, but there’s a worrying amount of bogus research out there, too. And in the last few years, it has increased in volume at lightning speed, as evidenced by the skyrocketing number of paper retractions.
A number of practices currently threaten to undermine the legitimacy of scientific research. They include made-up authors, the addition of scientists who had nothing to do with a paper as co-authors, and even more nefarious schemes like swamping journals with submissions of low-quality, AI-written junk.
This process is similar to a recall at the grocery store. If a previously sold product is bad or dangerous for some reason, the store might decide to recall it and ask all customers not to use it. Similarly, a journal can recall a published paper that, in hindsight, turned out to be bogus.
Of course, sometimes papers get retracted because the authors made an honest mistake in their research. In more than half the cases, however, the cause is academic misconduct or fraud. Until a decade ago, this sort of behavior was more or less limited to researchers falsifying experimental data or skewing results to favor their theory. As technology has grown more sophisticated, however, things have gotten a lot more complicated.
One simple solution would be to just ignore bogus papers. The problem, though, is that they’re often hard to identify. Also, once a paper is retracted from a publication, that tarnishes the entire journal a bit. Let this happen often enough, and the public’s confidence in science as a whole goes down. Therefore, the scientific community as a whole needs to take this problem seriously.
Some of the problem is analog. Camille Noûs doesn’t have much to do with AI, but it deserves a mention nevertheless. Born in March 2020, Noûs has already co-authored more than 180 papers in fields as diverse as astrophysics, computer science, and biology.
I’m saying “it” because Noûs is not a real person; rather, it’s an artifact invented by the French research advocacy group RogueESR. The name combines the gender-neutral French first name Camille with a conflation of the ancient Greek word “νοῦς,” meaning reason or cognition, and the French word “nous,” meaning “us.”
Noûs was created in response to a heavily criticized new law (source in French) to reorganize academic research in France. Although the law’s stated objective is to make research better, its critics think that under its requirements scientists’ jobs will be unfairly precarious and dependent on external funding. In particular, the funding a scientist gets depends on their own previous achievements, even though research is often a community effort.
To make this concern visible, many researchers chose to add Noûs as a co-author. The journals and peer reviewers who were in charge of checking those papers weren’t always informed, however, that Noûs isn’t a real person.
Although the research portion of all these papers so far seems legitimate, it’s cause for concern that one can so easily add a co-author that doesn’t even have an ID card. Although highlighting communal efforts with authors like Noûs is an honorable goal, the idea that scientists can be invented out of thin air in this day and age is quite alarming.
Adding authors where they don’t belong
Protest personas that highlight the flaws of the peer-review system aren’t the only place this problem manifests, though. Especially in papers about AI, cases of fake co-authorship have been mounting. This deception includes the practice of adding a high-profile scientist as a co-author without their knowledge or consent. Another variant is adding a fictitious co-author, kind of like Camille Noûs, but with the goal of feigning international collaboration or broader scientific discourse.
In addition to giving the illusion of international collaboration, adding fake authors with respectable credentials can lend a paper credibility. Many scientists will Google all the authors’ names before reading a paper or citing it in their work, and seeing a co-author from a prestigious institution may sway them to give the paper a closer look, especially if it hasn’t been peer-reviewed yet. The prestige of an institution can then function as a proxy for credibility until peer review, which can take many months, is completed.
It’s unclear how many fake authors have been added to date. For one thing, some scientists may choose to ignore the fact that their name is on a paper they didn’t write, especially since the content of the papers in question often isn’t terrible (though not great) and legal action can be too expensive and time-consuming. Moreover, no standard method currently exists to verify a scientist’s identity prior to publishing a paper. This gives fake authors a free pass.
All these issues show the necessity of some type of ID-verification process. Nothing formal is currently in place, and that’s a shame. In a day and age where every bank can verify your ID online and match it with the face on your webcam, science can’t even protect its most valuable contributors from scammers.
Algorithms are producing bad articles
In 1994, physicist Alan Sokal got the itch to write a bogus paper on a humanities-adjacent subject and submit it to a journal. It got accepted, although no one, including the author himself, understood what it was saying. Not only is this ridiculous, but it also goes to show how lazy peer reviewers can get: in this case, they accepted what was essentially an article of gibberish.
Along similar lines, in 2005, a trio of computer science students developed SCIgen as a prank on the research world. The program churns out completely nonsensical papers complete with graphs, figures, and citations, peppered with computer science buzzwords. One of their gibberish papers was accepted for a conference at the time. What’s more, in 2013, 120 papers were retracted by various publishers after they found out that SCIgen had written them. In 2015, the site was still getting 600,000 page visits per year.
Unfortunately, though, fake papers aren’t only generated as pranks. Entire companies make money writing gibberish papers and submitting them to predatory journals, which hardly reject anything because they charge a fee for publishing. Such companies, also dubbed paper mills, are getting more and more sophisticated in their methods. Although fraud detection is improving too, experts legitimately fear that these unscrupulous actors, having honed their craft on low-quality journals, may try to swamp real ones next. This could lead to an arms race between paper mills and journals that don’t want to publish bogus work.
Of course, there’s another question on the horizon: How much longer will humans be the only ones writing research papers? Could it be that in 10 or 20 years, AI-powered algorithms are able to automatically sift through swaths of literature and put their conclusions in a new paper that reaches the highest standards of research? How are we going to give credit to these algorithms or their creators?
Today, though, we’re dealing with a far sillier question: How can we identify papers that have been written by relatively unsophisticated algorithms and don’t produce any sensible content? And how do we deal with them? Apart from volunteer efforts and forcing fraudulent authors to retract their papers, the scientific community has surprisingly few answers to that question.
Act against fake science
Most journals with a good reputation to lose have at least a basic email verification process for researchers looking to submit a paper. Here, for example, is the verification system for the journal Science. Despite this, setting up a fake email address and going through the process with it is quite easy. This type of fraud still happens a lot, as illustrated by the sheer number of papers that get retracted each year, even from prestigious journals. So we’re in need of stronger systems.
One good approach to verifying the identity of a scientist is ORCID. Through this system, every researcher can get a unique identifier that is automatically linked to their track record. Using ORCID throughout a journal’s peer-review and publication processes would make it much harder to create a fake identity or use other researchers’ credentials without their knowledge or consent. Although this is a very good initiative, no major journal has yet made identifiers from ORCID or elsewhere mandatory for all authors. That’s a shame, in my opinion, and something that could be fixed pretty easily.
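An ORCID iD even carries a built-in integrity check: the final character is a check digit computed over the first fifteen digits using the ISO 7064 MOD 11-2 scheme that ORCID documents. Here’s a short sketch of that calculation, using 0000-0002-1825-0097, a widely circulated sample iD; of course, a checksum only catches typos, not impostors, which is why the track-record linkage matters more.

```python
def orcid_check_digit(base_digits):
    """Compute the ISO 7064 MOD 11-2 check character for the first
    15 digits of an ORCID iD (hyphens removed)."""
    total = 0
    for ch in base_digits:
        total = (total + int(ch)) * 2
    result = (12 - total % 11) % 11
    # A result of 10 is written as the letter X.
    return "X" if result == 10 else str(result)

def is_valid_orcid(orcid):
    """Check the format and checksum of a hyphenated ORCID iD."""
    digits = orcid.replace("-", "")
    if len(digits) != 16:
        return False
    base, check = digits[:15], digits[15]
    if not base.isdigit():
        return False
    return orcid_check_digit(base) == check

print(is_valid_orcid("0000-0002-1825-0097"))  # True
print(is_valid_orcid("0000-0002-1825-0098"))  # False: bad check digit
```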
Finally, AI itself might be useful in this struggle. Some journals are deploying AI models to detect fake contributions. As of now, however, journals have been unable to agree on a common standard. As a consequence, journals that lack the resources or expertise can’t apply the same quality measures as higher-ranking publications.
This widens the perceived gap between high- and low-tier journals and is, to me, clear proof that journals across the board should get together and find a way to share resources for fraud detection. Of course, high-tier journals might profit from the lack of competition in the short term. In the long term, however, having more journals with low standards might reduce confidence in scientific publishing as a whole.
It’s not that researchers and science journals are sitting on their lazy asses instead of tracking down fraudulent authors, though. Individual publications are, in fact, doing a lot to track down fake papers. But if some journals have the means and others don’t, publications aren’t operating on a level playing field. Plus, scammers will always be able to target some underfunded journals with their fake papers. Journals need to act collectively to find ways to track down paper mills and verify the identity of all their authors.
Beyond science: fake news is getting faker
If you think that fake content is a problem limited to science, you’re mistaken. Only a few years back, during the height of the Trump era, “fake news” was the buzzword of the season. The methods to generate content to sway public opinion have only gotten more sophisticated since then, and they’re jarringly similar to those of fake science papers.
For example, fake journalists were the apparent authors of op-eds in various conservative outlets. Their headshots were generated with AI algorithms, their LinkedIn and Twitter accounts were entirely fake, and it’s still unclear who’s really behind these articles.
There are also several fake news article generators that make creating fake headlines easy. Although such content might not convince an experienced fact-checker, it might impress the average Facebook user enough to convince them to share the article.
That’s why I tend to trust only news and science from established sources, or content that I can cross-check enough to determine that it’s true. I disregard other sources because I know that most of them range from “a little bit wrong” to “totally made up.”
I didn’t have that attitude a few years back. Neither did the people around me. Trust in news has eroded dramatically, and I have no idea how we’ll be able to restore it. Now, what’s already been happening with news is happening with science. It’s bad enough that it’s difficult to find out the truth about what’s happening in the world. But if the very foundations of human knowledge erode, that would be an even bigger disaster.
Although the debate around fake news has died down since the 2020 election, it’s far from over. Since the tools for faking content are still getting more and more sophisticated, I believe the conversation will get more fuel in the years to come. Hopefully, by then, we’ll have reached a consensus on how to fight against fake content — and fake research, too.
War is coming. Later this year the US military will fight its most advanced war campaign ever as it faces off against a fictionalized version of China.
The battles will be fake, but the results should provide the government with everything it needs to justify the mass development of lethal autonomous weapons systems (LAWS).
The era of government-controlled killer robots is upon us.
Up front: US military leaders have increasingly come out in support of taking humans out of the loop when it comes to AI-controlled weapons. And there’s nothing in the current US policy to stop that from happening.
Contrary to a number of news reports, U.S. policy does not prohibit the development or employment of LAWS. Although the United States does not currently have LAWS in its inventory, some senior military and defense leaders have stated that the United States may be compelled to develop LAWS in the future if potential U.S. adversaries choose to do so. At the same time, a growing number of states and nongovernmental organizations are appealing to the international community for regulation of or a ban on LAWS due to ethical concerns.
The Army has a program called “Project Convergence.” Its mission is to tie the various military data, information, command, and control domains together in order to streamline the battlefield.
A deep dive into modern military tactics is beyond the scope of this article, but a short explanation is in order.
Background: Modern command and control is dominated by something called “the OODA loop.” OODA stands for “observe, orient, decide, and act.”
The OODA loop stops commanders from following the enemy into traps, it keeps us from firing on civilians, and it’s our strongest shield against friendly fire incidents.
The big idea: US military leaders fear the traditional human decision-making process may become obsolete because we can’t react as fast as an AI. The OODA Loop, theoretically, can be automated.
And that’s why Project Convergence will conduct a series of wargames this fall against a fictional country meant to represent China.
Some US military leaders fear China is developing LAWS technology and they assert that the People’s Republic won’t have the same ethical concerns as its potential adversaries.
In other words: The US military is planning to test our current military forces and AI systems – which require a human in the loop – against forces with AI systems that don’t.
Quick take: Project Convergence is playing chess against itself here. The fictional country US forces will wargame against in the fall may resemble China, but it was developed and simulated by the Pentagon.
What’s most important here is that you don’t have to be a military genius to know that the country that skips OODA and just sends out entire fleets, armies, and squadrons of hair-trigger LAWS is likely to dominate the battlespace.
This is exactly what every AI ethicist has been warning about. Taking humans out of the loop and allowing LAWS to make the kill decision is more than just a slippery slope. It’s the next atomic bomb.
But when we “lose” the fight against the fake China, it’ll certainly be easier to sell Congress on taking humans and OODA out of the loop.
When access to a popular resource suddenly disappears, people are likely to search for an alternative source, no matter where it comes from. That’s true for websites, but even more so for software and apps, where it can carry some unfortunate consequences. That might be the case with MSI’s popular Afterburner tool, which suddenly became unavailable without much warning and was, at least briefly, imitated by an almost convincing fraudulent website that could have tricked unwitting users into downloading malware instead.
MSI’s Afterburner tool is quite popular among PC and gaming enthusiasts who want to squeeze the most out of their rigs. It offers both system monitoring and GPU overclocking tools that don’t discriminate between rivals NVIDIA and AMD. Given its popularity, it’s really no surprise that people went off looking for an alternative download source when MSI’s official server suddenly stopped working.
Unfortunately, one such source not only offered a copy of MSI Afterburner but also tried to masquerade as MSI itself. Given the effort made to hide the site owner’s true identity, any download coming from this fake MSI website should be held suspect. That’s exactly what MSI’s warning is all about, but it might have come too late to undo some damage.
The real problem is that MSI itself issued only a single announcement to warn users about this situation. Meanwhile, its official page for Afterburner has no warning whatsoever and still has a non-working download button. Without any explanation or clear alternative, users will naturally look for other sources.
Fortunately, it seems that the fraudulent website has been taken out of commission. MSI also says that Afterburner will be downloadable again after routine maintenance. Unfortunately, it doesn’t offer an alternative download link, which could have saved people time and trouble right from the start.