Categories
Security

The US government got caught using sock puppets to spread propaganda on social media

While countries like Russia and China have been making headlines for years with their disinformation and propaganda campaigns on platforms like Twitter and Facebook, it turns out that the US and other Western countries have been playing the same game. A recent report (pdf) from social network analysis firm Graphika and the Stanford Internet Observatory has uncovered a series of operations, some covert and some less so, that aimed to “promote pro-Western narratives” in countries like Russia, China, Afghanistan, and Iran (via Gizmodo).

According to the report, Twitter and Meta removed a group of accounts from their platforms earlier this month, citing their platform manipulation and coordinated inauthentic behavior rules. Analyzing the accounts’ activity, researchers found that they had spent years carrying out campaigns to criticize or support foreign governments (sometimes the same governments, in what feels like an attempt to sow division) and to offer takes on culture and politics. The report says this was sometimes done by sharing links to news sites backed by the US government and military.

Some of the political cartoons shared by the accounts.
Images: Graphika and the Stanford Internet Observatory

The data analyzed came from 146 Twitter accounts (which tweeted 299,566 times), 39 Facebook profiles, and 26 Instagram accounts, along with 16 Facebook pages and two Facebook groups. Some of the accounts were meant to appear like real people and used AI-generated profile pictures. Meta and Twitter didn’t specifically name any organizations or people behind the campaigns but said their analysis led them to believe they originated in the US and Great Britain.

For anyone who’s ever been within 15 feet of a history book, the news that the US is using covert action to push its interests in other countries won’t come as a surprise. It is, however, interesting that these operations were uncovered just as social media companies are gearing up to deal with a wave of foreign interference and misinformation in our own elections.

The report also comes right on the heels of a bombshell whistleblower report from Peiter “Mudge” Zatko, Twitter’s former head of security, which accused the company of lax security practices and misrepresenting the number of bots on its platform (something the US government is investigating and that Twitter has strongly denied).

Notably, the report didn’t uncover any sophisticated hacking techniques that took advantage of weak security. Speaking to Gizmodo, Internet Observatory staffer Shelby Grossman said that “there was not anything technically interesting about this network,” contrary to how we might imagine the US operating. “You’d think, ‘Oh, this influence operation originated in the US, surely it’s going to be special,’ but that really wasn’t the case,” she said.

The full report is a fascinating read, if you have the time, breaking down how the accounts posted and diving deep into what kind of content they shared. Spoiler alert: there were memes, hashtag campaigns, petitions, and — what else — fake news.

It also reveals a somewhat damning tidbit when talking about the reach and impact of these campaigns; according to the report, “the vast majority of posts and tweets we reviewed received no more than a handful of likes or retweets, and only 19% of the covert assets we identified had more than 1,000 followers.” What’s more, the two accounts with the most followers explicitly said they were tied to the US military. I’ll try not to think about how much all this cost when I’m paying my taxes next year.

Repost: Original Source and Author Link

Categories
Security

Def Con banned a social engineering star — now he’s suing

In February, when the Def Con hacker conference released its annual transparency report, the public learned that one of the most prominent figures in the field of social engineering had been permanently banned from attending.

For years, Chris Hadnagy had enjoyed a high-profile role as the leader of the conference’s social engineering village. But Def Con’s transparency report stated that there had been multiple reports of him violating the conference’s code of conduct. In response, Def Con banned Hadnagy from the conference for life; in 2022, the social engineering village would be run by an entirely new team.

Now, Hadnagy has filed a lawsuit against the conference alleging defamation and interference with contractual relations.

The lawsuit was filed in the United States District Court for the Eastern District of Pennsylvania on August 3rd and names Hadnagy as the plaintiff, with Def Con Communications Inc. and the conference founder, Jeff Moss, also known as “The Dark Tangent,” as defendants. Papers were served to Jeffrey McNamara, attorney for Moss, at the conference in Las Vegas this year.

There are few public details about the incidents that caused Hadnagy’s ban, as is common in harassment cases. In the post-conference transparency report announcing the permanent ban, Def Con organizers were deliberately vague about the reported behavior. “After conversations with the reporting parties and Chris, we are confident the severity of the transgressions merits a ban from DEF CON,” organizers wrote.

Def Con’s Code of Conduct is minimal, focusing almost entirely on a “no-harassment” policy. “Harassment includes deliberate intimidation and targeting individuals in a manner that makes them feel uncomfortable, unwelcome, or afraid,” the text reads. “Participants asked to stop any harassing behavior are expected to comply immediately. We reserve the right to respond to harassment in the manner we deem appropriate.”

At the conference this year, various people familiar with the matter told The Verge that Hadnagy’s behavior met the definition of harassment as defined by the code of conduct but declined to give more details on the record.

Reached for comment, Melanie Ensign, press lead for Def Con, pointed The Verge to a statement previously posted by Moss in advance of the conference this year. “When we receive a report of a Code of Conduct violation, our leadership team… conducts a review of the substance in consultation with our attorney as needed,” the statement reads. “We then review all the evidence available to us through community reports, news media, and internal investigations to determine whether the allegations are substantiated.”

The infosec community has had a number of high-profile sexual misconduct cases, some implicating the community’s most notable researchers. In 2016, former Tor developer Jacob Appelbaum resigned from the Tor Project after numerous allegations of “sexually aggressive behavior,” which the project’s executive team investigated and confirmed. A year later, The Verge reported that security researcher Morgan Marquis-Boire had been credibly accused of sexually assaulting women over a period of decades.

Def Con’s commitment to a public transparency report — first announced in 2017 — marked a new push from organizers to create a safer conference by cracking down on harassment in spaces related to the conference.

Even so, Hadnagy’s ban has sent shockwaves through the Def Con community, particularly given his status as a conference insider and coordinator of a popular activity zone. As leader of the SE Village — where attendees learn the art of eliciting sensitive information from targets through psychological tricks — Hadnagy held a celebrated role at the conference year after year, explaining tradecraft and running a crowd-pleasing capture-the-flag competition. As a published author and frequent speaker on the topic of social engineering, Hadnagy’s participation was a big draw for those looking to break into the field.

This year, the village — rebranded as Social Engineering Community — was under new leadership, with JC Carruthers and Stephanie “Snow” Carruthers in charge of events. The new organizers told The Verge that they had stepped in on short notice with a proposal to run the village after news of Hadnagy’s ban broke and that feedback from attendees this year had been positive. Both declined to comment on the specific nature of the accusations against Hadnagy.

Reached by The Verge, Hadnagy claims that conference organizers did not provide details of the accusations against him and denies any wrongdoing.

“My company and I consistently deny and continue to deny any and all allegations of misconduct,” he said in an email statement to The Verge. “To address these false accusations, defamatory statements and innuendos I have filed a lawsuit against both DEF CON Communications and Jeff Moss.”

In the lawsuit, Hadnagy alleges that the statements in the transparency report, combined with the rarity of being barred from the conference, mean that the ban amounts to “severe and irreversible” harm to his reputation, for which he is seeking damages in excess of $75,000. The complaint also includes further counts of interference with contractual relations, infliction of emotional distress, and invasion of privacy — with the same amount of damages being sought for each.

Since the ban, Hadnagy has become a persona non grata at similar events. Recently, one of the main organizers of the BSides Cleveland security conference stepped down after booking Hadnagy as a surprise keynote speaker. Hadnagy was reportedly intending to deliver a talk that included a criticism of “cancel culture.”

As news of the case became public, some notable voices in the infosec community gave a critical response. Alyssa Miller, chief information security officer at business services firm Epiq Global, branded the lawsuit an abuse of the legal system and an attempt to manipulate conference organizers.

“Let’s be clear about what this lawsuit is about,” Miller tweeted. “It’s not about DEFCON or DarkTangent. This is about [Chris Hadnagy] trying to force the names and full details of his accusers into the public sphere so he can go after them, attack them, and try to discredit them.”

Correction August 18, 4:15PM ET: An earlier version of this story claimed that Jeff Moss was served papers directly. In fact, papers were served to Moss’s attorney, Jeffrey McNamara. We regret the error.




Categories
Computing

I couldn’t manage my work and social life without Rambox

I’m a work-from-home freelancer, dad to young children, and forgetful socializer. That means balancing meetings and work talk with colleagues at various publications against a social life spread across its own range of chat apps can be rather difficult.

I’m not even a big social network person, but even I have to use Microsoft Teams, Slack, email, and Google Chat, alongside Telegram, WhatsApp, text messages, and Twitter. Every. Single. Day. That can often feel impossible to keep up with, leaving me feeling stressed, distracted, and paranoid that I might be missing an all-important message.

Fortunately, there’s one app I’ve found that makes it all doable: the workplace organizing tool Rambox.

Work and play simplified, de-stressed

I hate notifications. The constant pings and reminders that someone else needs something from me can feel like a lot sometimes. But every chat app I use demands attention at different points throughout the day, and reaching for a different device because it’s started making a noise or flashing at me while I’m mid-flow can be incredibly disruptive.

Rambox helps me get around that by letting me put every notification in one place. It’s an amalgamating tool that brings together just about every social and communicative application you can think of. It supports a range of instant messaging apps, email clients, social media accounts, and more. Having all of them in a single place simplifies their management and means that when I’m at my desk working, I don’t need to stop what I’m doing just to answer a message — it’s right there. I also know I’m not missing anything if I haven’t looked at my phone in a while, and frankly, I can type a response to someone on WhatsApp far faster on my desktop keyboard than I can on a touchscreen.

It also makes adding new chat apps and services to my daily routine much more streamlined. When Digital Trends switched from Slack to Teams for our internal communications, all I had to do was add Teams to Rambox, and everything I needed was right there alongside every other app and tool I use day to day. I didn’t need to set up some entirely separate application, there was no additional window I had to have open every day, nor did I need to make sure I remembered to start the app up in the morning — lest I miss an important communication from a big boss.

The free tier is good enough

Rambox application options.

Better yet, Rambox is completely free. The basic version comes with support for over 700 applications, including all of the most popular and important ones. There’s also real-time synchronization across my devices, so if I do step away from the desk, I can use my phone or laptop and pick up where I left off. Rambox will ping you there if you don’t read a message on your desktop, so it’s easy to jump between the two. If you ever need to have everyone leave you alone for a bit, you can switch to Focus mode.

The paid versions do offer more, like a built-in spell checker and premium support, but they’re more targeted towards organizations and enterprises setting up Rambox for their workers.

For me, though, the free version of Rambox is more than enough, and it’s proved an absolute saving grace that prevents me from feeling buried under an avalanche of applications, notifications, and demands that would otherwise put my toddler’s endless cries of “daddy, daddy” to shame.

It’s not the only option

After gushing about Rambox for a few hundred words, I should confess that, until very recently, I was using an extremely outdated pre-1.0 release of Rambox. The more recent versions kept crashing on me, failing to install, and refusing to log in to my accounts; it was a mess. That all appears to have been fixed in the latest release, so I’m back on a secure and up-to-date version of the application, but I needn’t have been quite so stubborn. There are plenty of alternative messaging umbrella applications like it.

If you want a slightly different set of features and pricing tiers, other popular options include Franz, which has a free spellchecker included; All-in-One Messenger, which is entirely free; the versatile Apptorium Workspaces, which lets you create custom collections of apps, files, and folders for certain projects; the online-focused Station; or the entirely open source Hamsket.

I don’t know which one would be right for you, and it’s possible one of these might even be better for me. But for now, Rambox does the trick. Until it breaks or I start to feel swamped again, it’ll remain my go-to for saving time and staying in touch with people. Now, I just need to get better at actually replying.


Categories
Computing

FBI zaps darknet marketplace selling Social Security data

The FBI, Department of Justice (DoJ), and Internal Revenue Service (IRS) have worked together to shut down the SSNDOB Marketplace, a collection of darknet sites that listed the personal information of around 24 million U.S. citizens, and which generated more than $19 million in sales revenue.

For the uninitiated, the darknet, also known as the dark web, is an encrypted part of the online world that isn’t indexed by search engines and can only be accessed using specialized browsers. While the darknet is popular with cybercriminals selling illegal products and services online, others such as political activists or whistleblowers might also use the network to share highly sensitive information.

The DoJ said this week that the SSNDOB Marketplace, which had been operating for a number of years, sold personal information such as names, dates of birth, and Social Security numbers belonging to individuals in the U.S.

Efforts to dismantle the service involved working with law enforcement in Cyprus and Latvia, and earlier this week seizure orders were enacted against the domain names used by the SSNDOB Marketplace, leading to its shutdown.

The SSNDOB Marketplace appeared to be an efficiently run business operated by administrators who placed ads on darknet criminal forums for the SSNDOB’s services while also offering customer support, the DoJ said.

It added that the administrators “employed various techniques to protect their anonymity and to thwart detection of their activities, including using online monikers that were distinct from their true identities, strategically maintaining servers in various countries, and requiring buyers to use digital payment methods, such as bitcoin.”

Commenting on the case, Special Agent in Charge Darrell Waldon of the IRS-CI Washington, D.C. Field Office, said: “Identity theft can have a devastating impact on a victim’s long-term emotional and financial health. Taking down the SSNDOB website disrupted ID theft criminals and helped millions of Americans whose personal information was compromised.”

Waldon added that the U.S. and international law enforcement community will continue to work to end what he called “these complex scams.”

With apparently no arrests made in connection with the case, the perpetrators behind SSNDOB remain free to set up a new operation, while other cybercriminals could also come in to try to fill the hole left by the shutdown. In that sense, it’s a game of whack-a-mole for the FBI, though its efforts will stall and disrupt the perpetrators while also sending out a message that it’s on their case.

In another recent win for investigators targeting nefarious online outfits, the “biggest dark web marketplace in the world” was knocked offline in April. The platform, Hydra Market, made its money through sales of drugs and money-laundering services.


Categories
Game

Niantic buys gameplay recording app Lowkey to improve its in-game social experience

Niantic has acquired another company to help build out its augmented reality platforms. The company has announced that it’s acquiring the team behind Lowkey, an app you can use to easily capture and share gameplay moments. While you can use any screen capture application — or even your phone’s built-in feature — to record your games, Lowkey was designed with casual gamers or those who don’t want to spend time editing their videos in mind. 

The app can capture videos on your computer, for instance, and sync them with your phone, where you can use its simple editing tools to create short clips optimized for mobile viewing. You’re also able to share those clips with friends within the app, Snapchat-style, or publish them for public viewing, TikTok-style. Niantic didn’t reveal what the Lowkey team will be doing for its AR games and experiences exactly, but it said the team’s “leadership in this space will accelerate the social experiences [it’s] building in [its] products.” The company added: “We share a common vision for building community around shared experiences, and enabling new ways to connect and play for our explorers.”

The Pokémon Go creator purchased other companies in the past in its quest to build more tools and features for its augmented reality products. In 2017, it purchased social animation startup Evertoon to build a social network for its games. Last year, it bought 3D mapping startup 6D.ai to develop “planet-scale” augmented reality, and just this August, it acquired LiDAR scanning app Scaniverse to create a 3D map of the world.

All products recommended by Engadget are selected by our editorial team, independent of our parent company. Some of our stories include affiliate links. If you buy something through one of these links, we may earn an affiliate commission.


Categories
Game

‘Fortnite’ Party Worlds are purely social experiences made for the metaverse

Epic has made acquisitions and otherwise signaled plans for a Fortnite metaverse, but its latest move is one of the most obvious yet. The developer has introduced Fortnite Party Worlds, or maps that are solely intended as social spaces to meet friends and play mini games. Unlike Hubs, these environments don’t link to other islands — think of them as final destinations.

The company has collaborated with creators fivewalnut and TreyJTH to offer a pair of example Party Worlds (a theme park and a lounge). However, the company is encouraging anyone to create and submit their own so long as they focus on the same goal of peaceful socialization.

This doesn’t strictly represent a metaverse when Party Worlds live in isolation. At the same time, this shows how far Fortnite has shifted away from its original focuses on battle royale and co-op gaming — there are now islands devoted solely to making friends, not to mention other non-combat experiences like virtual museums and trial courses. We wouldn’t expect brawls to disappear any time soon, but they’re quickly becoming just one part of a much larger experience.


Categories
Security

Australian PM proposes defamation laws forcing social platforms to unmask trolls

Australian Prime Minister Scott Morrison is introducing new defamation laws that would force online platforms to reveal the identities of trolls, or else pay the price of defamation. As ABC News Australia explains, the laws would hold social platforms, like Facebook or Twitter, accountable for defamatory comments made against users.

Platforms will also have to create a complaint system that people can use if they feel that they’re a victim of defamation. As a part of this process, the person who posted the potentially defamatory content will be asked to take it down. But if they refuse, or if the victim is interested in pursuing legal action, the platform can then legally ask the poster for permission to reveal their contact information.

And if the platform can’t get the poster’s consent? The laws would introduce an “end-user information disclosure order,” giving tech giants the ability to reveal a user’s identity without permission. If the platforms can’t identify the troll for any reason — or if the platforms flat-out refuse — the company will have to pay for the troll’s defamatory comments. Since the law is specific to Australia, it appears that social networks wouldn’t have to identify trolls located in other countries.

“The online world should not be a wild west where bots and bigots and trolls and others are anonymously going around and can harm people,” Morrison said during a press conference. “That is not what can happen in the real world, and there is no case for it to be able to be happening in the digital world.”

As noted by ABC News Australia, a draft of the “anti-troll” legislation is expected this week, and it likely won’t reach Parliament until the beginning of next year. It remains unclear which specific details the platforms would be asked to collect and disclose. Even more concerning, we still don’t know how severe a case of defamation would have to be to warrant revealing someone’s identity. A loose definition of defamation could pose serious threats to privacy.

The proposed legislation is part of a larger effort to overhaul Australia’s defamation laws. In September, Australia’s High Court ruled that news sites are considered “publishers” of defamatory comments made by the public on their social media pages and should be held liable for them. This has caused outlets like CNN to block Australians from accessing their Facebook pages altogether. However, the ruling has potential implications for individuals running social media pages as well, since it implies they can also be held responsible for any defamatory comments left on their pages.


Categories
AI

The EU is considering a ban on AI for mass surveillance and social credit scores

The European Union is considering banning the use of artificial intelligence for a number of purposes, including mass surveillance and social credit scores. This is according to a leaked proposal that is circulating online, first reported by Politico, ahead of an official announcement expected next week.

If the draft proposal is adopted, it would see the EU take a strong stance on certain applications of AI, setting it apart from the US and China. Some use cases would be policed in a manner similar to the EU’s regulation of digital privacy under GDPR legislation.

Member states, for example, would be required to set up assessment boards to test and validate high-risk AI systems. And companies that develop or sell prohibited AI software in the EU — including those based elsewhere in the world — could be fined up to 4 percent of their global revenue.

According to a copy of the draft seen by The Verge, the draft regulations include:

  • A ban on AI for “indiscriminate surveillance,” including systems that directly track individuals in physical environments or aggregate data from other sources
  • A ban on AI systems that create social credit scores, which means judging someone’s trustworthiness based on social behavior or predicted personality traits
  • Special authorization for using “remote biometric identification systems” like facial recognition in public spaces
  • Notifications required when people are interacting with an AI system, unless this is “obvious from the circumstances and the context of use”
  • New oversight for “high-risk” AI systems, including those that pose a direct threat to safety, like self-driving cars, and those that have a high chance of affecting someone’s livelihood, like those used for job hiring, judiciary decisions, and credit scoring
  • Assessment for high-risk systems before they’re put into service, including making sure these systems are explicable to human overseers and that they’re trained on “high quality” datasets tested for bias
  • The creation of a “European Artificial Intelligence Board,” consisting of representatives from every nation-state, to help the commission decide which AI systems count as “high-risk” and to recommend changes to prohibitions

Perhaps the most important section of the document is Article 4, which prohibits certain uses of AI, including mass surveillance and social credit scores. Digital rights groups and policy experts reacting to the draft, though, say this section needs to be improved.

“The descriptions of AI systems to be prohibited are vague, and full of language that is unclear and would create serious room for loopholes,” Daniel Leufer, Europe policy analyst at Access Now, told The Verge. That section, he says, is “far from ideal.”

Leufer notes that a prohibition on systems that cause people to “behave, form an opinion or take a decision to their detriment” is unhelpfully vague. How exactly would national laws decide if a decision was to someone’s detriment or not? On the other hand, says Leufer, the prohibition against AI for mass surveillance is “far too lenient.” He adds that the prohibition on AI social credit systems based on “trustworthiness” is also defined too narrowly. Social credit systems don’t have to assess whether someone is trustworthy to decide things like their eligibility for welfare benefits.

On Twitter, Omer Tene, vice president of nonprofit IAPP (The International Association of Privacy Professionals), commented that the regulation “represents the typical Brussels approach to new tech and innovation. When in doubt, regulate.” If the proposals are passed, said Tene, it will create a “vast regulatory ecosystem,” which would draw in not only the creators of AI systems, but also importers, distributors, and users, and create a number of regulatory boards, both national and EU-wide.

This ecosystem, though, wouldn’t primarily be about restraining “big tech,” says Michael Veale, a lecturer in digital rights and regulations at University College London. “In its sights are primarily the lesser known vendors of business and decision tools, whose work often slips without scrutiny by either regulators or their own clients,” Veale tells The Verge. “Few tears will be lost over laws ensuring that the few AI companies that sell safety-critical systems or systems for hiring, firing, education and policing do so to high standards. Perhaps more interestingly, this regime would regulate buyers of these tools, for example to ensure there is sufficiently authoritative human oversight.”

It’s not known what changes might have been made to this draft proposal as EU policymakers prepare for the official announcement on April 21st. Once the regulation has been proposed, though, it will be subject to changes following feedback from MEPs and will have to be implemented separately in each nation-state.

Update April 14th, 11:03AM ET: Updated story with additional comment from Michael Veale.




Categories
Tech News

Social media produces a more diverse news diet — wait, what?!

New research has challenged the very existence of online filter bubbles.

The study found that people who use search engines, social media, and aggregators to access news can actually have more diverse information diets.

Researchers from the universities of Oxford and Liverpool analyzed web tracking data on around 3,000 UK news users.

The team tracked every visit from a desktop or laptop to 21 of the most popular UK news websites over a one-month period. They also recorded the URL that preceded each visit to infer how the site was accessed.

They grouped these visits into three categories:

  1. Direct access, when someone clicked on an article from a news site’s homepage, or from another article on the same site
  2. Search access, when the previous URL was associated with a search page
  3. Facebook, Twitter, and Google News access, when the previous URL was associated with one of those platforms.
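The classification the researchers describe — inferring the access mode from the URL that preceded each visit — can be sketched roughly as follows. This is an illustrative reconstruction, not the study’s actual code; the referrer domain lists and function names are assumptions:

```python
from urllib.parse import urlparse

# Illustrative referrer domains for each access mode (assumed, not from the study)
SEARCH_DOMAINS = {"www.google.com", "www.bing.com", "duckduckgo.com"}
SOCIAL_DOMAINS = {"www.facebook.com", "twitter.com", "t.co", "news.google.com"}

def classify_visit(referrer_url, news_domain):
    """Infer how a news article was reached from the preceding URL."""
    if not referrer_url:
        return "direct"
    host = urlparse(referrer_url).netloc
    if host == news_domain:
        return "direct"  # came from the site's homepage or another of its articles
    if host in SEARCH_DOMAINS:
        return "search"
    if host in SOCIAL_DOMAINS:
        return "social/aggregator"
    return "direct"  # all other referrers lumped in with direct access here

print(classify_visit("https://www.google.com/search?q=brexit", "www.bbc.co.uk"))  # search
```

A real pipeline would need a fuller domain list and handling of URL shorteners, but the basic idea — bucket each visit by its referrer’s host — is the same.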

They then combined measures of diversity and media outlet slants to compare the variety of news in each category.
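Diversity of this kind is commonly quantified with something like Shannon entropy over the share of a user’s visits going to each outlet. The sketch below illustrates the idea; it is not necessarily the metric the paper used:

```python
import math
from collections import Counter

def news_diversity(outlet_visits):
    """Shannon entropy (in bits) of a user's outlet mix:
    0 = every visit goes to one outlet; higher = visits spread evenly
    across more outlets."""
    counts = Counter(outlet_visits)
    total = sum(counts.values())
    return sum(-(c / total) * math.log2(c / total) for c in counts.values())

# A reader who only ever visits one site has zero diversity...
print(news_diversity(["bbc"] * 10))                        # 0.0
# ...while an even split across four outlets gives 2 bits.
print(news_diversity(["bbc", "guardian", "mail", "sun"]))  # 2.0
```

Comparing the average of such a score across the direct, search, and social/aggregator groups would give exactly the kind of between-category contrast the study reports.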

They found that people who used search engines, social media, and aggregators to access news received a more diverse mix of information. 

The results also showed older people have less diverse news repertoires than younger people, and that men have less diverse repertoires than women.

However, when people accessed more news directly, the prominence of more partisan outlets was lower.

Per the study paper:

Indeed, it may be that exposure to conflicting partisan views, rather than over-exposure to like-minded views, will offer a better explanation for the negative outcomes — like polarization — that are sometimes associated with distributed media use. Similarly, although consuming news from a variety of outlets may offer some benefits, some may simply be more comfortable with a world where most people only access news from impartial sources like the BBC — where differing views are often recognized, but presented in a certain way.

Researchers should be wary of extrapolating findings from one country to the rest of the world. But the study further challenges the existence of filter bubbles.


Categories
Security

10 Years Past Stuxnet, Social Media is Cyberweapon of Choice

A decade ago, the landscape of war changed forever.

On July 15, 2010, researchers discovered a massive malware worm installed in the industrial control systems of Iran’s nuclear development sites, where uranium was being enriched. The worm, more complex than any malware seen before, came to be known as Stuxnet.

But the prohibitive cost and manpower required to develop dangerous targeted malware like Stuxnet mean that many nation-states have started leaning on a new cyberweapon of choice: social media.

A complex and dangerous tool

At the time, Stuxnet was revolutionary. It bridged the gap between the digital and physical worlds in a way that hadn’t been done up to that point, said Ryan Olson, vice president of threat intelligence at Palo Alto Networks. “This was a significant event,” he said.

Kim Zetter, a journalist and one of the foremost experts on the Stuxnet virus, said that it wasn’t just the virus’s complexity or sophistication that was impressive; it was what the virus targeted and how. “It targeted systems that weren’t connected to the internet,” she told Digital Trends. “And it introduced to the security community, and the world, vulnerabilities that exist in critical infrastructure systems.”

“Stuxnet was a totally new paradigm in terms of what could now be accomplished,” said Axel Wirth, chief security strategist at MedCrypt, a cybersecurity company specializing in medical devices. “The methodology used to penetrate its target environment was much better planned than any other piece of malware used before.”

It’s thought that the virus found its way into Iran’s nuclear facilities via a thumb drive. From there, the virus was able to make a copy of itself and hide in an encrypted folder. It then lay dormant, Wirth told Digital Trends. The worm would activate only when a specific configuration of systems found in Iran was turned on. Ultimately, experts believe the virus caused significant damage to the Natanz nuclear enrichment site in Iran.

Strong evidence points to Stuxnet’s development being a joint effort between the U.S. and Israel, according to the Washington Post, although neither country has ever claimed responsibility.

Cyberweapons, however, always have an unintended side effect when they’re discovered.

“The difference between an offensive cyberweapon and, say, the Manhattan Project, is that a nuclear bomb doesn’t leave defensive schematics scattered all over the landscape,” said Chris Kennedy, former director of cyberdefense at both the Department of Defense and the U.S. Treasury. “Cyberweapons do.”

In other words, once Stuxnet was discovered, it was hard to contain. Experts and hackers could look at the code, dissect the worm, and take out parts of it to use for themselves. Many cyberweapons found since Stuxnet have had parts of the Stuxnet code in them, although these new tools aren’t nearly as sophisticated, Kennedy said.

“Billions of dollars went into creating Stuxnet and became publicly consumable information,” said Kennedy, who is currently the chief information security officer at cybersecurity firm AttackIQ. “That kind of screws with the value of the investment.”

A better return on investment

Social media manipulation can also be effective at destabilizing or attacking foes — and is much cheaper.

“Social media is a lower form of attack,” said Kennedy, “but it’s easier to do. You just get a bunch of not-as-smart people to pump false information into Facebook and the analytics take it away. Now, attacks like Stuxnet will be reserved for specialized goals because they’re so expensive and challenging to create.”

Kennedy said that whatever buzzword could be used to talk about the Russian influence in the 2016 elections, “that’s the new Stuxnet.”

“Rather than attacks on systems or on individual computers, these are attacks on societies and economies.”

“It’s easier, cheaper, and has a much more brand effect,” he said.

Wirth told Digital Trends that cyberattacks are now “broader” in scope.

“Rather than attacks on systems or on individual computers, these are attacks on societies and economies,” he said. “Traditional tools have been augmented by social media attacks and misinformation campaigns.”

“The future is combined,” said Kennedy, in terms of what cyber warfare could look like. “You use a social media campaign for propaganda and influence to shape local populations, then you use cyberweapons to affect specific targets. And if that doesn’t work, then we bring in the troops and start blowing stuff up.”
