World’s most sensitive data could be vulnerable to a new hack

Researchers have just revealed a new security attack that, while difficult to carry out, could endanger some of the most sensitive data in the world.

Dubbed “SATAn,” the hack turns a typical SATA cable into a radio transmitter, allowing data to be exfiltrated even from devices that permit no outside connections at all.

As data protection measures grow more advanced and cyberattacks become more frequent, researchers and malicious attackers alike reach new heights of creativity in finding possible flaws in software and hardware. Dr. Mordechai Guri of Ben-Gurion University of the Negev in Israel has just published new findings that, once again, show that even air-gapped systems aren’t completely secure.

An air-gapped system or network is completely isolated from any and all connections to the rest of the world. This means no networks, no internet connections, no Bluetooth: zero connectivity. The systems are purposely built without any hardware that can communicate wirelessly, all in an effort to keep them secure from various cyberattacks. All of these security measures are in place for one reason: to protect the most vulnerable and sensitive data in the world.

Hacking into these air-gapped systems is exceedingly difficult and often requires direct access in order to plant malware. Removable media, such as USB flash drives, can also be used. Dr. Guri has now found yet another way to breach the security of an air-gapped system: SATAn relies on a SATA connection, the kind widely used in countless devices all over the globe, to infiltrate the targeted system and steal its data.

Using this technique, Dr. Guri was able to turn a SATA cable into a radio transmitter and beam data to a personal laptop located less than 1 meter away. This can be done without making any physical modifications to the cable itself or the rest of the targeted hardware. Feel free to dive into the paper penned by Dr. Guri (first spotted by Tom’s Hardware) if you want to learn the ins and outs of the technique.

To quickly summarize how SATAn extracts data from seemingly ultra-secure systems: it all comes down to manipulating the electromagnetic interference generated by the SATA bus. By controlling that interference, the malware can use the SATA cable as a makeshift wireless antenna operating on the 6GHz frequency band and transmit data elsewhere. In a demonstration video, Dr. Guri was able to steal a message from the target computer and then display it on his laptop.
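To make that concrete, here’s a minimal, purely illustrative sketch of the transmitter side, assuming a simple on-off keying scheme in which bursts of disk activity stand for 1-bits and idle periods for 0-bits. The file name, bit period, and payload below are hypothetical; this is not Dr. Guri’s actual implementation.

```python
import os
import time

# Illustrative sketch only: on-off keying over SATA bus activity.
# Bursts of disk I/O radiate RF near 6GHz (a "1"); idle periods do not (a "0").
# File name, bit period, and payload are hypothetical, not from the paper.

BIT_PERIOD = 0.1                 # seconds per bit (hypothetical)
SCRATCH = "satan_scratch.bin"    # hypothetical scratch file

def bits(data: bytes):
    """Yield the bits of `data`, most significant bit first."""
    for byte in data:
        for i in range(7, -1, -1):
            yield (byte >> i) & 1

def burst_io(duration: float) -> None:
    """Generate heavy SATA traffic for `duration` seconds (encodes a '1')."""
    end = time.time() + duration
    while time.time() < end:
        with open(SCRATCH, "wb") as f:
            f.write(os.urandom(1 << 20))  # 1 MiB of data
            f.flush()
            os.fsync(f.fileno())          # force the write onto the SATA bus

def transmit(data: bytes) -> None:
    for bit in bits(data):
        if bit:
            burst_io(BIT_PERIOD)          # disk activity = '1'
        else:
            time.sleep(BIT_PERIOD)        # bus silence = '0'

transmit(b"SECRET")
os.remove(SCRATCH)
```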

“The receiver monitors the 6GHz spectrum for a potential transmission, demodulates the data, decodes it, and sends it to the attacker,” said the researcher in his paper.
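A companion sketch of that receiver logic might look like the following, assuming the signal magnitudes near 6GHz have already been captured by a software-defined radio; the capture step itself, along with the sample rate and threshold logic, are assumptions rather than details from the paper.

```python
import numpy as np

# Companion sketch for the receiver side. Assumes `samples` already holds
# signal magnitudes captured by an SDR tuned near 6GHz; the radio capture
# itself is out of scope. Sample rate and threshold logic are hypothetical.

SAMPLE_RATE = 1000                        # magnitude samples per second
BIT_PERIOD = 0.1                          # must match the transmitter
SAMPLES_PER_BIT = int(SAMPLE_RATE * BIT_PERIOD)

def demodulate(samples: np.ndarray) -> bytes:
    """Recover bytes from on-off-keyed signal magnitudes."""
    threshold = samples.mean()            # crude energy threshold
    n_bits = len(samples) // SAMPLES_PER_BIT
    bit_list = [
        int(samples[i * SAMPLES_PER_BIT:(i + 1) * SAMPLES_PER_BIT].mean() > threshold)
        for i in range(n_bits)
    ]
    out = bytearray()
    for i in range(0, len(bit_list) - 7, 8):   # pack 8 bits per byte, MSB first
        byte = 0
        for b in bit_list[i:i + 8]:
            byte = (byte << 1) | b
        out.append(byte)
    return bytes(out)
```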

The attack can only be carried out if the target device has malicious software installed on it beforehand. That requirement takes the danger level down a notch, but not by much, since the malware can be delivered by something as simple as a USB device. Failing that, an attacker would need physical access to the system to implant the malware before attempting to steal data through SATAn.

Rounding out the paper, Dr. Guri detailed some ways in which this type of attack can be mitigated, such as internal policies that strengthen defenses and prevent the initial penetration of the air-gapped system. Banning radio receivers inside facilities where such top-secret data is stored seems like a sensible move right now. Adding electromagnetic shielding to the machine’s case, or even just to the SATA cable itself, is also recommended.

This attack is certainly scary, but we regular folk most likely don’t need to worry. Given its complexity, it only makes sense in high-stakes operations where national secrets are the target. For those facilities and their air-gapped systems, though, alarm bells should be ringing: it’s time to tighten up security.

Liveness tests used by banks to verify ID are ‘extremely vulnerable’ to deepfake attacks

Automated “liveness tests” used by banks and other institutions to help verify users’ identity can be easily fooled by deepfakes, a new report demonstrates.

Security firm Sensity, which specializes in spotting attacks using AI-generated faces, probed the vulnerability of identity tests provided by 10 top vendors. Sensity used deepfakes to copy a target face onto an ID card to be scanned and then copied that same face onto a video stream of a would-be attacker in order to pass vendors’ liveness tests.

Liveness tests generally ask someone to look into a camera on their phone or laptop, sometimes turning their head or smiling, both to prove that they’re a real person and to compare their appearance to their ID using facial recognition. In the financial world, such checks are often known as KYC, or “know your customer,” tests and can form part of a wider verification process that includes document and bill checks.
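As a rough, hypothetical sketch of that flow (every function below is a stand-in, not any vendor’s real API), the important detail is that the liveness challenge and the face comparison both consume the same 2D video stream:

```python
from dataclasses import dataclass
import random

# Hypothetical outline of a liveness + face-match check; every function
# body is a stand-in, not any vendor's real API. Note that the liveness
# challenge and the face match both consume the same 2D video stream.

@dataclass
class KYCResult:
    is_live: bool
    matches_id: bool

def extract_face(image) -> list[float]:
    """Stand-in for a face detector plus embedding model."""
    return [random.random() for _ in range(128)]      # fake 128-d embedding

def passed_challenge(frames) -> bool:
    """Stand-in for the 'turn your head / smile' motion check."""
    return len(frames) > 1                            # placeholder logic

def similarity(a: list[float], b: list[float]) -> float:
    """Stand-in cosine similarity between two embeddings."""
    dot = sum(x * y for x, y in zip(a, b))
    norms = (sum(x * x for x in a) * sum(y * y for y in b)) ** 0.5
    return dot / norms if norms else 0.0

def run_kyc_check(id_card_image, video_frames) -> KYCResult:
    id_face = extract_face(id_card_image)             # photo printed on the ID
    session_face = extract_face(video_frames[-1])     # face seen on camera
    return KYCResult(
        is_live=passed_challenge(video_frames),
        matches_id=similarity(id_face, session_face) > 0.8,  # hypothetical cutoff
    )
```

If an attacker can substitute a deepfaked feed for `video_frames`, a single forged input satisfies both checks at once, which is essentially the weakness Sensity demonstrated.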

“We tested 10 solutions and we found that nine of them were extremely vulnerable to deepfake attacks,” Sensity’s chief operating officer, Francesco Cavalli, told The Verge.

“There’s a new generation of AI power that can pose serious threats to companies,” says Cavalli. “Imagine what you can do with fake accounts created with these techniques. And no one is able to detect them.”

Sensity shared with The Verge the identities of the enterprise vendors it tested, but requested that the names not be published for legal reasons. Cavalli says Sensity signed non-disclosure agreements with some of the vendors and, in other cases, fears it may have violated companies’ terms of service by testing their software in this way.

Cavalli also says he was disappointed by the reaction from vendors, who did not seem to consider the attacks significant. “We told them ‘look you’re vulnerable to this kind of attack,’ and they said ‘we do not care,’” he says. “We decided to publish it because we think, at a corporate level and in general, the public should be aware of these threats.”

The vendors Sensity tested sell these liveness checks to a range of clients, including banks, dating apps, and cryptocurrency startups. One vendor was even used to verify the identity of voters in a recent national election in Africa. (Though there’s no suggestion from Sensity’s report that this process was compromised by deepfakes.)

Cavalli says such deepfake identity spoofs are primarily a danger to the banking system, where they can be used to facilitate fraud. “I can create an account; I can move illegal money into digital bank accounts or crypto wallets,” says Cavalli. “Or maybe I can ask for a mortgage because today online lending companies are competing with one another to issue loans as fast as possible.”

This is not the first time deepfakes have been identified as a danger to facial recognition systems. They’re primarily a threat when the attacker can hijack the video feed from a phone or camera, a relatively simple task. However, facial recognition systems that use depth sensors, like Apple’s Face ID, cannot be fooled by these sorts of attacks, as they verify identity based not only on visual appearance but also on the physical shape of a person’s face.
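To see why depth matters, consider this toy check, assuming a sensor that returns per-pixel distances in millimeters for the detected face region; the threshold is invented for illustration and has nothing to do with Apple’s actual algorithm.

```python
import numpy as np

# Toy illustration, not Apple's algorithm: a face replayed on a flat screen
# has almost no depth relief, while a real face spans several centimeters.
# Depth values and the threshold below are hypothetical.

MIN_RELIEF_MM = 30.0   # real faces show tens of millimeters of relief

def looks_three_dimensional(face_depth_mm: np.ndarray) -> bool:
    """Reject face regions whose depth map is implausibly flat."""
    relief = np.percentile(face_depth_mm, 95) - np.percentile(face_depth_mm, 5)
    return relief >= MIN_RELIEF_MM

flat_screen = np.full((100, 100), 400.0)             # everything ~40 cm away
real_face = 400.0 + 40.0 * np.random.rand(100, 100)  # toy 4 cm facial relief
print(looks_three_dimensional(flat_screen))          # False
print(looks_three_dimensional(real_face))            # True
```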

EU report warns that AI makes autonomous vehicles ‘highly vulnerable’ to attack

The dream of autonomous vehicles is that they can avoid human error and save lives, but a new European Union Agency for Cybersecurity (ENISA) report has found that autonomous vehicles are “highly vulnerable to a wide range of attacks” that could be dangerous for passengers, pedestrians, and people in other vehicles. Attacks considered in the report include sensor attacks using beams of light, attacks that overwhelm object detection systems, back-end malicious activity, and adversarial machine learning attacks presented in training data or the physical world.

“The attack might be used to make the AI ‘blind’ for pedestrians by manipulating for instance the image recognition component in order to misclassify pedestrians. This could lead to havoc on the streets, as autonomous cars may hit pedestrians on the road or crosswalks,” the report reads. “The absence of sufficient security knowledge and expertise among developers and system designers on AI cybersecurity is a major barrier that hampers the integration of security in the automotive sector.”
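The classic recipe for this kind of misclassification attack is the fast gradient sign method (FGSM), which falls under the adversarial machine learning category the report describes. Here is a minimal, generic sketch; it is standard textbook code, not anything from the ENISA report, and `model` is assumed to be an arbitrary differentiable image classifier.

```python
import torch
import torch.nn.functional as F

# Generic textbook FGSM (Goodfellow et al., 2014), not code from the ENISA
# report. `model` stands in for any differentiable image classifier, e.g.
# the classification head of a pedestrian or traffic-sign recognizer.

def fgsm_perturb(model: torch.nn.Module,
                 images: torch.Tensor,
                 labels: torch.Tensor,
                 epsilon: float = 0.01) -> torch.Tensor:
    """Return adversarially perturbed copies of `images`."""
    images = images.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(images), labels)
    loss.backward()
    # Step in the direction that increases the loss, within an L-inf budget.
    adversarial = images + epsilon * images.grad.sign()
    return adversarial.clamp(0.0, 1.0).detach()

# Usage sketch: a perturbation imperceptible to humans can flip predictions.
# adv = fgsm_perturb(model, batch_of_images, true_labels)
# model(adv).argmax(dim=1)   # may now disagree with `true_labels`
```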

The range of AI systems and sensors needed to power autonomous vehicles increases the attack surface, according to the report. To address vulnerabilities, its authors say policymakers and businesses will need to develop a security culture across the automotive supply chain, including third-party providers. The report urges car manufacturers to mitigate security risks by treating the creation of machine learning systems as part of that supply chain.

The report focuses on adversarial machine learning attacks, which carry the risk of malicious manipulation that is undetectable to humans. It also finds that the use of machine learning in cars will require continuous review of systems to ensure they haven’t been altered in a malicious way.

“AI cybersecurity cannot just be an afterthought where security controls are implemented as add-ons and defense strategies are of reactive nature,” the paper reads. “This is especially true for AI systems that are usually designed by computer scientists and further implemented and integrated by engineers. AI systems should be designed, implemented, and deployed by teams where the automotive domain expert, the ML expert, and the cybersecurity expert collaborate.”

Scenarios presented in the report include attacks on motion-planning and decision-making algorithms, as well as spoofing attacks like the kind that can fool an autonomous vehicle into “recognizing” cars, people, or walls that don’t exist.

In the past few years, a number of studies have shown that physical perturbations can fool autonomous vehicle systems with little effort. In 2017, researchers used spray paint or stickers on a stop sign to fool an autonomous vehicle into misidentifying the sign as a speed limit sign. In 2019, Tencent security researchers used stickers to make Tesla’s Autopilot swerve into the wrong lane. And researchers demonstrated last year that they could lead an autonomous vehicle system to quickly accelerate from 35 mph to 85 mph by strategically placing a few pieces of tape on the road.

The report was coauthored by the Joint Research Centre, a science and tech advisor to the European Commission. Weeks ago, ENISA released a separate report detailing cybersecurity challenges created by artificial intelligence.

In other autonomous vehicle news, last week Waymo began testing robo-taxis in San Francisco. But an MIT task force concluded last year that autonomous vehicles could be at least another decade away.
