
Algorithms that detect cancer can be fooled by hacked images

Artificial intelligence programs that check medical images for evidence of cancer can be duped by hacks and cyberattacks, according to a new study. Researchers demonstrated that a computer program could add or remove evidence of cancer from mammograms, and those changes fooled both an AI tool and human radiologists.

That could lead to an incorrect diagnosis. An AI program helping to screen mammograms might call a scan healthy when it actually shows signs of cancer, or flag cancer in a patient who is actually cancer-free. Such hacks are not known to have happened in the real world yet, but the new study adds to a growing body of research suggesting healthcare organizations need to be prepared for them.

Hackers are increasingly targeting hospitals and healthcare institutions with cyberattacks. Most of the time, those attacks siphon off patient data (which is valuable on the black market) or lock up an organization’s computer systems until the organization pays a ransom. Both types of attacks can harm patients by gumming up a hospital’s operations and making it harder for healthcare workers to deliver good care.

But experts are also growing more worried about the potential for more direct attacks on people’s health. Security researchers have shown, for example, that hackers can remotely break into internet-connected insulin pumps and deliver dangerous doses of insulin.

Hacks that can change medical images and affect a diagnosis also fall into that category. In the new study on mammograms, published in Nature Communications, a research team from the University of Pittsburgh designed a computer program that makes breast X-ray scans that originally showed no signs of cancer look cancerous, and makes mammograms that look cancerous appear cancer-free. They then fed the tampered images to an artificial intelligence program trained to spot signs of breast cancer and asked five human radiologists to decide whether the images were real or fake.

Around 70 percent of the manipulated images fooled that program — that is, the AI wrongly said that images manipulated to look cancer-free were cancer-free, and that the images manipulated to look like they had cancer did have evidence of cancer. As for the radiologists, some were better at spotting manipulated images than others. Their accuracy at picking out the fake images ranged widely, from 29 percent to 71 percent.
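
The study’s own image-tampering program is not described here in detail, but the general idea, that small, targeted pixel changes can push a classifier toward a chosen answer, can be illustrated with a minimal gradient-based (FGSM-style) perturbation sketch against a hypothetical binary mammogram classifier. Everything in it, including the model, labels, and step size, is an assumption for illustration, not the paper’s method.

```python
# Minimal sketch of a targeted gradient-based perturbation (FGSM-style).
# NOTE: this is NOT the study's tampering method; it only illustrates how
# small pixel changes can push a classifier toward a chosen answer.
import torch
import torch.nn.functional as F

def perturb_toward_label(model, image, target_label, epsilon=0.02):
    """Nudge `image` so `model` (a hypothetical binary classifier returning
    logits of shape (1, 2); 0 = no cancer, 1 = cancer) leans toward `target_label`."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image),
                           torch.tensor([target_label], device=image.device))
    loss.backward()
    # Step the pixels in the direction that lowers the loss for the attacker's target.
    adversarial = image - epsilon * image.grad.sign()
    return adversarial.clamp(0.0, 1.0).detach()
```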

Other studies have also demonstrated the possibility that a cyberattack on medical images could lead to incorrect diagnoses. In 2019, a team of cybersecurity researchers showed that hackers could add or remove evidence of lung cancer from CT scans. Those changes also fooled both human radiologists and artificial intelligence programs.

There haven’t been public or high-profile cases where a hack like this has happened. But there are a few reasons a hacker might want to manipulate things like mammograms or lung cancer scans. A hacker might be interested in targeting a specific patient, like a political figure, or they might want to alter their own scans to get money from their insurance company or sign up for disability payments. Hackers might also manipulate images randomly and refuse to stop tampering with them until a hospital pays a ransom.

Whatever the reason, demonstrations like this one show that healthcare organizations and people designing AI models should be aware that hacks that alter medical scans are a possibility. Models should be shown manipulated images during their training to teach them to spot fake ones, study author Shandong Wu, associate professor of radiology, biomedical informatics, and bioengineering at the University of Pittsburgh, said in a statement. Radiologists might also need to be trained to identify fake images.

“We hope that this research gets people thinking about medical AI model safety and what we can do to defend against potential attacks,” Wu said.
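
Wu’s suggestion to show models manipulated images during training is, in spirit, adversarial training. Below is a minimal sketch of such a training step, reusing the hypothetical classifier and perturbation function from the earlier sketch; it is an illustration, not the study’s procedure.

```python
# Sketch of adversarial training: mix tampered images into each batch so the
# model also learns to handle them correctly. All names are hypothetical.
import torch

def train_step(model, optimizer, images, labels, attack_fn, loss_fn):
    model.train()
    # Build tampered copies of the batch, each pushed toward the wrong label
    # (e.g., with the perturb_toward_label sketch above).
    adv_images = torch.cat([attack_fn(model, img.unsqueeze(0), int(1 - lbl))
                            for img, lbl in zip(images, labels)])
    optimizer.zero_grad()
    # Train on both clean and tampered images with their true labels.
    loss = loss_fn(model(images), labels) + loss_fn(model(adv_images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```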


OpenAI’s state-of-the-art machine vision AI is fooled by handwritten notes

Researchers from machine learning lab OpenAI have discovered that their state-of-the-art computer vision system can be deceived by tools no more sophisticated than a pen and a pad. As the researchers demonstrate, simply writing down the name of an object and sticking it on another object can be enough to trick the software into misidentifying what it sees.

“We refer to these attacks as typographic attacks,” write OpenAI’s researchers in a blog post. “By exploiting the model’s ability to read text robustly, we find that even photographs of hand-written text can often fool the model.” They note that such attacks are similar to “adversarial images” that can fool commercial machine vision systems, but far simpler to produce.
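
CLIP is open source, so the effect is easy to probe. Below is a minimal zero-shot scoring sketch using the Hugging Face port of the model; the image path and candidate labels are placeholders. A typographic attack amounts to handing this pipeline a photo of one object with a handwritten note naming another and watching the scores flip.

```python
# Sketch: score an image against candidate text labels with the open-source
# CLIP model (Hugging Face port). File path and labels are placeholders.
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

image = Image.open("apple_with_handwritten_note.jpg")  # placeholder image
labels = ["a photo of an apple", "a photo of a pizza"]  # placeholder labels

inputs = processor(text=labels, images=image, return_tensors="pt", padding=True)
with torch.no_grad():
    probs = model(**inputs).logits_per_image.softmax(dim=-1)

for label, p in zip(labels, probs[0].tolist()):
    print(f"{label}: {p:.1%}")
```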

Adversarial images present a real danger for systems that rely on machine vision. Researchers have shown, for example, that they can trick the software in Tesla’s self-driving cars into changing lanes without warning simply by placing certain stickers on the road. Such attacks are a serious threat for a variety of AI applications, from the medical to the military.

But the danger posed by this specific attack is, at least for now, nothing to worry about. The OpenAI software in question is an experimental system named CLIP that isn’t deployed in any commercial product. Indeed, the very nature of CLIP’s unusual machine learning architecture created the weakness that enables this attack to succeed.

“Multimodal neurons” in CLIP respond to photos of an object as well as sketches and text. Image: OpenAI

CLIP is intended to explore how AI systems might learn to identify objects without close supervision by training on huge databases of image and text pairs. In this case, OpenAI used some 400 million image-text pairs scraped from the internet to train CLIP, which was unveiled in January.
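
That training objective pulls each image’s embedding toward its own caption and pushes it away from the other captions in the batch. Here is a simplified sketch of that symmetric contrastive loss, based on the published description rather than OpenAI’s actual training code.

```python
# Simplified sketch of a CLIP-style contrastive loss over a batch of N
# image-text pairs. `image_features` and `text_features` are assumed to be
# (N, D) embeddings from separate image and text encoders.
import torch
import torch.nn.functional as F

def contrastive_loss(image_features, text_features, temperature=0.07):
    # L2-normalize so dot products are cosine similarities.
    image_features = F.normalize(image_features, dim=-1)
    text_features = F.normalize(text_features, dim=-1)
    # logits[i, j] = similarity between image i and caption j.
    logits = image_features @ text_features.t() / temperature
    targets = torch.arange(logits.size(0), device=logits.device)
    # Symmetric cross-entropy: pick the right caption for each image,
    # and the right image for each caption.
    return (F.cross_entropy(logits, targets) +
            F.cross_entropy(logits.t(), targets)) / 2
```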

This month, OpenAI researchers published a new paper describing how they’d opened up CLIP to see how it performs. They discovered what they’re calling “multimodal neurons” — individual components in the machine learning network that respond not only to images of objects but also sketches, cartoons, and associated text. One of the reasons this is exciting is that it seems to mirror how the human brain reacts to stimuli, where single brain cells have been observed responding to abstract concepts rather than specific examples. OpenAI’s research suggests it may be possible for AI systems to internalize such knowledge the same way humans do.
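
One rough way to look for that behavior in the open-source model is to record a single unit’s activation on a photo, a sketch, and an image of rendered text and compare the responses. The sketch below uses a standard PyTorch forward hook; the probed layer, unit index, and input files are placeholder assumptions, not OpenAI’s interpretability tooling.

```python
# Sketch: record one unit's activation across different kinds of inputs
# (photo, sketch, rendered text). Layer, unit index, and files are placeholders.
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32").eval()
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

captured = {}
probe = model.vision_model.encoder.layers[-1].mlp  # assumed probe point
probe.register_forward_hook(lambda mod, inp, out: captured.update(act=out.detach()))

unit = 89  # arbitrary unit index to inspect
for path in ["spider_photo.jpg", "spider_sketch.jpg", "spider_text.jpg"]:
    inputs = processor(images=Image.open(path), return_tensors="pt")
    with torch.no_grad():
        model.get_image_features(**inputs)
    # Mean activation of the chosen unit over all image patches.
    print(path, captured["act"][0, :, unit].mean().item())
```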

In the future, this could lead to more sophisticated vision systems, but right now, such approaches are in their infancy. While any human being can tell you the difference between an apple and a piece of paper with the word “apple” written on it, software like CLIP can’t. The same ability that allows the program to link words and images at an abstract level creates this unique weakness, which OpenAI describes as the “fallacy of abstraction.”

Another example of a typographic attack. Do not trust the AI to put your money in the piggy bank. Image: OpenAI

Another example given by the lab is the neuron in CLIP that identifies piggy banks. This component responds not only to pictures of piggy banks but also to strings of dollar signs. As in the example above, that means you can fool CLIP into identifying a chainsaw as a piggy bank if you overlay it with “$$$” strings, as if it were half-price at your local hardware store.
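
Reproducing that kind of overlay takes only a few lines: stamp repeated dollar signs onto a photo and run it back through the same scoring pipeline as in the earlier sketch. The file names and labels below are again placeholders.

```python
# Sketch: overlay "$$$" strings on an image, then re-score it with the
# zero-shot pipeline shown earlier. File name and labels are placeholders.
from PIL import Image, ImageDraw

image = Image.open("chainsaw.jpg").convert("RGB")  # placeholder photo
draw = ImageDraw.Draw(image)
for y in range(0, image.height, 40):
    draw.text((10, y), "$ $ $ $ $ $ $ $", fill="black")
image.save("chainsaw_dollars.jpg")
# Re-score with labels like ["a photo of a chainsaw", "a photo of a piggy bank"].
```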

The researchers also found that CLIP’s multimodal neurons encoded exactly the sort of biases you might expect to find when sourcing your data from the internet. They note that the neuron for “Middle East” is also associated with terrorism, and they discovered “a neuron that fires for both dark-skinned people and gorillas.” This replicates an infamous error in Google’s image recognition system, which tagged Black people as gorillas. It’s yet another example of just how different machine intelligence is from human intelligence, and why pulling apart the former to understand how it works is necessary before we trust our lives to AI.


Animal Crossing Sanrio cards sold out: Don’t be fooled

Animal Crossing Sanrio cards were put up for sale at Target today, and promptly sold out. The item is a “Welcome to Animal Crossing Sanrio Collaboration Pack” with “the entire Sanrio Collaboration Series” inside. The pack includes 6 amiibo cards, all of which can be used with Animal Crossing: New Horizons, New Leaf Welcome amiibo, or New Leaf, but NOT Happy Home Designer.

The future is weird. What we’re looking at today is a physical item that enables digital items in a physical gaming device. You need to purchase a pack in order to get the items, and the items can only be enabled via an NFC transaction between the card and the user’s gaming device. With the amiibo cards included in this pack, the user can “invite a character to your campsite or to Harvey’s Photopia.”

You cannot get this pack of cards delivered to your home by Target. Or I should say you COULD not – they’re sold out at every location where we’ve seen them appear. They’re exclusive to Target, and exclusive to order pickup and/or drive-up purchase.

If you’re looking to buy the pack of cards from a 3rd-party source like eBay, don’t be surprised to find them bid up to 2x, 3x, or 10x their original price. If you’re absolutely dedicated to the idea that Hello Kitty characters will appear on your Animal Crossing island, by all means – there are a bunch available, so long as you’re willing to pay a bunch!

It’s not immediately clear whether the cards will be available ever again – or if they’ll be reprinted. Be sure to note, though – they can only be applied to one account ONCE. So don’t go buying open packs on the 3rd-party market, because you’ll probably be buying cards that do absolutely nothing for you. Make sure the pack you buy (if you absolutely must buy) is sealed!
