AI Weekly: WHO outlines steps for creating inclusive AI health care systems

This week, the World Health Organization (WHO) released its first global report on AI in health, along with six guiding principles for design, development, and deployment. The fruit of two years of consultations with WHO-appointed experts, the work cautions against overestimating the benefits of AI while highlighting how it could be used to improve screening for diseases, assist with clinical care, and more.

The health care industry produces an enormous amount of data. An IDC study estimates that the volume of health data created annually, which topped 2,000 exabytes in 2020, will continue to grow 48% year over year. The trend has enabled significant advances in AI and machine learning, which rely on large datasets to make predictions ranging from hospital bed capacity to the presence of malignant tumors in MRIs. But unlike other domains to which AI has been applied, the sensitivity and scale of health care data make collecting and leveraging it a formidable challenge.
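
To put that growth rate in perspective, here is a minimal sketch that simply compounds the two figures cited above (a roughly 2,000-exabyte baseline in 2020 and 48% year-over-year growth); the projection is illustrative only, not an IDC forecast:

    # Rough projection of annual health data volume, compounding the
    # figures cited above: ~2,000 exabytes in 2020, growing 48% per year.
    BASELINE_EB = 2_000   # exabytes of health data created in 2020
    GROWTH_RATE = 0.48    # assumed year-over-year growth

    for year in range(2020, 2026):
        volume_eb = BASELINE_EB * (1 + GROWTH_RATE) ** (year - 2020)
        print(f"{year}: ~{volume_eb:,.0f} EB")

Under those assumptions, annual volume roughly septuples by 2025, passing 14,000 exabytes.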

The WHO report acknowledges this, pointing out that the opportunities AI brings are linked with risks. There are the harms that biases encoded in algorithms could cause patients, communities, and care providers. Systems trained primarily on data from people in high-income countries, for example, may not perform well for patients in low- and middle-income countries. What’s more, unregulated use of AI could undermine the rights of patients in favor of commercial interests or of governments engaged in surveillance.

The datasets used to train AI systems that can predict the onset of conditions like Alzheimer’s, diabetes, diabetic retinopathy, breast cancer, and schizophrenia come from a range of sources. But in many cases, patients aren’t fully aware their information is included. In 2017, U.K. regulators concluded that The Royal Free London NHS Foundation Trust, a division of the U.K.’s National Health Service based in London, provided Google’s DeepMind with data on 1.6 million patients without their consent.

Regardless of the source, this data can contain bias, perpetuating inequalities in AI algorithms trained for diagnosing diseases. A team of U.K. scientists found that almost all eye disease datasets come from patients in North America, Europe, and China, meaning eye disease-diagnosing algorithms are less certain to work well for racial groups from underrepresented countries. In another study, researchers from the University of Toronto, the Vector Institute, and MIT showed that widely used chest X-ray datasets contain racial, gender, and socioeconomic biases.
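
A common first response to findings like these is to audit a dataset’s composition before training on it. The sketch below is hypothetical (the metadata file, column name, and 1% threshold are assumptions for illustration, not details from the studies cited):

    import pandas as pd

    # Hypothetical metadata table describing where each sample came from.
    meta = pd.read_csv("dataset_metadata.csv")

    # Share of samples contributed by each country of origin.
    by_country = meta["country"].value_counts(normalize=True)
    print(by_country.head(10))

    # Flag countries contributing less than 1% of samples, an arbitrary
    # cutoff for spotting groups the model will rarely see in training.
    underrepresented = by_country[by_country < 0.01]
    print(f"{len(underrepresented)} countries fall below 1% of samples")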

Further illustrating the point, Stanford researchers found that some AI-powered medical devices approved by the U.S. Food and Drug Administration (FDA) are vulnerable to data shifts and bias against underrepresented patients. Even as AI becomes embedded in more medical devices — the FDA approved over 65 AI devices last year — the accuracy of these algorithms isn’t necessarily being rigorously studied, because they’re not being evaluated by prospective studies.

Experts argue that prospective studies, which evaluate a system on data collected during deployment rather than beforehand, are necessary, particularly for AI medical devices, because actual use can differ from the intended use. For example, most computer-aided diagnostic systems are designed to be decision-support tools rather than primary diagnostic tools. A prospective study might reveal that clinicians are misusing a device for primary diagnosis, leading to outcomes that deviate from what’s expected.

Beyond dataset challenges, models lacking peer review can encounter roadblocks when deployed in the real world. Scientists at Harvard found that algorithms trained to recognize and classify CT scans could become biased toward scan formats from certain CT machine manufacturers. Meanwhile, a Google-published whitepaper revealed challenges in implementing an eye disease-predicting system in hospitals in Thailand, including issues with scan accuracy.
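
Issues like the manufacturer bias above are typically surfaced by stratified evaluation, i.e., scoring a model separately per acquisition source instead of reporting a single aggregate number. A minimal sketch, with entirely hypothetical predictions, labels, and vendor tags:

    from collections import defaultdict

    def accuracy_by_group(predictions, labels, groups):
        """Compute accuracy separately for each group (e.g., CT manufacturer)."""
        hits, totals = defaultdict(int), defaultdict(int)
        for pred, label, group in zip(predictions, labels, groups):
            totals[group] += 1
            hits[group] += int(pred == label)
        return {g: hits[g] / totals[g] for g in totals}

    # Toy usage: a large gap between vendors suggests the model has latched
    # onto scanner-specific artifacts rather than the underlying anatomy.
    print(accuracy_by_group(
        predictions=[1, 0, 1, 1, 0, 1],
        labels=[1, 0, 1, 0, 1, 1],
        groups=["VendorA"] * 3 + ["VendorB"] * 3,
    ))  # {'VendorA': 1.0, 'VendorB': 0.333...}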

To limit the risks and maximize the benefits of AI for health, the WHO recommends taking steps to protect autonomy, ensure transparency and explainability, foster responsibility and accountability, and work toward inclusiveness and equity. The recommendations also include promoting well-being, safety, and the public interest, as well as AI that’s responsive and sustainable.

The WHO says redress should be available to people adversely affected by decisions based on algorithms, and also that designers should “continuously” assess AI apps to determine whether they’re aligning with expectations and requirements. In addition, the WHO recommends both governments and companies address disruptions in the workplace caused by automated systems, including training for health care workers to adapt to the use of AI.

“AI systems should … be carefully designed to reflect the diversity of socioeconomic and health care settings,” the WHO said in a press release. “They should be accompanied by training in digital skills, community engagement, and awareness-raising, especially for millions of healthcare workers who will require digital literacy or retraining if their roles and functions are automated, and who must contend with machines that could challenge the decision making and autonomy of providers and patients.”

As new examples of problematic AI in health care emerge, from widely deployed but untested algorithms to biased dermatological datasets, it’s becoming critical that stakeholders follow accountability steps like those outlined by the WHO. Not only would this foster trust in AI systems, but it could also improve care for the millions of people who might be subjected to AI-powered diagnostic systems in the future.

“Machine learning really is a powerful tool, if designed correctly — if problems are correctly formalized and methods are identified to really provide new insights for understanding these diseases,” Dr. Mihaela van der Schaar, a Turing Fellow and professor of machine learning, AI, and health at the University of Cambridge and UCLA, said during a keynote at the ICLR conference in May 2020. “Of course, we are at the beginning of this revolution, and there is a long way to go. But it’s an exciting time. And it’s an important time to focus on such technologies.”

For AI coverage, send news tips to Kyle Wiggers — and be sure to subscribe to the AI Weekly newsletter and bookmark our AI channel, The Machine.

Thanks for reading,

Kyle Wiggers

AI Staff Writer

VentureBeat

Project Zero team outlines changes for 2021

Project Zero is a security research team at Google that spends time discussing and evaluating vulnerability disclosure policies and the consequences those policies have for users, vendors, security researchers, and software security. The team says it wants its work to benefit everyone across the ecosystem and to help make zero-day attacks more difficult. Project Zero has issued a summary of the policy changes that will take effect in 2021.

In a nutshell, Project Zero won’t share technical details of a vulnerability for 30 days if the vendor patches it before the 90-day or seven-day deadline. The 30-day period is meant to allow for user patch adoption. The team says that if an issue remains unpatched after 90 days, technical details will be published immediately, though earlier disclosure can happen with mutual agreement.

Project Zero will set a disclosure deadline of seven days for issues that are being actively exploited in the wild against users. If such an issue remains unpatched after seven days, the technical details will be published immediately. If the issue is fixed within seven days, technical details will be published 30 days after the fix is available.

The researchers will also allow vendors to request a three-day grace period for in-the-wild bugs, and earlier disclosure can still happen with mutual agreement. If Project Zero grants a grace period, the extra days are deducted from the 30-day patch adoption period rather than added on top of it. That means an issue granted a grace period and patched on day 100 would see its details disclosed on day 120.
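
Taken together, the new timelines amount to a bit of date arithmetic. The following sketch is a simplified model of the policy as described above, not Project Zero’s own tooling; it ignores mutually agreed earlier disclosure, day counts start from the initial report, and the 10-day grace period in the last call is simply the value that reproduces the day-100/day-120 example:

    def disclosure_day(fix_day, in_the_wild=False, grace_days=0):
        """Day on which technical details go public, counting from the report.

        Simplified model of the 2021 policy described above:
        - 90-day deadline (seven days for in-the-wild bugs), extended by any
          granted grace period;
        - if the bug is still unpatched at the deadline, details go out then;
        - if it is patched in time, details follow 30 days after the fix,
          minus any grace days already granted.
        """
        deadline = (7 if in_the_wild else 90) + grace_days
        if fix_day is None or fix_day > deadline:
            return deadline                    # unpatched at deadline: publish now
        return fix_day + (30 - grace_days)     # patched in time: wait out adoption window

    print(disclosure_day(fix_day=60))                   # 90
    print(disclosure_day(fix_day=None))                 # 90 (never patched)
    print(disclosure_day(fix_day=5, in_the_wild=True))  # 35
    print(disclosure_day(fix_day=100, grace_days=10))   # 120, as in the example above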

Some elements do carry over from 2020. Policy goals still include faster patch development, thorough patch development, and improved patch adoption. And when a variant of a previously reported bug is discovered, technical details of the variant are added to the existing Project Zero report, which may already be public, with no new deadline granted.
