Report: Conversational AI reduces barriers to mental health treatment


A new study from Botco AI, a HIPAA-compliant conversational marketing platform, surveyed executives from various types of mental and behavioral health facilities and organizations to understand the impact of the pandemic and how it has increased the need for mental health services. Titled “The State of Conversational Automation and Access to Mental Health Services,” the study also focused on how mental health service providers are using technology to interact with current and prospective patients.

The biggest takeaway from the study is that conversational AI (artificial intelligence) is a game-changer for behavioral health care. According to the study, “The pandemic’s social-distancing mandates, lockdowns and restrictions prompted the need for online assistance, virtual appointments and digital interactions to facilitate greater accessibility to mental health care.”

Conversational AI eases the pathways to treatment: The overwhelming majority of respondents — 89% — said that automated chat tools make it easier for patients to access the mental health treatment they need. More than three-quarters (76%) confirmed that they used technology for patient coordination and intake, and 61% said that their company’s website is already using conversational AI to answer questions from prospects or their families.

“The findings of this study show how important it is for mental health facilities to embrace new technologies in order to reduce the barriers that prevent millions of Americans from accessing the mental health services they need,” said Anu Shukla, cofounder and executive chairman of Botco AI.



Increased need for mental health services

While the pandemic disrupted or halted critical in-person mental health services, the demand for mental health treatment actually increased. As a result, 100% of survey respondents expanded their virtual behavioral support during the pandemic.

Approximately 34% of survey respondents said that their mental health facility, behavioral health organization or addiction treatment center boosted its online assistance by 50% to 75% to help those experiencing mental health challenges.

The mental health effects of the pandemic were felt disproportionately: Approximately 41% of respondents believe the pandemic has made it harder for people in general to access mental health treatment, while 55% said it has made access harder for African Americans, Hispanics, Asians, Native Americans and LGBTQ+ individuals.

The benefits of conversational AI in behavioral healthcare

“These days, video-conferencing capabilities have become table stakes for mental health facilities,” said Shukla, “and many treatment centers are beginning to leverage AI and automation for crucial functions such as prospect qualification, patient intake, and even the treatment itself.”

For patients, digital interactions, online healthcare services and the use of conversational AI technologies such as chatbots can have many benefits, including:

  • Healthcare without geographical boundaries.
  • 24/7 patient support and resources.
  • Ease of booking appointments with therapists and healthcare professionals.
  • Tailored responses that transform their patient journey and help improve clinical outcomes.
  • A user-friendly, judgment-free environment that eliminates stigma and embarrassment.
  • Overcoming potential language barriers.

For providers, using conversational AI to streamline the intake process can play a critical role in the operational, financial and clinical success of medical practices and health systems. This not only means faster throughput and shorter wait times, but also enables health care providers to use their time to provide better care for their patients, rather than filling out paperwork.

In regard to conversational automation, specifically, Shukla said, “not only does it help patients and their families get faster, more accurate answers to their most pressing questions, but it can also help verify insurance benefits instantly and it can tie into the patient’s electronic health record in order to ensure a higher quality of care.”

Providing on-demand support is critical to improving access to mental health services

Digital channels empower facility discovery: 49% of patients and their families learned about their chosen treatment facilities online using website tools like chatbots, much more frequently than through word of mouth (25%), doctor recommendations (19%), or advertisements (7%).

Chatbots are already popular patient tools: 61% of respondents said that their organizations already had conversational AI tools in place on their websites to answer questions from prospective patients or their families.

AI chat is perfectly positioned to answer FAQs: The most common topics patients and their families broach before enrolling in treatment include cost (68%), insurance coverage (60%), duration of treatment (46%), available types of treatment (43%), and payment options (41%). These are all questions that automated chatbots are designed to answer, providing patients with the support they need without requiring additional employee effort.
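Routing these common pre-enrollment questions to canned answers is exactly the kind of task a simple automated chat tool handles well. The following is a minimal, hypothetical sketch of such a keyword-based FAQ responder; the topic names, keywords and answer text are all illustrative, not from any actual Botco AI product.

```python
# Hypothetical keyword-based FAQ matcher. All answers and keywords are
# illustrative placeholders, not real product content.
FAQ_ANSWERS = {
    "cost": "Program costs vary by level of care; we can send a detailed estimate.",
    "insurance": "We can verify your insurance benefits before enrollment.",
    "duration": "Typical programs run 30 to 90 days depending on the treatment plan.",
    "treatment": "We offer inpatient, outpatient and telehealth treatment options.",
    "payment": "We accept insurance, self-pay and sliding-scale payment plans.",
}

KEYWORDS = {
    "cost": ["cost", "price", "expensive", "how much"],
    "insurance": ["insurance", "covered", "coverage"],
    "duration": ["how long", "duration", "length"],
    "treatment": ["treatment", "therapy", "program"],
    "payment": ["payment", "pay", "installment"],
}

def answer(question: str) -> str:
    """Return a canned answer for the first matching topic, else escalate."""
    q = question.lower()
    for topic, words in KEYWORDS.items():
        if any(w in q for w in words):
            return FAQ_ANSWERS[topic]
    return "Let me connect you with an admissions specialist."
```

A production chatbot would use intent classification rather than substring matching, but even this sketch shows why the survey's top FAQ topics are such a natural fit for automation: the questions are predictable and the answers are reusable.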

“The good news is that the majority of mental health facilities are already embracing chatbots and other technologies,” said Shukla, “but based on our findings, the industry as a whole still has some work to do in order to improve mental health support for those who need it, and for minorities in particular.”

In June 2022, Botco AI surveyed approximately 120 executives, including vice presidents, executive directors, facility managers and admissions directors, employed at mental health facilities, behavioral health organizations, addiction treatment providers and rehabilitation centers.

Read the full report from Botco AI.




Microsoft Flight Simulator patch significantly reduces the game’s download size

One of the biggest complaints about modern games is the massive size of the downloads they require. Take the new Microsoft Flight Simulator, for instance: the initial download for the game was 170 gigabytes. Even on a fast internet connection, a download of that size takes a significant amount of time. Considering that much of the United States has “broadband” speeds that wouldn’t be considered broadband in many parts of the world, massive downloads are a significant inconvenience for gamers.
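To put those sizes in perspective, here is a quick back-of-the-envelope calculation of download times. The 25 Mbps figure is an assumption, chosen because it is the FCC's longstanding baseline definition of broadband in the U.S.

```python
def download_hours(size_gb: float, speed_mbps: float) -> float:
    """Rough hours to download `size_gb` gigabytes at `speed_mbps` megabits/s."""
    size_megabits = size_gb * 8 * 1000  # GB -> gigabits -> megabits
    return size_megabits / speed_mbps / 3600

# At an assumed 25 Mbps connection:
# 170 GB takes roughly 15.1 hours; 83 GB takes roughly 7.4 hours.
```

So even after the patch, players on baseline broadband are still looking at most of a workday of downloading.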

Microsoft knows that a massive download isn’t ideal and has issued a new patch for Flight Simulator, known as SIM Update 4, that reduces the game’s initial full download from 170 gigabytes to 83 gigabytes. Microsoft says the team optimized the initial download of the game to reduce its size. While a reduction of tens of gigabytes is certainly welcome, a download of 83 gigabytes is still a large file.

The official online specifications for the game say you need 150 gigabytes on the hard drive for installation. The Microsoft Store page for the game says it measures 127.01 gigabytes. Microsoft Flight Simulator has been around for months now and has a wealth of game content.

The game has been praised for its photorealistic graphics that allow you to see individual buildings on the ground in many areas. The airports are also incredibly detailed all the way down to signage marking the taxiways. The game is one of the most technologically advanced titles out there, and fans are hoping Microsoft has some new announcements to reveal during E3. With the new update, fans on slower connections will be less frustrated while preparing to play.



Adversarial training reduces safety of neural networks in robots: Research


This article is part of our reviews of AI research papers, a series of posts that explore the latest findings in artificial intelligence.

There’s a growing interest in employing autonomous mobile robots in open work environments such as warehouses, especially with the constraints posed by the global pandemic. And thanks to advances in deep learning algorithms and sensor technology, industrial robots are becoming more versatile and less costly.

But safety and security remain two major concerns in robotics. And the current methods used to address these two issues can produce conflicting results, researchers at the Institute of Science and Technology Austria (IST Austria), the Massachusetts Institute of Technology (MIT), and Technische Universität Wien (TU Wien) have found.

On the one hand, machine learning engineers must train their deep learning models on many natural examples to make sure they operate safely under different environmental conditions. On the other, they must train those same models on adversarial examples to make sure malicious actors can’t compromise their behavior with manipulated images.

But adversarial training can have a significantly negative impact on the safety of robots, the researchers at IST Austria, MIT, and TU Wien discuss in a paper titled “Adversarial Training is Not Ready for Robot Learning.” Their paper, which has been accepted at the International Conference on Robotics and Automation (ICRA 2021), shows that the field needs new ways to improve adversarial robustness in deep neural networks used in robotics without reducing their accuracy and safety.

Adversarial training

Deep neural networks exploit statistical regularities in data to carry out prediction or classification tasks. This makes them very good at handling computer vision tasks such as detecting objects. But reliance on statistical patterns also makes neural networks sensitive to adversarial examples.

An adversarial example is an image that has been subtly modified to cause a deep learning model to misclassify it. This usually happens by adding a layer of noise to a normal image. Each noise pixel changes the numerical values of the image very slightly, enough to be imperceptible to the human eye. But when added together, the noise values disrupt the statistical patterns of the image, which then causes a neural network to mistake it for something else.
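One common way to generate such noise is the fast gradient sign method (FGSM), which nudges every pixel slightly in the direction that increases the model's loss. The sketch below assumes a differentiable PyTorch classifier and a correctly labeled input image; it is an illustration of the general technique, not the specific attack used in the paper.

```python
import torch

def fgsm_example(model, image, label, epsilon=0.01):
    """Craft an adversarial example with the fast gradient sign method.

    Assumes `model` maps an image batch to class logits and that pixel
    values live in [0, 1]. `epsilon` bounds the per-pixel perturbation.
    """
    image = image.clone().detach().requires_grad_(True)
    loss = torch.nn.functional.cross_entropy(model(image), label)
    loss.backward()
    # Step each pixel by +/- epsilon, whichever direction raises the loss.
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0, 1).detach()
```

Because each pixel moves by at most `epsilon`, the perturbed image looks identical to a human while the accumulated shift can flip the network's prediction, which is exactly the panda-style failure described above.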


Above: Adding a layer of noise to the panda image on the left turns it into an adversarial example.

Adversarial examples and attacks have become a hot topic of discussion at artificial intelligence and security conferences. And there’s concern that adversarial attacks can become a serious security concern as deep learning becomes more prominent in physical tasks such as robotics and self-driving cars. However, dealing with adversarial vulnerabilities remains a challenge.

One of the best-known methods of defense is “adversarial training,” a process that fine-tunes a previously trained deep learning model on adversarial examples. In adversarial training, a program generates a set of adversarial examples that are misclassified by a target neural network. The neural network is then retrained on those examples and their correct labels. Fine-tuning the neural network on many adversarial examples will make it more robust against adversarial attacks.
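The training loop described above can be sketched in a few lines. This is a generic illustration under stated assumptions (PyTorch, a `make_adversarial` callable standing in for any attack such as FGSM), not the paper's exact procedure; mixing clean and adversarial batches is one common variant meant to limit the drop in clean accuracy.

```python
import torch

def adversarial_training_epoch(model, loader, optimizer, make_adversarial):
    """One epoch of adversarial training.

    `make_adversarial(model, images, labels)` is assumed to return a
    perturbed copy of `images` that the current model misclassifies.
    """
    model.train()
    for images, labels in loader:
        # Generate adversarial examples against the current model state.
        adv_images = make_adversarial(model, images, labels)
        optimizer.zero_grad()
        # Train on clean and adversarial inputs with their correct labels.
        inputs = torch.cat([images, adv_images])
        targets = torch.cat([labels, labels])
        loss = torch.nn.functional.cross_entropy(model(inputs), targets)
        loss.backward()
        optimizer.step()
```

Each epoch regenerates the adversarial examples, since perturbations crafted against an earlier model state quickly stop fooling the retrained network.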

Adversarial training results in a slight drop in the accuracy of a deep learning model’s predictions. But the degradation is considered an acceptable tradeoff for the robustness it offers against adversarial attacks.

In robotics applications, however, adversarial training can cause unwanted side effects.

“In a lot of deep learning, machine learning, and artificial intelligence literature, we often see claims that ‘neural networks are not safe for robotics because they are vulnerable to adversarial attacks’ for justifying some new verification or adversarial training method,” Mathias Lechner, Ph.D. student at IST Austria and lead author of the paper, told TechTalks in written comments. “While intuitively, such claims sound about right, these ‘robustification methods’ do not come for free, but with a loss in model capacity or clean (standard) accuracy.”

Lechner and the other coauthors of the paper wanted to verify whether the clean-vs-robust accuracy tradeoff in adversarial training is always justified in robotics. They found that while the practice improves the adversarial robustness of deep learning models in vision-based classification tasks, it can introduce novel error profiles in robot learning.

Adversarial training in robotic applications


Say you have a trained convolutional neural network and want to use it to classify a bunch of images stored in a folder. If the neural network is well trained, it will classify most of them correctly and might get a few of them wrong.

Now imagine that someone inserts two dozen adversarial examples in the images folder. A malicious actor has intentionally manipulated these images to cause the neural network to misclassify them. A normal neural network would fall into the trap and give the wrong output. But a neural network that has undergone adversarial training will classify most of them correctly. It might, however, see a slight performance drop and misclassify some of the other images.

In static classification tasks, where each input image is independent of others, this performance drop is not much of a problem as long as errors don’t occur too frequently. But in robotic applications, the deep learning model is interacting with a dynamic environment. Images fed into the neural network come in continuous sequences that are dependent on each other. In turn, the robot is physically manipulating its environment.


“In robotics, it matters ‘where’ errors occur, compared to computer vision which primarily concerns the amount of errors,” Lechner says.

For instance, consider two neural networks, A and B, each with a 5% error rate. From a pure learning perspective, both networks are equally good. But in a robotic task, where the network runs in a loop and makes several predictions per second, one network could outperform the other. For example, network A’s errors might happen sporadically, which will not be very problematic. In contrast, network B might make several errors consecutively and cause the robot to crash. While both neural networks have equal error rates, one is safe and the other isn’t.
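A toy simulation makes the contrast concrete. Both synthetic "networks" below have the same 5% error rate, but one scatters its errors while the other groups them into bursts; the crash threshold of three consecutive errors is an illustrative assumption, not a figure from the paper.

```python
import random

def longest_error_run(errors):
    """Length of the longest streak of consecutive errors."""
    run = best = 0
    for wrong in errors:
        run = run + 1 if wrong else 0
        best = max(best, run)
    return best

n = 10_000

# Network A: 5% error rate, errors scattered independently per step.
random.seed(0)
sporadic = [random.random() < 0.05 for _ in range(n)]

# Network B: the same 5% error rate, but errors grouped into runs of 5.
bursty = [(i % 100) < 5 for i in range(n)]

# If the robot crashes after 3 consecutive wrong predictions (an
# illustrative threshold), network B is unsafe even though both networks
# are indistinguishable by error rate alone.
```

Measured only by accuracy the two are identical; measured by longest error streak, only the bursty network crosses the hypothetical crash threshold.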

Another problem with classic evaluation metrics is that they only measure the number of misclassifications introduced by adversarial training and don’t account for error margins.

“In robotics, it matters how much errors deviate from their correct prediction,” Lechner says. “For instance, let’s say our network misclassifies a truck as a car or as a pedestrian. From a pure learning perspective, both scenarios are counted as misclassifications, but from a robotics perspective the misclassification as a pedestrian could have much worse consequences than the misclassification as a car.”
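One way to capture this asymmetry is a misclassification cost matrix, where each (true label, predicted label) pair carries a different penalty. The classes and cost values below are made-up illustrations of Lechner's truck/car/pedestrian example, not numbers from the paper.

```python
# COST[true][predicted]: penalty for predicting `predicted` when `true`
# is the actual class. Values are illustrative, not from the paper.
COST = {
    "truck": {"truck": 0.0, "car": 1.0, "pedestrian": 10.0},
    "car": {"car": 0.0, "truck": 1.0, "pedestrian": 10.0},
    "pedestrian": {"pedestrian": 0.0, "car": 10.0, "truck": 10.0},
}

def safety_cost(pairs):
    """Total cost of a sequence of (true_label, predicted_label) pairs."""
    return sum(COST[t][p] for t, p in pairs)

# Two models, one error each: equal accuracy, very different risk.
model_a = [("truck", "car")]         # benign confusion, cost 1.0
model_b = [("truck", "pedestrian")]  # dangerous confusion, cost 10.0
```

Under plain accuracy the two models tie; under a safety-weighted metric, the second is ten times worse, which is the distinction Lechner argues robotics evaluation needs.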

Errors caused by adversarial training

The researchers found that “domain safety training,” a more general form of adversarial training, introduces three types of errors in neural networks used in robotics: systemic, transient, and conditional.

Transient errors cause sudden shifts in the accuracy of the neural network. Conditional errors will cause the deep learning model to deviate from the ground truth in specific areas. And systemic errors create domain-wide shifts in the accuracy of the model. All three types of errors can cause safety risks.


Above: Adversarial training causes three types of errors in neural networks employed in robotics.

To test the effect of their findings, the researchers created an experimental robot that is supposed to monitor its environment, read gesture commands, and move around without running into obstacles. The robot uses two neural networks. A convolutional neural network detects gesture commands through video input coming from a camera attached to the front side of the robot. A second neural network processes data coming from a lidar sensor installed on the robot and sends commands to the motor and steering system.

The researchers tested the video-processing neural network with three different levels of adversarial training. Their findings show that the clean accuracy of the neural network decreases considerably as the level of adversarial training increases. “Our results indicate that current training methods are unable to enforce non-trivial adversarial robustness on an image classifier in a robotic learning context,” the researchers write.


Above: The robot’s visual neural network was trained on adversarial examples to increase its robustness against adversarial attacks.

“We observed that our adversarially trained vision network behaves really opposite of what we typically understand as ‘robust,’” Lechner says. “For instance, it sporadically turned the robot on and off without any clear command from the human operator to do so. In the best case, this behavior is annoying, in the worst case it makes the robot crash.”

The lidar-based neural network did not undergo adversarial training, but it was trained to be extra safe and prevent the robot from moving forward if there was an object in its path. This resulted in the neural network being too defensive and avoiding benign scenarios such as narrow hallways.

“For the standard trained network, the same narrow hallway was no problem,” Lechner said. “Also, we never observed the standard trained network to crash the robot, which again questions the whole point of why we are doing the adversarial training in the first place.”


Above: Adversarial training causes a significant drop in the accuracy of neural networks used in robotics.

Future work on adversarial robustness

“Our theoretical contributions, although limited, suggest that adversarial training is essentially re-weighting the importance of different parts of the data domain,” Lechner says, adding that to overcome the negative side-effects of adversarial training methods, researchers must first acknowledge that adversarial robustness is a secondary objective, and a high standard accuracy should be the primary goal in most applications.

Adversarial machine learning remains an active area of research. AI scientists have developed various methods to protect machine learning models against adversarial attacks, including neuroscience-inspired architectures, modal generalization methods, and random switching between different neural networks. Time will tell whether any of these or future methods will become the gold standard of adversarial robustness.

A more fundamental problem, also confirmed by Lechner and his coauthors, is the lack of causality in machine learning systems. As long as neural networks focus on learning superficial statistical patterns in data, they will remain vulnerable to different forms of adversarial attacks. Learning causal representations might be the key to protecting neural networks against adversarial attacks. But learning causal representations itself is a major challenge and scientists are still trying to figure out how to solve it.

“Lack of causality is how the adversarial vulnerabilities end up in the network in the first place,” Lechner says. “So, learning better causal structures will definitely help with adversarial robustness.”

“However,” he adds, “we might run into a situation where we have to decide between a causal model with less accuracy and a big standard network. So, the dilemma our paper describes also needs to be addressed when looking at methods from the causal learning domain.”

Ben Dickson is a software engineer and the founder of TechTalks. He writes about technology, business, and politics.



