How enterprises can keep fear, uncertainty, and doubt from impeding AI

Enterprise adoption of AI technology is increasing, but there is still a substantial amount of fear, uncertainty, and doubt (FUD) among the knowledge workforce that could hamper its deployment and perhaps the ultimate success of key initiatives.

Resistance to new software and new processes is nothing new in the enterprise, of course, but it could become a crucial factor for AI, considering the radical shift the technology brings to data environments and the level to which it is expected to penetrate the IT stack. AI technology is likely to subsume many of the daily tasks performed by knowledge workers, forcing a rethink of the individual’s proper role in the organization. Meanwhile, top executives worry about losing control of AI and doing real damage to their organization’s core processes, not to mention their own careers.

Fear of the unknown

However, it’s safe to say that much of this worry stems from the fact that AI is still largely unknown to the workforce. Headlines about lost jobs and bots run amok certainly don’t help calm the jitters, so it will be incumbent on the enterprise to not only introduce AI technology in a gradual, non-threatening manner but also take precautions to ensure that it integrates into existing environments cleanly before it starts to do any heavy lifting.

One way to do this is to turn today’s AI skeptics into AI evangelists, according to a recent report from Infosys. Research led by the company’s chief solution architect for data and AI, Rajan Padmanabhan, shows that identifying the key employees who will benefit most from AI is an important first step in calming the fears of less enthusiastic workers. By showing how AI improves overall business value and raises the standing of those who know how to use it, organizations can build a groundswell of support that accelerates over time.

Another key factor in building trust and avoiding FUD is transparency. People tend to fear what they don’t understand, so opening up AI to scrutiny and showing how it makes key decisions can greatly improve acceptance in the workforce. And ultimately, the enterprise needs to ensure that AI is being used in an ethical manner, or even the evangelists could lose heart and turn into skeptics.

Also, don’t underestimate the way in which AI poses a threat to both individual and collective identity in the workplace. New research from Germany’s Paderborn University and the University of Duisburg-Essen highlights the ways in which AI can negatively influence workers’ perception of themselves and their role in the enterprise. To counter this, top management must remain acutely aware of how AI changes the nature of work in the organization and how it contributes to the loss of status and position. The irony, however, is that AI will be a valuable tool in assessing its own threat to identity and in predicting the impact this will have on the business model.

Familiarity breeds acceptance

One of the biggest hurdles in overcoming FUD of any kind is dispelling the misconceptions that tend to arise before the technology is deployed. Earlier, less intimidating forms of AI, such as the Tamagotchi and Sony’s robot dog Aibo, showed how familiarity allows people to accept AI for what it really is, according to Woodside Capital Partners’ Jon Shalowitz. Now the same pattern is playing out in Japan as people encounter SoftBank’s Pepper robots, which are programmed to recognize emotions and facial cues. A few moments of interaction are usually all it takes for people to overcome their trepidation and start eagerly engaging with the bots.

Not all iterations of AI will come in the form of smiling robots, however. The fact remains that enterprises still have a lot of work to do to successfully transition AI from finite projects to a refined system, says Mark Montgomery, CEO of AI OS developer KYield.

As the legendary management consultant and writer Peter Drucker said: “Culture eats strategy for breakfast,” meaning that the most well-intentioned initiative is doomed to fail if it cannot overcome the resistance to change that is inherent in any complex organization.

Not every AI project will succeed, of course, so Montgomery advises a light touch with AI at first; the last thing anyone wants is a confusing, conflicting rollout at mass scale. But don’t be too timid, either. Once success on a limited scale has been achieved, the focus should shift to expansion and operationalization, because AI enthusiasm will wane if it fails to achieve appreciable results within a set period of time.

AI is essentially the new guy in the office, and the new guy is often under a lot of pressure. He first has to gain the trust of his coworkers, then handle the mounting workload that will inevitably come his way. The problem is that people tend to be far less forgiving of technology than they are of other people. FUD will only be overcome once AI has proven it can do the job, and do it well.

IBM releases AI model toolkit to help developers measure uncertainty

At its Digital Developer Conference today, IBM open-sourced Uncertainty Quantification 360 (UQ360), a new toolkit focused on enabling AI to understand and communicate its uncertainty. Following in the footsteps of IBM’s AI Fairness 360 and AI Explainability 360, the goal of UQ360 is to foster community practices across researchers, data scientists, developers, and others that might lead to better understanding and communication around the limitations of AI.

It’s commonly understood that deep learning models are overconfident — even when they make mistakes. Epistemic uncertainty describes what a model doesn’t know because the training data wasn’t appropriate. On the other hand, aleatoric uncertainty is the uncertainty arising from the natural randomness of observations. Given enough training samples, epistemic uncertainty will decrease, but aleatoric uncertainty can’t be reduced even when more data is provided.
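
To make the distinction concrete, here is a minimal sketch of one common way to estimate both kinds of uncertainty, using generic NumPy and scikit-learn rather than UQ360’s own API. The toy data, ensemble size, and model choices are illustrative assumptions: disagreement across a bootstrap ensemble approximates the epistemic part, while a second model fit to the squared residuals approximates the aleatoric part.

    # Illustrative sketch, not UQ360: separating epistemic and aleatoric
    # uncertainty with a bootstrap ensemble.
    import numpy as np
    from sklearn.ensemble import GradientBoostingRegressor

    rng = np.random.default_rng(0)

    # Toy data: y = sin(x) with input-dependent (heteroscedastic) noise.
    X = rng.uniform(-3, 3, size=(500, 1))
    noise = 0.1 + 0.2 * np.abs(X[:, 0])          # aleatoric: grows with |x|
    y = np.sin(X[:, 0]) + rng.normal(0.0, noise)

    # Train an ensemble on bootstrap resamples of the training set.
    members = []
    for seed in range(10):
        idx = rng.integers(0, len(X), size=len(X))
        members.append(GradientBoostingRegressor(random_state=seed).fit(X[idx], y[idx]))

    # Test points include some outside the training range, where data was scarce.
    X_test = np.linspace(-4, 4, 9).reshape(-1, 1)
    preds = np.stack([m.predict(X_test) for m in members])

    # Disagreement across members approximates epistemic uncertainty;
    # it shrinks as more training data is added.
    epistemic = preds.std(axis=0)

    # A model fit to squared residuals approximates aleatoric (noise)
    # uncertainty, which more data cannot remove.
    in_sample = np.mean([m.predict(X) for m in members], axis=0)
    noise_model = GradientBoostingRegressor(random_state=0).fit(X, (y - in_sample) ** 2)
    aleatoric = np.sqrt(np.clip(noise_model.predict(X_test), 0.0, None))

    for x, e, a in zip(X_test[:, 0], epistemic, aleatoric):
        print(f"x={x:+.1f}  epistemic~{e:.3f}  aleatoric~{a:.3f}")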

UQ360 offers a set of algorithms and a taxonomy to quantify uncertainty, as well as capabilities to measure and improve uncertainty quantification (UQ). For every UQ algorithm provided in the UQ360 Python package, users can choose an appropriate style of communication by following IBM’s guidance on communicating UQ estimates, from descriptions to visualizations. UQ360 also includes an interactive experience that introduces UQ and demonstrates its use in a house price prediction application, along with a number of in-depth tutorials showing how to use UQ across the AI lifecycle.
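
The full API lives in the toolkit’s documentation, but the general shape of a UQ360 workflow follows scikit-learn’s fit/predict convention. A rough sketch is below; note that the import path, constructor arguments, and return values are assumptions patterned on the project’s published examples rather than verified signatures, so check the UQ360 docs before relying on them.

    # Hypothetical sketch of UQ360's fit/predict pattern. The import path,
    # config keys, and return values are assumptions -- verify against the
    # UQ360 documentation.
    import numpy as np
    from uq360.algorithms.quantile_regression import QuantileRegression

    rng = np.random.default_rng(0)
    X_train = rng.normal(size=(200, 3))
    y_train = X_train @ np.array([1.0, -2.0, 0.5]) + rng.normal(size=200)
    X_test = rng.normal(size=(5, 3))

    # Wraps a quantile regressor that produces a point estimate plus
    # lower/upper bounds, which can be communicated to users as a range
    # rather than a bare number.
    config = {"n_estimators": 500, "loss": "quantile", "alpha": 0.95}
    model = QuantileRegression(config=config)
    model.fit(X_train, y_train)
    y_pred, y_lower, y_upper = model.predict(X_test)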

The importance of uncertainty

Uncertainty is a major barrier standing in the way of self-supervised learning’s success, Facebook chief AI scientist Yann LeCun said at the International Conference on Learning Representation (ICLR) last year. Distributions are tables of values that link every possible value of a variable to the probability the value could occur. They represent uncertainty perfectly well where the variables are discrete, which is why architectures like Google’s BERT are so successful. But researchers haven’t yet discovered a way to usefully represent distributions where the variables are continuous — i.e., where they can be obtained only by measuring.

As IBM research staff members Prasanna Sattigeri and Q. Vera Liao note in a blog post, the choice of UQ method depends on a number of factors, including the underlying model, the type of machine learning task, characteristics of the data, and the user’s goal. Sometimes a chosen UQ method might not produce high-quality uncertainty estimates and could mislead users, so it’s crucial for developers to evaluate the quality of UQ and improve the quantification quality if necessary before deploying an AI system.
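
One way to perform that evaluation is to check calibration and sharpness, sketched below with plain NumPy (UQ360 ships its own metrics module; these hand-rolled versions are just for illustration). A prediction interval coverage probability (PICP) far below the nominal level, or intervals that are absurdly wide, both signal misleading UQ.

    # Illustrative, hand-rolled uncertainty-quality checks.
    import numpy as np

    def picp(y_true, y_lower, y_upper):
        # Prediction interval coverage probability: fraction of targets
        # falling inside their predicted interval.
        return np.mean((y_true >= y_lower) & (y_true <= y_upper))

    def mpiw(y_lower, y_upper):
        # Mean prediction interval width: average sharpness of the intervals.
        return np.mean(y_upper - y_lower)

    # A nominal 90% interval should cover roughly 90% of targets.
    y_true  = np.array([3.1, 2.4, 5.0, 4.2])
    y_lower = np.array([2.5, 2.0, 4.0, 4.5])
    y_upper = np.array([3.5, 3.0, 6.0, 5.5])
    print(picp(y_true, y_lower, y_upper))   # 0.75 -- undercovering
    print(mpiw(y_lower, y_upper))           # 1.25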

In a recent study conducted by Himabindu Lakkaraju, an assistant professor at Harvard University, showing uncertainty metrics to both people with a background in machine learning and non-experts had an equalizing effect on their reliance on AI predictions. While fostering trust in AI may never be as simple as providing metrics, awareness of the pitfalls could go some way toward protecting people from machine learning’s limitations.

“Common explainability techniques shed light on how AI works, but UQ exposes limits and potential failure points,” Sattigeri and Liao wrote. “Users of a house price prediction model would like to know the margin of error of the model predictions to estimate their gains or losses. Similarly, a product manager may notice that an AI model predicts a new feature A will perform better than a new feature B on average, but to see its worst-case effects on KPIs, the manager would also need to know the margin of error in the predictions.”
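
As a concrete illustration of such a margin of error, the sketch below uses plain scikit-learn quantile regression on the California housing dataset as a stand-in for the house price example; it is not UQ360’s own demo, and the dataset and quantile levels are assumptions chosen for illustration.

    # Illustrative margin-of-error estimate via quantile regression
    # (generic scikit-learn stand-in, not UQ360's house price demo).
    from sklearn.datasets import fetch_california_housing
    from sklearn.ensemble import GradientBoostingRegressor
    from sklearn.model_selection import train_test_split

    X, y = fetch_california_housing(return_X_y=True)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

    # Fit separate models for the 5th, 50th, and 95th percentiles.
    models = {
        q: GradientBoostingRegressor(loss="quantile", alpha=q,
                                     random_state=0).fit(X_tr, y_tr)
        for q in (0.05, 0.5, 0.95)
    }

    point = models[0.5].predict(X_te[:3])
    low = models[0.05].predict(X_te[:3])
    high = models[0.95].predict(X_te[:3])

    # Reporting the interval alongside the point estimate lets a user
    # reason about worst-case outcomes, not just the average.
    for p, l, h in zip(point, low, high):
        print(f"predicted ~ {p:.2f}, 90% interval [{l:.2f}, {h:.2f}]")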
