
Europe’s AI laws will cost companies a small fortune – but the payoff is trust



Artificial intelligence isn’t tomorrow’s technology — it’s already here. Now too is the legislation proposing to regulate it.

Earlier this year, the European Union outlined its proposed artificial intelligence legislation and gathered feedback from hundreds of companies and organizations. The European Commission closed the consultation period in August, and next comes further debate in the European Parliament.

As well as banning some uses outright (facial recognition for identification in public spaces and social “scoring,” for instance), its focus is on regulation and review, especially for AI systems deemed “high risk” — those used in education or employment decisions, say.

Any company with a software product deemed high risk will require a Conformité Européenne (CE) badge to enter the market. The product must be designed to be overseen by humans, avoid automation bias, and be accurate to a level proportionate to its use.

Some are concerned about the knock-on effects of this. They argue that it could stifle European innovation as talent is lured to regions with less strict restrictions, such as the US. And the anticipated compliance costs for high-risk AI products in the region (perhaps as much as €400,000, or $452,000, for high-risk systems, according to one US think tank) could deter initial investment too.

So the argument goes. But I embrace the legislation and the risk-based approach the EU has taken.

Why should I care? I live in the UK, and my company, Healx, which uses AI to help discover new treatment opportunities for rare diseases, is based in Cambridge.

This autumn, the UK published its own national AI strategy, which has been designed to keep regulation at a “minimum,” according to a minister. But no tech company can afford to ignore what goes on in the EU.

The EU’s General Data Protection Regulation (GDPR) required just about every company with a website on either side of the Atlantic to react and adapt when it was adopted in 2016. It would be naive to think that any company with an international outlook won’t run up against these proposed rules too. If you want to do business in Europe, you will still have to adhere to them from outside it.

And for areas like health, this is incredibly important. The use of artificial intelligence in healthcare will almost inevitably fall under the “high risk” label. And rightly so: Decisions that affect patient outcomes change lives.

Mistakes at the very start of this new era could damage public perception irrevocably. We already know how well-intentioned AI healthcare initiatives can end up perpetuating structural racism, for instance. Left unchecked, they will continue to.

That’s why the legislation’s focus on reducing bias in AI, and setting a gold standard for building public trust, is vital for the industry. If an AI system is fed patient data that does not accurately represent a target group (women and minority groups are often underrepresented in clinical trials), the results can be skewed.

That damages trust, and trust is crucial in healthcare. A lack of trust limits effectiveness. That’s part of the reason such large swathes of people in the West are still declining to get vaccinated against COVID. The problems that’s causing are plain to see.

AI breakthroughs will mean nothing if patients are suspicious of a diagnosis or therapy produced by an algorithm, or don’t understand how conclusions have been drawn. Both result in a damaging lack of trust.

In 2019, Harvard Business Review found that patients were wary of medical AI even when it was shown to outperform doctors, simply because we believe our health issues to be unique. We can’t begin to shift that perception without trust.

Artificial intelligence has proven its potential to revolutionize healthcare, saving lives en route to becoming an estimated $200 billion industry by 2030.

The next step won’t just be to build on these breakthroughs but to build trust so that they can be implemented safely, without disregarding vulnerable groups, and with clear transparency, so worried individuals can understand how a decision has been made.

This is something that will always, and should always, be monitored. That’s why we should all take notice of the spirit of the EU’s proposed AI legislation, and embrace it, wherever we operate.

Tim Guilliams is a co-founder and CEO of drug discovery startup Healx.


Zero trust networking startup Elisity raises $26M



Elisity, a cybersecurity startup, today announced that it raised $26 million in a funding round led by Two Bear Capital and AllegisCyber Capital. CEO James Winebrenner says that the capital will be put toward scaling Elisity’s operations as it accelerates R&D and customer acquisition.

According to a recent study published by the University of Maryland, hackers attack every 39 seconds, or about 2,244 times a day. The average time to identify a breach in 2019 was 206 days, at which point the cost could be in excess of $3.92 million. Kaspersky Lab reported a threefold year-over-year increase in smart gadget hacks in the first half of 2018, with one malware variant managing to infect 57,000 wireless security cameras.

Elisity, whose founding team includes Cisco, Qualys, and Viptela veterans, offers a product suite that’s designed to secure data while ensuring access. It combines the paradigm of zero trust access, meaning no user is trusted by default from inside or outside the network, with a software-defined perimeter that authorizes users, devices, and apps based on policies before they can communicate with critical resources. Access is monitored by AI algorithms that track and analyze traffic flows and user behavior to make recommendations, discover all of an organization’s assets, and build an encrypted mesh overlay between a cloud services panel and network probes.

Elisity was started in 2018 by Burjiz Pithawala, Sundher Narayan, and Srinivas Sardar, all of whom previously held leadership roles in product development and architecture at Cisco. The executive team is headed by Winebrenner, who led the go-to-market strategy for Viptela from pre-launch through to the sale to Cisco in 2017.

Zero trust

According to Gartner, zero trust network access augments traditional VPN technologies for application access, removing the excessive trust once required to allow employees and partners to collaborate. The approach abstracts and centralizes the access mechanisms, so that the security engineers and staff can be responsible for them.

The global zero trust security market is expected to reach $54.6 billion by 2026, rising at a compound annual growth rate of 18.8%. Gartner posits that this reflects the technology’s potential: More resilient environments with improved flexibility and better monitoring appeal to organizations looking for more flexible — and responsive — ways to connect and collaborate with their digital business ecosystems, remote workers, and partners.

With Elisity, devices can connect to a software-defined, app-centric virtual network that runs atop existing transport networks only if they’re configured with a policy. The mesh decouples app access from underlying network access, assuming the network is untrustworthy. Similar to a traditional virtual private network (VPN), services brought within the Elisity environment aren’t visible on the internet and are thus mostly shielded from attackers. Organizations can connect and secure access in campus, branch, and remote offices to apps in the cloud, multicloud, and datacenter environments.
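
To make that model concrete, here is a minimal sketch, in Python, of the default-deny idea described above: a request is allowed only if an explicit policy matches the user, device, app, and location. It is purely illustrative; the class names and fields are hypothetical, not Elisity’s actual data model or API.

```python
# Illustrative sketch only: a toy, default-deny policy check in the spirit of
# the zero trust model described above. Class names and fields are
# hypothetical, not Elisity's actual data model or API.
from dataclasses import dataclass
from typing import List, Set


@dataclass
class AccessRequest:
    user: str       # verified identity, e.g. from an identity provider
    device: str     # device identity / posture label
    app: str        # target application or resource
    location: str   # campus, branch, remote, etc.


@dataclass
class Policy:
    users: Set[str]
    devices: Set[str]
    apps: Set[str]
    locations: Set[str]

    def allows(self, req: AccessRequest) -> bool:
        return (req.user in self.users and req.device in self.devices
                and req.app in self.apps and req.location in self.locations)


def authorize(req: AccessRequest, policies: List[Policy]) -> bool:
    """Default deny: grant access only if some explicit policy matches."""
    return any(policy.allows(req) for policy in policies)


policies = [Policy(users={"alice"}, devices={"managed-laptop"},
                   apps={"erp"}, locations={"branch", "remote"})]
print(authorize(AccessRequest("alice", "managed-laptop", "erp", "remote"), policies))  # True
print(authorize(AccessRequest("bob", "byod-phone", "erp", "remote"), policies))        # False
```

A real deployment would also evaluate these decisions continuously and feed in the behavioral signals the article describes, rather than checking once at connect time.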

“Elisity’s AI-powered … platform fuses identity and behavioral intelligence to continuously assess risk and instantly optimize access, connectivity, and protection policies that follow … devices, applications and people wherever they go,” Winebrenner told VentureBeat via email. “By integrating asset management, connectivity, and security, Elisity helps enterprise-class organizations across industries including financial services, health care, and manufacturing break through today’s siloed enterprise networking-and-security group challenges.”

Elisity

Winebrenner says the mesh isn’t just a VPN replacement, but rather a platform that helps companies transition to zero trust across their digital footprint. Elisity provides real-time information on who’s accessing resources and from where, allowing admins to segment environments based on traffic flow and machine identity. It also lets them manage a unified access policy and support the requirements of remote access in a secure way, migrating workloads across clouds or within a VPN in a cloud.

Winebrenner claims that 31-employee Elisity allows enterprises in industries such as manufacturing, pharmaceuticals, financial services, and health care to realize cost savings, time savings, and risk mitigation because they no longer have to rely on disparate software to protect access. He says the platform reduces the total number of tools required to manage access — without taking such access for granted.

“Distributed enterprises need agile security for their remote workforce. But converged cloud security approaches don’t take into consideration the unmanaged or managed devices employees are using, often without any visibility from IT or security. The industry must go beyond edge security, it must go beyond … identity and access management,” Winebrenner added. “The better approach is the integration of all these things into security that gets closest to the asset or user and understands the context of behavioral changes.”

Milpitas, California-based Elisity’s latest funding round brings the company’s total raised to over $33 million to date.


This AI robot mimics human expressions to build trust with users

Scientists at Columbia University have developed a robot that mimics the facial expressions of humans to gain their trust.

Named Eva, the droid uses deep learning to analyze human facial gestures captured by a camera. Cables and motors then pull on different points of the robot’s soft skin to mimic the expressions of nearby people in real-time.

The effect is pretty creepy, but the researchers say that giving androids this ability can facilitate more natural and engaging human-robot interactions.

Eva produces different expressions by utilizing one or more of six basic emotions: anger, disgust, fear, joy, sadness, and surprise. Per the study paper:

For example, while joy would correspond to one facial expression, the combination of joy and surprise would result in happily surprised, which would correspond to a separate facial expression.


The team trained the robot to generate these expressions by filming it making a series of random faces. Eva’s neural networks then learned to match the humanoid’s gestures to those of human faces captured on its video camera.
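
The paper’s exact architecture isn’t described here, but the general recipe, learning an “inverse model” that maps observed facial features back to the motor commands that would reproduce them, can be sketched in a few lines. The sketch below is an assumption-laden toy: the landmark and motor dimensions are invented, and random tensors stand in for the filmed training data.

```python
# Minimal sketch of the general recipe, not the Columbia team's actual code or
# architecture: learn an inverse model that maps observed facial-expression
# features to the motor commands that would reproduce them. The dimensions are
# assumptions, and random tensors stand in for the filmed training data.
import torch
from torch import nn

N_LANDMARKS = 68 * 2   # x, y facial landmarks (assumed feature representation)
N_MOTORS = 12          # hypothetical number of cable motors in the soft face

# Stand-in for the "random faces" data: pairs of (motor command, resulting
# facial landmarks) collected by filming the robot actuating random expressions.
motor_cmds = torch.rand(5000, N_MOTORS)
landmarks = torch.randn(5000, N_LANDMARKS)

inverse_model = nn.Sequential(
    nn.Linear(N_LANDMARKS, 128), nn.ReLU(),
    nn.Linear(128, 128), nn.ReLU(),
    nn.Linear(128, N_MOTORS), nn.Sigmoid(),  # motor commands normalized to [0, 1]
)

optimizer = torch.optim.Adam(inverse_model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

for epoch in range(10):
    pred = inverse_model(landmarks)      # landmarks -> predicted motor commands
    loss = loss_fn(pred, motor_cmds)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

# At run time: detect landmarks on a nearby person's face from the camera feed,
# run them through inverse_model, and drive the motors with the output.
```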


AI Weekly: NIST’s proposal to evaluate trust in AI models faces significant challenges



This week, the National Institute of Standards and Technology (NIST), a U.S. Department of Commerce agency that promotes measurement science, proposed a method for evaluating user trust in AI systems. The draft document, which is open for public comment until July 2021, aims to stimulate a discussion about transparency and accountability around AI.

The draft proposal comes after a key European Union (EU) lawmaker, Patrick Breyer, said that rules targeting Facebook, Google, and other large online platforms should include privacy rights as well as users’ right to anonymity. In April, the European Commission, the executive branch of the EU, announced proposed regulations on the use of AI, including strict safeguards on recruitment, critical infrastructure, credit scoring, migration, and law enforcement algorithms. And on Tuesday, Amazon said it would extend until further notice a moratorium it imposed last year on police use of its facial recognition software.

Brian Stanton, a cognitive psychologist, coauthored the NIST publication with computer science researcher Ted Jensen. They largely base the premise on past studies on trust, beginning with the role of trust in human history and how it’s shaped our thought processes.

“Many factors get incorporated into our decisions about trust,” Stanton said in a statement. “It’s how the user thinks and feels about the system and perceives the risks involved in using it.”

Stanton and Jensen gradually turn to the unique trust challenges associated with AI, which is rapidly taking on tasks that go beyond human capacity. They posit a list of nine factors that contribute to a person’s potential trust in an AI system, including accuracy, reliability, resiliency, objectivity, security, explainability, safety, accountability, and privacy. A person may weigh the factors differently depending on the task and the risk involved in trusting an AI’s decision. For example, a music selection algorithm might not need to be particularly precise, but it’d be a different story with an AI that was only 90% accurate in making a cancer diagnosis.
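
NIST’s draft does not prescribe a formula for combining these factors, but the idea that the same system can earn different levels of trust for different tasks can be illustrated with a toy weighted score. Everything below, the factor ratings, the weights, and the scoring rule, is hypothetical.

```python
# Illustrative only: NIST's draft does not prescribe this formula. A toy
# weighted score showing how identical factor ratings can produce different
# trust judgments depending on how a task weights the factors.
FACTOR_SCORES = {            # hypothetical ratings for one AI system, 0..1
    "accuracy": 0.90, "reliability": 0.85, "explainability": 0.40,
    "safety": 0.70, "privacy": 0.80, "security": 0.75,
}

TASK_WEIGHTS = {
    # Low stakes: a music recommender doesn't need to be precise or explainable.
    "music_recommendation": {"accuracy": 0.20, "reliability": 0.30, "explainability": 0.05,
                             "safety": 0.05, "privacy": 0.20, "security": 0.20},
    # High stakes: accuracy, safety, and explainability dominate.
    "cancer_diagnosis": {"accuracy": 0.35, "reliability": 0.15, "explainability": 0.20,
                         "safety": 0.20, "privacy": 0.05, "security": 0.05},
}


def trust_score(task: str) -> float:
    weights = TASK_WEIGHTS[task]
    return sum(weight * FACTOR_SCORES[factor] for factor, weight in weights.items())


for task in TASK_WEIGHTS:
    print(task, round(trust_score(task), 2))
# The same ratings score lower for diagnosis (0.74 vs. 0.8) because the heavily
# weighted explainability and safety factors are the system's weak points.
```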

In the course of the draft, Stanton and Jensen find that if an AI system (1) has a high level of technical trustworthiness and (2) the values of the trustworthiness characteristics are perceived to be good enough for the context of use, especially the risk inherent in that context, then the likelihood of AI user trust increases. It’s this trust, based on user perceptions, that will be necessary for any human-AI collaboration, Stanton says.

“AI systems can be trained to ‘discover’ patterns in large amounts of data that are difficult for the human brain to comprehend. A system might continuously monitor a very large number of video feeds and, for example, spot a child falling into a harbor in one of them,” Stanton added. “No longer are we asking automation to do our work. We are asking it to do work that humans can’t do alone.”

Challenges ahead

The “black box” nature of AI remains a barrier to overcome, however, in light of research that finds people are naturally inclined to trust systems even when they’re malicious. In 2019, Himabindu Lakkaraju, a computer scientist at the Harvard Business School, and University of Pennsylvania research assistant Osbert Bastani created an AI system designed to mislead people. Their experiment confirmed the researchers’ hypothesis and showed how easily humans can be manipulated by opaque AI algorithms.

“We find that user trust can be manipulated by high-fidelity, misleading explanations. These misleading explanations exist since prohibited features (e.g., race or gender) can be reconstructed based on correlated features (e.g., zip code). Thus, adversarial actors can fool end users into trusting an untrustworthy black box [system] — e.g., one that employs prohibited attributes to make decisions,” the coauthors wrote.
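
A toy example makes the proxy problem in that quote concrete: even if a prohibited attribute is dropped from the data, a correlated feature can reconstruct it. The zip codes and group labels below are synthetic and invented purely for illustration.

```python
# Toy illustration of the proxy problem in the quote above: even with the
# prohibited attribute removed, a correlated feature (a made-up zip code)
# can reconstruct it. All data here is synthetic.
import random

random.seed(0)

# Synthetic population: zip "90001" is 80% group A, zip "10001" is 80% group B.
def sample_person():
    zip_code = random.choice(["90001", "10001"])
    if zip_code == "90001":
        group = "A" if random.random() < 0.8 else "B"
    else:
        group = "B" if random.random() < 0.8 else "A"
    return zip_code, group

train = [sample_person() for _ in range(5000)]
test = [sample_person() for _ in range(1000)]

# "Reconstruct" the prohibited attribute by predicting the majority group per zip.
majority = {}
for z in ("90001", "10001"):
    labels = [group for zip_code, group in train if zip_code == z]
    majority[z] = max(set(labels), key=labels.count)

accuracy = sum(1 for zip_code, group in test if majority[zip_code] == group) / len(test)
print(f"Group recovered from zip code alone: {accuracy:.0%} accuracy")  # roughly 80%, far above chance
```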

Even when trust in an AI system is justified, the outcome isn’t necessarily desirable. In an experiment conducted by a team at IBM Research, researchers assessed how much showing people an AI prediction with a confidence score would impact their ability to predict a person’s annual income. The study found that the scores increased trust but didn’t improve decision-making, which might be predicated on whether a person can bring in enough unique knowledge to compensate for an AI system’s errors.

Stanton stresses that the ideas in the NIST publication are based on background research and would benefit from public scrutiny. Judging from the body of literature highlighting the dangers in perceptions of trust in AI, this appears to be true: everything from hiring practices and loan applications to the criminal justice system can be affected by biased but seemingly trustworthy algorithms. Solving AI’s “trust” problem will require thoroughly addressing this, as well as the systemic problems that come from a lack of diversity in AI as a whole.

“We are proposing a model for AI user trust,” he said. “It is all based on others’ research and the fundamental principles of cognition. For that reason, we would like feedback about work the scientific community might pursue to provide experimental validation of these ideas.”


Thanks for reading,

Kyle Wiggers

AI Staff Writer


Viso Trust assesses third-party cybersecurity risk with AI, raises $3M



Viso Trust, a platform that uses AI to perform cyber risk assessments, today announced it has raised $3 million. The company plans to use the funds to support expansion and hiring efforts, as well as sales, marketing, and R&D.

It’s estimated that over 65% of security breaches are attributable to third-party failures. The pandemic has heightened the concern among legal and compliance leaders, 52% of whom worry about the risks posed by remote work. And while the need for faster vendor security reviews has prompted some companies to rely on abbreviated questionnaires or outside-in assessments, security analysts can still spend hours every day sending and processing documents.

Big picture

Viso Trust aims to lighten the workload by offering a holistic view of risk, leveraging a “social due diligence” network and AI to deliver continuous reports about third parties. The platform automatically extracts data from source documents and audits to surface key information about third-party relationships.
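
The article doesn’t detail how that extraction works under the hood, but the basic idea, scanning audit-document text for evidence of specific controls, can be sketched simply. The keyword patterns and sample text below are hypothetical examples, not Viso Trust’s actual rules or pipeline.

```python
# Deliberately simple sketch of surfacing key security signals from
# audit-document text. Viso Trust's actual pipeline is not described in the
# article; the control patterns below are hypothetical examples.
import re

CONTROL_PATTERNS = {
    "encryption_at_rest": r"\bencrypt(?:ed|ion)\b.*\bat rest\b",
    "mfa": r"\bmulti-?factor authentication\b|\bMFA\b",
    "pen_testing": r"\bpenetration test(?:s|ing)?\b",
    "incident_response": r"\bincident response\b",
}


def extract_controls(document_text: str) -> dict:
    """Flag which controls an audit report appears to mention."""
    return {control: bool(re.search(pattern, document_text, flags=re.IGNORECASE))
            for control, pattern in CONTROL_PATTERNS.items()}


sample = ("Customer data is encrypted at rest and in transit. "
          "All employees use multi-factor authentication. "
          "An annual penetration test is performed by a third party.")
print(extract_controls(sample))
# {'encryption_at_rest': True, 'mfa': True, 'pen_testing': True, 'incident_response': False}
```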

“The goal of third-party risk management ranges from reducing the likelihood of data breaches and costly operational failures to meeting regulatory requirements,” cofounder Paul Valente told VentureBeat via email. “Unfortunately, the tools available for us to manage third-party risk, such as GRC platforms, security ratings, and audit exchanges, were too clunky, overly time-consuming, inaccurate, and most of all, expensive. Adding to the mix was the scale of our operations as a global fintech. We knew there needed to be a better way to run the vendor due diligence process.”

One early customer, Illumio, claims Viso Trust has enabled it to bring security staff time per third-party relationship down from more than eight hours to 30 minutes.

“Leveraging our prior experience and vast networks, we built a solution that solved the problem and validated the core concepts and value proposition with over 300 chief information security officers and security professionals,” Valente said. “Going forward, we believe we can reduce time spent in covering additional major areas of risk, such as business continuity and privacy, to nearly instantaneous.”

Market demand

Kelley Mak, principal at Work-Bench, a Viso Trust investor, says he saw a need in the market due to the proliferation of software-as-a-service tools in the enterprise. While cumbersome processes hamstring security teams attempting to evaluate tools at the speed of business, they face rising security threats and the hidden risk of third parties. Just 35% of organizations rate their third-party risk management program as highly effective, and only 34% have an inventory of their vendors, a 2018 study from Opus and Ponemon Institute found.

“Viso Trust [is] building a cyber due diligence platform that leverages intelligence and automation to eliminate all questionnaire-based interactions and deliver continuous automated due diligence accurately across any number of vendors,” Mak told VentureBeat via email. “The founders felt this pain firsthand when they led security at LendingClub and ASAPP and had to onboard and evaluate the risk of hundreds of third parties.”

Work-Bench led San Francisco-based Viso Trust’s seed round, with participation from Sierra Ventures and Lytical Ventures.


Pepper the robot has been talking to itself to gain your trust

Talking to yourself has a bad reputation, but it doesn’t always mean you’re going mad. Studies show that thinking out loud can help you manage your emotions and complete tricky tasks — and it isn’t only humans who are doing it.

A group of Italian researchers recently programmed Pepper the robot to “think” out loud so that users can understand what influences its decisions. They suspected that this would improve its interactions with humans.

They tested their theory by asking people to set a dinner table with the robot according to etiquette rules.

They found that the robot was better at solving dilemmas when it used self-dialogue.


When one person asked Pepper to breach the code of etiquette by placing a napkin on a fork, the robot used its “inner voice” to analyze the request. Pepper concluded that the user might be confused but followed their instruction:

Ehm, this situation upsets me. I would never break the rules, but I can’t upset him, so I’m doing what he wants.

By using self-dialogue, Pepper let the user know that it had solved the predicament by prioritizing the human’s request.
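
The researchers’ implementation isn’t reproduced here, but the self-dialogue pattern itself is simple to sketch: check a request against etiquette rules, narrate the reasoning aloud, then act. The rule set, phrasing, and function below are hypothetical, written only to illustrate the pattern.

```python
# Toy sketch of the self-dialogue pattern described above, not the Palermo
# team's actual implementation. The rule set and phrasing are hypothetical.
ETIQUETTE_RULES = {
    ("napkin", "fork"): "the napkin goes to the left of the fork, not on it",
}


def handle_request(item: str, target: str, prioritize_user: bool = True) -> str:
    inner_speech = []
    violation = ETIQUETTE_RULES.get((item, target))
    if violation is None:
        inner_speech.append(f"Placing the {item} on the {target} follows the rules, so I'll do it.")
        action = f"place {item} on {target}"
    else:
        inner_speech.append(f"Ehm, this request breaks etiquette: {violation}.")
        if prioritize_user:
            inner_speech.append("I can't upset the user, so I'm doing what they want anyway.")
            action = f"place {item} on {target}"
        else:
            inner_speech.append("I'll politely refuse and explain the rule.")
            action = "refuse and explain"
    # Speaking the inner dialogue aloud is what makes the decision transparent.
    for line in inner_speech:
        print("Pepper (inner voice):", line)
    return action


handle_request("napkin", "fork")
```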

The researchers say this form of transparency could build our trust with robots. They also believe it will help humans and droids collaborate and find solutions to dilemmas.

“Inner speech could be useful in all the cases where we trust the computer or a robot for the evaluation of a situation,” said study co-author Antonio Chella, a professor of robotics at the University of Palermo.

There might be one problem, however. If a robot’s constantly talking to itself, users might prefer to sacrifice some of its performance for a bit of peace and quiet. Pepper is gonna need a mute button.

You can read the research paper in the journal iScience.


Here’s why we should never trust AI to identify our emotions

Imagine you are in a job interview. As you answer the recruiter’s questions, an artificial intelligence (AI) system scans your face, scoring you for nervousness, empathy and dependability. It may sound like science fiction, but these systems are increasingly used, often without people’s knowledge or consent.

Emotion recognition technology (ERT) is in fact a burgeoning multi-billion-dollar industry that aims to use AI to detect emotions from facial expressions. Yet the science behind emotion recognition systems is controversial: there are biases built into the systems.


Many companies use ERT to test customer reactions to their products, from cereal to video games. But it can also be used in situations with much higher stakes, such as in hiring, by airport security to flag faces as revealing deception or fear, in border control, in policing to identify “dangerous people” or in education to monitor students’ engagement with their homework.

Shaky scientific ground

Fortunately, facial recognition technology is receiving public attention. The award-winning film Coded Bias, recently released on Netflix, documents the discovery that many facial recognition technologies do not accurately detect darker-skinned faces. And the research team managing ImageNet, one of the largest and most important datasets used to train facial recognition, was recently forced to blur 1.5 million images in response to privacy concerns.

Revelations about algorithmic bias and discriminatory datasets in facial recognition technology have led large technology companies, including Microsoft, Amazon and IBM, to halt sales. And the technology faces legal challenges regarding its use in policing in the UK. In the EU, a coalition of more than 40 civil society organisations have called for a ban on facial recognition technology entirely.

Like other forms of facial recognition, ERT raises questions about bias, privacy and mass surveillance. But ERT raises another concern: the science of emotion behind it is controversial. Most ERT is based on the theory of “basic emotions” which holds that emotions are biologically hard-wired and expressed in the same way by people everywhere.

This is increasingly being challenged, however. Research in anthropology shows that emotions are expressed differently across cultures and societies. In 2019, the Association for Psychological Science conducted a review of the evidence, concluding that there is no scientific support for the common assumption that a person’s emotional state can be readily inferred from their facial movements. In short, ERT is built on shaky scientific ground.

Also, like other forms of facial recognition technology, ERT is encoded with racial bias. A study has shown that systems consistently read black people’s faces as angrier than white people’s faces, regardless of the person’s expression. Although the study of racial bias in ERT is small, racial bias in other forms of facial recognition is well-documented.

There are two ways that this technology can hurt people, says AI researcher Deborah Raji in an interview with MIT Technology Review: “One way is by not working: by virtue of having higher error rates for people of color, it puts them at greater risk. The second situation is when it does work — where you have the perfect facial recognition system, but it’s easily weaponized against communities to harass them.”

So even if facial recognition technology can be de-biased and accurate for all people, it still may not be fair or just. We see these disparate effects when facial recognition technology is used in policing and judicial systems that are already discriminatory and harmful to people of colour. Technologies can be dangerous when they don’t work as they should. And they can also be dangerous when they work perfectly in an imperfect world.

The challenges raised by facial recognition technologies – including ERT – do not have easy or clear answers. Solving the problems presented by ERT requires moving from AI ethics centred on abstract principles to AI ethics centred on practice and effects on people’s lives.

When it comes to ERT, we need to collectively examine the controversial science of emotion built into these systems and analyse their potential for racial bias. And we need to ask ourselves: even if ERT could be engineered to accurately read everyone’s inner feelings, do we want such intimate surveillance in our lives? These are questions that require everyone’s deliberation, input and action.

Citizen science project

ERT has the potential to affect the lives of millions of people, yet there has been little public deliberation about how – and if – it should be used. This is why we have developed a citizen science project.

On our interactive website (which works best on a laptop, not a phone) you can try out a private and secure ERT for yourself, to see how it scans your face and interprets your emotions. You can also play games comparing human versus AI skills in emotion recognition and learn about the controversial science of emotion behind ERT.

Most importantly, you can contribute your perspectives and ideas to generate new knowledge about the potential impacts of ERT. As the computer scientist and digital activist Joy Buolamwini says: “If you have a face, you have a place in the conversation.”

This article by Alexa Hagerty, Research Associate of Anthropology, University of Cambridge and Alexandra Albert, Research Fellow in Citizen Social Science, UCL, is republished from The Conversation under a Creative Commons license. Read the original article.




People trust the algorithm more than each other

Our daily lives are run by algorithms. Whether we’re shopping online, deciding what to watch, booking a flight, or just trying to get across town, artificial intelligence is involved. It’s safe to say we rely on algorithms, but do we actually trust them?

Up front: Yes. We do. A trio of researchers from the University of Georgia recently conducted a study to determine whether humans are more likely to trust an answer they believe was generated by an algorithm or crowd-sourced from humans.

The results indicated that humans were more likely to trust algorithms when problems became too complex for them to trust their own answers.

Background: We all know that, to some degree or another, we’re beholden to the algorithm. We tend to trust that Spotify and Netflix know how to entertain us. So it’s not surprising that humans would choose answers based on the sole distinction that they’ve been labeled as being computer-generated.

But the interesting part isn’t that we trust machines, it’s that we trust them when we probably shouldn’t.

How it works: The researchers tapped 1,500 participants for the study. Participants were asked to look at a series of images and determine how many people were in each image. As the number of people in the image increased, participants grew less confident in their answers and were offered the chance to align their responses with either crowd-sourced answers from a group of thousands of people or answers they were told had been generated by an algorithm.

Per the study:

In three preregistered online experiments, we found that people rely more on algorithmic advice relative to social influence as tasks become more difficult. All three experiments focused on an intellective task with a correct answer and found that subjects relied more on algorithmic advice as difficulty increased. This effect persisted even after controlling for the quality of the advice, the numeracy and accuracy of the subjects, and whether subjects were exposed to only one source of advice, or both sources.

The problem here is that AI isn’t very well suited to a task such as counting the number of humans in an image. It may sound like a problem built for a computer (it’s math-based, after all), but the fact of the matter is that AI often struggles to identify objects in images, especially when there aren’t clear lines of separation between objects of the same type.

Quick take: The research indicates the general public is probably a little confused about what AI can do. Algorithms are getting stronger and AI has become an important facet of our everyday lives, but it’s never a good sign when the average person seems to believe a given answer is better just because they think it was generated by an algorithm.




Latest Edelman survey rates trust in tech at a 21-year low

The technology sector plummeted from being the most trusted industry in 2020 to ninth place in 2021, according to the 21st annual analysis from communications firm Edelman. A lack of accountability and an unwillingness to self-govern are eroding the public’s trust in technology.

Trust in technology reached all-time lows in 17 of 27 countries over the past year, Edelman said in its recent 2021 Edelman Trust Barometer: Trust In Technology report. The report is based on a survey of more than 33,000 people from 28 countries, including both general population respondents and what the firm calls “informed public respondents” for a well-rounded picture.

Trust and fear have a reciprocal relationship: The faster one rises, the faster the other drops. Traditionally, the technology sector was something of an expert at managing the two, but that is no longer the case. Edelman found that fear of technology is growing at a faster rate than trust in technology. It will take years for the technology industry to bounce back and regain the public trust.

Tech broke trust

Edelman’s survey results show respondents feel both betrayed by, and fearful of, technology. Job loss is the single greatest driver of societal fears, followed by the loss of civil liberties. There is a 6% drop in the number of people who are willing to share their personal information online. Social media, traditional media, and search engines are also at record low levels of trust.

[Chart: Social media is not a trusted source of information. Respondents did not view many information sources favorably when asked to rate each one on how trustworthy they were for general news and information. Source: 2021 Edelman Trust Barometer: Trust in Technology. Image credit: Edelman]

While the technology industry is full of entrepreneurs who believe in unleashing creativity and innovation and pursuing moonshot ideas, there are also those who monitor customers and invade privacy. The tendency to use technology as an authoritarian tool to monitor dissent is a concern, which helps explain China’s 16% drop in trust. The drop is ironic, because China is also a global leader in tech R&D, innovation, and manufacturing.

Pandemic amplified fears

Edelman recorded one of the steepest declines in trust in the eight months between May 2020 and January 2021, when the public’s trust in technology dropped from 74% to 67%. People were increasingly concerned about AI and robots, and 53% of the respondents in Edelman’s survey worried the pandemic would accelerate the rate at which their employers would replace human workers with AI and robots. Cyberattackers capitalizing on the pandemic didn’t help matters, as 35% of respondents reported being fearful of attackers and breaches.

Edelman’s Trust in Technology study presents a paradox between tech employees and their employers. Employer trust is highest among tech sector employees, with 83% saying they trust their employers and 62% believing they have the power to make corporations change. Yet the public’s trust in those employers is plummeting. The disconnect comes from the public perception that humans are not controlling technology, but that technology is trying to control them. There is a growing perception that technology, especially social media, is more capable of manipulating people than previously believed.

One way for the tech sector to regain some trust is to re-evaluate how companies handle customer data and to be transparent about what they do with the information.

Gain trust by guarding information quality

Businesses as a whole are still trusted in most of the countries surveyed, with 61% of all respondents trusting companies above nonprofit organizations, government, and media. The most effective step businesses can take to increase trust is to guard the quality of information. Additional factors include embracing sustainable practices, implementing a robust COVID-19 health and safety response, driving economic prosperity, and emphasizing long-term thinking over short-term profits.

However, just saying they will protect information isn’t enough. Businesses need to take a data-centric security approach to achieve greater resiliency and cybersecurity. Businesses should also address the concerns employees have over job loss and automation. They should be transparent and honest with their employees if robotics and automation are part of the business plan. Investing in re-skilling employees for new jobs is a great way to transform a business digitally.

In short, senior management teams should remember that lasting transformation starts with employees.


IBM’s Arin Bhowmick explains why AI trust is hard to achieve in the enterprise

While appreciation of the potential impact AI can have on business processes has been building for some time, progress has not been nearly as quick as initial forecasts led many organizations to expect.

Arin Bhowmick, chief design officer for IBM, explained to VentureBeat what needs to be done to achieve the level of AI explainability that will be required to take AI to the next level in the enterprise.

This interview has been edited for clarity and brevity.

VentureBeat: It seems a lot of organizations are still not trustful of AI. Do you think that’s improving?

Arin Bhowmick: I do think it’s improved or is getting better. But we still have a long way to go. We haven’t historically been able to bake in trust and fairness and explainable AI into the products and experiences. From an IBM standpoint, we are trying to create reliable technology that can augment [but] not really replace human decision-making. We feel that trust is essential to the adoption. It allows organizations to understand and explain recommendations and outcomes.

What we are essentially trying to do is akin to a nutritional label. We’re looking to have a similar kind of transparency in AI systems. There is still some hesitation in adoption of AI because of a lack of trust. Roughly 80% to 85% of the professionals from different organizations who took part in an IBM survey said their organization has been pretty negatively impacted by problems such as bias, especially in the data. I would say 80% or more agree that consumers are more likely to choose services from a company that offers transparency and an ethical framework for how its AI models are built.

VentureBeat: As an AI model runs, it can generate different results as the algorithms learn more about the data. How much does that lack of consistency impact trust?

Bhowmick: The AI model used to do the prediction is only as good as the data. It’s not just models. It’s about what it does and the insight it provides at that point in time that develops trust. Does it tell the user why the recommendation is made or is significant, how it came up with the recommendations, and how confident it is? AI tends to be a black box. The trick to developing trust is to unravel the black box.

VentureBeat: How do we achieve that level of AI explainability?

Bhowmick: It’s hard. Sometimes it’s hard to even judge the root cause of a prediction and insight. It depends on how the model was constructed. Explainability is also hard because when it is provided to the end user, it’s full of technical mumbo jumbo. It’s not in the voice and tone that the user actually understands.

Sometimes explainability is also a little bit about the “why,” rather than the “what.” Giving an example of explainability in the context of the tasks that the user is doing is really, really hard. Unless the developers who are creating these AI-based [and] infused systems actually follow the business process, the context is not going to be there.
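
One way to picture what Bhowmick is describing is to take whatever raw attributions a model explainer produces and translate them into the user’s own terms. The sketch below is a toy example of that last step, not IBM’s tooling; the feature names, attribution values, and wording are hypothetical.

```python
# Toy sketch (not IBM's tooling): turning raw feature attributions into a
# plain-language explanation in the end user's terms. The feature names,
# attributions, and wording below are hypothetical.
FRIENDLY_NAMES = {
    "dti_ratio": "how much of your income already goes to debt payments",
    "late_payments_12m": "late payments in the last 12 months",
    "employment_years": "how long you've been with your current employer",
}


def explain(prediction: str, attributions: dict, top_n: int = 2) -> str:
    """Pick the strongest drivers and describe them without technical jargon."""
    top = sorted(attributions.items(), key=lambda kv: abs(kv[1]), reverse=True)[:top_n]
    reasons = [FRIENDLY_NAMES.get(name, name) for name, _ in top]
    return (f"The recommendation is '{prediction}'. "
            f"The biggest factors were {reasons[0]} and {reasons[1]}.")


print(explain("review manually",
              {"dti_ratio": 0.42, "late_payments_12m": 0.31, "employment_years": -0.08}))
```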

VentureBeat: How do we even measure this?

Bhowmick: There is a fairness score and a bias score. There is a concept of model accuracy. Most tools that are available do not provide a realistic score of the element of bias. Obviously, the higher the bias, the worse your model is. It’s pretty clear to us that a lot of the source of the bias happens to be in the data and the assumptions that are used to create the model.

What we tried to do is we baked in a little bit of bias detection and explainability into the tooling itself. It will look at the profile of the data and match it against other items and other AI models. We’ll be able to tell you that what you’re trying to produce already has built-in bias, and here’s what you can do to fix it.
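
As a rough illustration of what a bias score can look like (this is a generic demographic-parity check, not IBM’s actual metric), the sketch below compares favorable-outcome rates across two groups using hypothetical predictions and a hypothetical protected attribute.

```python
# Minimal sketch of one common bias check, not IBM's actual scoring: compare
# favorable-outcome rates across groups (demographic parity) using
# hypothetical predictions and a hypothetical protected attribute.
def demographic_parity_gap(predictions, groups, favorable=1):
    """Return per-group favorable rates and the gap between the extremes."""
    rates = {}
    for g in set(groups):
        selected = [p for p, grp in zip(predictions, groups) if grp == g]
        rates[g] = sum(1 for p in selected if p == favorable) / len(selected)
    ordered = sorted(rates.values())
    return rates, ordered[-1] - ordered[0]


# Hypothetical model outputs (1 = favorable decision) and group labels.
preds  = [1, 0, 1, 1, 0, 1, 0, 0, 1, 0, 0, 0]
groups = ["a", "a", "a", "a", "a", "a", "b", "b", "b", "b", "b", "b"]

rates, gap = demographic_parity_gap(preds, groups)
print(rates)  # favorable rate per group
print(gap)    # the larger the gap, the stronger the evidence of bias in the outcomes
```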

VentureBeat: That then becomes part of the user experience?

Bhowmick: Yes, and that’s very, very important. Whatever bias feeds into the system has huge ramifications. We are creating ethical design practices across the company. We have developed specific design thinking exercises and workshops. We run workshops to make sure that we are considering ethics at the very beginning of our business process planning and design cycle. We’re also using AI to improve AI. If we can build in sort of bias and explainable AI checkpoints along the way, inherently we will scale better. That’s sort of the game plan here.

VentureBeat: Will every application have an AI model embedded within it?

Bhowmick: It’s not about the application, it’s about whether there are things within that application that AI can help with. If the answer is yes, most applications will have infused AI in them. It will be unlikely that applications will not have AI.

VentureBeat: Will most organizations embed AI engines in their applications or simply involve external AI capabilities via an application programming interface (API)?

Bhowmick: Both will be true. I think the API would be good for people who are getting started. But as the level of AI maturity increases, there will be more information that is specific to a problem statement that is specific to an audience. For that, they will likely have to build custom AI models. They might leverage APIs and other tooling, but to have an application that really understands the user and really gets at the crux of the problem, I think it’s important that it’s built in-house.

VentureBeat: Overall, what’s your best AI advice to organizations?

Bhowmick: I still find that our level of awareness of what is AI and what it can do, and how it can help us, is not high. When we talk to customers, all of them want to go into AI. But when you ask them what are the use cases, they sometimes are not able to articulate that.

I think adoption is somewhat lagging because of people’s understanding and acceptance of AI. But there’s enough information on AI principles to read up on. As you develop an understanding, then look into tooling. It really comes down to awareness.

I think we’re in the hype cycle. Some industries are ahead, but if I could give one piece of advice to everyone, it would be don’t force-fit AI. Make sure you design AI in your system in a way that makes sense for the problem you’re trying to solve.
