How to approach AI more responsibly, according to a top AI ethicist

Women in the AI field are making research breakthroughs, spearheading vital ethical discussions, and inspiring the next generation of AI professionals. We created the VentureBeat Women in AI Awards to emphasize the importance of their voices, work, and experience, and to shine a light on some of these leaders. In this series, publishing Fridays, we’re diving deeper into conversations with this year’s winners, whom we honored recently at Transform 2021. Check out last week’s interview with the winner of our AI entrepreneur award. 

Enterprise AI platform DataRobot builds 2.5 million models a day, it says, and Haniyeh Mahmoudian is personally invested in making sure they’re all as ethically and responsibly built as possible. Mahmoudian, a winner of VentureBeat’s Women in AI responsibility and ethics award, literally wrote the code for it.

An astrophysicist turned data science researcher turned the company’s first “global AI ethicist,” she’s also raised awareness of the need for responsible AI in the broader community. She speaks on panels, including at the World Economic Forum, and has been driving change within her own organization as well.

“As a coworker, I cannot stress how impactful her work has been in advancing the thinking of our engineers and practitioners to include ethics and bias measures in our software and client engagements,” said Ted Kwartler, VP of trusted AI at DataRobot, who nominated her for the award (he wasn’t the only one, by the way).

In this past year of crisis, Mahmoudian’s work found an even more relevant avenue. The U.S. government tapped her research into risk level modeling to improve its COVID-19 forecasting, and even Moderna used it for vaccine trials. Eric Hargan, the U.S. Department of Health and Human Services’ deputy secretary at the time, said “Dr. Mahmoudian’s work was instrumental in assuring that the simulation was unbiased and fair in its predictions.” He added that the impact statement her team created for the simulation “broke new ground in AI public policy” and is being considered as a model for legislation.

For all that she’s been working on, VentureBeat is pleased to honor Mahmoudian with this award. We recently sat down (virtually) to further discuss her impact, as well as AI regulation, “ethics” as a buzzword, and her advice for deploying responsible AI.

This interview has been edited for brevity and clarity.

VentureBeat: How would you describe your approach to AI? What drives your work?

Mahmoudian: For me, it’s all about learning new things. AI is becoming more and more a part of our day-to-day lives. And when I started working as a data scientist, it was always fascinating to me to learn new use cases and ideas. At the same time, it gave me the perspective that while this field is great and holds a lot of potential, there are certain areas you need to be cautious about.

VentureBeat: You wrote the code for statistical parity in DataRobot’s platform, as well as natural language explanations for users. These have helped companies in sectors from banking and insurance to tech, manufacturing, and CPG root out bias and improve their models. What does this look like and why is it important?

Mahmoudian: When I started my journey toward responsible AI, one of the things I noticed was that generally, you can’t really talk to non-technical people about the technical aspects of how the model behaves. They need to have a language that they understand. But just telling them “your model is biased” doesn’t solve anything either. And that’s what the natural language aspect helps with — not only telling them the system exhibits some level of bias, but helping them navigate it: look at data X, Y, and Z; here is what we found.

This is at the case level, as well as at the general level. There are many different definitions of bias and fairness, and it can be really hard to navigate which one you should be using, so we want to make sure you’re using the most relevant definition. In hiring use cases, for example, you’d probably be more interested in having a diverse workforce, so equal representation is what you’re looking for. But in a healthcare scenario, you probably don’t care about representation as much as making sure the model isn’t wrongfully denying patients access.
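
To make the distinction concrete, here is a minimal sketch (not DataRobot’s implementation) of two common fairness checks: statistical parity difference, which compares selection rates across groups (the equal-representation framing relevant to hiring), and false negative rate difference, which is closer to the healthcare concern of wrongly denying access. Function names, the group encoding, and the toy data are illustrative.

```python
import numpy as np

def statistical_parity_difference(y_pred, group):
    """Difference in positive-outcome (selection) rates between two groups."""
    y_pred, group = np.asarray(y_pred), np.asarray(group)
    return y_pred[group == 0].mean() - y_pred[group == 1].mean()

def false_negative_rate_difference(y_true, y_pred, group):
    """Difference in false negative rates: how often each group is wrongly denied."""
    y_true, y_pred, group = map(np.asarray, (y_true, y_pred, group))
    def fnr(mask):
        positives = (y_true == 1) & mask
        return ((y_pred == 0) & positives).sum() / max(positives.sum(), 1)
    return fnr(group == 0) - fnr(group == 1)

# Toy predictions for two groups (0 and 1); a gap near zero is what you want.
y_true = [1, 1, 0, 1, 0, 1, 1, 0]
y_pred = [1, 0, 0, 1, 0, 1, 0, 0]
group  = [0, 0, 0, 0, 1, 1, 1, 1]

print(statistical_parity_difference(y_pred, group))           # hiring-style representation check
print(false_negative_rate_difference(y_true, y_pred, group))  # healthcare-style access check
```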

VentureBeat: Aside from your work helping mitigate algorithmic bias in models, you’ve also briefed dozens of Congressional offices on the issue and are committed to helping policymakers get AI regulations right. How important do you believe regulation is in preventing harm caused by AI technologies?

Mahmoudian: I would say that regulations are definitely important. Companies are trying to deal with AI bias specifically, but there are gray areas. There’s no standardization and it’s uncertain. For these types of things, having clarification would be helpful. For example, in the new EU regulations, they tried to clarify what it means to have a high-risk use case and, in those use cases, what the expectations are (having confirmatory test assessments, auditing, things like that). So these are the types of clarifications regulations can bring, which would really help companies understand the processes and reduce their risk as well.

VentureBeat: There’s so much talk about responsible AI and AI ethics these days, which is great because it’s really, really important. But do you fear — or already feel like — it’s becoming a buzzword? How do we make sure this work is real and not a facade or box to check off?

Mahmoudian: To be honest, it is used as a buzzword in industry. But I would also say that as much as it’s used for marketing, companies are genuinely starting to think about it. And this is because it actually benefits them. When you look at the surveys around AI bias, one of the fears companies have is that they’re going to lose their customers. If a damaging headline about their company were to come out, it’s their brand that would be jeopardized. These types of things are on their minds. So they’re also thinking that having a responsible AI system and framework can actually protect them from this type of business risk. So I would give them the benefit of the doubt. They are thinking about it and they are working on it. You could say it’s a little late, but it’s never too late. So it is a buzzword, but there’s a lot of genuine effort as well.

VentureBeat: What often gets overlooked in the conversations about ethical and responsible AI? What needs more attention?

Mahmoudian: Sometimes when you’re talking with people about ethics, they directly link it to bias and fairness. And sometimes it might be viewed as one group trying to push their ideas onto others. So I think we need to remove this from the process and make sure that ethics is not just about bias; it’s about the whole process. If you’re putting out a model that just doesn’t perform well and your customers are using it, that can affect people. Some might consider that unethical. So there are many different ways you can include ethics and responsibility in various aspects of the AI and machine learning pipeline, and it’s important for us to have that conversation. It’s not just about the endpoint of the process; responsible AI should be embedded throughout the whole pipeline.

VentureBeat: What advice do you have for enterprises building or deploying AI technologies about how to approach it more responsibly?

Mahmoudian: Have a good understanding of your process, and have a framework in place. Each industry and each company may have its own specific criteria and types of projects it’s working on. So pick the processes and dimensions that are relevant to your work and can guide you along the way.


AI Weekly: How to implement AI responsibly

Implementing AI responsibly implies adopting AI in a manner that’s ethical, transparent, and accountable as well as consistent with laws, regulations, norms, customer expectations, and organizational values. “Responsible AI” promises to guard against the use of biased data or algorithms, providing an assurance that automated decisions are justified and explainable.

But organizations often underestimate the challenges in attaining this. According to Boston Consulting Group (BCG), less than half of enterprises that achieve AI at scale have fully mature, responsible AI deployments. Organizations’ AI programs commonly neglect the dimensions of fairness and equity, social and environmental impact, and human-AI cooperation, BCG analysts found.

The Responsible AI Institute (RAI) is among the organizations aiming to help companies realize the benefits of AI implemented thoughtfully. An Austin, Texas-based nonprofit founded in 2017 by the University of Texas, USAA, Anthem, and CognitiveScale, it works with academics, policymakers, and nongovernmental organizations with the goal of “unlocking the potential of AI while minimizing unintended consequences.”

According to chairman and founder Manoj Saxena, adopting AI responsibly requires a holistic, end-to-end approach, ideally involving a multidisciplinary team. There are multiple ways that AI checks can be put into production, including:

  • Awareness of the context in which AI will be used and could create biased outcomes.
  • Engaging product owners, risk assessors, and users in fact-based conversations about potential biases in AI systems.
  • Establishing a process and methodology to continually identify, test, and fix biases.
  • Continuing investments in new research coming out around bias and AI to make black-box algorithms more responsible and fair.

“[Stakeholders need to] ensure that potential biases are understood and that the data being sourced to feed to these models is representative of various populations that the AI will impact,” Saxena told VentureBeat via email. “[They also need to] invest more to ensure members who are designing the systems are diverse.”
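
As a concrete illustration of the representativeness check Saxena describes, the sketch below compares a training dataset’s demographic mix against reference population shares. The column name, group labels, and benchmark figures are hypothetical, not drawn from RAI’s actual methodology.

```python
import pandas as pd

# Hypothetical reference shares for the population the AI will impact.
POPULATION_SHARES = {"group_a": 0.60, "group_b": 0.30, "group_c": 0.10}

def representation_gap(df: pd.DataFrame, column: str = "demographic_group") -> pd.Series:
    """Training-data share minus reference population share, per group.

    Large negative values flag groups under-represented in the data
    being fed to the model.
    """
    observed = df[column].value_counts(normalize=True)
    expected = pd.Series(POPULATION_SHARES)
    return (observed - expected).fillna(-expected)

# Toy training set skewed toward group_a.
train = pd.DataFrame({"demographic_group": ["group_a"] * 80 + ["group_b"] * 15 + ["group_c"] * 5})
print(representation_gap(train).round(3))
```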

Involving stakeholders

Mark Rolston, founder of global product design consultancy Argodesign and an advisor at RAI, anticipates that trust in AI systems will become as paramount as the rule of law has been to the past several hundred years of progress. AI’s future growth into more abstract concept-processing capabilities will make trust and validation even more critical, he believes.

“Society is becoming increasingly dependent on AI to support every aspect of modern life. AI is everywhere. And because of this we must build systems to ensure that AI is running as intended — that it is trustworthy. The argument is fundamentally that simple,” Rolston told VentureBeat in an interview. “Today we’re bumping up on the fundamental challenge of AI being too focused on literal problem solving. It’s well-understood that the future lies in teaching AI to think more abstractly … For our part as designers, that will demand the introduction of a whole new class of user interfaces that convey those abstractions.”

Saxena advocates for AI to be designed, deployed, and managed with “a strong orientation toward human and societal impact,” noting that AI evolves with time as opposed to traditional rules-based computing paradigms. Guardrails need to be established to ensure that the right data is fed into AI systems, he says, and that the right testing is done of various models to guarantee positive outcomes.

Responsible AI practices can deliver major business value. A study by Capgemini found that customers and employees will reward organizations that practice ethical AI with greater loyalty, more business, and even a willingness to advocate for them — and in turn, punish those that don’t. The study suggests that for companies that don’t approach the issue thoughtfully, there’s both reputational risk and a direct impact on the bottom line.

“As the adoption of AI continues into all aspects of our personal and professional lives, the need for ensuring that these AI systems are transparent, accountable, bias-free, and auditable is only going to grow exponentially … On the technology and academic front, responsible AI is going to become an important focus for research, innovation, and commercialization by universities and entrepreneurs alike,” Saxena said. “With the latest regulations on the power of data analytics from the FTC and EU, we see hope in the future of responsible AI that will merge the power and promise of AI and machine learning systems with a world that is fair and balanced.”

For AI coverage, send news tips to Kyle Wiggers — and be sure to subscribe to the AI Weekly newsletter and bookmark our AI channel, The Machine.

Thanks for reading,

Kyle Wiggers

AI Staff Writer


AI Weekly: Here’s how enterprises say they’re deploying AI responsibly

To get a sense of the extent to which brands are thinking about — and practicing — the tenets of responsible AI, VentureBeat surveyed executives at companies that claim to be using AI in a tangible capacity. Their responses reveal that a single definition of “responsible AI” remains elusive. At the same time, they show an awareness of the consequences of opting not to deploy AI thoughtfully.

Companies in enterprise automation

ServiceNow was the only company VentureBeat surveyed to admit that there’s no clear definition of what constitutes responsible AI usage. “Every company really needs to be thinking about how to implement AI and machine learning responsibly,” ServiceNow chief innovation officer Dave Wright told VentureBeat. “[But] every company has to define it for themselves, which unfortunately means there’s a lot of potential for harm to occur.”

According to Wright, ServiceNow’s responsible AI approach encompasses the three pillars of diversity, transparency, and privacy. When building an AI product, the company brings in a variety of perspectives and has them agree on what counts as fair, ethical, and responsible before development begins. ServiceNow also ensures that its algorithms remain explainable in the sense that it’s clear why they arrive at their predictions. Lastly, the company says it limits and obscures the amount of personally identifiable information it collects to train its algorithms. Toward this end, ServiceNow is investigating “synthetic AI” that could allow developers to train algorithms without handling real data and the sensitive information it contains.
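
ServiceNow hasn’t detailed how its synthetic-data research works, but the general idea can be sketched roughly as follows: fit a simple generative model to numeric records and sample look-alike rows, so that downstream models can be trained without handling the original sensitive data. Everything below (the model choice, the columns, the sample sizes) is an assumption for illustration, and this simple approach is not by itself a privacy guarantee.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def make_synthetic(real_data: np.ndarray, n_samples: int, n_components: int = 5) -> np.ndarray:
    """Fit a generative model to numeric records and sample look-alike rows."""
    gmm = GaussianMixture(n_components=n_components, random_state=0)
    gmm.fit(real_data)
    synthetic, _ = gmm.sample(n_samples)
    return synthetic

# Toy numeric dataset standing in for sensitive records (e.g., age, income).
rng = np.random.default_rng(0)
real = rng.normal(loc=[40, 60_000], scale=[10, 15_000], size=(500, 2))
synthetic = make_synthetic(real, n_samples=500)

# Aggregate statistics should look similar; individual real rows never leave the source system.
print(real.mean(axis=0).round(1), synthetic.mean(axis=0).round(1))
```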

“At the end of the day, responsible AI usage is something that only happens when we pay close attention to how AI is used at all levels of our organization. It has to be an executive-level priority,” Wright said.

Automation Anywhere says it has established AI and bot ethics principles to provide guidelines to its employees, customers, and partners. These include monitoring the results of any process automated using AI or machine learning to prevent outputs that reflect racist, sexist, or other biases.

“New technologies are a two-edged sword. While they can free humans to realize their potential in entirely new ways, sometimes these technologies can also, unfortunately, entrap humans in bad behavior and otherwise lead to negative outcomes,” Automation Anywhere CTO Prince Kohli told VentureBeat via email. “[W]e have made the responsible use of AI and machine learning one of our top priorities since our founding, and have implemented a variety of initiatives to achieve this.”

Beyond the principles, Automation Anywhere created an AI committee charged with challenging employees to consider ethics in their internal and external actions. For example, engineers must seek to address the threat of job loss raised by AI and machine learning technologies and the concerns of customers from an “all-inclusive” range of different minority groups. The committee also reevaluates Automation Anywhere’s principles on a regular basis so that they evolve with emerging AI technologies.

Splunk SVP and CTO Tim Tully, who anticipates the industry will see a renewed focus on transparent AI practices over the next two years, says that Splunk’s approach to putting “responsible AI” into practice is fourfold. First, the company makes sure that the algorithms it’s developing and operating are in alignment with governance policies. Then, Splunk prioritizes talent to work with its AI and machine learning algorithms to “[drive] continual improvement.” Splunk also takes steps to bake security into its R&D processes while keeping “honesty, transparency, and fairness” top of mind throughout the building lifecycle.

“In the next few years, we’ll see newfound industry focus on transparent AI practices and principles — from more standardized ethical frameworks, to additional ethics training mandates, to more proactively considering the societal implications of our algorithms — as AI and machine learning algorithms increasingly weave themselves into our daily lives,” Tully said. “AI and machine learning was a hot topic before 2020 disrupted everything, and over the course of the pandemic, adoption has only increased.”

Companies in hiring and recruitment

LinkedIn says that it doesn’t look at bias in algorithms in isolation but rather identifies which biases cause harm to users and works to eliminate them. Two years ago, the company launched an initiative called Project Every Member to take a more rigorous approach to reducing and eliminating unintended consequences in the services it builds. By using inequality A/B testing throughout the product design process, LinkedIn says it aims to build trustworthy, robust AI systems and datasets with integrity that comply with laws and “benefit society.”

For example, LinkedIn says it uses differential privacy in its LinkedIn Salary product to let members gain insights from others without compromising any individual’s information. And the company claims its Smart Replies product, which taps machine learning to suggest responses to conversations, was built to prioritize member privacy and avoid gender-specific replies.
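
LinkedIn’s production system is more sophisticated than this, but the core idea of differential privacy can be shown in a short sketch: clip each member’s contribution, then add calibrated Laplace noise to the aggregate so that no single salary can be inferred from the published number. The epsilon value, bounds, and data here are illustrative assumptions.

```python
import numpy as np

def dp_mean(values, lower, upper, epsilon=1.0, rng=None):
    """Differentially private mean via the Laplace mechanism.

    Each value is clipped to [lower, upper]; the mean's sensitivity is then
    (upper - lower) / n, so Laplace noise with scale sensitivity / epsilon
    is added to the true mean.
    """
    rng = rng or np.random.default_rng()
    values = np.clip(np.asarray(values, dtype=float), lower, upper)
    sensitivity = (upper - lower) / len(values)
    return values.mean() + rng.laplace(0.0, sensitivity / epsilon)

# Toy salary benchmark: members see an aggregate, not any individual's number.
salaries = [92_000, 105_000, 88_000, 130_000, 99_000, 117_000]
print(round(dp_mean(salaries, lower=50_000, upper=200_000, epsilon=0.5)))
```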

“Responsible AI is very hard to do without company-wide alignment. ‘Members first’ is a core company value, and it is a guiding principle in our design process,” a spokesperson told VentureBeat via email. “We can positively influence the career decisions of more than 744 million people around the world.”

Mailchimp, which uses AI to, among other things, provide personalized product recommendations for shoppers, tells VentureBeat that it trains each of its data scientists in the fields that they’re modeling. (For example, data scientists at the company working on products related to marketing receive training in marketing.) However, Mailchimp also admits that its systems are trained on data gathered by human-powered processes that can lead to a number of quality-related problems, including errors in the data, data drift, and bias.
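
Mailchimp doesn’t describe its tooling, but data drift of the kind Dewey mentions is commonly caught by comparing the distribution of incoming features against the training distribution. The sketch below uses a two-sample Kolmogorov-Smirnov test per feature; the feature names, threshold, and data are hypothetical.

```python
import numpy as np
from scipy.stats import ks_2samp

def drifted_features(train: dict, live: dict, alpha: float = 0.01) -> list:
    """Flag features whose live distribution differs from the training distribution.

    Runs a two-sample Kolmogorov-Smirnov test per feature and returns the names
    whose p-value falls below alpha (an illustrative threshold).
    """
    flagged = []
    for name, train_values in train.items():
        _, p_value = ks_2samp(train_values, live[name])
        if p_value < alpha:
            flagged.append(name)
    return flagged

rng = np.random.default_rng(1)
train = {"order_value": rng.normal(50, 10, 5_000), "items_per_order": rng.poisson(3, 5_000)}
live = {"order_value": rng.normal(65, 10, 5_000), "items_per_order": rng.poisson(3, 5_000)}
print(drifted_features(train, live))  # order_value has shifted; expect ['order_value']
```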

“Using AI responsibly takes a lot of work. It takes planning and effort to gather enough data, to validate that data, and to train your data scientists,” Mailchimp chief data science officer David Dewey told VentureBeat. “And it takes diligence and foresight to understand the cost of failure and adapt accordingly.”

For its part, Zendesk says it places an emphasis on a diversity of perspectives where its AI adoption is concerned. The company claims that, broadly, its data scientists examine processes to ensure that its software is beneficial and unbiased, follows strong ethical principles, and secures the data that makes its AI work. “As we continue to leverage AI and machine learning for efficiency and productivity, Zendesk remains committed to continuously examining our processes to ensure transparency, accountability and ethical alignment in our use of these exciting and game-changing technologies, particularly in the world of customer experience,” Zendesk president of products Adrian McDermott told VentureBeat.

Companies in marketing and management

Adobe EVP, general counsel, and corporate secretary Dana Rao points to the company’s ethics principles as an example of its commitment to responsible AI. Last year, Adobe launched an AI ethics committee and review board to help guide its product development teams and review new AI-powered features and products prior to release. At the product development stage, Adobe says its engineers use an AI impact assessment tool created by the committee to capture the potential ethical impact of any AI feature and avoid perpetuating biases.

“The continued advancement of AI puts greater accountability on us to address bias, test for potential misuse, and inform our community about how AI is used,” Rao said. “As the world evolves, it is no longer sufficient to deliver the world’s best technology for creating digital experiences; we want our technology to be used for the good of our customers and society.”

Among the first AI-powered features the committee reviewed was Neural Filters in Adobe Photoshop, which lets users add non-destructive, generative filters to create things that weren’t previously in images (e.g., facial expressions and hair styles). In accordance with its principles, Adobe added an option within Photoshop to report whether the Neural Filters output a biased result. This data is monitored to identify undesirable outcomes and allows the company’s product teams to address them by updating the AI model in the cloud.

Adobe says that while evaluating Neural Filters, one review board member flagged that the AI didn’t properly model the hairstyle of a particular ethnic group. Based on this feedback, the company’s engineering teams updated the AI dataset before Neural Filters was released.

“This constant feedback loop with our user community helps further mitigate bias and uphold our values as a company — something that the review board helped implement,” Rao said. “Today, we continue to scale this review process for all of the new AI-powered features being generated across our products.”

As for Hootsuite CTO Ryan Donovan, he believes that responsible AI ultimately begins and ends with transparency. Brands should demonstrate where and how they’re using AI — an ideal that Hootsuite strives to achieve, he says.

“As a consumer, for instance, I fully appreciate the implementation of bots to respond to high level customer service inquiries. However, I hate when brands or organizations masquerade those bots as human, either through a lack of transparent labelling or assigning them human monikers,” Donovan told VentureBeat via email. “At Hootsuite, where we do use AI within our product, we have consciously endeavored to label it distinctly — suggested times to post, suggested replies, and schedule for me being the most obvious.”

SVP of product development at ADP Jack Berkowitz says that responsible AI at ADP starts with the ethical use of data. In this context, “ethical use of data” means looking carefully at what the goal of an AI system is and the right way to achieve it.

“When AI is baked into technology, it comes with inherently heightened concerns, because it means an absence of direct human involvement in producing results,” Berkowitz said. “But a computer only considers the information you give it and only the questions you ask, and that’s why we believe human oversight is key.”

ADP retains an AI and data ethics board of experts in tech, privacy, law, and auditing that works with teams across the company to evaluate the way they use data. It also provides guidance to teams developing new uses and follows up to ensure the outputs are desirable. The board reviews ideas and evaluates potential uses to determine whether data is used fairly and in compliance with legal requirements and ADP’s own standards. If an idea falls short of meeting transparency, fairness, accuracy, privacy, and accountability requirements, it doesn’t move forward within the company, Berkowitz says.

Marketing platform HubSpot similarly says its AI projects undergo a peer review for ethical considerations and bias. According to senior machine learning engineer Sadhbh Stapleton Doyle, the company uses proxy data and external datasets to “stress test” its models for fairness. In addition to model cards, HubSpot also maintains a knowledge base of ways to detect and mitigate bias.
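
HubSpot hasn’t published its templates, but a model card of the kind referenced above is typically a short, structured summary of a model’s intended use, training data, limitations, and evaluation results sliced by subgroup, kept alongside the trained model. The sketch below shows one minimal way to represent one; every field value is hypothetical.

```python
from dataclasses import dataclass, field, asdict
import json

@dataclass
class ModelCard:
    """Minimal model card: a structured summary kept alongside a trained model."""
    name: str
    intended_use: str
    training_data: str
    limitations: str
    subgroup_metrics: dict = field(default_factory=dict)  # e.g., AUC sliced by customer segment

card = ModelCard(
    name="lead-scoring-v3",
    intended_use="Rank inbound leads for sales follow-up; not for pricing or credit decisions.",
    training_data="12 months of anonymized CRM interactions, sampled across regions.",
    limitations="Sparse data for very small companies; retrained quarterly to limit drift.",
    subgroup_metrics={"smb": {"auc": 0.81}, "mid_market": {"auc": 0.84}, "enterprise": {"auc": 0.78}},
)

# Rendered to JSON so it can be reviewed alongside the bias checklist before release.
print(json.dumps(asdict(card), indent=2))
```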

The road ahead

A number of companies declined to tell VentureBeat how they’re deploying AI responsibly in their organizations, highlighting one of the major challenges in the field: transparency. A spokesperson for UiPath said that the robotic process automation startup “wouldn’t be able to weigh in” on responsible AI. Zoom, which recently faced allegations that its face-detection algorithm erased Black faces when applying virtual backgrounds, chose not to comment. And Intuit told VentureBeat that it had nothing to share on the topic.

Of course, transparency isn’t the end-all-be-all when it comes to responsible AI. For example, Google, which loudly trumpets its responsible AI practices, was recently the subject of a boycott by AI researchers over the company’s firing of Timnit Gebru and Margaret Mitchell, co-leaders of a team working to make AI systems more ethical. Facebook also purports to be implementing AI responsibly, but to date, the company has failed to present evidence that its algorithms don’t encourage polarization on its platforms.

Returning to the Boston Consulting Group survey, Steven Mills, chief ethics officer and a coauthor, noted that the depth and breadth of most responsible AI efforts fall short of what’s needed to truly ensure responsible AI. Organizations’ responsible AI programs typically neglect the dimensions of fairness and equity, social and environmental impact, and human-AI cooperation because they’re difficult to address.

Greater oversight is a potential remedy. Companies like Google, Amazon, IBM, and Microsoft; entrepreneurs like Sam Altman; and even the Vatican recognize this — they’ve called for clarity around certain forms of AI, like facial recognition. Some governing bodies have begun to take steps in the right direction, like the EU, which earlier this year floated rules focused on transparency and oversight. But it’s clear from developments over the past months that much work remains to be done.

As Salesforce principal architect of ethical AI practice Kathy Baxter told VentureBeat in a recent interview, AI can result in harmful, unintended consequences if algorithms aren’t trained and designed inclusively. Technology alone can’t solve systemic health and social inequities, she asserts. In order to be effective, technology must be built and used responsibly — because no matter how good a tool is, people won’t use it unless they trust it.

“Ultimately, I believe the benefits of AI should be accessible to everyone, but it is not enough to deliver only the technological capabilities of AI,” Baxter said. “Responsible AI is technology developed inclusively, with a consideration towards specific design principles to mitigate, as much as possible, unforeseen consequences of deployment — and it’s our responsibility to ensure that AI is safe and inclusive. At the end of the day, technology alone cannot solve systemic health and social inequities.”

For AI coverage, send news tips to Khari Johnson and Kyle Wiggers — and be sure to subscribe to the AI Weekly newsletter and bookmark our AI channel, The Machine.

Thanks for reading,

Kyle Wiggers

AI Staff Writer
