AI ethics champion Margaret Mitchell on self-regulation and ‘foresight’

Ethics and artificial intelligence have become increasingly intertwined due to the pervasiveness of AI. But researchers, creators, corporations, and governments still face major challenges if they hope to address some of the more pressing concerns around AI’s impact on society.

Much of this comes down to foresight — being able to adequately predict what problems a new AI product, feature, or technology could create down the line, rather than focusing purely on short-term benefits.

“If you do believe in foresight, then it should become part of what you do before you make the product,” AI researcher and former Googler Margaret Mitchell said during a fireside chat at VentureBeat’s Transform 2021 event today. “I think right now, AI ethics is at a stage where it’s seen as the last thing you do, like a policing force or a block to launch. But if you’re taking it seriously, then it needs to be hand in hand with development as a tech-positive thing to do.”

Google fired Margaret Mitchell from her role as Ethical AI lead back in February, shortly after firing her co-lead Timnit Gebru. Accusations of research censorship and “retaliatory firings” abounded in the weeks that followed. While Mitchell said she was initially devastated about losing her job, she soon realized there was significant demand for her skills as an AI ethics researcher.

“From the time Timnit was fired until I was fired was general devastation,” Mitchell said. “And then upon being fired, it was not better. But it became clear to me that there was a very real interest in AI ethics. It made me realize that there were regulators who really needed help with the technical details of AI, that I could for the first time actually help and work with, that there were tons of companies that really wanted to start operationalizing details of AI ethics and bias and fairness and didn’t really know how to do it. It became a bit of an eye-opener, that there are a lot of opportunities right now.”

Self-regulation

Google, which releases all manner of AI-powered tools and features — from facial recognition for photo organization to smart replies for YouTube comments — has had to address growing societal concerns around AI. Although it has been embroiled in ethics controversies of late, in 2018 the company unveiled seven principles to guide its approach to AI development. And with more proposed AI regulations emerging to address the perceived threats posed by intelligent machines, it makes sense for big companies to proactively embed ethics into their AI product development ethos before external forces interfere. Just yesterday, the U.S. House Judiciary Committee held a hearing on facial recognition technology that included a look at the proposed Biometric Technology Moratorium Act, which seeks to ban government use of biometric technology in law enforcement.

The question centers on government restrictions versus corporate self-regulation.

“I came to a point in my career at Google where I realized that as we moved closer to dealing with regulation externally, we were really well-positioned to do self-regulation internally and really meet external regulation with nitty-gritty details of what it actually meant to do these higher-level goals that regulation put forward,” Mitchell explained. “And that’s in the benefit of the company because you don’t want regulation to be disconnected from technology in a way that [rules] stymie innovation or they end up creating the opposite of what they’re trying to get at. So it really seemed like a unique opportunity to work within the company, figuring out the basics of what it meant to do something like self-regulation.”

An AI ethics practitioner might find it easier to influence product design if they are deeply embedded inside the company. But there are clear tensions at play if — for example — an employee’s recommendations are seen as a threat to the company’s bottom line.

“I came into Google really wanting to work on hard problems, and this is definitely a hard problem, in part because it can push against the idea of making profit, for example,” Mitchell said. “And so that creates a natural tension, [but] at the same time it’s possible to do really meaningful research on AI ethics when you can be there in the company, understanding the ins and outs of how products are created. If you ever want to create some sort of auditing procedure, then really understanding — from end to end — how machine learning systems are built is really important.”

But as Mitchell and Gebru’s respective dismissals demonstrate, individuals working to reduce bias and help AI system designers embed ethics into their creations often face an uphill battle.

“I think [the firing] points to a lot about diversity and inclusion actually,” Mitchell said. “I try and tell people that if you want to include, don’t exclude. And here you have a very prime example of exclusion, and a certain amount of immaturity I would say, that speaks to a culture that really isn’t embracing the ideas of people who are not as similar to everyone else. I think it’s always an issue that one is concerned about if you have marginalized characteristics, if you’ve experienced the experiences of women in tech. But I think that it really came to bite me when I was fired, just how much of an outsider I was treated as, and I don’t think it would have been like that if I was part of the game in the same way that a lot of my male colleagues were.”

Idealistic

Mitchell argues that many tech companies and technologies have been built for an idealized future, or the idea that being able to do something would be “very, very cool.” But this thinking, she said, is usually devoid of the social context of how people, governments, or other companies actually use or misuse the technology. Thus, companies tend to hire people not so much because of their personal experiences or views on how technology might impact the world in 10 years, but based on short-term business goals.

“It tends to be a very sort of pie in the sky positive view that runs into a kind of myopia about the realities of how things evolve over time,” Mitchell said.

This sentiment was echoed in a recent study that found few major research papers properly addressed the ways AI could negatively impact the world. The findings, which were published by researchers from Stanford University; the University of California, Berkeley; the University of Washington; and University College Dublin & Lero, showed dominant values were “operationalized in ways that centralize power, disproportionally benefiting corporations while neglecting society’s least advantaged,” as VentureBeat wrote at the time.

Mitchell added that hiring a more diverse AI research workforce can help counter this. “I definitely found that people with marginalized characteristics — so people who had experienced discrimination — had a deeper understanding of the kinds of things that could happen to people negatively and the way the world works in a way that was a bit less rosy,” she said. “And that really helps to inform the longer-term view of what would happen over time.”

Communicate

One of the challenges of working in any big company is that of two-way communication — being able to broadcast orders down the chain is all very well, but how do you facilitate feedback, something that is integral to ethical AI research?

“A lot of companies are very hierarchical, and technology is no exception, where communication can flow top-down, but [it’s harder] communicating bottom-up to help the company understand that if they release this [new feature] they’re going to get in trouble for that,” Mitchell said. “The lack of two-way communication, I think largely due to the hierarchical structure, can really hinder moving tech forward in a way that is well-informed by foresight and social issues.”


Google fires Ethical AI lead Margaret Mitchell

Google fired Margaret “Meg” Mitchell, lead of the Ethical AI team, today. The move comes just hours after Google announced diversity policy changes and Google AI chief Jeff Dean sent an apology in the wake of the firing of former Google AI ethics lead Timnit Gebru in late 2020.

Mitchell, a staff research scientist and Google employee since 2016, had been under internal investigation at Google for five weeks. In an internal email sent shortly before she was placed under investigation, Mitchell called Google's firing of Gebru "forever after a really, really, really terrible decision."

A statement from a Google spokesperson about Mitchell reads: “After conducting a review of this manager’s conduct, we confirmed that there were multiple violations of our code of conduct, as well as of our security policies, which included the exfiltration of confidential business-sensitive documents and private data of other employees.”

When asked for comment, Mitchell declined, describing her mood as "confused and hurting."

Mitchell was a member of the recently formed Alphabet Workers Union. Gebru has previously suggested that union protection could be a way for AI researchers to shield themselves from retaliation like the kind she encountered when a research paper she co-wrote was reviewed last year.

Earlier today, Dean apologized if Black and female employees were hurt by the firing of Gebru. Additional changes to Google diversity policy were also announced today, including tying DEI goals to performance evaluations for employees at the VP level and above.

On Thursday, Google restructured its AI ethics efforts, bringing 10 teams within Google Research, including the Ethical AI team, under Google VP Marian Croak. Croak will report directly to Dean. In a video message, Croak called for more "diplomatic" conversations when addressing ways AI can harm people. Multiple members of the Ethical AI team said they found out about the restructuring in the press.

“Marian is a highly accomplished trailblazing scientist that I had admired and even confided in. It’s incredibly hurtful to see her legitimizing what Jeff Dean and his subordinates have done to me and my team,” Gebru told VentureBeat about the decision Thursday.

Mitchell and Gebru came together to co-lead the Ethical AI team in 2018, eventually creating what’s believed to be one of the most diverse divisions within Google Research. The Ethical AI team has published research on model cards to bring transparency to AI and how to perform internal algorithm audits. Last year, the Ethical AI team hired its first sociologists and began to consider how to address algorithmic fairness with critical race theory. At the VentureBeat Transform conference in 2019, Mitchell called diversity in hiring practices important to ethical deployments of AI.

The way Gebru was fired led to allegations of gaslighting, racism, and retaliation, as well as questions from thousands of Google employees and members of Congress with records of authoring legislation to regulate algorithms. Members of the Ethical AI team requested Google leadership take a series of steps to restore trust.

A Google spokesperson told VentureBeat that the Google legal team has worked with outside counsel to conduct an investigation into how Google fired Gebru. Google also worked with outside counsel to investigate employee allegations of bullying and mistreatment by DeepMind cofounder Mustafa Suleyman, who led ethics research efforts at the London-based startup acquired by Google in 2014.

The spokesperson did not provide details when asked what steps the company has taken to meet the demands the Ethical AI team made to restore trust, or those laid out in a letter signed by more than 2,000 employees shortly after Gebru's firing that called for a transparent investigation carried out in full view of the public.

A Google spokesperson also told VentureBeat that Google will work more closely with HR in regard to "certain employee exits that are sensitive in nature." In a December 2020 interview with VentureBeat, Gebru called a companywide memo that described de-escalation strategies as part of the solution "dehumanizing" and a response that paints her as an angry Black woman.

Updated 5:40 p.m. to include comment from Margaret Mitchell


Google reportedly fired Margaret Mitchell, its Ethical AI Team founder

Google has evidently fired the founder and co-lead of its Ethical AI team, Margaret Mitchell.

The move comes after weeks in which Mitchell was locked out of her work accounts amid an investigation into her objections to the controversial firing of her fellow co-lead, Timnit Gebru.

According to a Google spokesperson, the investigation into Mitchell concerned alleged sharing of internal company files:

Our security systems automatically lock an employee’s corporate account when they detect that the account is at risk of compromise due to credential problems or when an automated rule involving the handling of sensitive data has been triggered. In this instance, yesterday our systems detected that an account had exfiltrated thousands of files and shared them with multiple external accounts. We explained this to the employee earlier today.

The firing of Timnit Gebru sent shockwaves throughout the AI community. It has been widely viewed as a move to remove voices of dissent when those voices, world-renowned ethicists hired specifically to investigate and oversee the ethical development and deployment of Google's AI systems, don't say what the company wants to hear.

Details are still coming in, but it appears Mitchell has been let go as a result of Google's investigation.

This story is developing…

Published February 19, 2021 — 22:26 UTC

Google targets AI ethics lead Margaret Mitchell after firing Timnit Gebru

Google has revoked Ethical AI team leader Margaret “Meg” Mitchell’s employee privileges and is currently investigating her activity, according to a statement provided by a company spokesperson. Should Google fire Mitchell, it will mean the company has effectively chosen to behead its own AI ethics team in under two months. In an interview with VentureBeat last month, former Google AI ethics co-lead Timnit Gebru said she had worked with Mitchell since 2018 to create one of the most diverse teams within Google Research.

Gebru tweeted Tuesday evening that Google’s move to freeze Mitchell’s employee account echoed the way hers was frozen before she was fired. When VentureBeat emailed Google to ask if Mitchell was still an employee, a spokesperson provided the following statement:

“Our security systems automatically lock an employee’s corporate account when they detect that the account is at risk of compromise due to credential problems or when an automated rule involving the handling of sensitive data has been triggered. In this instance, yesterday our systems detected that an account had exfiltrated thousands of files and shared them with multiple external accounts. We explained this to the employee earlier today. We are actively investigating this matter as part of standard procedures to gather additional details.”

Last month, Google fired Gebru following a demand by Google leadership that she retract an AI research paper she coauthored about the negative consequences of large-scale language models, including their disproportionate impact on marginalized communities in the form of environmental harm and perpetuated stereotypes. Since then, Google has released a trillion-parameter language model and told its AI researchers to strike a positive tone on topics deemed "sensitive." Some members of the AI research community have pledged not to review the work of Google researchers at academic conferences in protest.

Mitchell has publicly criticized actions taken by Google leaders like AI chief Jeff Dean following the ousting of Gebru.

After Gebru was fired, April Curley, a queer Black woman who said she was fired by Google last fall, publicly recounted numerous negative experiences during her time as a recruiter of talent from historically Black colleges and universities (HBCUs).

On Tuesday, news emerged that Google CEO Sundar Pichai will meet with HBCU leaders following allegations of racism and sexism at the company by current and former employees.

Members of Congress interested in regulating AI and more than 2,000 Google employees have joined prominent figures in the AI research community in questioning Gebru’s dismissal. Members of Google’s AI ethics team called for her reinstatement in a series of demands sent to company leadership.

Organizers cited the way Google treated Gebru and the impact AI can have on society as motivators behind the establishment of the Alphabet Workers Union, which was formed earlier this month and as of a week ago counted 700 members, including Margaret Mitchell. Gebru had previously endorsed the idea of a workers union as a way to help protect AI researchers from company retribution.

“With AI permeating every aspect of our world—from criminal justice, to credit scores, to military applications—paying careful attention to ethics within the industry is critical,” the Alphabet Workers Union said in a statement shared with VentureBeat.

“As one of the most profitable players in the AI industry, Alphabet has a responsibility to continue investing in its ethical application. Margaret founded the Ethical AI team, built a cross-product area coalition around machine learning fairness, and is a critical member of academic and industry communities around the ethical production of AI. Regardless of the outcome of the company’s investigation, the ongoing targeting of leaders in this organization calls into question Google’s commitment to ethics—in AI and in their business practices. Many members of the Ethical AI team are AWU members and the membership of our union recognizes the crucial work that they do and stands in solidarity with them in this moment.”

The incoming Biden administration has in recent days shared a commitment to diversity and to addressing algorithmic bias and other AI-driven harms to society through its science and technology policy platform. Experts in AI, law, and policy told VentureBeat last month that Google’s treatment of Gebru could impact a range of policy matters, including the passage of stronger whistleblower protections for tech workers and more public funding of independent AI research.

What happens to Mitchell will continue to shape attitudes toward corporate self-governance and speculation about the veracity of research produced with Big Tech funding. A research paper published in late 2020 compared the way Big Tech funds AI ethics research to Big Tobacco's history of funding health research.

Updated 7:18 am PT January 21 to include a statement from the Alphabet Workers Union.
