Google’s ethical AI researchers complained of harassment long before Timnit Gebru’s firing

Google’s AI leadership came under fire in December when star ethics researcher Timnit Gebru was abruptly fired while working on a paper about the dangers of large language models. Now, new reporting from Bloomberg suggests the turmoil began long before her termination — and includes allegations of bias and sexual harassment.

Shortly after Gebru arrived at Google in 2018, she informed her boss that a colleague had been accused of sexual harassment at another organization. Katherine Heller, a Google researcher, reported the same incident, which included allegations of inappropriate touching. Google immediately opened an investigation into the man’s behavior. Bloomberg did not name the man accused of harassment, and The Verge does not know his identity.

The allegations coincided with an even more explosive story. Andy Rubin, the “father of Android,” had received a $90 million exit package despite being credibly accused of sexual misconduct. The news sparked outrage at Google, and 20,000 employees walked out of work to protest the company’s handling of sexual harassment.

Gebru and Margaret Mitchell, co-lead of the ethical AI team, went to AI chief Jeff Dean with a “litany of concerns,” according to Bloomberg. They told Dean about the colleague who’d been accused of harassment, and said there was a perceived pattern of women being excluded and undermined on the research team. Some were given lower roles than men, despite having better qualifications. Mitchell also said she’d been denied a promotion due to “nebulous complaints to HR about her personality.”

Dean was skeptical about the harassment allegations but said he would investigate, Bloomberg reports. He pushed back on the idea that there was a pattern of women on the research team getting lower-level positions than men.

After the meeting, Dean announced a new research project with the alleged harasser at the helm. Nine months later, the man was fired for “leadership issues,” according to Bloomberg. He’d been accused of misconduct at Google, although the investigation was still ongoing.

After the man was fired, he threatened to sue Google. The legal team told employees who’d spoken out about his conduct that they might hear from the man’s lawyers. The company was “vague” about whether it would defend the whistleblowers, Bloomberg reports.

The harassment allegation was not an isolated incident. Gebru and her co-workers reported additional claims of inappropriate behavior and bullying after the initial accusation.

In a statement emailed to The Verge, a Google spokesperson said: “We investigate any allegations and take firm action against employees who violate our clear workplace policies.”

Gebru said there were also ongoing issues with getting Google to respect the ethical AI team’s work. When she tried to look into a dataset released by Google’s self-driving car company Waymo, the project became mired in “legal haggling.” Gebru wanted to explore how skin tone impacted Waymo’s pedestrian-detection technology. “Waymo employees peppered the team with inquiries, including why they were interested in skin color and what they were planning to do with the results,” according to the Bloomberg article.

After Gebru went public about her firing, she received an onslaught of harassment from people who claimed that she was trying to get attention and play the victim. The latest news lends further weight to her claim that the issues she raised were part of a pattern of alleged bias on the research team.

Update April 21st, 6:05PM ET: Article updated with statement from Google.



Google is restructuring its AI teams after Timnit Gebru’s firing

Google is reorganizing its responsible AI teams in the wake of Timnit Gebru’s firing. The ethical AI team will now roll up to Marian Croak, a prominent Black executive in the engineering department. Croak will also oversee employees focused on engineering fairness products, according to Bloomberg. She will report to Jeff Dean, who leads the company’s AI efforts.

The ethical AI team was not aware of the reorganization until news broke Wednesday night.

In a blog post confirming Croak’s appointment, Google said the executive will be leading “a new center of expertise on responsible AI within Google Research.”

The change is an attempt to stabilize the department, which has been in turmoil for months, Bloomberg reports. In December, Timnit Gebru, co-lead of the ethical AI team, announced she’d been abruptly fired. The following month, the company began investigating her counterpart Margaret Mitchell, who had been using a script to go through her emails to look for examples of discrimination against Gebru. Mitchell now says she’s been locked out of her corporate accounts for more than five weeks.

Prior to her dismissal, Gebru had been trying to publish a paper on the dangers of large language models. Megan Kacholia, vice president of Google Research, asked her to retract the paper. Gebru pushed back, saying the company needed to be more transparent about the publication process. Shortly afterward, she was fired.

The ethical AI team published a six-page letter in the wake of Gebru’s termination, calling on Kacholia to be replaced. “We have lost confidence in Megan Kacholia and we call for her to be removed from our reporting chain,” the letter read.

Now, the team may be getting its wish. As part of the reorganization, Kacholia will no longer lead the ethical AI researchers, according to Bloomberg. It’s not clear what this means for Margaret Mitchell, who is still being investigated by the company.

Google did not immediately respond to a request for comment from The Verge.

Update February 18th, 1:38PM EST: This article has been updated to include a blog post from Google.



Google is changing its diversity and research policies after Timnit Gebru’s firing

Google is changing its policies related to research and diversity after completing an internal investigation into the firing of ethical AI team co-leader Timnit Gebru, according to Axios. The company intends to tie the pay of certain executives to diversity and inclusivity goals. It’s also making changes to how sensitive employee exits are managed.

Although Google did not reveal the results of the investigation, the changes seem to be direct responses to how the situation with Gebru went down. After Google demanded that a paper she co-authored be retracted, Gebru told research team management that she would resign from her position and work on a transition plan, unless certain conditions were met. Instead of a transition plan, the company immediately ended her employment while she was on vacation. This sparked backlash from members of her team, and even caused some Google engineers to quit in protest.

Google had claimed that Gebru’s paper was not submitted properly, though the research team disagreed. Google has now said it will “streamline its process for publishing research,” according to Axios, but the exact details of the policy changes weren’t given.

In an internal email to staff, Jeff Dean, head of AI at Google, wrote:

I heard and acknowledge what Dr. Gebru’s exit signified to female technologists, to those in the Black community and other underrepresented groups who are pursuing careers in tech, and to many who care deeply about Google’s responsible use of AI. It led some to question their place here, which I regret.

He also apologized for how Gebru’s exit was handled, although he stopped short of calling it a firing.

The policy changes come a day after Google restructured its AI teams, a change which members of the ethical AI team were “the last to know about,” according to research scientist Alex Hanna, who is a part of the team.

Google declined to share the updated policies with The Verge, instead pointing to Axios’s article for details.



Timnit Gebru’s team at Google is going public with their side of the story

Google employees who worked with Timnit Gebru are coming out publicly to dispute claims against the star AI ethics researcher. On Monday, the team published a letter on the Google Walkout Medium account firmly stating that Gebru was fired and did not resign as Google’s head of artificial intelligence, Jeff Dean, said. They also said the publication review policy that Gebru was supposed to follow was applied “unevenly and discriminatorily.”

“Dr. Gebru’s dismissal has been framed as a resignation, but in Dr. Gebru’s own words, she did not resign,” the letter says. It notes that Gebru asked for certain conditions to be met in order for her to stay at Google, including transparency around who wanted her paper retracted. Ultimately, Google leadership said the conditions could not be met and preemptively accepted her resignation. Her own manager said he was “stunned.”

The paper that got Gebru fired detailed potential risks associated with large language models, including over-relying on data from wealthy countries that have more internet access. “The result is that AI-generated language will be homogenized, reflecting the practices of the richest countries and communities,” wrote MIT Technology Review.

This research could have been problematic for Google, which in 2018 created a large language model called BERT that has since changed how the company handles search queries.

Gebru was planning to present the paper at a computer science conference in March. On October 7th, she submitted it for review internally at Google. Shortly after midnight on October 8th, it was approved.

In his statement, Dean said the research team requires two weeks for review. “Unfortunately, this particular paper was only shared with a day’s notice before its deadline,” he wrote.

But Gebru’s team is pushing back on that assessment, saying the review policy is meant to be flexible, and most people do not follow the structure Dean laid out. The team collected data showing the vast majority of approvals happen right before the deadline, and 41 percent happen after the deadline. “There is no hard requirement for papers to actually go through this review with two weeks notice,” they wrote.

Google managers asked Gebru to retract the paper or take her name off it, a request she said felt like censorship in an interview with Wired. “You’re not going to have papers that make the company happy all the time and don’t point out problems,” she said. “That’s antithetical to what it means to be that kind of researcher.”

Gebru sent an email to the Brain Women and Allies listserv at Google detailing the pushback she’d gotten on the paper. She also voiced exasperation with the company’s diversity, equity, and inclusion efforts. “The DEI [objectives and key results] that we don’t know where they come from (and are never met anyways), the random discussions, the ‘we need more mentorship’ rather than ‘we need to stop the toxic environments that hinder us from progressing’ the constant fighting and education at your cost, they don’t matter,” she wrote.

This frustration was shared by members of her team who felt the company’s goals to create a more diverse and equitable workplace were weak. “They’re really paltry demands,” says Alex Hanna, a senior researcher who worked under Gebru.

More than 1,500 employees have signed a Google Walkout petition protesting Gebru’s dismissal. “Instead of being embraced by Google as an exceptionally talented and prolific contributor, Dr. Gebru has faced defensiveness, racism, gaslighting, research censorship, and now a retaliatory firing,” the petition says.

But it’s members of her team who feel the loss most acutely. “We’re pretty deflated,” Hanna says. “When someone who is the heart and soul of your team gets fired ostensibly for doing ethics research, what can you do?”

Correction: An earlier version of this article stated the letter was signed by more than 1,500 Google employees. That was the Google Walkout petition. We regret the error.



Timnit Gebru’s actual paper may explain why Google ejected her

A paper co-authored by former Google AI ethicist Timnit Gebru raised some potentially thorny questions for Google about whether AI language models may be too big, and whether tech companies are doing enough to reduce potential risks, according to MIT Technology Review. The paper also questioned the environmental costs and inherent biases in large language models.

Google’s AI team created such a language model, BERT, in 2018, and it was so successful that the company incorporated BERT into its search engine. Search is a highly lucrative segment of Google’s business; in the third quarter of this year alone, it brought in revenue of $26.3 billion. “This year, including this quarter, showed how valuable Google’s founding product — search — has been to people,” CEO Sundar Pichai said on a call with investors in October.

Gebru and her team submitted their paper, titled “On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?” for a research conference. She said in a series of tweets on Wednesday that following an internal review, she was asked to retract the paper or remove Google employees’ names from it. She says she set conditions for taking her name off the paper, and said that if Google couldn’t meet them, they could “work on a last date.” Gebru says she then received an email from Google informing her they were “accepting her resignation effective immediately.”

The head of Google AI, Jeff Dean, wrote in an email to employees that the paper “didn’t meet our bar for publication.” He wrote that one of Gebru’s conditions for continuing to work at Google was for the company to tell her who had reviewed the paper and their specific feedback, which it declined to do. “Timnit wrote that if we didn’t meet these demands, she would leave Google and work on an end date. We accept and respect her decision to resign from Google,” Dean wrote.

In his letter, Dean wrote that the paper “ignored too much relevant research,” a claim that the paper’s co-author Emily M. Bender, a professor of computational linguistics at the University of Washington, disputed. Bender told MIT Technology Review that the paper, which had six collaborators, was “the sort of work that no individual or even pair of authors can pull off,” noting it had a citation list of 128 references.

Gebru is known for her work on algorithmic bias, especially in facial recognition technology. In 2018, she co-authored a paper with Joy Buolamwini that showed error rates for identifying darker-skinned people were much higher than error rates for identifying lighter-skinned people, since the datasets used to train algorithms were overwhelmingly white.

Gebru told Wired in an interview published Thursday that she felt she was being censored. “You’re not going to have papers that make the company happy all the time and don’t point out problems,” she said. “That’s antithetical to what it means to be that kind of researcher.”

Since news of her termination became public, thousands of supporters, including more than 1,500 Google employees, have signed a letter of protest. “We, the undersigned, stand in solidarity with Dr. Timnit Gebru, who was terminated from her position as Staff Research Scientist and Co-Lead of Ethical Artificial Intelligence (AI) team at Google, following unprecedented research censorship,” reads the petition, titled Standing with Dr. Timnit Gebru.

“We call on Google Research to strengthen its commitment to research integrity and to unequivocally commit to supporting research that honors the commitments made in Google’s AI Principles.”

The petitioners are demanding that Dean and others “who were involved with the decision to censor Dr. Gebru’s paper meet with the Ethical AI team to explain the process by which the paper was unilaterally rejected by leadership.”

Google did not immediately respond to a request for comment on Saturday.
