Whistleblower says DeepMind waited months to fire a researcher accused of sexual misconduct

A former employee at DeepMind, the Google-owned AI research lab, accuses the company’s human resources department of intentionally delaying its response to her complaints about sexual misconduct in the workplace, as first reported by the Financial Times.

In an open letter posted to Medium, the former employee (who goes by Julia to protect her identity) says she was sexually harassed by a senior researcher for months while working at the London-based company. During this time, she was allegedly subjected to numerous sexual propositions and inappropriate messages, including some that described past sexual violence against women and threats of self-harm.

Julia first contacted the company’s HR and grievance team as early as August 2019 to outline her interactions with the senior researcher, and she raised a formal complaint in December 2019. The researcher in question reportedly wasn’t dismissed until October 2020. He faced no suspension and was even given a company award while HR was processing Julia’s complaint, leaving Julia fearing for her own safety and that of her female colleagues.

Although the Financial Times’ report says her case wasn’t fully resolved until seven months after she first reported the misconduct, Julia told The Verge that the whole process actually took 10 months. She claims DeepMind’s communications team used “semantics” to “push back” on the Financial Times’ story and understate how long it took to address her case.

“It was in fact 10 months, they [DeepMind] argued it was ‘only’ 7 because that’s when the appeal finished, though the disciplinary hearing took another 2 months, and involved more rounds of interviews for me,” Julia said. “My point stands: whether it was 10 months or 7, it was far, far too long.”

Besides believing her case was “intentionally dragged out,” Julia also claims two separate HR managers told her she would face “disciplinary action” if she spoke out about it. Her manager allegedly required her to attend meetings with the senior researcher as well, despite being “partially” aware of her report, the Financial Times says. While Julia herself didn’t sign a non-disclosure agreement, many other DeepMind employees have.

In a separate post on Medium, Julia and others offered several suggestions as to how Alphabet (Google and DeepMind’s parent company) can improve its response to complaints and reported issues, such as doing away with the NDA policy for victims and setting a strict two-month time limit for HR to resolve grievances.

The Alphabet Workers Union also expressed support for Julia in a tweet, noting: “The NDAs we sign should never be used to silence victims of harassment or workplace abuse. Alphabet should have a global policy against this.”

In a statement to The Verge, DeepMind interim head of communications Laura Anderson acknowledged the struggles Julia went through but avoided taking accountability for her experiences. “DeepMind takes all allegations of workplace misconduct extremely seriously and we place our employees’ safety at the core of any actions we take,” Anderson said. “The allegations were investigated thoroughly, and the individual who was investigated for misconduct was dismissed without any severance payments… We’re sorry that our former employee experienced what they did and we recognise that they found the process difficult.”

DeepMind has faced concerns over its treatment of employees in the past. In 2019, a Bloomberg report said DeepMind co-founder Mustafa Suleyman, also known as “Moose,” was placed on administrative leave amid controversy surrounding some of his projects. Suleyman left the company later that year to join Google. In 2021, a Wall Street Journal report revealed that Suleyman was stripped of management duties in 2019 for allegedly bullying staff members. Google also launched an investigation into his behavior at the time, but it never made its findings public.

“If anyone finds themselves in a similar situation: first, right now, before anything bad happens, join a union,” Julia said in response to the broader concerns. “Then if something bad happens: Document everything. Know your rights. Don’t let them drag it out. Stay vocal. These stories are real, they are happening to your colleagues.”

Correction April 5th 6:51PM ET: A previous version of the story stated Julia signed an NDA. She did not, but other DeepMind employees have. We regret the error.

Repost: Original Source and Author Link


Google employee group urges Congress to strengthen whistleblower protections for AI researchers


Google’s decision to fire its AI ethics leaders is a matter of “urgent public concern” that merits strengthening laws to protect AI researchers and tech workers who want to act as whistleblowers. That’s according to a letter published by Google employees today in support of the Ethical AI team at Google and former co-leads Margaret Mitchell and Timnit Gebru, whom Google fired two weeks ago and in December 2020, respectively.

Firing Gebru, one of the best known Black female AI researchers in the world and one of few Black women at Google, drew public opposition from thousands of Google employees. It also led critics to claim the incident may have shattered Google’s Black talent pipeline and signaled the collapse of AI ethics research in corporate environments.

“We must stand up together now, or the precedent we set for the field — for the integrity of our own research and for our ability to check the power of big tech — bodes a grim future for us all,” reads the letter published by the group Google Walkout for Change. “Researchers and other tech workers need protections which allow them to call out harmful technology when they see it, and whistleblower protection can be a powerful tool for guarding against the worst abuses of the private entities which create these technologies.”

Google Walkout for Change was created in 2018 by Google employees organizing to force change at Google. According to organizers, the global walkout that year involved 20,000 Googlers in 50 cities around the world.

The letter also urges academic conferences to refuse to review papers subjected to editing by corporate lawyers and to begin declining sponsorship from businesses that retaliate against ethics researchers. “Too many institutions of higher learning are inextricably tied to Google funding (along with other Big Tech companies), with many faculty having joint appointments with Google,” the letter reads.

The letter addressed to state and national lawmakers cites a VentureBeat article published two weeks after Google fired Gebru about potential policy outcomes that could include changes to whistleblower protection laws and unionization. That analysis — which drew on conversations with ethics, legal, and policy experts — cites UC Berkeley Center for Law and Technology co-director Sonia Katyal, who analyzed whistleblower protection laws in 2019 in the context of AI. In an interview with VentureBeat late last year, Katyal called them “totally insufficient.”

“What we should be concerned about is a world where all of the most talented researchers like [Gebru] get hired at these places and then effectively muzzled from speaking. And when that happens, whistleblower protections become essential,” Katyal told VentureBeat.

VentureBeat spoke with two sources familiar with Google AI ethics and policy matters who said they want to see stronger whistleblower protection for AI researchers. One person familiar with the matter said that at Google and other tech companies, people sometimes know something is broken but won’t fix it because they either don’t want to or don’t know how to.

“They’re stuck in this weird place between making money and making the world more equitable, and sometimes that inherent tension is very difficult to resolve,” the person, who spoke on condition of anonymity, told VentureBeat. “But I believe that they should resolve it because if you want to be a company that touches billions of people, then you should be responsible and held accountable for how you touch those billions of people.”

After Gebru was fired, that source described a sense among people from underrepresented groups at Google that if they push the envelope too far now they might be perceived as hostile and people will start filing complaints to push them out. She said this creates a feeling of “genuine unsafety” in the workplace and a “deep sense of fear.”

She also told VentureBeat that when we’re looking at technology with the power to shape human lives, we need to have people throughout the design process with the authority to overturn potentially harmful decisions and make sure models learn from mistakes.

“Without that, we run the risk of … allowing algorithms that we don’t understand to literally shape our ability to be human, and that inherently isn’t fair,” she said.

The letter also criticizes Google leadership for “harassing and intimidating” not only Gebru and Mitchell, but other Ethical AI team members as well. Ethical AI team members were reportedly told to remove their names from a paper under review at the time Gebru was fired. The final copy of that paper, titled “On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?” was published this week at AI ethics conference Fairness, Accountability, and Transparency (FAccT) and lists no authors from Google. But a copy of the paper VentureBeat obtained lists Mitchell as a coauthor of the paper, as well as three other members of the Ethical AI team, each with an extensive background in examining bias in language models or human speech. Google AI chief Jeff Dean questioned the veracity of the research represented in that paper in an email to Google Research. Last week, FAccT organizers told VentureBeat the organization has suspended sponsorship from Google.

The letter published today calls on academics and policymakers to take action and follows changes to company diversity policy and reorganization of 10 teams within Google Research. These include Ethical AI, now under Google VP Marian Croak, who will report directly to AI chief Jeff Dean. As part of the change, Google will double staff devoted to employee retention and enact policy to engage HR specialists when certain employee exits are deemed sensitive. While Google CEO Sundar Pichai mentioned better de-escalation strategies as part of the solution in a companywide memo, in an interview with VentureBeat, Gebru called his memo “dehumanizing” and an attempt to characterize her as an angry Black woman.

A Google spokesperson told VentureBeat in an email following the reorganization last month that diversity policy changes were undertaken based on the needs of the organization, not in response to any particular team at Google Research.

In the past year or so, Google’s Ethical AI team has explored a range of topics, including the need for a culture change in machine learning and an internal algorithm auditing framework, algorithmic fairness issues specific to India, the application of critical race theory and sociology, and the perils of scale.

The past weeks and months have seen a rash of reporting about the poor experiences of Black people and women at Google, as well as reporting that raises concerns about corporate influence over AI ethics research. Reuters reported in December 2020 that AI researchers at Google were told to strike a positive tone when referring to “sensitive” topics. Last week, Reuters reported that Google will reform its approach to research review, and it detailed additional instances of interference in AI research. According to an email obtained by Reuters, the coauthor of another paper about large language models referred to edits made by Google’s legal department as “deeply insidious.”

In recent days, the Washington Post has detailed how Google treats candidates from historically Black colleges and universities in a separate and unequal fashion, and NBC News reported that Google employees who experienced racism or sexism were told by HR to “assume good intent” and encouraged to take mental health leave instead of addressing the underlying issues.

Instances of gender discrimination and toxic work environments for women and people of color have been reported at other major tech companies, including Amazon, Dropbox, Facebook, Microsoft, and Pinterest. Last month, VentureBeat reported that dozens of current and former Dropbox employees, particularly women of color, reported witnessing or experiencing gender discrimination at their company. Former Pinterest employee Ifeoma Ozoma, who previously spoke with VentureBeat about whistleblower protections, helped draft the proposed Silenced No More Act in California last month. If passed, that law would allow employees to report discrimination even if they have signed a non-disclosure agreement.

The letter published by Google employees today follows other correspondence sent to Google company leadership since Gebru was fired in December 2020. Thousands of Google employees signed a Google Walkout letter protesting the way Gebru was treated and “unprecedented research censorship.” That letter also called for a public inquiry into Gebru’s termination for the sake of Google users and employees. Members of Congress with records of proposing regulation like the Algorithmic Accountability Act, including Rep. Yvette Clarke (D-NY) and Senator Cory Booker (D-NJ), also sent Google CEO Sundar Pichai an email questioning the way Gebru was fired, Google’s research integrity, and steps the company takes to mitigate bias in large language models.

About a week after Gebru was fired, members of the Ethical AI team sent their own letter to company leadership. According to a copy obtained by VentureBeat, Ethical AI team members demanded Gebru be reinstated and Samy Bengio remain the direct report manager for the Ethical AI team. They also stated that reorganization is sometimes used for “shunting workers who’ve engaged in advocacy and organizing into new roles and managerial relationships.” The letter described Gebru’s termination as having a demoralizing effect on the Ethical AI team and outlined a number of steps needed to re-establish trust. That letter cosigns letters of support for Gebru from Google’s Black Researchers group and the DEI Working Group. A Google spokesperson told VentureBeat an investigation was carried out by outside counsel but declined to share details. The Ethical AI letter also demands Google maintain and strengthen its team, guarantee the integrity of independent research, and clarify its sensitive review process by the end of Q1 2021. And it calls for a public statement that guarantees research integrity at Google, including in areas tied to the company’s business interests, such as large language models or datasets like JFT-300M, a dataset with over a billion labeled images.

A Google spokesperson said Croak will oversee the work of about 100 AI researchers going forward. A source familiar with the matter told VentureBeat a reorganization that brings Google’s numerous AI fairness efforts under a single leader makes sense and had been discussed before Gebru was fired. The question, they said, is whether Google will fund fairness testing and analysis.

“Knowing what these communities need consistently becomes hard when these populations aren’t necessarily going to make the company a bunch of money,” a person familiar with the matter told VentureBeat. “So yeah, you can put us all under the same team, but where’s the money at? Are you going to give a bunch of headcount and jobs so that people can actually go do this work inside of products? Because these teams are already overtaxed — like these teams are really, really small in comparison to the products.”

Before Gebru and Mitchell, Google walkout organizers Meredith Whittaker and Claire Stapleton claimed they were the targets of retaliation before leaving the company, as did employees who attempted to unionize, many of whom identify as queer. Shortly before Gebru was fired, the National Labor Relations Board filed a complaint against Google that accuses the company of retaliating against and illegally spying on employees.

The AI Index, an annual accounting of performance advances and AI’s impact on startups, business, and government policy, was released last week and found that the United States differs from other countries in its quantity of industry-backed research. The index report also called for more fairness benchmarks, found that Congress is talking about AI more than ever, and cites research finding only 3% of AI Ph.D. graduates in the U.S. are Black and 18% are women. The report notes that AI ethics incidents — including Google firing Gebru — were among the most popular AI news stories of 2020.

VentureBeat requested an interview with Google VP Marian Croak, but a Google spokesperson declined on her behalf.

In a related matter, VentureBeat analysis about the “fight for the soul of machine learning” was cited in a paper published this week at FAccT about power, exclusion, and AI ethics education.





Florida’s justification for raiding COVID whistleblower Rebekah Jones is looking shaky

On Tuesday, Florida state police entered the home of Rebekah Jones with guns drawn, seizing her computer and phone, in an attempt to prove that she’d sent an unauthorized “group text” through “a Department of Health messaging system” that is “to be used for emergencies only,” according to authorities.

There are now two reasons why that’s significant. First, as we reported at the time, Jones isn’t just any former Florida Department of Health employee: she’s the whistleblower who built Florida’s once-celebrated COVID-19 tracking dashboard, then accused her bosses of ordering her to manipulate Florida’s data to justify reopening the state.

Second, it’s now come to our attention that the supposedly private messaging system Jones allegedly accessed may have effectively been just an email address, one that the Florida Department of Health may have inadvertently published for anyone to see on the open web.

As Ars Technica reports, Redditors discovered that not only does the Florida Department of Health have a single shared username and password, but that username and password is also freely accessible on the web. Here’s a redacted screenshot that Ars captured of just one of at least seven PDFs that contain the information, PDFs that I also easily found with a Google search. All of them are still online at the time I type these words:

Screenshot by Ars Technica

But it’s not just the username and password that are listed: these pages also have the email address of the exact group Florida’s Department of Law Enforcement (FDLE) claimed was hacked: “StateESF8.Planning.” You don’t even need a username and password to send an email to an email address; anyone can do that.

In the FDLE’s affidavit — which is how it got a search warrant for Jones’ home — the department characterizes StateESF8.Planning as a “multi-user account group” and talks about how Florida uses it to “coordinate the state’s health and medical resources, capabilities, and capacities.” That all sounds very official and important:

A small portion of the affidavit.

However, the publicly available usernames, passwords, and email addresses suggest it might have just been a bog-standard mailing list with an awful lot of users, not something particularly private or secure. The email address still appears to be valid, though the Florida webmail application no longer seems to be online.

None of this necessarily means that Jones didn’t send the message (though she vehemently denies she did). An FDLE agent under oath says the “group text” was specifically sent from a Comcast IP address associated with her home address, and that’s why her home was raided.

But if Jones did happen to send an email to a giant mailing list she used to be part of, one listed on the open web, would that be much of a crime? (I am not a lawyer.)

I asked the FDLE to explain how it could have been accessed illegally — if the email address might have required someone to use private credentials somehow — but it declined, citing the active investigation. A spokesperson simply stated that my suggestions were “not accurate,” and that “this was not simply an email.” The Florida Department of Health didn’t respond to a request for comment.

On Wednesday, a Republican attorney appointed by Florida governor Ron DeSantis to nominate judges resigned in protest over the raid on Jones’ house, calling it “unconscionable.”

“You don’t send 12 armed officers to raid her computer for doing that. That’s Gestapo. That’s authoritarian dictator tactics. That’s not America. It really viscerally bothered me,” he told the Sarasota Herald-Tribune.

Update, 7:00PM ET: Added info about the resignation of Ron Filipkowski.



With guns drawn, police raid home and seize computers of COVID-19 data whistleblower

Eight months ago, Deborah Birx of the White House Coronavirus Task Force praised Florida’s COVID-19 dashboard as an example of “the kind of knowledge and power we need to put into the hands of the American people.” That dashboard was built by Rebekah Jones.

But in May, Jones was fired by the Florida Department of Health for reportedly refusing to manipulate that data to justify reopening the state early — and now, Florida state police have raided her home and seized the equipment she was using to maintain a new, independent COVID-19 tracker of her own.

Jones posted a series of tweets about the incident, including a video of police entering — with guns drawn.

Florida’s Department of Law Enforcement (FDLE) confirmed to the Miami Herald and the Tallahassee Democrat that police had a search warrant and had seized her equipment. Here’s the department’s full statement as provided to The Verge:

“This morning FDLE served a search warrant at a residence on Centerville Court in Tallahassee, the home of Rebekah Jones. FDLE began an investigation November 10, 2020 after receiving a complaint from the Department of Health regarding unauthorized access to a Department of Health messaging system which is part of an emergency alert system, to be used for emergencies only. Agents believe someone at the residence on Centerville Court illegally accessed the system.

“When agents arrived, they knocked on the door and called Ms. Jones in an attempt to minimize disruption to the family. Ms. Jones refused to come to the door for 20 minutes and hung-up on agents. After several attempts and verbal notifications that law enforcement officers were there to serve a legal search warrant, Ms. Jones eventually came to the door and allowed agents to enter. Ms. Jones family was upstairs when agents made entry into the home.”

As the Tampa Bay Times reported last month, someone mysteriously sent an unauthorized message to the state’s emergency public health and medical coordination team, reading “speak up before another 17,000 people are dead. You know this is wrong. You don’t have to be a part of this. Be a hero. Speak out before it’s too late.”

According to an affidavit provided to us by the FDLE, law enforcement believes that Jones or someone at her address was the one who sent it. We’re not publishing the affidavit because it contains lots of personally identifying information, but the FDLE claims the message was sent from a Comcast IP associated with her home address and email address, and the affidavit asks permission to seize and search all computer equipment police might find.

But the COVID-19 data scientist says she didn’t do it, repeatedly denying to CNN in a full video interview that she’d accessed the system or sent any message, and saying that the message doesn’t reflect how she talks and that the number of deaths it quoted was wrong. She suggested that Florida police already knew she didn’t send the message, because they didn’t seize her router or her husband’s computer — only her own computer and phone.

“They took my phone, and they took the computer that I used to run my companies. On my phone is every communication I’ve ever had with someone who works at the state who’s come to me in confidence and told me about things that could get them fired or in trouble,” she told CNN, suggesting that the raid was designed to intimidate whistleblowers and critics of Florida governor Ron DeSantis.

A spokesperson for DeSantis told CNN his office had no knowledge of the investigation.

While there was a suggestion last month that the Florida messaging system might have been hacked rather than simply improperly accessed, it apparently didn’t have particularly strong security anyhow: the affidavit says all of the registered users shared the same username and password.

Jones didn’t comment when we asked, but on Twitter she says she’s getting a new computer and will continue to update her new website.

Additional reporting by Mitchell Clark

Update December 8th, 1:01PM ET: Added that Jones has vehemently denied the allegations in an interview with CNN.
