Data leak from Russian delivery app shows dining habits of the secret police

A massive data leak from Russian food delivery service Yandex Food revealed the delivery addresses, phone numbers, names, and delivery instructions belonging to those associated with Russia’s secret police, according to findings from Bellingcat.

Yandex Food, a subsidiary of the larger Russian internet company Yandex, first reported the data leak on March 1st, blaming it on the “dishonest actions” of one of its employees and noting that the leak doesn’t include users’ login information. Russian communications regulator Roskomnadzor has since threatened to fine the company up to 100,000 rubles (~$1,166 USD) for the leak, which Reuters says exposed the information of about 58,000 users. Roskomnadzor also blocked access to an online map containing the data — an attempt to conceal the information of ordinary citizens, as well as those with ties to the Russian military and security services.

Researchers at Bellingcat gained access to the trove of information, sifting through it for leads on any people of interest, such as an individual linked to the poisoning of Russian opposition leader Alexey Navalny. By searching the database for phone numbers collected as part of a previous investigation, Bellingcat uncovered the name of the person who was in contact with Russia’s Federal Security Service (FSB) to plan Navalny’s poisoning. Bellingcat says this person also used his work email address to register with Yandex Food, allowing researchers to further ascertain his identity.
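The cross-referencing technique described above, matching phone numbers from one dataset against records in another, amounts to a keyed join. A minimal sketch of the idea in Python, using entirely invented records and numbers (nothing here comes from the actual leak):

```python
# Hypothetical illustration of cross-referencing two datasets by a shared
# key (a phone number). All records and numbers below are invented.
orders = [
    {"phone": "+70000000001", "name": "A. Ivanov", "email": "a.ivanov@example.gov"},
    {"phone": "+70000000002", "name": "B. Petrov", "email": "b.petrov@example.com"},
]

# Phone numbers collected in a prior, separate investigation:
numbers_of_interest = {"+70000000001"}

# Build an index over the larger dataset once, then each lookup is O(1).
by_phone = {record["phone"]: record for record in orders}
matches = [by_phone[n] for n in numbers_of_interest if n in by_phone]
print(matches)
```

Registration details attached to a matched record, such as a work email address, are what let researchers move from a bare phone number to an identity.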

Researchers also examined the leaked information for the phone numbers belonging to individuals tied to Russia’s Main Intelligence Directorate (GRU), the country’s foreign military intelligence agency. They found the name of one of these agents, Yevgeny, and were able to link him to Russia’s Ministry of Foreign Affairs and find his vehicle registration information.

Bellingcat uncovered some valuable information by searching the database for specific addresses as well. When researchers looked for the GRU headquarters in Moscow, they found just four results — a potential sign that workers just don’t use the delivery app, or opt to order from restaurants within walking distance instead. When Bellingcat searched for FSB’s Special Operation Center in a Moscow suburb, however, it yielded 20 results. Several results contained interesting delivery instructions, warning drivers that the delivery location is actually a military base. One user told their driver “Go up to the three boom barriers near the blue booth and call. After the stop for bus 110 up to the end,” while another said “Closed territory. Go up to the checkpoint. Call [number] ten minutes before you arrive!”

In a translated tweet, Russian politician and Navalny supporter Lyubov Sobol said the leaked information even led to additional information about Russian President Vladimir Putin’s former mistress and their alleged “secret” daughter. “Thanks to the leaked Yandex database, another apartment of Putin’s ex-mistress Svetlana Krivonogikh was found,” Sobol said. “That’s where their daughter Luiza Rozova ordered her meals. The apartment is 400 m², worth about 170 million rubles [~$1.98 million USD]!”

If researchers were able to uncover this much information based on data from a food delivery app, it’s a bit unnerving to think about the amount of information Uber Eats, DoorDash, Grubhub, and others have on users. In 2019, a DoorDash data breach exposed the names, email addresses, phone numbers, delivery order details, delivery addresses, and the hashed, salted passwords of 4.9 million people — a much larger number than those affected in the Yandex Food leak.
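“Hashed and salted,” as in the DoorDash disclosure, means passwords were never stored in plain text: each one was combined with a unique random salt and run through a one-way function, so even a leaked database doesn’t directly reveal anyone’s password. A minimal sketch of the standard approach using only Python’s standard library (the iteration count and salt size here are illustrative choices, not DoorDash’s actual parameters):

```python
import hashlib
import hmac
import os

def hash_password(password: str):
    """Return (salt, digest) using PBKDF2-HMAC-SHA256.

    The random per-user salt ensures that two users with the same
    password still get different digests, defeating precomputed tables.
    """
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return salt, digest

def verify_password(password: str, salt: bytes, expected: bytes) -> bool:
    """Re-derive the digest with the stored salt; compare in constant time."""
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return hmac.compare_digest(candidate, expected)

salt, digest = hash_password("correct horse battery staple")
assert verify_password("correct horse battery staple", salt, digest)
assert not verify_password("wrong guess", salt, digest)
```

Salting doesn’t make a breach harmless, but it forces attackers to crack each password individually rather than looking up all of them at once.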

Repost: Original Source and Author Link


AI Weekly: The perils of AI analytics for police body cameras


In 2015, spurred by calls for greater police accountability, the federal government provided more than $23 million to local and tribal police agencies to expand their use of body cameras. As of 2016, 47% of the country’s roughly 15,300 general-purpose law enforcement agencies had purchased body cameras, according to a report by the Bureau of Justice Statistics, the most recent study measuring nationwide usage.

Evidence on their efficacy is mixed — a recent comprehensive review of 70 studies of body camera use found that they had no consistent or statistically significant effects — but advocates assert that body cameras can deter bad behavior on the part of officers while reducing the number of citizen complaints. However, an outstanding technological challenge with body cameras is making sense of the sheer volume of footage they produce. By one estimate, the average officer’s body camera will record about 32 files, 7 hours, and 20GB of video per month at 720p resolution.
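That per-officer figure compounds quickly at department scale. A back-of-the-envelope calculation using the article’s estimate (the 500-officer department is a hypothetical example):

```python
# Scale the article's per-officer estimate: ~20 GB of 720p body cam
# video per officer per month.
GB_PER_OFFICER_MONTH = 20

def yearly_footage_gb(officers: int, gb_per_month: float = GB_PER_OFFICER_MONTH) -> float:
    """Total video a department produces in a year, in gigabytes."""
    return officers * gb_per_month * 12

# A hypothetical mid-sized department of 500 officers:
total_gb = yearly_footage_gb(500)
print(total_gb)         # 120000 GB per year
print(total_gb / 1024)  # roughly 117 TB
```

At those volumes, no human review process can watch everything, which is the gap automated analytics vendors are pitching to fill.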

A relatively new startup, Truleo, claims to solve this problem with a platform that leverages AI to analyze body cam footage as it comes in. Truleo — which has raised $2.5 million in seed funding — converts the data into “actionable insights,” CEO and cofounder Anthony Tassone claims, using natural language processing and machine learning models to categorize incidents captured by the cameras.

The Seattle Police Department is one of the company’s early customers.

“Truleo analyzes the audio stream within body camera videos — we analyze the conversation between the officer and the public,” Tassone told VentureBeat via email. “We specifically highlight the ‘risky’ language the officer uses, which most often means surfacing directed profanity or using extremely rude language. However, we can also highlight officer shouting commands, so the command staff can evaluate the effectiveness of the subject compliance.”

Potentially flawed AI

Tassone says that Truleo’s AI models were built by its data scientists and law enforcement experts looking for “de-escalation, auto-flagging of incidents, or early warning for volatile interactions” to generate searchable reports. The models can recognize whether a call involves drugs, theft, or a foot chase, and whether there’s profanity or shouting, he claims. Truleo quantifies the classifications as metrics, such as the percentage of “negative interactions” an officer has on a monthly basis and what police language is “effective.”

“Obviously, a call that ends in an arrest is going to be negative. But what if an officer has an overwhelming amount of negative interactions but a below-average number of arrests?  Is he or she going through something in their personal lives? Perhaps something deeply personal such as a divorce or maybe the officer was shot at last week.  Maybe they need some time off to cool down or to be coached by more seasoned officers. We want to help command staff be more proactive about identifying risky behavior and improving customer service tactics — before the officer loses their job or ends up on the news.”

But some experts are concerned about the platform’s potential for misuse, especially in the surveillance domain. “[Body cam] footage doesn’t just contain the attitude of the officer; it also contains all comments by the person they were interacting with, even when no crime was involved, and potentially conversations nearby,” University of Washington AI researcher Os Keyes told VentureBeat via email. “This is precisely the kind of thing that people were worried about when they warned about the implications of body cameras: police officers as moving surveillance cameras.”


Above: Truleo’s analytics dashboard.

Image Credit: Truleo

Keyes also pointed out that natural language processing and sentiment analysis are far from perfect sciences. Aside from prototypes, AI systems struggle to recognize examples of sarcasm — particularly systems trained on text data alone. Natural language processing models can also exhibit prejudices along race, ethnic, and gender lines, for example associating “Black-aligned English” with higher toxicity or negative emotions like anger, fear, and sadness.

Speech recognition systems like the kind used by Truleo, too, can be discriminatory. In a study commissioned by the Washington Post, popular smart speakers made by Google and Amazon were 30% less likely to understand non-American accents than those of native-born users. More recently, the Algorithmic Justice League’s Voice Erasure project found that speech recognition systems from Apple, Amazon, Google, IBM, and Microsoft collectively achieve word error rates of 35% for African American voices versus 19% for white voices.
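Word error rate, the metric behind those 35% and 19% figures, is the word-level edit distance (substitutions, deletions, and insertions) between a reference transcript and the system’s output, divided by the number of words in the reference. A minimal sketch using classic Levenshtein dynamic programming (the example sentences are invented):

```python
def word_error_rate(reference: str, hypothesis: str) -> float:
    """WER = (substitutions + deletions + insertions) / reference word count."""
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edit distance between ref[:i] and hyp[:j]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,         # deletion
                           dp[i][j - 1] + 1,         # insertion
                           dp[i - 1][j - 1] + cost)  # substitution or match
    return dp[len(ref)][len(hyp)] / len(ref)

# One substituted word out of five gives a WER of 0.2 (20%).
print(word_error_rate("call the precinct at noon", "call a precinct at noon"))
```

A 35% WER means roughly one word in three is wrong, which matters enormously when the transcript is feeding downstream classifiers that flag “risky” language.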

“If it works, it’s dangerous. If it doesn’t work — which is far more likely — the very mechanism through which it is being developed and deployed is itself a reason to mistrust it, and the people using it,” Keyes said.

According to Tassone, Truleo consulted with officials on police accountability boards to define what interactions should be identified by its models to generate reports. To preserve privacy, the platform converts footage into an MP3 audio file during the upstream process “in memory” and deletes the stream after analysis in AWS GovCloud, writing nothing to disk.

“Truleo’s position is that this data 100% belongs to the police department,” Tassone added. “We aim to accurately transcribe about 90% of the audio file correctly … More importantly, we classify the event inside the audio correctly over 99% of the time … When customers look at their transcripts, if anything is incorrect, they can make those changes in our editor and submit them back to Truleo, which automatically trains new models with these error corrections.”

When contacted for comment, Axon, one of the world’s largest producers of police body cameras, declined to comment on Truleo’s product but said: “Axon is always exploring technologies that have [the] potential for protecting lives and improving efficiency for our public safety customers. We gear towards developing responsible and ethical solutions that are reliable, secure, and privacy-preserving.”

In a recent piece for Security Info Watch, Anthony Treviño, the former assistant chief of police for San Antonio, Texas, and a Truleo advisor, argued that AI-powered body cam analytics platforms could be used as a teaching tool for law enforcement. “For example, if an agency learns through body camera audio analytics that a certain officer has a strong ability to de-escalate or control deadly force during volatile situations, the agency can use that individual as a resource to improve training across the entire force,” he wrote.

Given AI’s flaws and studies showing that body cams don’t reduce police misconduct on their own, however, Treviño’s argument would appear to lack merit. “Interestingly, although their website includes a lot of statistics about time and cost savings, it doesn’t actually comment on whether it changes the outcomes in any way,” Mike Cook, an AI researcher at Queen Mary University of London, told VentureBeat via email. “Truleo claims they provide ‘human accuracy at scale’ — but if we already doubt the existing accuracy provided by the humans involved, what good is it to replicate it at scale? What good is a 50% reduction in litigation time if it leads to the same amount of unjust, racist, or wrongful police actions? A faster-working unfair system is still unfair.”

For AI coverage, send news tips to Kyle Wiggers — and be sure to subscribe to the AI Weekly newsletter and bookmark our AI channel, The Machine.

Thanks for reading,

Kyle Wiggers

AI Staff Writer


VentureBeat’s mission is to be a digital town square for technical decision-makers to gain knowledge about transformative technology and transact.

Our site delivers essential information on data technologies and strategies to guide you as you lead your organizations. We invite you to become a member of our community, to access:

  • up-to-date information on the subjects of interest to you
  • our newsletters
  • gated thought-leader content and discounted access to our prized events, such as Transform 2021: Learn More
  • networking features, and more

Become a member



Police arrest 150 suspects after closure of dark web’s largest illegal marketplace

A 10-month investigation following the closure of the dark web’s largest illegal marketplace, DarkMarket, has resulted in the arrest of 150 suspected drug vendors and buyers.

DarkMarket was taken offline earlier this year as part of an international operation. The site boasted some 500,000 users and facilitated around 320,000 transactions, reports the EU’s law enforcement agency, Europol, with clientele buying and selling everything from malware and stolen credit card information, to weapons and drugs. When German authorities arrested the site’s alleged operator in January this year, they also seized valuable evidence of transactions which led to this week’s arrest of key players.

According to the US Department of Justice and Europol, Operation Dark HunTor saw law enforcement make numerous arrests in the United States (65), Germany (47), the United Kingdom (24), Italy (4), the Netherlands (4), France (3), Switzerland (2), and Bulgaria (1). More than $31.6 million in cash and cryptocurrencies were seized during the arrests, as well as 45 firearms and roughly 234 kilograms of drugs including cocaine, opioids, amphetamine, MDMA, and fentanyl. According to the DoJ: “A number of investigations are still ongoing.”

As part of the operation, Italian authorities also shut down two other dark web marketplaces — DeepSea and Berlusconi — arresting four alleged administrators and seizing €3.6 million ($4.17 million) in cryptocurrency.

The operation was conducted across the US, Europe, and Australia.
Image: Europol

Although the dark web was once considered to be a relatively safe haven for those selling and buying drugs, international operations like Dark HunTor have seen regular arrests of suspects and speedy closure of marketplaces. The list of dark web markets closed just in recent years is extensive, including Dream, WallStreet, White House, DeepSea, and DarkMarket. Although law enforcement certainly has to play Whac-A-Mole with such sites, with new markets springing up as soon as established ones are closed, doing so makes it harder for buyers and sellers to build steady businesses.

“The point of operations such as the one today is to put criminals operating on the dark web on notice: the law enforcement community has the means and global partnerships to unmask them and hold them accountable for their illegal activities, even in areas of the dark web,” said Europol’s Deputy Executive Director of Operations, Jean-Philippe Lecouffe, in a press statement.



Ring to make police video requests public – as long as you give Amazon your data

Amazon yesterday announced its ‘Neighbors by Ring‘ app would make police video requests “public” going forward. That would be really cool, if it were true.

Up front: Amazon purchased Ring for $2 billion a few years back. The company sells doorbells with cameras installed that offer owners the peace of mind that comes with having 24/7 surveillance on their property.

The big sell is that it’ll help keep your community safe and protect your precious Amazon deliveries from being stolen.

Amazon keeps all the videos and data related to your Ring account and partners with law enforcement agencies to allow officers to reach out to people using the ‘Neighbors by Ring‘ app to request video evidence and information.

The big deal: On the surface this sounds awesome. If someone commits a crime in your neighborhood and everyone has a Ring doorbell installed, there’s a pretty good chance someone’s surveillance system caught something.

But the reality of surveillance is entirely different. There are literally hundreds of studies and thousands of cautionary tales that have been written about the dangers of surveillance.

And Ring is arguably more dangerous than any other kind of surveillance system because it invades our homes.

The big problem: If you want Amazon to conduct 24/7 surveillance on you, your family, and your neighbors all you have to do is buy a Ring camera and install the Neighbors app.

But if you do not want Amazon to conduct 24/7 surveillance on you, your family, and your neighbors, there’s absolutely no way for you to opt out.

Simply put: if your neighbor has a Ring camera and your home, yard, garage, or car is in its typical field of view, Amazon is spying on you without your permission and there’s nothing you can do about it. If you drive past a Ring camera on your way to work, Amazon has that video. If you drive past 10, Amazon has 10 videos.

What now?

Yesterday, Amazon said it would make all police requests for videos and data “public,” but that appears to just be another ploy to get user data.

You cannot access the “public” police data requests unless you sign up for the Neighbors app by Ring. And you can’t use the app unless you create an account. And, per Amazon, you’re legally obligated to provide the company with your real name and identity – meanwhile, for obvious reasons, the app doesn’t work unless it has your location.

Here’s the relevant passage from the app’s TOS:

You promise to provide us with accurate, complete, and up-to-date registration information about yourself. You may not select as your User ID a name that you do not have the right to use, or the name of another person with the intention of impersonating that person.

Other interesting tidbits in the TOS include notification that use of the app constitutes waiving your right to a trial by jury or to participate in a class-action lawsuit, and agreeing that any video recorded remains your sole property and that any legal issues stemming from the data obtained by Amazon and Ring are your sole responsibility, while the company retains full, irrevocable legal use of your data in perpetuity.

Bottom line: If you don’t use the app, you can’t opt out of the Ring network. And if you don’t give Amazon your location data you can’t see how police are using the app.

Amazon’s Ring system represents a clear and targeted threat to privacy. The false transparency of making police requests “public” for app users doesn’t change that.

Did you know we have a newsletter all about consumer tech? It’s called Plugged In – and you can subscribe to it right here.



Amazon extends ban on police use of facial recognition software


(Reuters) — Amazon said on Tuesday it is extending until further notice a moratorium it imposed last year on police use of its facial recognition software.

The company had halted the practice for one year starting in June 2020. Its announcement came at the height of protests across the United States against police brutality toward people of color, sparked by the killing of George Floyd, a Black man, during an arrest in Minnesota.

Civil liberties advocates have long warned that inaccurate face matches by law enforcement could lead to unjust arrests, as well as to a loss of privacy and chilled freedom of expression.

Amazon’s extension, which Reuters was first to report, underscores how facial recognition remains a sensitive issue for big companies. The world’s largest online retailer did not comment on the reason for its decision.

Last year, it said it hoped Congress would put in place rules to ensure ethical use of the technology, though no such law has materialized.

Amazon also faced calls this month from activists who wanted its software ban to be permanent.

Nathan Freed Wessler, a deputy project director at the American Civil Liberties Union, expressed support for Amazon’s move and called on federal and state governments to ban law enforcement’s use of the software.

“Face recognition technology fuels the over-policing of Black and Brown communities, and has already led to the false arrests and wrongful incarcerations of multiple Black men,” he said in a statement.

Amazon offers face-matching with “Rekognition,” a service from its cloud computing division. Customers relying on the program to find human trafficking victims have still had access to the facial recognition capabilities, Amazon has said.

Critics have noted research born out of a project called Gender Shades, which showed Rekognition struggled to determine the sex of individuals with darker skin tones. Amazon has contested this.

Due to Amazon’s prominence and prior defense of facial recognition, its moratorium has carried significance. Rival Microsoft said shortly after Amazon’s announcement last June that it would await U.S. federal regulation before selling its face recognition software to police.

Pharmacy chain Rite Aid also stopped use of the technology at its stores, it said the following month.





Hackers threaten to release DC police data in apparent ransomware attack

Washington, DC’s police department has confirmed its servers have been breached after hackers began leaking its data online, The New York Times reports. In a statement, the department confirmed it was aware of “unauthorized access on our server” and said it was working with the FBI to investigate the incident. The hacked data appears to include details on arrests and persons of interest.

The attack is believed to be the work of Babuk, a group known for its ransomware attacks. BleepingComputer reports that the gang has already released screenshots of the 250GB of data it’s allegedly stolen. One of the files is claimed to relate to arrests made following the January Capitol riots. The group warns it will start leaking information about police informants to criminal gangs if the police department doesn’t contact it within three days.

Washington’s police force, which is called the Metropolitan Police Department, is the third police department to be targeted in the last two months, according to the NYT, following attacks by separate groups against departments in Presque Isle, Maine and Azusa, California. The old software and systems used by many police forces are believed to make them more vulnerable to such attacks.

The targeting of police departments is believed to be part of a wider trend of attacks targeting government bodies. Twenty-six agencies are believed to have been hit by ransomware this year alone, with 16 of them seeing their data released online, according to Emsisoft ransomware analyst Brett Callow, Sky News notes. The Justice Department reports that the average ransom demand has grown to over $100,000 as the attacks surged during the pandemic.

The Biden administration is attempting to improve the USA’s cybersecurity defenses, with an executive order expected soon. The Justice Department also recently formed a task force to help defend against ransomware attacks, The Wall Street Journal reports. “By any measure, 2020 was the worst year ever when it comes to ransomware and related extortion events,” acting Deputy Attorney General John Carlin, who’s overseeing the task force, told the WSJ. “And if we don’t break the back of this cycle, a problem that’s already bad is going to get worse.”



Massachusetts on the verge of becoming first state to ban police use of facial recognition

Massachusetts lawmakers this week voted to ban the use of facial recognition by law enforcement and public agencies in a sweeping police reform bill that received significant bipartisan support. If signed into law, the bill would make Massachusetts the first state to fully ban the technology, following earlier bans on facial recognition in police body cameras and more limited, city-specific bans on the tech.

The bill, S.2963, marks yet another state government tackling the thorny ethical issue of unregulated facial recognition use in the absence of any federal guidance from Congress. It also includes bans on chokeholds and rubber bullets in addition to restrictions on tear gas and other crowd-control weapons, as reported by TechCrunch. It isn’t a blanket ban on facial recognition: police will still be able to run searches against the state’s driver’s license database, but only with a warrant, and law enforcement agencies will be required to publish annual transparency reports on those searches.

Massachusetts joins cities like Portland, Maine, and Portland, Oregon, as well as San Francisco and Oakland in Northern California, that have banned police use of facial recognition. Earlier this year, Boston became the first major East Coast city to bar police from purchasing and using facial recognition services, but the Massachusetts bill goes a step further in making the ban statewide. S.2963 passed 28-12 in the state senate and 92-67 in the Massachusetts House of Representatives on Tuesday, and it now awaits signing from Massachusetts Gov. Charlie Baker.

Use of facial recognition has become a controversial topic in the artificial intelligence industry and the broader tech policy sphere because of a lack of federal guidance regulating its use. That vacuum has allowed a number of companies — most prominently controversial firm Clearview AI — to step in and offer services to governments, law enforcement agencies, private companies, and even individuals, often without any oversight or records as to how it’s used and whether it’s even accurate.

In August, Clearview AI — which has sold access to its software and its database of billions of images, scraped in part from social media sites, to numerous government agencies and private companies — signed a contract with Immigration and Customs Enforcement. (In May, Clearview said it would stop selling its tech to private companies following a lawsuit brought against it for violating the Illinois Biometric Information Privacy Act, which, prior to these more recent city bans, was the only piece of US legislation regulating facial recognition use.)

A number of researchers have been sounding the alarm for years now that modern facial recognition, even when aided by advanced AI, can be flawed. Systems like Rekognition have been shown to have issues identifying the gender of darker-skinned individuals and suffer from other racial bias built into how the databases are constructed and how the models are trained on that data. Amazon in June banned police from using its facial recognition platform for one year, with the company saying it wants to give Congress “enough time to implement appropriate rules” governing the sale and use of the technology.

Amazon was following the lead of IBM, which announced that same month it would no longer develop the technology whatsoever after acknowledging criticism from researchers and activists over its potential use in racial profiling, mass surveillance, and other civil rights abuses.



Microsoft won’t sell facial recognition to police until Congress passes new privacy law

Microsoft has followed competitors Amazon and IBM in restricting how it provides facial recognition technology to third parties, in particular to law enforcement agencies. The company says that it does not currently provide the technology to police, but it’s now saying it will not do so until there are federal laws governing how it can be deployed safely and without infringing on human rights or civil liberties.

IBM said earlier this week it will outright cease all sales, development, and research of the controversial tech. Amazon on Wednesday said it would stop providing it to police for one year to give Congress time to put in place “stronger regulations to govern the ethical use of facial recognition technology.”

Microsoft president Brad Smith most closely echoed Amazon’s stance on Thursday in outlining the company’s new approach to facial recognition: not ruling out that it will one day sell the tech to police but calling for regulation first.

“As a result of the principles that we’ve put in place, we do not sell facial recognition technology to police departments in the United States today,” Smith told The Washington Post. “But I do think this is a moment in time that really calls on us to listen more, to learn more, and most importantly, to do more. Given that, we’ve decided that we will not sell facial recognition to police departments in the United States until we have a national law in place, grounded in human rights, that will govern this technology.”

Smith said Microsoft would also “put in place some additional review factors so that we’re looking into other potential uses of the technology that goes even beyond what we already have.” That seems to indicate Microsoft may still provide facial recognition to human rights groups to combat trafficking and other abuses, as Amazon said it would continue doing with its Rekognition platform.

Amid ongoing protests around the US and the world against racism and police brutality and a national conversation about racial injustice, the tech industry is reckoning with its own role in providing law enforcement agencies with unregulated, potentially racially biased technology.

Research has shown facial recognition systems, due to being trained using data sets composed of mostly white males, have significant trouble identifying darker-skinned people and even determining the gender of such individuals. Artificial intelligence researchers, activists, and lawmakers have for years sounded the alarm about selling the technology to police, warning not just against racial bias, but also human rights and privacy violations inherent in a technology that could contribute to the rise of surveillance states.

While Microsoft has previously sold police departments access to such technology, the company has taken a more principled approach since. Last year, Microsoft denied California law enforcement access to its facial recognition tech out of concern for human rights violations. It also announced it would no longer invest in third-party firms developing the tech back in March, following accusations that an Israeli startup Microsoft invested in provided the technology to the Israel government for spying on Palestinians. (Microsoft later declared that its internal investigation found that the company, AnyVision, “has not previously and does not currently power a mass surveillance program in the West Bank,” but it divested from the company nonetheless.)

Microsoft has been a vocal supporter of federal regulation that would dictate how such systems can be used and what protections will be in place to protect privacy and guard against discrimination. Smith himself has been publicly expressing concerns over the dangers of unregulated facial recognition since at least 2018. But the company was also caught last year providing a facial recognition dataset of more than 10 million faces, including images of many people who were not aware of and did not consent to their participation in the dataset. The company pulled the dataset offline only after a Financial Times investigation.

According to the American Civil Liberties Union (ACLU), Microsoft, as recently as this year, supported legislation in California that would allow police departments and private companies to purchase and use such systems. That’s following laws in San Francisco, Oakland, and other Californian cities that banned use of the technology by police and governments last year. The bill, AB 2261, failed last week, in a victory for the ACLU and a coalition of 65 organizations that came together to combat it.

Matt Cagle, the ACLU’s technology and civil liberties attorney with its Northern California branch, released this statement on Thursday regarding Microsoft’s decision:

When even the makers of face recognition refuse to sell this surveillance technology because it is so dangerous, lawmakers can no longer deny the threats to our rights and liberties. Congress and legislatures nationwide must swiftly stop law enforcement use of face recognition, and companies like Microsoft should work with the civil rights community — not against it — to make that happen. This includes halting its current efforts to advance legislation that would legitimize and expand the police use of facial recognition in multiple states nationwide.

It should not have taken the police killings of George Floyd, Breonna Taylor, and far too many other Black people, hundreds of thousands of people taking to the streets, brutal law enforcement attacks against protesters and journalists, and the deployment of military-grade surveillance equipment on protests led by Black activists for these companies to wake up to the everyday realities of police surveillance for Black and Brown communities. We welcome these companies finally taking action — as little and as late as it may be. We also urge these companies to work to forever shut the door on America’s sordid chapter of over-policing of Black and Brown communities, including the surveillance technologies that disproportionately harm them.

No company-backed bill should be taken seriously unless the communities most impacted say it is the right solution.

Repost: Original Source and Author Link


Amazon bans police from using its facial recognition technology for the next year

Amazon is announcing a one-year moratorium on allowing law enforcement to use its controversial Rekognition facial recognition platform, the e-commerce giant said on Wednesday.

The news comes just two days after IBM said it would no longer offer, develop, or research facial recognition technology, citing potential human rights and privacy abuses and research indicating facial recognition tech, despite the advances provided by artificial intelligence, remains biased along lines of age, gender, race, and ethnicity.

Much of the foundational work showing the flaws of modern facial recognition tech with regard to racial bias is thanks to Joy Buolamwini, a researcher at the MIT Media Lab, and Timnit Gebru, a researcher at Microsoft Research. Buolamwini and Gebru co-authored a widely cited 2018 paper that found error rates for facial recognition systems from major tech companies, including IBM and Microsoft, were dozens of percentage points higher when identifying darker-skinned individuals than when identifying lighter-skinned individuals. The issues lie in part with the data sets used to train the systems, which can be overwhelmingly male and white, according to a report from The New York Times.

In a separate 2019 study, Buolamwini and co-author Deborah Raji analyzed Rekognition and found that Amazon’s system too had significant issues identifying the gender of darker-skinned individuals, as well as mistaking darker-skinned women for men. The system worked with a near-zero error rate when analyzing images of lighter-skinned people, the study found.

Amazon tried to undermine the findings, but Buolamwini posted a lengthy and detailed response on Medium, in which she says, “Amazon’s approach thus far has been one of denial, deflection, and delay. We cannot rely on Amazon to police itself or provide unregulated and unproven technology to police or government agencies.” Buolamwini and Raji’s findings were later backed up by a group of dozens of AI researchers who penned an open letter saying Rekognition was flawed and should not be in the hands of law enforcement.

Amazon did not give a concrete reason for the decision beyond calling for federal regulation of the tech, although the company says it will continue providing the software to rights organizations dedicated to finding missing and exploited children and combating human trafficking. The unspoken context here, of course, is the death of George Floyd, a Black man killed by Minneapolis police officers, and ongoing protests around the US and the globe against racism and systemic police brutality.

It seems as if Amazon decided police cannot be trusted to use the technology responsibly, although the company has never disclosed just how many police departments actually use the tech. As of last summer, it appeared that only two departments — one in Oregon and one in Florida — were actively using Rekognition, and Orlando has since stopped. A much more widely used facial recognition system is that of Clearview AI, a secretive company now facing down a number of privacy lawsuits after scraping social media sites for photos and building a database of more than 3 billion photos that it sells to law enforcement.

In a statement given to The Verge, Clearview AI CEO Hoan Ton-That doubled down on the technology as an effective law enforcement tool. “While Amazon, Google, and IBM have decided to exit the marketplace, Clearview AI believes in the mission of responsibly used facial recognition to protect children, victims of financial fraud and other crimes that afflict our communities,” he said. Ton-That says Clearview’s technology “actually works,” but that facial recognition is “not intended to be used as a surveillance tool relating to protests or under any other circumstances.”

Beyond studies calling its effectiveness into question, Amazon has faced constant criticism over the years from activists, civil rights organizations like the ACLU, and lawmakers for selling police departments access to Rekognition, with all citing concerns about the lack of oversight into how the tech is used in investigations and the potential built-in bias that makes it unreliable and ripe for discrimination and other abuses.

Even after employees voiced concern about the tech in 2018, Amazon’s cloud chief Andrew Jassy said the company would continue to sell it to police. Only after media reports, activist campaigns, and the work of researchers like Buolamwini highlighted the pitfalls of police use of facial recognition tech like Rekognition did departments begin discontinuing their contracts with Amazon.

Here’s Amazon’s full note on the one-year ban:

We’re implementing a one-year moratorium on police use of Amazon’s facial recognition technology. We will continue to allow organizations like Thorn, the International Center for Missing and Exploited Children, and Marinus Analytics to use Amazon Rekognition to help rescue human trafficking victims and reunite missing children with their families.

We’ve advocated that governments should put in place stronger regulations to govern the ethical use of facial recognition technology, and in recent days, Congress appears ready to take on this challenge. We hope this one-year moratorium might give Congress enough time to implement appropriate rules, and we stand ready to help if requested.

Update June 10th, 7:17PM ET: Added additional information around studies finding evidence of racial bias in Amazon Rekognition and other facial recognition systems.

Update June 10th, 8:43PM ET: Added statement from facial recognition firm Clearview AI.



With guns drawn, police raid home and seize computers of COVID-19 data whistleblower

Eight months ago, Deborah Birx of the White House Coronavirus Task Force praised Florida’s COVID-19 dashboard as an example of “the kind of knowledge and power we need to put into the hands of the American people.” That dashboard was built by Rebekah Jones.

But in May, Jones was fired by the Florida Department of Health for reportedly refusing to manipulate that data to justify reopening the state early — and now, Florida state police have raided her home and seized the equipment she was using to maintain a new, independent COVID-19 tracker of her own.

Jones posted a series of tweets about the incident, including a video of police entering — with guns drawn.

Florida’s Department of Law Enforcement (FDLE) confirmed to the Miami Herald and the Tallahassee Democrat that police had a search warrant and had seized her equipment. Here’s the department’s full statement as provided to The Verge:

“This morning FDLE served a search warrant at a residence on Centerville Court in Tallahassee, the home of Rebekah Jones. FDLE began an investigation November 10, 2020 after receiving a complaint from the Department of Health regarding unauthorized access to a Department of Health messaging system which is part of an emergency alert system, to be used for emergencies only. Agents believe someone at the residence on Centerville Court illegally accessed the system.

“When agents arrived, they knocked on the door and called Ms. Jones in an attempt to minimize disruption to the family. Ms. Jones refused to come to the door for 20 minutes and hung-up on agents. After several attempts and verbal notifications that law enforcement officers were there to serve a legal search warrant, Ms. Jones eventually came to the door and allowed agents to enter. Ms. Jones family was upstairs when agents made entry into the home.”

As the Tampa Bay Times reported last month, someone mysteriously sent an unauthorized message to the state’s emergency public health and medical coordination team, reading “speak up before another 17,000 people are dead. You know this is wrong. You don’t have to be a part of this. Be a hero. Speak out before it’s too late.”

According to an affidavit provided to us by the FDLE, law enforcement believes that Jones or someone at her address was the one who sent it. We’re not publishing the affidavit because it contains lots of personally identifying information, but the FDLE claims the message was sent from a Comcast IP associated with her home address and email address, and the affidavit asks permission to seize and search all computer equipment police might find.

But the COVID-19 data scientist says she didn’t do it. In a video interview with CNN, she repeatedly denied that she’d accessed the system or sent any message, saying that the message doesn’t reflect how she talks and that the number of deaths it quoted was wrong. She suggested that Florida police already knew she didn’t send the message, because they didn’t seize her router or her husband’s computer — only her own computer and phone.

“They took my phone, and they took the computer that I used to run my companies. On my phone is every communication I’ve ever had with someone who works at the state who’s come to me in confidence and told me about things that could get them fired or in trouble,” she told CNN, suggesting that the raid was designed to intimidate whistleblowers and critics of Florida governor Ron DeSantis.

A spokesperson for DeSantis told CNN his office had no knowledge of the investigation.

While there was a suggestion last month that the Florida messaging system might have been hacked rather than simply improperly accessed, it apparently didn’t have particularly strong security anyhow: the affidavit says all of the registered users shared the same username and password.

Jones didn’t comment when we asked, but on Twitter she says she’s getting a new computer and will continue to update her new website.

Additional reporting by Mitchell Clark

Update December 8th, 1:01PM ET: Added that Jones has vehemently denied the allegations in an interview with CNN.
