How optimized object recognition is advancing tiny edge devices

Emza Visual Sense and Alif Semiconductor have demonstrated an optimized face detection model running on Alif’s Ensemble microcontroller, which is based on Arm IP. The two companies say the combination is well suited to low-power artificial intelligence (AI) at the edge.

The emergence of optimized silicon, models and AI and machine learning (ML) frameworks has made it possible to run advanced AI inference tasks such as eye tracking and face identification at the edge, at low-power and low cost. This opens up new use cases in areas such as industrial IoT and consumer applications.

Making edge devices orders of magnitude faster

By using Alif’s Ensemble microcontroller unit (MCU), which Alif claims is the first MCU built around the Arm Ethos-U55 microNPU, the AI model ran “an order of magnitude” faster than a CPU-only solution running on the Cortex-M55 at 400MHz. Alif appears to mean two orders of magnitude: the footnotes state that the high-performance U55 completed inference in 4ms versus 394ms for the M55 alone, roughly a 100x speedup, while the high-efficiency U55 executed the model in 11ms. The Ethos-U55 is part of Arm’s Corstone-310 subsystem, for which Arm launched new solutions in April.

Emza said the full, “sophisticated” face detection model it trained for the NPU handles face detection, yaw face-angle estimation and facial landmarks. The complete application code has been contributed to Arm’s open-source AI repository, the “ML Embedded Eval Kit,” making Emza the first Arm AI ecosystem partner to do so. The repository can be used to gauge runtime, CPU demand and memory allocation before silicon is available.

“To unleash the potential of endpoint AI, we need to make it easier for IoT developers to access higher performance, less complex development flows and optimized ML models,” said Mohamed Awad, vice president of IoT and embedded at Arm. “Alif’s MCU is helping redefine what is possible at the smallest endpoints and Emza’s contribution of optimized models to the Arm AI open-source repository will accelerate edge AI development.” 

Emza claims its visual sensing technology is already shipping in millions of products, and with this demonstration it is expanding its optimized algorithms to SoC vendors and OEMs.

“As we look at the dramatically expanding horizon for TinyML edge devices, Emza is focused on enabling new applications across a broad array of markets,” said Yoram Zylberberg, CEO of Emza. “There is virtually no limit to the types of visual sensing use cases that can be supported by new powerful, highly efficient hardware.”


Clearview AI is closer to getting a US patent for its facial recognition technology

Clearview AI is on track to receive a US patent for its facial recognition technology, according to a report from Politico. The company was reportedly sent a “notice of allowance” by the US Patent and Trademark Office, which means that once it pays the required administration fees, its patent will be officially approved.

Clearview AI builds its facial recognition database using images of people that it scrapes across social media (and the internet in general), a practice that has the company steeped in controversy. The company’s patent application details its use of a “web crawler” to acquire images, even noting that “online photos associated with a person’s account may help to create additional records of facial recognition data points,” which its machine learning algorithm can then use to find and identify matches.
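
Clearview hasn’t disclosed its matching pipeline, but the patent’s description fits the standard embedding-and-search pattern: each scraped face is reduced to a fixed-length vector, and a new photo is identified by finding the closest stored vector. The sketch below is a hypothetical, simplified illustration of that idea; the random vectors stand in for real face embeddings, and none of this is Clearview’s code.

```python
import numpy as np

def best_match(query_embedding, database):
    """Return the stored identity whose face embedding is most similar
    (by cosine similarity) to the embedding of a newly scraped photo.
    `database` maps an identity (e.g. a profile URL) to its embedding."""
    q = query_embedding / np.linalg.norm(query_embedding)
    scores = {
        identity: float(np.dot(q, vec / np.linalg.norm(vec)))
        for identity, vec in database.items()
    }
    best = max(scores, key=scores.get)
    return best, scores[best]

# Toy usage: random vectors stand in for embeddings produced by a face model.
rng = np.random.default_rng(0)
db = {f"profile_{i}": rng.normal(size=128) for i in range(1000)}
query = db["profile_42"] + rng.normal(scale=0.05, size=128)  # a noisy re-capture
print(best_match(query, db))  # expected: ("profile_42", score close to 1.0)
```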

Critics argue that Clearview AI’s facial recognition technology is a violation of privacy and that it may negatively impact minority communities. The technology is allegedly less accurate when identifying people of color and women, potentially leading to false arrests when used by law enforcement agencies.

Last year, the company said that its technology was used by over 2,400 police agencies — including the FBI and Department of Homeland Security — to identify suspects. In the aftermath of the Capitol riots this January, Clearview AI said the use of its technology by law enforcement sharply increased as detectives worked to identify those associated with the incident.

The American Civil Liberties Union sued the company last year for violating the Illinois Biometric Information Privacy Act, resulting in Clearview stopping the sale of its technology to private companies and non-law enforcement entities. In November, the Australian government ordered the company to remove all data on its citizens from its database, and earlier this year, a number of European agencies filed legal complaints against Clearview AI. In addition, a Canadian privacy commissioner called the company’s technology “illegal mass surveillance.”

Clearview AI hasn’t even been able to get on Big Tech’s good side. Last year, Facebook, LinkedIn, Twitter, and YouTube all sent cease-and-desist letters demanding that the company stop scraping images and videos from their platforms, as the practice violates each site’s policies.


Facebook is shutting down its Face Recognition tagging program

Meta (formerly known as Facebook) is discontinuing Facebook’s Face Recognition feature following a lengthy privacy battle. Meta says the change will roll out in the coming weeks. As part of it, the company will stop using facial recognition algorithms to tag people in photographs and videos, and it will delete the facial recognition templates that it uses for identification.

Meta artificial intelligence VP Jerome Pesenti calls the change part of a “company-wide move to limit the use of facial recognition in our products.” The move follows a lawsuit that accused Facebook’s tagging tech of violating Illinois’ biometric privacy law, leading to a $650 million settlement in February. Facebook previously restricted facial recognition to an opt-in feature in 2019.

“Looking ahead, we still see facial recognition technology as a powerful tool,” writes Pesenti in a blog post, citing possibilities like face-based identity verification. “But the many specific instances where facial recognition can be helpful need to be weighed against growing concerns about the use of this technology as a whole.” Pesenti notes that regulators haven’t settled on comprehensive privacy regulation for facial recognition. “Amid this ongoing uncertainty, we believe that limiting the use of facial recognition to a narrow set of use cases is appropriate.”

Pesenti says more than one-third of Facebook’s daily active users had opted into Face Recognition scanning, and over a billion face recognition profiles will be deleted as part of the upcoming change. Once the change takes effect, Facebook’s automated alt-text system for blind users will no longer name people when analyzing and summarizing media, and the site will no longer suggest people to tag in photographs or automatically notify users when they appear in photos and videos posted by others.

Facebook’s decision won’t stop independent companies like Clearview AI — which built huge image databases by scraping photos from social networks, including Facebook — from using facial recognition algorithms trained with that data. US law enforcement agencies (alongside other government divisions) work with Clearview AI and other companies for facial recognition-powered surveillance. State or national privacy laws would be needed to restrict the technology’s use more broadly.

By shutting down a feature it has used for years, Meta is hoping to bolster user confidence in its privacy protections as it prepares a rollout of potentially privacy-compromising virtual and augmented reality technology. The company launched a pair of camera-equipped smart glasses in partnership with Ray-Ban earlier this year, and it’s gradually launching 3D virtual worlds on its Meta VR headset platform. All these efforts will require a level of trust from users and regulators, and giving up Facebook auto-tagging, especially after a legal challenge to the program, is a straightforward way to earn it.


AI Weekly: Recognition of bias in AI continues to grow

This week, the Partnership on AI (PAI), a nonprofit committed to responsible AI use, released a paper addressing how technology — particularly AI — can accentuate various forms of biases. While most proposals to mitigate algorithmic discrimination require the collection of data on so-called sensitive attributes — which usually include things like race, gender, sexuality, and nationality — the coauthors of the PAI report argue that these efforts can actually cause harm to marginalized people and groups. Rather than trying to overcome historical patterns of discrimination and social inequity with more data and “clever algorithms,” they say, the value assumptions and trade-offs associated with the use of demographic data must be acknowledged.

“Harmful biases have been found in algorithmic decision-making systems in contexts such as health care, hiring, criminal justice, and education, prompting increasing social concern regarding the impact these systems are having on the wellbeing and livelihood of individuals and groups across society,” the coauthors of the report write. “Many current algorithmic fairness techniques [propose] access to data on a ‘sensitive attribute’ or ‘protected category’ (such as race, gender, or sexuality) in order to make performance comparisons and standardizations across groups. [But] these demographic-based algorithmic fairness techniques [remove] broader questions of governance and politics from the equation.”
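
As a concrete illustration of the “performance comparisons and standardizations across groups” those techniques rely on, here is a minimal sketch (a simplified example, not code from the PAI report) that computes per-group accuracy and positive rates for a classifier; producing it requires exactly the kind of labeled sensitive attribute the report scrutinizes.

```python
import numpy as np

def group_metrics(y_true, y_pred, sensitive):
    """Per-group accuracy and positive rate for a binary classifier.
    `sensitive` holds the demographic label recorded for each example."""
    y_true, y_pred, sensitive = map(np.asarray, (y_true, y_pred, sensitive))
    report = {}
    for group in np.unique(sensitive):
        mask = sensitive == group
        report[str(group)] = {
            "accuracy": float((y_pred[mask] == y_true[mask]).mean()),
            # Gaps in positive rate across groups are what "demographic
            # parity" style fairness checks look for.
            "positive_rate": float(y_pred[mask].mean()),
        }
    return report

# Toy example: a model's decisions broken down by a labeled attribute.
print(group_metrics(
    y_true=[1, 0, 1, 1, 0, 0, 1, 0],
    y_pred=[1, 0, 0, 1, 0, 1, 1, 0],
    sensitive=["a", "a", "a", "a", "b", "b", "b", "b"],
))
```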

The PAI paper’s publication comes as organizations take a broader and more critical view of AI technologies, in light of wrongful arrests, racist recidivism scoring, sexist recruitment tools, and erroneous grades perpetuated by AI. Yesterday, AI ethicist Timnit Gebru, who was controversially ejected from Google over a study examining the impacts of large language models, launched the Distributed Artificial Intelligence Research Institute (DAIR), which aims to ask questions about responsible use of AI and recruit researchers from parts of the world rarely represented in the tech industry. Last week, the United Nations Educational, Scientific and Cultural Organization (UNESCO) approved a series of recommendations for AI ethics, including regular impact assessments and enforcement mechanisms to protect human rights. Meanwhile, New York University’s AI Now Institute, the Algorithmic Justice League, and Data for Black Lives are studying the impacts and applications of AI algorithms, as are Khipu, Black in AI, Data Science Africa, Masakhane, and Deep Learning Indaba.

Legislators, too, are taking a harder look at AI systems — and their potential to harm. The U.K.’s Centre for Data Ethics and Innovation (CDEI) recently recommended that public sector organizations using algorithms be mandated to publish information about how the algorithms are being applied, including the level of human oversight. The European Union has proposed regulations that would ban the use of biometric identification systems in public and prohibit AI in social credit scoring across the bloc’s 27 member states. Even China, which is engaged in several widespread, AI-powered surveillance initiatives, has tightened its oversight of the algorithms that companies use to drive their business.

Pitfalls in mitigating bias

PAI’s work cautions, however, that efforts to mitigate bias in AI algorithms will inevitably encounter roadblocks due to the nature of algorithmic decision-making. If a system optimizes for a poorly defined goal, it is likely to reproduce historical inequity, possibly under the guise of objectivity. Attempting to ignore societal differences across demographic groups can also reinforce systems of oppression, because the demographic data encoded in datasets has an enormous impact on how marginalized peoples are represented. But deciding how to classify demographic data is an ongoing challenge, as demographic categories continue to shift and change over time.

“Collecting sensitive data consensually requires clear, specific, and limited use as well as strong security and protection following collection. Current consent practices are not meeting this standard,” the PAI report coauthors wrote. “Demographic data collection efforts can reinforce oppressive norms and the delegitimization of disenfranchised groups … Attempts to be neutral or objective often have the effect of reinforcing the status quo.”

At a time when relatively few major research papers consider the negative impacts of AI, leading ethicists are calling on practitioners to pinpoint biases early in the development process. For example, a program at Stanford — the Ethics and Society Review (ESR) — requires AI researchers to evaluate their grant proposals for any negative impacts. NeurIPS, one of the largest machine learning conferences in the world, mandates that coauthors who submit papers state the “potential broader impact of their work” on society. And in a whitepaper published by the U.S. National Institute of Standards and Technology (NIST), the coauthors advocate for “cultural effective challenge,” a practice that seeks to create an environment where developers can question steps in engineering to help identify problems.

Requiring AI practitioners to defend their techniques can incentivize new ways of thinking and help create change in approaches by organizations and industries, the NIST coauthors posit.

“An AI tool is often developed for one purpose, but then it gets used in other very different contexts. Many AI applications also have been insufficiently tested, or not tested at all in the context for which they are intended,” NIST scientist Reva Schwartz, a coauthor of the NIST paper, wrote. “All these factors can allow bias to go undetected … [Because] we know that bias is prevalent throughout the AI lifecycle … [not] knowing where [a] model is biased, or presuming that there is no bias, would be dangerous. Determining methods for identifying and managing it is a vital … step.”

For AI coverage, send news tips to Kyle Wiggers — and be sure to subscribe to the AI Weekly newsletter and bookmark our AI channel, The Machine.

Thanks for reading,

Kyle Wiggers

AI Staff Writer

VentureBeat


Black teen barred from skating rink by inaccurate facial recognition

A facial recognition algorithm used by a local roller skating rink in Detroit wouldn’t let teen Lamya Robinson onto the premises, and accused her of previously getting into a fight at the establishment.

But Robinson had never even been to the rink.

The facial recognition system had incorrectly matched her to another patron, she told Fox 2 Detroit. The rink removed her from the building and put her outside alone, her family says.

“To me, it’s basically racial profiling,” Juliea Robinson, her mother, told the TV station. “You’re just saying every young Black, brown girl with glasses fits the profile and that’s not right.”

The harms of facial recognition systems deployed in businesses and by police have been slowly coming to light as the technology is more widely used. Research into these algorithms has shown that they are far less accurate when distinguishing between the faces of Black people, women, and children, which might help explain the error faced by Lamya Robinson.

The highest-profile case of facial recognition leading to a wrongful arrest was also in Detroit, in the case of Robert Williams. Williams was arrested and detained for 30 hours in January 2020 after being accused of shoplifting from a Shinola watch store. He testified before the House Judiciary Committee, urging legislators to adopt the moratorium on the technology that was introduced as legislation in June 2020.

“I don’t want anyone to walk away from my testimony thinking that if only the technology was made more accurate, its problems would be solved,” Williams said in his testimony. “Even if this technology does become accurate at the expense of people like me, I don’t want my daughters’ faces to be part of some government database.”

The disparity in racial and gender accuracy, as well as the invasive nature of the technology, has led to civil rights organizations and politicians calling for bans. The American Civil Liberties Union has called for nationwide bans and is suing the Detroit Police Department on behalf of Williams for its misuse of the technology. Some states like Maine have already begun to limit police use of the technology. However, only Portland, Oregon, currently has laws limiting how private businesses can use facial recognition.

Civil rights nonprofit Fight for the Future announced that more than 35 other organizations had joined it in demanding that retailers stop using facial recognition in their stores. The group reiterated its position today after the report of Lamya Robinson’s experience getting kicked out of the skating rink.

“This is exactly why we think facial recognition should be banned in public places,” wrote Fight for the Future’s director of campaign and operations Caitlin Seeley George in a press release. “It’s also not hard to imagine what could have happened if police were called to the scene and how they might have acted on this false information.”


The Israeli army is using facial recognition to track Palestinians, former soldiers reveal

The Israeli military has deployed an extensive facial recognition program to track Palestinians in the Israeli-occupied West Bank, according to a new report by The Washington Post.

Former Israeli soldiers told the Post about a smartphone technology called “Blue Wolf,” which takes photos of Palestinians and stores them in a large-scale database. Once an image is captured, Blue Wolf matches that picture to a person in its database, and as the Post describes, soldiers’ phones will then flash a specific color that signifies if that individual should be arrested, detained, or left undisturbed.

The Post notes that the Israeli army has been filling up the database with thousands of images of Palestinians over the past two years, and it even held “competitions” that rewarded soldiers for taking the most photos of people. The database is essentially a “Facebook for Palestinians,” a former soldier told the Post.

The Israeli military has also set up cameras throughout the city of Hebron that scan Palestinians’ faces and identify them for soldiers at checkpoints. Meanwhile, a series of CCTV cameras, some of which point into people’s homes, provide live monitoring 24/7.

According to the Post, the former soldiers were told by the military that the surveillance system was put in place to prevent terrorism. Either way, Israel’s system takes facial recognition to a dystopian extreme.

The largest city in the West Bank, Hebron has seen bitter and long-standing conflict between Israeli and Palestinian populations. A large portion of the city is administered directly by the Israeli military, which enforces curfews and other movement restrictions on the local population. But even in the context of the extreme security measures, the former soldiers who spoke to the Post found the facial recognition system alarming.

“I wouldn’t feel comfortable if they used it in the mall in [my hometown], let’s put it that way,” a former soldier told the Post. “People worry about fingerprinting, but this is that several times over.”

There have been a number of similar systems implemented in other countries, and all have been controversial. China developed a similar facial recognition system to monitor the Uyghur minority population, although it’s unclear how widely the system was put into use. Moscow recently added facial recognition payment systems to hundreds of metro stations, while the UK launched a similar face-scanning payment system for schoolchildren at lunchtime.


Maine passes the strongest state facial recognition ban yet

The state of Maine now has the most stringent laws regulating government use of facial recognition in the country.

The new law prohibits government use of facial recognition except in specifically outlined situations, the broadest exception being when police have probable cause that an unidentified person in an image committed a serious crime, or for proactive fraud prevention.

Since Maine police will not have direct access to facial recognition, they will instead be able to ask the FBI and the Maine Bureau of Motor Vehicles (BMV) to run these searches for them.

Crucially, the law plugs loopholes that police have used in the past to gain access to the technology, like informally asking other agencies or third parties to run backchannel searches for them. Logs of all facial recognition searches by the BMV must be created and are designated as public records.

The ACLU trumpeted this new law as a major win for state action to block facial recognition.

“Maine is showing the rest of the country what it looks like when we the people are in control of our civil rights and civil liberties, not tech companies that stand to profit from widespread government use of face surveillance technology,” Michael Kebede, a lawyer at the ACLU of Maine, said in a press release.

The only other statewide facial recognition law was enacted by Washington in 2020, but many privacy advocates were dissatisfied with its specifics. The Washington law gives police generous carve-outs to conduct surveillance with the technology and also allows it to be used to deny access to services like housing or education enrollment. Notably, it was written by State Senator Joe Nguyen, who is a current employee of Microsoft.

Virginia and Massachusetts legislatures have also banned some police use of facial recognition, but both fall short of regulating the tech in schools and other state agencies.

Maine’s new law also gives citizens the ability to sue the state if they’ve been unlawfully targeted by facial recognition, which was notably absent from Washington’s regulation. If facial recognition searches are performed illegally, they must be deleted and cannot be used as evidence.

The law was enacted after passing the state legislature and will not require a signature from Maine Governor Janet Mills. It will go into effect on October 1st, 2021.


Federal agencies have almost no facial recognition oversight, report finds

A new report from the Government Accountability Office (GAO) has revealed a near-total lack of accountability among federal agencies using facial recognition built by private companies, like Clearview AI.

Of the 14 federal agencies that said they used privately built facial recognition for criminal investigations, only Immigration and Customs Enforcement was in the process of implementing a list of approved facial recognition vendors and a log sheet for the technology’s use.

The rest of the agencies, including Customs and Border Protection, the Federal Bureau of Investigation, and the Drug Enforcement Administration, had no process in place to track the use of private facial recognition.

This GAO report greatly expands the public’s knowledge of how the federal government uses facial recognition, distilling which agencies use facial recognition built by the government, which use third-party vendors, and how large those datasets are in each case. Of 42 federal agencies surveyed, 20 told the oversight agency they used facial recognition in some form, most relying on federal systems maintained by the Department of Defense and Department of Homeland Security.

These federal systems can hold a staggering number of identities: the Department of Homeland Security’s Automated Biometric Identification System holds more than 835 million identities, according to the GAO report.

Federal agencies were also asked how they used this technology during racial justice protests in the wake of George Floyd’s murder, as well as the Capitol Hill riot on January 6th.

Six agencies, including the FBI, US Marshals Service, and Postal Inspection Service, used facial recognition on “individuals suspected of violating the law” in protests last summer. Three agencies used the technology investigating the riot on January 6th: Capitol Police, Customs and Border Protection, and the Bureau of Diplomatic Security. However, some information was withheld from the GAO investigators as it pertained to active investigations.

The use of this technology on protestors and rioters shows how critical it is to have accountability mechanisms in place. The GAO explains that if these agencies don’t know which facial recognition services they’re using, they have no way to mitigate the enormous privacy, security, or accuracy risks inherent in the technology.

“When agencies use facial recognition technology without first assessing the privacy implications and applicability of privacy requirements, there is a risk that they will not adhere to privacy-related laws, regulations, and policies,” the report says.

In one case, GAO investigators asked a federal agency if it was using facial recognition built by private companies, and the agency said it was not. But after an internal poll, the unnamed agency learned that employees had run such facial recognition searches more than 1,000 times.

Going forward, the GAO has issued 26 recommendations to federal agencies on the continued use of facial recognition. They consist of two identical recommendations for each of the 13 agencies without an accountability mechanism in place: Figure out which facial recognition systems you’re using, and then study the risks of each.


Moscow adds facial recognition payment system to more than 240 metro stations

Moscow launched “Face Pay” on Friday, a facial recognition payment system implemented in more than 240 Mosmetro stations, “the largest use of facial recognition technology in the world,” officials claim (via The Guardian). The service relies on stored photographs to validate metro payments, an obvious privacy concern given the previous uses of facial recognition technology by the Russian capital’s law enforcement.

Face Pay requires metro riders to upload a photo and connect their bank and metro cards to the Mosmetro mobile app. With everything uploaded, all you need to do is look at the camera posted above the turnstiles to make it in time for your next train. Moscow authorities expect 10 to 15 percent of riders to use Face Pay “regularly” in the next two to three years, the hope being that less time spent swiping and paying for rides will translate into shorter lines and waits, and less close contact during the ongoing pandemic.

Face Pay launched at all Moscow Underground stations

Moscow’s head of city transport and road infrastructure, Maxim Liksutov, with a Face Pay camera.
Photo by TASS via Getty Images

That’s all fine and good, at least conceptually. The relative convenience biometric recognition can add to payment systems is a concept that’s currently being floated in the US through Amazon One, the shipping giant’s palm recognition tech. As The Guardian notes, Moscow’s Department of Information Technology claims photographs collected through official channels won’t be turned over to the police and are instead securely encrypted in the GIS ETSHD system (Moscow’s Unified Data Storage and Processing Center).

That hasn’t convinced Russian privacy advocates, though. “This is a dangerous new step in Russia’s push for control over its population. We need to have full transparency on how this application will work in practice,” Stanislav Shakirov, the founder of digital rights group Roskomsvoboda, told The Guardian. “The Moscow metro is a government institution and all the data can end up in the hands of the security services.”

Shakirov has good reason to be concerned. Moscow’s implementation of facial recognition across its vast network of more than 10,000 CCTV cameras is more than a little scary. Worse than the possibility of abuse by local Moscow law enforcement, the system can apparently be hijacked for as little as $200 by enterprising hackers. That’s the real risk of applying facial recognition across even more of daily life in the city: not just that the government could have an easier time tracking the movements of citizens, but that the system itself is a vulnerable target for even worse abuses.


Legal chatbot firm DoNotPay adds anti-facial recognition filters to its suite of handy tools

Legal services startup DoNotPay is best known for its army of “robot lawyers” — automated bots that tackle tedious online tasks like canceling TV subscriptions and requesting refunds from airlines. Now, the company has unveiled a new tool it says will help shield users’ photos from reverse image searches and facial recognition AI.

It’s called Photo Ninja and it’s one of dozens of DoNotPay widgets that subscribers can access for $36 a year. Photo Ninja operates like any image filter. Upload a picture you want to shield, and the software adds a layer of pixel-level perturbations that are barely noticeable to humans, but dramatically alter the image in the eyes of roving machines.
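
DoNotPay hasn’t published the details of how those perturbations are computed, but the description matches the “cloaking” approach popularized by open-source tools like Fawkes: optimize a tiny, bounded pixel change so that a face-embedding model reads the photo as a different (decoy) identity, while a person sees essentially the same image. The sketch below is a rough illustration of that idea under those assumptions; `embed_model`, the tensors, and the hyperparameters are hypothetical stand-ins, not DoNotPay’s implementation.

```python
import torch
import torch.nn.functional as F

def cloak(image, decoy, embed_model, epsilon=0.03, steps=10, lr=0.005):
    """Nudge `image` so a differentiable face-embedding model maps it close
    to the decoy identity, keeping every pixel within `epsilon` of the original."""
    target = embed_model(decoy).detach()   # embedding of a different face
    original = image.clone().detach()
    perturbed = original.clone()
    for _ in range(steps):
        perturbed.requires_grad_(True)
        loss = F.mse_loss(embed_model(perturbed), target)
        grad = torch.autograd.grad(loss, perturbed)[0]
        with torch.no_grad():
            perturbed = perturbed - lr * grad.sign()  # step toward the decoy
            # Project back into the perturbation budget and valid pixel range.
            perturbed = original + (perturbed - original).clamp(-epsilon, epsilon)
            perturbed = perturbed.clamp(0.0, 1.0)
    return perturbed.detach()
```

The idea behind aiming at a decoy, rather than merely pushing away from the original, is that a matcher then associates the cloaked photo with the wrong identity instead of a slightly degraded version of the right one.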

The end result, DoNotPay CEO Joshua Browder tells The Verge, is that any image shielded with Photo Ninja yields zero results when run through search tools like Google image search or TinEye. You can see this in the example below using pictures of Joe Biden:

Before Photo Ninja, you get plenty of results from Google Image Search (top) and TinEye (below).
Image: DoNotPay

After Photo Ninja, the image yields no results in reverse image searches.
Image: DoNotPay

The tool also fools popular facial recognition software from Microsoft and Amazon with a 99 percent success rate. This, combined with the anti-reverse-image search function, makes Photo Ninja handy in a range of scenarios. You might be uploading a selfie to social media, for example, or a dating app. Running the image through Photo Ninja first will prevent people from connecting this image to other information about you on the web.

Browder is careful to stress, though, that Photo Ninja isn’t guaranteed to beat every facial recognition tool out there. When it comes to Clearview AI, for example, a controversial facial recognition service that is widely used by US law enforcement, Browder says the company “anticipates” Photo Ninja will fool the company’s software but can’t guarantee it.

In part, this is because Clearview AI probably already has a picture of you in its databases, scraped from public sources long ago. As the company’s CEO Hoan Ton-That said in an interview with The New York Times last year: “There are billions of unmodified photos on the internet, all on different domain names. In practice, it’s almost certainly too late to perfect a technology [that hides you from facial recognition search] and deploy it at scale.”

Browder agrees: “In a perfect world, all images released to the public from Day 1 would be altered. As that is clearly not the case for most people, we recognize this as a significant limitation to the efficacy of our pixel-level changes. Hence, the focal point and intended use case of our tool was to avoid detection from Google Reverse Image Search and TinEye.”

DoNotPay isn’t the first to build this sort of tool. In August 2020, researchers from the University of Chicago’s SAND Lab created an open-source program named Fawkes that performs the same task. Indeed, Browder says DoNotPay’s engineers referenced this work in their own research. But while Fawkes is a low-profile piece of software, very unlikely to be used by the average internet consumer, DoNotPay has a slightly larger reach, albeit one that is still limited to tech-savvy users who are happy to let bots litigate on their behalf.

Tools like this don’t provide a silver bullet to modern privacy intrusions, but as facial recognition and reverse image search tools become more commonly used, it makes sense to deploy at least some protections. Photo Ninja won’t hide you from law enforcement or an authoritarian state government, but it might fool an opportunistic stalker or two.
