Facebook is shutting down its Face Recognition tagging program

Meta (formerly known as Facebook) is discontinuing Facebook’s Face Recognition feature following a lengthy privacy battle. Meta says the change will roll out in the coming weeks. As part of it, the company will stop using facial recognition algorithms to tag people in photographs and videos, and it will delete the facial recognition templates that it uses for identification.

Meta artificial intelligence VP Jerome Pesenti calls the change part of a “company-wide move to limit the use of facial recognition in our products.” The move follows a lawsuit that accused Facebook’s tagging tech of violating Illinois’ biometric privacy law, leading to a $650 million settlement in February. Facebook previously restricted facial recognition to an opt-in feature in 2019.

“Looking ahead, we still see facial recognition technology as a powerful tool,” writes Pesenti in a blog post, citing possibilities like face-based identity verification. “But the many specific instances where facial recognition can be helpful need to be weighed against growing concerns about the use of this technology as a whole.” Pesenti notes that regulators haven’t settled on comprehensive privacy regulation for facial recognition. “Amid this ongoing uncertainty, we believe that limiting the use of facial recognition to a narrow set of use cases is appropriate.”

Pesenti says more than one-third of Facebook’s daily active users had opted into Face Recognition scanning, and over a billion face recognition profiles will be deleted as part of the upcoming change. As part of the change, Facebook’s automated alt-text system for blind users will no longer name people when it’s analyzing and summarizing media, and it will no longer suggest people to tag in photographs or automatically notify users when they appear in photos and videos posted by others.

Facebook’s decision won’t stop independent companies like Clearview AI — which built huge image databases by scraping photos from social networks, including Facebook — from using facial recognition algorithms trained with that data. US law enforcement agencies (alongside other government divisions) work with Clearview AI and other companies for facial recognition-powered surveillance. State or national privacy laws would be needed to restrict the technology’s use more broadly.

By shutting down a feature it has used for years, Meta is hoping to bolster user confidence in its privacy protections as it prepares a rollout of potentially privacy-compromising virtual and augmented reality technology. The company launched a pair of camera-equipped smart glasses in partnership with Ray-Ban earlier this year, and it’s gradually launching 3D virtual worlds on its Meta VR headset platform. All these efforts will require a level of trust from users and regulators, and giving up Facebook auto-tagging — especially after a legal challenge to the program — is a straightforward way to build it.

Repost: Original Source and Author Link


Apple’s New AR Headset Will Track Hand Gestures Via Face ID

Apple has an augmented reality (AR) headset in the works, and a well-known analyst now predicts that it will use Face ID to track hand movements.

The upcoming headset is said to be equipped with more 3D sensing modules than iPhones and, according to the report, may one day replace iPhones altogether.


The information comes from a note for investors prepared by Ming-Chi Kuo, a respected analyst, which was then shared by MacRumors. In his report, he elaborates on the kind of performance and features we can expect from the upcoming Apple AR/MR (augmented reality/mixed reality) headset.

According to Kuo, the new headsets will feature four sets of 3D sensors as opposed to the one to two sets currently offered by the latest iPhones. The use of extra sensors opens up the headset to a whole lot of new capabilities, extending the realism of the user experience.

The sensors used in the new Apple headset rely on structured light to detect motion and actions. Kuo predicts that this will make it possible for the headset to track not just the position of the user, but also the hands of the user and other people, objects in front of the user, and lastly, detailed changes in hand movements.

Kuo compared the headset’s ability to track small hand movements to the way Apple’s Face ID is capable of tracking changes in facial expressions. Being able to detect small hand and finger movements allows for a more intuitive user interface that doesn’t take away from the realism of using an AR/MR headset.

Apple VR Headset Concept by Antonio De Rosa.

Both the iPhone and the as-yet-unnamed Apple headset rely on structured light, but the headset needs to be more powerful than the iPhone in order to offer proper hand movement detection. Kuo notes that the headset’s structured light system will therefore consume more power than the iPhone’s.

“We predict that the detection distance of Apple’s AR/MR headset with structured light is 100% to 200% farther than the detection distance of the iPhone Face ID. To increase the field of view for gesture detection, we predict that the Apple AR/MR headset will be equipped with three sets of ToFs (time of flight) to detect hand movement trajectories with low latency requirements,” said Ming-Chi Kuo in his investor note.

Kuo believes that Apple may one day wish to replace the iPhone with the AR headset, although that won’t happen anytime soon. He predicts that in the next 10 years, headsets may very well replace existing electronics with displays.

With the added hand gesture tracking, the new Apple headset may offer an immersive user experience. As rumors suggest that Apple may be looking to join Meta and other companies in expanding toward the metaverse, it’s possible that this headset might be the first step toward just that.




Razer Claims It’s Already Sold Out of Its Zephyr Face Mask

Razer’s attention-grabbing N95 face mask has sold out within minutes of its release.

The company hit Twitter on Thursday evening to announce the news, disappointing those who were keen to get their hands on the uniquely designed protective face covering.

“The demand for the Razer Zephyr has been overwhelming and our first wave is sold out within minutes,” the gaming hardware giant said in a tweet, adding: “Stay tuned and [we] appreciate your patience as we work hard to restock them as fast as we can. Sign up to be notified when the next batch arrives.”


But the somewhat surprising news is already raising eyebrows among those who had been interested in placing an order, with many left wondering exactly how many of the masks were available at launch.

Plenty of replies to Razer’s tweet complained about the sale, with one person asking any successful buyers to post a screenshot of their transaction. At the time of writing, no one had responded.

Digital Trends has reached out to Razer to ask how many masks were available at launch and we will update this article when we receive a response.

Razer unveiled an early version of the high-tech Zephyr mask at CES 2021 in January, describing it at the time as “the world’s smartest mask.”

Along with N95 protection, Razer’s Zephyr mask features two “air exchange chambers” — or fans — that allow filtered air to flow freely for added comfort, though you can use it with the fans switched off, too.

It also comes with a transparent front so people can see more of your facial expressions, with an anti-fog coating and interior light ensuring a clear view at all times.

The exterior of the mask includes Chroma RGB lighting to brighten up dark spaces and surprise anyone close by, with all of the various features able to be controlled using a dedicated smartphone app.

The Zephyr mask, if you’re willing or able to order it, will set you back $99, with replacement N95 filters costing $29 for a pack of 10. Or you could just get a regular N95 mask.




Fortnite KAWS Skeleton tarot card hits Halloween in the face

KAWS isn’t letting a good opportunity to cross-brand go to waste with the latest Fortnite fashion crossover. Technically, KAWS isn’t just a fashion element — the artist behind the skulls has roots in graffiti art — but still, this fits the bill. Fortnite’s latest collaboration lands just in time for Halloween, in the spookiest month of the year: October 2021.

The Tarot Card teaser released this week by Fortnite – and Kaws, on social media outlets aplenty – shows a Kaws character busting through a rip in a pattern of Kaws cartoon gloves. You’ll find a lit border with pink and green Kaws “X” marks, too. Given what’s been revealed with similar cards over the last week, the small set of clues we see here should be enough to estimate what we’ll get.

The most important indicator of the actual end-content comes in the character ripping through – that’s not exactly what you might expect. It isn’t the standard “Companion” character that’s been made into a set of toys over the past couple of decades. Instead, it’s the full human body Kaws skeleton character.

You might refer to this character as the Kaws Paper Skeleton. This skeleton appeared first in the year 2007 as a Kaws x Arktip collaboration, made with printed paper and circular metal joints. It was made in several colors and sold by the Kaws official store OriginalFake, as well.

There’s a DECENT chance we’ll see a full Kaws Skeleton character available for play in Fortnite later this week. Don’t be shocked if that’s the extent of this collaboration – though we’d be happy to see some Kaws x Fortnite clothing released in limited edition fashion, too. Why not?

Take a peek at the timeline of links below for more recent Fortnite updates, leaks, and information about the near-present future.



Clearview AI hit with sweeping legal complaints over controversial face scraping in Europe

Privacy International (PI) and several other European privacy and digital rights organizations announced today that they’ve filed legal complaints against the controversial facial recognition company Clearview AI. The complaints filed in France, Austria, Greece, Italy, and the United Kingdom say that the company’s method of documenting and collecting data — including images of faces it automatically extracts from public websites — violates European privacy laws. New York-based Clearview claims to have built “the largest known database of 3+ billion facial images.”

PI, NYOB, Hermes Center for Transparency and Digital Human Rights, and Homo Digitalis all claim that Clearview’s data collection goes beyond what the average user would expect when using services like Instagram, LinkedIn, or YouTube. “Extracting our unique facial features or even sharing them with the police and other companies goes far beyond what we could ever expect as online users,” said PI legal officer Ioannis Kouvakas in a joint statement.

Clearview AI uses an image scraper to automatically collect publicly available photos of faces across social media and other websites to build out its biometric database. It then sells access to that database — and the ability to identify people — to law enforcement agencies and private companies.

The legality of Clearview AI’s approach to building its facial recognition service is the subject of a number of legal challenges globally. Authorities in the UK and Australia opened a privacy probe last year into the company’s data scraping techniques. In February, Canada’s privacy commissioners determined that Clearview’s face scraping is “illegal” and creates a system that “inflicts broad-based harm on all members of society, who find themselves continually in a police lineup.”

Swedish police were fined by the country’s data regulator for using Clearview’s offerings to “unlawfully” identify citizens. And in one case in Germany, the Hamburg Data Protection Agency ordered Clearview to delete the mathematical hash representing a user’s profile after he complained.

In the US, Clearview was sued by the American Civil Liberties Union in the state of Illinois in 2020 for violating the Illinois Biometric Privacy Act. The results of that lawsuit contributed to the company’s decision to stop selling its product to private US companies. Clearview also faced legal action in Vermont, New York and California.

The privacy watchdogs say regulators have three months to respond to their complaints. In the meantime, you can request any data Clearview might have on you via the email and forms provided on its site and ask that your face be omitted from client searches.



This manual for a face recognition tool shows how much it tracks people

In 2019, the Santa Fe Independent School District in Texas ran a weeklong pilot program with the facial recognition firm AnyVision in its school hallways. With more than 5,000 student photos uploaded for the test run, AnyVision called the results “impressive” and expressed excitement at the results to school administrators.

“Overall, we had over 164,000 detections the last 7 days running the pilot. We were able to detect students on multiple cameras and even detected one student 1100 times!” Taylor May, then a regional sales manager for AnyVision, said in an email to the school’s administrators.

The number gives a rare glimpse into how often people can be identified through facial recognition, as the technology finds its way into more schools, stores, and public spaces like sports arenas and casinos.

May’s email was among hundreds of public records reviewed by The Markup of exchanges between the school district and AnyVision, a fast-growing facial recognition firm based in Israel that boasts hundreds of customers around the world, including schools, hospitals, casinos, sports stadiums, banks, and retail stores. One of those retail stores is Macy’s, which uses facial recognition to detect known shoplifters, according to Reuters. Facial recognition, purportedly AnyVision’s, is also being used by a supermarket chain in Spain to detect people with prior convictions or restraining orders and prevent them from entering 40 of its stores, according to research published by the European Network of Corporate Observatories.

Neither Macy’s nor supermarket chain Mercadona responded to requests for comment.

The public records The Markup reviewed included a 2019 user guide for AnyVision’s software called “Better Tomorrow.” The manual contains details on AnyVision’s tracking capabilities and provides insight into just how people can be identified and followed through its facial recognition.

The growth of facial recognition has raised privacy and civil liberties concerns over the technology’s ability to constantly monitor people and track their movements. In June, the European Data Protection Board and the European Data Protection Supervisor called for a facial recognition ban in public spaces, warning that “deploying remote biometric identification in publicly accessible spaces means the end of anonymity in those places.”

Lawmakers, privacy advocates, and civil rights organizations have also pushed against facial recognition because of error rates that disproportionately hurt people of color. A 2018 research paper from Joy Buolamwini and Timnit Gebru highlighted how facial recognition technology from companies like Microsoft and IBM is consistently less accurate in identifying people of color and women.

In December 2019, the National Institute of Standards and Technology also found that the majority of facial recognition algorithms exhibit more false positives against people of color. There have been at least three cases of a wrongful arrest of a Black man based on facial recognition.

“Better Tomorrow” is marketed as a watchlist-based facial recognition program, where it only detects people who are a known concern. Stores can buy it to detect suspected shoplifters, while schools can upload sexual predator databases to their watchlists, for example.

But AnyVision’s user guide shows that its software is logging all faces that appear on camera, not just people of interest. For students, that can mean having their faces captured more than 1,000 times a week.

And they’re not just logged. Faces that are detected but aren’t on any watchlists are still analyzed by AnyVision’s algorithms, the manual noted. The algorithm groups faces it believes belong to the same person, which can be added to watchlists for the future.

AnyVision’s user guide said it keeps all records of detections for 30 days by default and allows customers to run reverse image searches against that database. That means that you can upload photos of a known person and figure out if they were caught on camera at any time during the last 30 days.
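A reverse search over a rolling detection log can be sketched in a few lines. Everything below — the log schema, the field names, the embedding vectors, and the matching threshold — is an illustrative assumption for the example, not AnyVision’s actual implementation:

```python
from datetime import datetime, timedelta

RETENTION = timedelta(days=30)  # the manual's stated default retention period

def l2(a, b):
    """Euclidean distance between two face-embedding vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def reverse_search(probe, log, now, max_dist=0.1):
    """Return cameras where a face matching the probe appeared recently."""
    return [d["camera"] for d in log
            if now - d["time"] <= RETENTION          # still within retention
            and l2(probe, d["embedding"]) <= max_dist]  # close enough to match

now = datetime(2019, 10, 15)
log = [
    {"camera": "hallway-2", "time": now - timedelta(days=3),
     "embedding": [0.11, 0.52, 0.33]},   # same person, within 30 days
    {"camera": "gym-1", "time": now - timedelta(days=45),
     "embedding": [0.11, 0.52, 0.33]},   # same person, but already expired
    {"camera": "cafeteria", "time": now - timedelta(days=1),
     "embedding": [0.90, 0.10, 0.40]},   # a different person
]
probe = [0.10, 0.50, 0.35]  # embedding of the uploaded photo

print(reverse_search(probe, log, now))  # ['hallway-2']
```

The point of the sketch is the shape of the capability: because every detection is stored, not just watchlist hits, a single uploaded photo can be matched against a month of footage after the fact.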

The software offers a “Privacy Mode” feature in which it ignores all faces not on a watchlist, while another feature called “GDPR Mode” blurs non-watchlist faces on video playback and downloads. The Santa Fe Independent School District didn’t respond to a request for comment, including on whether it enabled the Privacy Mode feature.

“We do not activate these modes by default but we do educate our customers about them,” AnyVision’s chief marketing officer, Dean Nicolls, said in an email. “Their decision to activate or not activate is largely based on their particular use case, industry, geography, and the prevailing privacy regulations.”

AnyVision boasted of its grouping feature in a “Use Cases” document for smart cities, stating that it was capable of collecting face images of all individuals who pass by the camera. It also said that this could be used to “track [a] suspect’s route throughout multiple cameras in the city.”

The Santa Fe Independent School District’s police department wanted to do just that in October 2019, according to public records.

In an email obtained through a public records request, the school district police department’s Sgt. Ruben Espinoza said officers were having trouble identifying a suspected drug dealer who was also a high school student. AnyVision’s May responded, “Let’s upload the screenshots of the students and do a search through our software for any matches for the last week.”

The school district originally purchased AnyVision after a mass shooting in 2018, with hopes that the technology would prevent another tragedy. By January 2020, the school district had uploaded 2,967 photos of students for AnyVision’s database.

James Grassmuck, a member of the school district’s board of trustees who supported using facial recognition, said he hasn’t heard any complaints about privacy or misidentifications since it’s been installed.

“They’re not using the information to go through and invade people’s privacy on a daily basis,” Grassmuck said. “It’s another layer in our security, and after what we’ve been through, we’ll take every layer of security we can get.”

The Santa Fe Independent School District’s neighbor, the Texas City Independent School District, also purchased AnyVision as a protective measure against school shootings. It has since been used in attempts to identify a kid who had been licking a neighborhood surveillance camera, to kick out an expelled student from his sister’s graduation, and to ban a woman from showing up on school grounds after an argument with the district’s head of security, according to WIRED.

“The mission creep issue is a real concern when you initially build out a system to find that one person who’s been suspended and is incredibly dangerous, and all of a sudden you’ve enrolled all student photos and can track them wherever they go,” Clare Garvie, a senior associate at the Georgetown University Law Center’s Center on Privacy & Technology, said. “You’ve built a system that’s essentially like putting an ankle monitor on all your kids.”

This article by Alfred Ng was originally published on The Markup and was republished under the Creative Commons Attribution-NonCommercial-NoDerivatives license.



TikTok just expanded ways they can collect data on your face and voice

An update made to TikTok’s privacy policy this week gives the company broader license to automatically collect data on you and your activities in the app. This update includes notes about biometric identifiers – like fingerprint scans and “voiceprints”. In the notes, TikTok suggests that they will seek “any required permissions” to get this data, but only “where required by law.”

As spotted by TechCrunch, in the latest version of the US privacy policy for TikTok, you’ll find changes to “Image and Audio Information”. See the Wayback Machine’s saved copy of this change as of June 4, 2021. Scroll to “Information we collect automatically” and compare it to what was posted as of May 30, 2021.

Removed from this area is a sentence as follows: “We also link your subscriber information with your activity on our Platform across all your devices using your email, phone number, or similar information.” That could be good – maybe TikTok has decided to track users slightly less than they were before – or at least that’s what it looks like if this is the ONLY change you notice. The rest seems to move in the opposite direction.

An entire section was added under “Image and Audio Information.” There, TikTok adds a note that they may collect information “about the images and audio that are a part of your User Content.” TikTok notes that they may collect information by “identifying the objects and scenery that appear, the existence and location within an image of face and body features and attributes, the nature of the audio, and the text of the words spoken in your User Content.”

TikTok notes in their policy that this data may be collected to enable the following:
• Special video effects
• Content moderation
• Demographic classification
• Content recommendations
• Advertising recommendations
• Non-personally-identifying operations

As noted above, TikTok also added a note about how they may “collect biometric identifiers and biometric information.” This may include face-prints and voiceprints “from your User Content.”

They also added a more all-encompassing description of how they may collect information on the devices you use to access TikTok. Before, they included IP address, unique device identifiers, model, mobile carrier, time zone, screen resolution, OS, app names, file names, file types, “keystroke patterns or rhythms”, and platform.

Now, TikTok also includes user agent, network type, “identifiers for advertising purposes,” device IDs, battery state, audio settings, and connected audio devices. They’ve expanded their ability to collect information across devices, to make absolutely sure that if you log in to multiple devices, they will “be able to use your profile information to identify your activity across devices.”

TikTok can also now use “information collected from devices other than those you use to log-in to the Platform.” That effectively gives them the right to utilize audio that comes from devices connected to your smartphone – like connected microphones, smart speakers, and so forth.


Why should you care if TikTok is expanding the ways in which they can collect data based on your activities on your smartphone while using TikTok? If you already knew that TikTok was just as guilty as any other social network of collecting user data whenever you use the app, you probably won’t care about this newest update. If, however, you had the idea that TikTok was far more private than Facebook, Instagram, or other apps like them – now’s a good time to reconsider how you use the app and the network.



Fortnite’s next Gaming Legend is a familiar face for PlayStation fans

Considering the title of Fortnite‘s current season – Primal – it probably shouldn’t be a surprise to hear that the next addition to the Gaming Legends series is Aloy from Horizon Zero Dawn. Indeed, it seems that the leaks we’ve been seeing over the past day or so were accurate. Not only will Aloy be joining with her own bundle that’ll be available in the item shop, but there will be a special PlayStation-only tournament and a new limited-time mode featuring Aloy and Lara Croft as well.

Aloy’s outfit will be available in Fortnite‘s item shop beginning on April 15th, which is this Thursday. Also available will be an array of Horizon Zero Dawn-themed items: the Blaze Canister Back Bling, the Glinthawk Glider, Aloy’s Spear Pickaxe, the Heart-rizon Emote, the Focus effect, and the Shield-Weaver Wrap. All of these items will be available separately or in the Horizon Zero Dawn bundle, but unfortunately for us, the prices of these items and the bundle weren’t revealed today.

Epic and Sony have also revealed that anyone who owns the Aloy Outfit and plays Fortnite on PlayStation 5 will unlock the Ice Hunter style, which you can see along with all of the other Horizon Zero Dawn-themed items in the image above. Sorry PlayStation 4 owners, it looks like you’ll have to sit that particular promotion out.

One thing you won’t have to sit out is the Aloy Cup on April 14th, which will only be open to PlayStation 4 and PlayStation 5 players. This is a duos tournament where teams of two will be able to compete in 10 matches across a three-hour time window. The top-scoring teams from each region will get the Horizon Zero Dawn bundle early. If you’re looking to rack up as many points as possible, then get comfortable with the bow, as you’ll get bonus points for each elimination you make with a bow in those 10 matches.

Then we have the limited-time event featuring Lara Croft, which is appropriately called Team Up! Aloy & Lara. This is, again, a duos mode, only this time around one player on the team will take up the mantle of Aloy and the other will play as Lara. If you’re playing Aloy, you’ll only be able to use the bow, while Lara players will be limited to her dual pistols, so you’ll need to upgrade your weaponry through crafting if you want to have the firepower to make it deep into a match.

Team Up! Aloy & Lara will only be available for a couple of days, going live at 6 AM PDT/9 AM EDT on April 16th and running to the same time on April 18th. The Aloy Cup, meanwhile, will be taking place tomorrow – regional start times can be found under the Compete tab in-game, and you can see the scoring breakdown for the tournament over at the Fortnite website.



Face masks are breaking facial recognition algorithms, says new government study

Face masks are one of the best defenses against the spread of COVID-19, but their growing adoption is having a second, unintended effect: breaking facial recognition algorithms.

Wearing face masks that adequately cover the mouth and nose causes the error rate of some of the most widely used facial recognition algorithms to spike to between 5 percent and 50 percent, a study by the US National Institute of Standards and Technology (NIST) has found. Black masks were more likely to cause errors than blue masks, and the more of the nose covered by the mask, the harder the algorithms found it to identify the face.

“With the arrival of the pandemic, we need to understand how face recognition technology deals with masked faces,” said Mei Ngan, an author of the report and NIST computer scientist. “We have begun by focusing on how an algorithm developed before the pandemic might be affected by subjects wearing face masks. Later this summer, we plan to test the accuracy of algorithms that were intentionally developed with masked faces in mind.”

Example images used by NIST to assess the accuracy of various facial recognition algorithms.
Image: B. Hayes/NIST

Facial recognition algorithms such as those tested by NIST work by measuring the distances between features in a target’s face. Masks reduce the accuracy of these algorithms by removing most of these features, although some still remain. This is slightly different to how facial recognition works on iPhones, for example, which use depth sensors for extra security, ensuring that the algorithms can’t be fooled by showing the camera a picture (a danger that is not present in the scenarios NIST is concerned with).
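As a toy illustration of that distance-based approach — the landmark names and coordinates below are invented for the example, not drawn from NIST’s tests or any real system — masking a face leaves far fewer measurements for an algorithm to compare:

```python
import math

def feature_vector(landmarks):
    """Encode a face as the pairwise distances between its visible landmarks."""
    names = sorted(landmarks)
    return [
        math.hypot(landmarks[a][0] - landmarks[b][0],
                   landmarks[a][1] - landmarks[b][1])
        for i, a in enumerate(names)
        for b in names[i + 1:]
    ]

# Toy landmark positions for one face (x, y in pixels).
full_face = {
    "left_eye": (30, 40), "right_eye": (70, 40), "nose_tip": (50, 60),
    "mouth_left": (38, 80), "mouth_right": (62, 80), "chin": (50, 100),
}
# A mask occludes the nose, mouth, and chin, so fewer landmarks are detected.
masked_face = {k: v for k, v in full_face.items()
               if k in ("left_eye", "right_eye")}

print(len(feature_vector(full_face)))    # 15 pairwise distances to compare
print(len(feature_vector(masked_face)))  # only 1 -- far less signal to match on
```

With six visible landmarks there are fifteen pairwise distances to compare; with only the eye region visible there is one, which is one intuition for why error rates spike.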

Although there’s been plenty of anecdotal evidence about face masks thwarting facial recognition, the study from NIST is particularly definitive. NIST is the government agency tasked with assessing the accuracy of these algorithms (along with many other systems) for the federal government, and its rankings of different vendors are extremely influential.

Notably, NIST’s report only tested a type of facial recognition known as one-to-one matching. This is the procedure used in border crossings and passport control scenarios, where the algorithm checks to see if the target’s face matches their ID. This is different to the sort of facial recognition system used for mass surveillance, where a crowd is scanned to find matches with faces in a database. This is called a one-to-many system.

Although NIST’s report doesn’t cover one-to-many systems, these are generally considered more error-prone than one-to-one algorithms. Picking out faces in a crowd is harder because you can’t control the angle or lighting on the face, and the resolution is generally reduced. That suggests that if face masks are breaking one-to-one systems, they’re likely breaking one-to-many algorithms with at least the same, but probably greater, frequency.
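The one-to-one versus one-to-many distinction can be sketched in a few lines of code. The embeddings, the 0.9 threshold, and the cosine-similarity matcher below are simplifying assumptions for illustration, not any vendor’s actual pipeline:

```python
def similarity(a, b):
    """Cosine similarity between two face-embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = sum(x * x for x in a) ** 0.5
    norm_b = sum(x * x for x in b) ** 0.5
    return dot / (norm_a * norm_b)

def one_to_one(probe, id_template, threshold=0.9):
    """Border-control style: does this face match this one ID document?"""
    return similarity(probe, id_template) >= threshold

def one_to_many(probe, database, threshold=0.9):
    """Surveillance style: which enrolled identities match this face?"""
    return [name for name, template in database.items()
            if similarity(probe, template) >= threshold]

alice = [1.0, 0.0, 0.2]
bob = [0.0, 1.0, 0.5]
probe = [0.98, 0.05, 0.22]  # a fresh capture of Alice

print(one_to_one(probe, alice))                          # True
print(one_to_many(probe, {"alice": alice, "bob": bob}))  # ['alice']
```

The structural difference is why error compounds in the one-to-many case: a one-to-one check makes a single comparison, while a one-to-many search repeats that comparison against every entry in the database, multiplying the chances of a false positive.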

This matches reports we’ve heard from inside government. An internal bulletin from the US Department of Homeland Security earlier this year, reported by The Intercept, said the agency was concerned about the “potential impacts that widespread use of protective masks could have on security operations that incorporate face recognition systems.”


Some companies say they’ve already developed new facial recognition algorithms that work with masks, as in the system from NEC, above.
Image: Tomohiro Ohsumi / Getty Images

For privacy advocates this will be welcome news. Many have warned about the rush by governments around the world to embrace facial recognition systems, despite the chilling effects such technology has on civil liberties, and the widely-recognized racial and gender biases of these systems, which tend to perform worse on anyone who is not a white male.

Meanwhile, the companies who build facial recognition tech have been rapidly adapting to this new world, designing algorithms that identify faces just using the area around the eyes. Some vendors, like leading Russian firm NtechLab, say their new algorithms can identify individuals even if they’re wearing a balaclava. Such claims are not entirely trustworthy, though. They usually come from internal data, which can be cherry-picked to produce flattering results. That’s why third-party agencies like NIST provide standardized testing.

NIST says it plans to test specially tuned facial recognition algorithms for mask wearers later this year, along with probing the efficacy of one-to-many systems. Despite the problems caused by masks, the agency expects that technology will persevere. “With respect to accuracy with face masks, we expect the technology to continue to improve,” said Ngan.



Synthesis AI emerges from stealth with $4.5M to create synthetic face datasets


Synthesis AI, a synthetic data company, today emerged from stealth with the announcement that it closed a $4.5 million funding round. The startup says that the capital will allow it to expand its R&D team and develop new synthetic data technologies.

Self-driving vehicle companies alone spend billions of dollars per year collecting and labeling data, according to estimates. Third-party contractors enlist hundreds of thousands of human data labelers to draw and trace the annotations machine learning models need to learn. (A properly labeled dataset provides a ground truth that the models use to check their predictions for accuracy and continue refining their algorithms.) Curating these datasets to include the right distribution and frequency of samples becomes exponentially more difficult as performance requirements increase. And the pandemic has underscored how vulnerable these practices are, as contractors have been increasingly forced to work from home, prompting some companies to turn to synthetic data as an alternative.
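The role a labeled dataset plays as ground truth can be shown in a minimal sketch. All names and data below are invented for illustration; no vendor's API is implied:

```python
# Illustrative sketch: human-provided labels serve as ground truth,
# and a model's predictions are scored against them. The label names
# and values here are made up for the example.
ground_truth = ["pedestrian", "car", "cyclist", "car", "pedestrian"]
predictions = ["pedestrian", "car", "car", "car", "pedestrian"]

# Count how many predictions agree with the ground-truth labels.
correct = sum(p == g for p, g in zip(predictions, ground_truth))
accuracy = correct / len(ground_truth)
print(f"accuracy: {accuracy:.0%}")  # 4 of 5 predictions match, i.e. 80%
```

A model developer uses scores like this to check predictions and keep refining the algorithm, which is why label quality and dataset curation matter so much.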

Synthesis AI’s platform leverages generative machine learning models, image rendering and composition, and other techniques to create and label images of objects, scenes, people, and environments. Customers can modify things like geometries, textures, lighting, image modalities, and camera locations to produce varied data for training computer vision models.

[Image: A face generated by Synthesis AI’s API. Credit: Synthesis AI]

Synthesis AI offers datasets containing 10,000 to 200,000 scenes for common use cases including head poses and facial expressions, eye gazes, and near-infrared images. But what the company uniquely provides is an API that generates millions of images of realistic faces captured from different angles in a range of environments. Using the API, customers can submit a cloud job that synthesizes terabytes of data.

Synthesis AI says its API covers tens of thousands of identities spanning genders, age groups, ethnicities, and skin tones. It procedurally generates modifications to faces to reflect changes in expressions and emotions, as well as motions like head turns and features such as head and facial hair. Built-in styles adorn subjects with accessories like glasses, sunglasses, hats and other headwear, headphones, and face masks. Other controls enable adjustments in camera optics, lighting, and post-processing.
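Synthesis AI has not published its request schema, so the sketch below uses invented field names purely to illustrate how parametric controls like these multiply into a large dataset:

```python
# Hypothetical job spec for a parametric face-generation API.
# All field names are assumptions made for illustration; the
# vendor's real schema is not public.
job = {
    "identities": 100,
    "expressions": ["neutral", "smile", "surprise"],
    "accessories": ["none", "glasses", "face_mask"],
    "lighting": ["studio", "outdoor", "low_light"],
}

# Each combination of controls yields a distinct renderable scene,
# which is how a modest spec fans out into a large dataset.
total_scenes = (
    job["identities"]
    * len(job["expressions"])
    * len(job["accessories"])
    * len(job["lighting"])
)
print(total_scenes)  # 2700 scenes from this small spec
```

Adding continuous controls such as camera angle and post-processing jitter on top of these discrete choices is what pushes the output from thousands of scenes toward the millions of images the company describes.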

Synthesis AI claims its data is unbiased and “perfectly labeled,” but the jury’s out on the representativeness of synthetic data. In a study last January, researchers at Arizona State University showed that when an AI system trained on a dataset of images of engineering professors was tasked with creating faces, 93% were male and 99% white. The system appeared to have amplified the dataset’s existing biases — 80% of the professors were male and 76% were white.

On the other hand, startups like Hazy and Mostly AI say that they’ve developed methods for controlling the biases of data in ways that actually reduce harm. A recent study published by a group of Ph.D. candidates at Stanford claims the same — the coauthors say their technique allows them to weight certain features as more important in order to generate a diverse set of images for computer vision training.

Despite competition from startups like Datagen and Parallel Domain, Synthesis AI says that “major” technology and handset manufacturers are already using its API to generate model training and test datasets. Among the early adopters is Affectiva, a company that builds AI it claims can understand emotions by analyzing facial expressions and speech.

“One of our teleconferencing customers leveraged synthetic data to create more robust facial segmentation models. By creating a very diverse set of data with more than 1,000 individuals with a wide variety of facial features, hairstyles, accessories, cameras, lighting, and environments, they were able to significantly improve the performance of their models,” founder and CEO Yashar Behzadi told VentureBeat via email. “[Another one] of our customers is building a car driver and occupant sensing systems. They leveraged synthetic data of thousands of individuals in the car cabin across various situations and environments to determine the optimal camera placement and overall configuration to ensure the best performance.”

In the future, 11-employee Synthesis AI plans to launch additional APIs to address different computer vision challenges. “It is inevitable that simulation and synthetic data will be used to develop computer vision AI,” Behzadi continued. “To reach widespread adoption, we need to continue to build out 3D models to represent more of the real world and create scalable cloud-based systems to make the simulation platform available on-demand across a broad set of use cases.”

Existing investors Bee Partners, PJC, iRobot Ventures, Swift Ventures, Boom Capital, Kubera VC, and Leta Capital contributed to San Francisco, California-based Synthesis AI’s seed round announced today.

