Categories
Computing

Meta expects a billion people in the metaverse by 2030

Meta believes that a billion people will be participating in the metaverse within the next decade, despite the concept feeling very nebulous at the moment.

CEO Mark Zuckerberg spoke with CNBC’s Jim Cramer on a recent broadcast of Mad Money and went on to say that purchases of metaverse digital content would bring in hundreds of billions of dollars for the company by 2030. This would quickly reverse the growing deficit of Meta’s Reality Labs, which has already invested billions into researching and developing VR and AR hardware and software.

Currently, this sounds like a stretch: only a small percentage of the population owns virtual reality hardware, and few dedicated augmented reality devices have been released by major manufacturers. Apple and Google have each developed AR solutions for smartphones, and Meta has said that the metaverse won’t require special hardware in order to access it.

Any modern computer, tablet, or smartphone has sufficient performance to display virtual content. However, the fully immersive experience is available only when wearing a head-mounted display, whether that takes the form of a VR headset or AR glasses.

According to Cramer, Meta is not taking a cut from creators initially, while planning to continue to invest heavily into hardware and software infrastructure for the metaverse. Meta realizes it can’t build an entire world by itself and needs the innovation of creators and the draw of influencers to make the platform take off in the way Facebook and Instagram have.

Zuckerberg explained that Meta’s playbook has always been to build services that fill a need and grow the platform to a billion or more users before monetizing it. That means the next 5 to 10 years might be a rare opportunity for businesses and consumers to take advantage of a low-cost metaverse experience before Meta begins to demand a share. Just as Facebook was once ad-free, the early metaverse might be blissfully free of distractions.

This isn’t exclusively Meta’s strategy, but the growth method employed by most internet-based companies. Focusing on growth first and money later has become standard practice. In the future, a balancing act will be required to make enough money to fund services while keeping the metaverse affordable enough to retain users.

While Meta might not get a billion people to strap on a VR headset by 2030, there’s little doubt that the metaverse will become an active area of growth. It should interest enough VR, AR, smartphone, tablet, and computer owners to be self-sustaining within a few years and could actually explode to reach a billion people by 2030.

Repost: Original Source and Author Link

Categories
AI

Deep North, which uses AI to track people from camera footage, raises $16.7M

Deep North, a Foster City, California-based startup applying computer vision to security camera footage, today announced that it raised $16.7 million in a series A-1 round. Led by Celesta Capital and Yobi Partners, with participation from Conviction Investment Partners, Deep North plans to use the funds to make hires and expand its services “at scale,” according to CEO Rohan Sanil.

Deep North, previously known as Vmaxx, claims its platform can help brick-and-mortar retailers “embrace digital” and protect against COVID-19 by retrofitting security systems to track purchases and ensure compliance with masking rules. But the company’s system, which relies on algorithms with potential flaws, raises concerns about both privacy and bias.

“Even before a global pandemic forced retailers to close their doors … businesses were struggling to compete with a rapidly growing online consumer base,” Sanil said in a statement. “As stores open again, retailers must embrace creative digital solutions with data driven, outcome-based computer vision and AI solutions, to better compete with online retailers and, at the same time, accommodate COVID-safe practices.”

AI-powered monitoring

Deep North was founded in 2016 by Sanil and Jinjun Wang, an expert in multimedia signal processing, pattern recognition, computer vision, and analytics. Wang — now a professor at Xi’an Jiaotong University in Xi’an, China — was previously a research scientist at NEC before joining Epson’s R&D division as a member of the senior technical staff. Sanil founded a number of companies prior to Deep North, including Akirra Media Systems, where Wang was once employed as a research scientist.

“In 2016, I pioneered object detection technology to help drive targeted advertising from online videos. When a major brand saw this, they challenged me to create a means of identifying, analyzing, and sorting objects captured on their security video cameras in their theme parks,” Sanil told VentureBeat via email. “My exploration inspired development that would unlock the potential of installed CCTV and security video cameras within the customer’s physical environment and apply object detection and analysis in any form of video.”

After opening offices in China and Sweden and rebranding in 2018, Deep North expanded the availability of its computer vision and video analytics products, which offer object and people detection capabilities. The company says its real-time, AI-powered and hardware-agnostic software can understand customers’ preferences, actions, interactions, and reactions “in virtually any physical setting” across “a variety of markets,” including retailers, grocers, airports, drive-thrus, shopping malls, restaurants, and events.

Deep North says that retailers, malls, and restaurants in particular can use its solution to analyze customer “hotspots,” seating, occupancy, dwell times, gaze direction, and wait times, leveraging these insights to figure out where to assign store associates or kitchen staff. Stores can predict conversion by correlating tracking data with the time of day, location, marketing events, weather, and more, while shopping centers can draw on tenant statistics to understand trends and identify “synergies” between tenants, optimizing for store placement and cross-tenant promotions.

“Our algorithms are trained to detect objects in motion and generate rich metadata about physical environments such as engagement, pathing, and dwelling. Our inference pipeline brings together camera feeds and algorithms for real-time processing,” Deep North explains on its website. “[We] can deploy both via cloud and on-premise and go live within a matter of hours. Our scalable GPU edge appliance enables businesses to bring data processing directly to their environments and convert their property into a digital AI property. Video assets never leave the premise, ensuring the highest level of security and privacy.”
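Deep North hasn’t published its pipeline internals, but the occupancy and dwell-time metrics it describes can be sketched from first principles. In the toy Python below, the hard-coded `detections` dictionary stands in for the output of a real detector and tracker running over camera feeds; every name and number is invented for illustration, not Deep North’s actual schema.

```python
from collections import defaultdict

# Hypothetical per-frame tracker output: {frame_index: [(track_id, zone), ...]}.
# In a real deployment these tuples would come from an object detector plus
# tracker running over live camera feeds; here they are hard-coded.
detections = {
    0: [("p1", "entrance"), ("p2", "aisle")],
    1: [("p1", "aisle"), ("p2", "aisle")],
    2: [("p1", "aisle"), ("p2", "checkout")],
    3: [("p1", "checkout")],
}

def dwell_times(frames, fps=1.0):
    """Total seconds each tracked person spent in each zone."""
    dwell = defaultdict(float)
    for _, dets in sorted(frames.items()):
        for track_id, zone in dets:
            dwell[(track_id, zone)] += 1.0 / fps  # one frame's worth of time
    return dict(dwell)

def peak_occupancy(frames, zone):
    """Maximum number of people seen in a zone in any single frame."""
    return max(sum(1 for _, z in dets if z == zone) for dets in frames.values())
```

On this toy input, `dwell_times` credits each tracked person one second per frame spent in a zone, and `peak_occupancy` reports the busiest moment per zone; a production system would additionally handle streaming input, track loss, re-identification, and multiple cameras.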

Beyond these solutions, Deep North developed products for particular use cases like social distancing and sanitation. The company offers products that monitor for hand-washing and estimate wait times at airport check-in counters, for example, as well as detect the presence of masks and track the status of maintenance workers on tarmacs.

“With Deep North’s mask detection capability, retailers can easily monitor large crowds and receive real-time alerts,” Deep North explains about its social distancing products. “In addition, Deep North … monitors schedules and coverage of sanitization measures as well as the total time taken for each cleaning activity … Using Deep North’s extensive data, [malls can] create tenant compliance scorecards to benchmark efforts, track overall progress, course-correct as necessary. [They] can also ensure occupancy limits are adhered to across several properties, both locally and region-wide, by monitoring real-time occupancy on our dashboard and mobile apps.”

Bias concerns

Like most computer vision systems, Deep North’s were trained on datasets of images and videos showing examples of people, places, and things. Poor representation within these datasets can result in harm — particularly given that the AI field generally lacks clear descriptions of bias.

Previous research has found that ImageNet and Open Images — two large, publicly available image datasets — are U.S.- and Euro-centric, encoding humanlike biases about race, ethnicity, gender, weight, and more. Models trained on these datasets perform worse on images from Global South countries. For example, images of grooms are classified with lower accuracy when they come from Ethiopia and Pakistan, compared to images of grooms from the United States. And because of how images of words like “wedding” or “spices” are presented in distinctly different cultures, object recognition systems can fail to classify many of these objects when they come from the Global South.

Bias can arise from other sources, like differences in the sun path between the northern and southern hemispheres and variations in background scenery. Studies show that even differences between camera models — e.g., resolution and aspect ratio — can cause an algorithm to be less effective in classifying the objects it was trained to detect.

Tech companies have historically deployed flawed models into production. ST Technologies’ facial recognition and weapon-detecting platform was found to misidentify black children at a higher rate and frequently mistook broom handles for guns. Meanwhile, Walmart’s AI- and camera-based anti-shoplifting technology, which is provided by Everseen, came under scrutiny last May over its reportedly poor detection rates.

Deep North doesn’t disclose on its website how it trained its computer vision algorithms, including whether it used synthetic data (which has its own flaws) to supplement real-world datasets. The company also declines to say to what extent it takes into account accessibility and users with major mobility issues.

In an email, Sanil claimed that Deep North “has one of the largest training datasets in the world,” derived from real-world deployments and scenarios. “Our human object detection and analysis algorithms have been trained with more than 130 million detections, thousands of camera feeds, and various environmental conditions while providing accurate insights for our customers,” he said. “Our automated and semi-supervised training methodology helps us build new machine learning models rapidly, with the least amount of training data and human intervention.”

In a follow-up email, Sanil added: “Our platform detects humans, including those with unique gaits, and those that use mobility aids and assistive devices. We don’t do any biometric analysis, and therefore there is no resulting bias in our system … In the simplest terms, the platform interprets everything as an object whether it’s a human or a shopping cart or a vehicle. We provide object counts entering or exiting a location. Our object counting and reporting is not influenced by specific characteristics.” He continued: “We have a large set of labeled data. For new data to be labeled, we need to classify some of the unlabeled data using the labeled information set. With the semi-supervised process we can now expedite the labeling process for new datasets. This saves time and cost for us. We don’t need annotators, or expensive and slow processes.”
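Sanil doesn’t spell out the semi-supervised process, but the broad idea he describes (using an existing labeled set to classify unlabeled data so fewer human annotators are needed) resembles classic self-training. The sketch below is a guess at that general technique, not Deep North’s method; the points, labels, and distance threshold are all invented.

```python
import math

def self_train(labeled, unlabeled, threshold=1.5):
    """Self-training pass: pseudo-label each unlabeled point from its nearest
    labeled neighbor, accepting only assignments within `threshold` distance,
    and repeat until no new point qualifies."""
    labeled = dict(labeled)   # {point: label}, copied so we can grow it
    pool = set(unlabeled)
    changed = True
    while changed and pool:
        changed = False
        for p in sorted(pool):
            nearest = min(labeled, key=lambda q: math.dist(p, q))
            if math.dist(p, nearest) <= threshold:
                labeled[p] = labeled[nearest]  # confident: adopt neighbor's label
                pool.discard(p)
                changed = True
    return labeled

# Invented example: two seed labels, three unlabeled points.
expanded = self_train(
    {(0.0, 0.0): "cart", (5.0, 5.0): "person"},
    [(1.0, 0.0), (2.0, 0.0), (4.5, 5.0)],
)
```

Each accepted pseudo-label immediately becomes a neighbor for later points, which is what lets confident labels propagate outward from the seed set, while the threshold guards against confidently mislabeling far-away points.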

Privacy and controversy

While the purported goal of products like Deep North’s is health, safety, and analytics, the technology could be co-opted for other, less humanitarian intents. Many privacy experts worry that such products will normalize greater levels of surveillance, capturing data about workers’ movements and allowing managers to chastise employees in the name of productivity.

Deep North is no stranger to controversy, having reportedly worked with school districts and universities in Texas, Florida, Massachusetts, and California to pilot a security system that uses AI and cameras to detect threats. Deep North claims that the system, which it has since discontinued, worked with cameras with resolutions as low as 320p and could interpret people’s behavior while identifying objects like unattended bags and potential weapons.

Deep North is also testing systems in partnership with the U.S. Transportation Security Administration, which furnished it with a grant last March. The company received close to $200,000 in funding to provide metrics like passenger throughput, social distancing compliance, agent interactions, and bottleneck zones as well as reporting of unattended baggage, movement in the wrong direction, or occupying restricted areas.

“We are humbled and excited to be able to apply our innovations to help TSA realize its vision of improving passenger experience and safety throughout the airport,” Sanil said in a statement. “We are committed to providing the U.S. Department of Homeland Security and other government entities with the best AI technologies to build a safer and better homeland through continued investment and innovation.”

Deep North admitted in an interview with Swedish publication Breakit that it offers facial characterization services to some customers to estimate age range. And on its website, the startup touts its technologies’ ability to personalize marketing materials depending on a person’s demographics, like gender. But Deep North is adamant that its internal protections prevent it from ascertaining the identity of any person captured via on-camera footage.

“We have no capability to link the metadata to any single individual. Further, Deep North does not capture personally identifiable information (PII) and was developed to govern and preserve the integrity of each and every individual by the highest possible standards of anonymization,” Sanil told TechCrunch in March 2020. “Deep North does not retain any PII whatsoever, and only stores derived metadata that produces metrics such as number of entries, number of exits, etc. Deep North strives to stay compliant with all existing privacy policies including GDPR and the California Consumer Privacy Act.”

To date, 47-employee Deep North has raised $42.3 million in venture capital.


Categories
Security

T-Mobile investigating report of customer data breach that reportedly involves 100 million people

T-Mobile confirmed Sunday that it’s looking into an online forum post that claims to be selling a large trove of its customers’ sensitive data. Motherboard reported that it was in contact with the seller of the data, who said they had taken data from T-Mobile’s servers that included Social Security numbers, names, addresses, and driver’s license information related to more than 100 million people. After reviewing samples of the data, Motherboard reported it appeared authentic.

“We are aware of claims made in an underground forum and have been actively investigating their validity,” a T-Mobile spokesperson said in an email to The Verge. “We do not have any additional information to share at this time.”

It’s not clear when the data may have been accessed, but T-Mobile has been the target of several data breaches in the last few years, most recently in December 2020. During that incident, call-related information and phone numbers for some of its customers may have been exposed, but the company said at the time that it did not include more sensitive info such as names or Social Security numbers.

In 2018, hackers accessed personal information for roughly 2 million T-Mobile customers that included names, addresses, and account numbers, and in 2019, some of T-Mobile’s prepaid customers were affected by a breach that also accessed names, addresses, and account numbers.

A March 2020 breach exposed some T-Mobile customers’ financial information, Social Security numbers, and other account information.


Categories
Game

8 Spooky Halloween Games for People Who Don’t Like Horror

October is spooky season, when vampires, zombies, and cat people come out to play. A lot of people live for Halloween and look forward to the horrorfest every year. Unfortunately, I’m a weenie. I don’t like horror — never have, never will.

Still, I want to play some fitting games to get in the spirit of the season, so I’ve been finding some less scary options. They’re games that only kind of give you the chills or take place in pretty autumn settings, perfect for channeling your inner pumpkin spice. They’re also critically acclaimed games that should at least give you something to talk to your friends about, even if you end up not liking them. Best of all, they won’t keep you up at night or make the game barely playable because you’re afraid to turn the corner.

Here are a few picks for people who don’t like horror. Alternatively, if you’re not a weenie, you can instead look at our recommendations for the best horror games of all time.

Luigi’s Mansion 3

Luigi’s Mansion 3 replaces the typical haunted mansion with a haunted hotel. Luigi, Mario, and Princess Peach accept an invite to a luxurious resort, not realizing that it’s actually a bed of supernatural activity. Our hero Luigi wakes up from a nap in his hotel room to find everyone gone and the hotel overrun by ghosts. He, along with his trusty ghost dog Polterpup, must clean up the hotel’s ghostly infestation and save his brother and friends from the vengeful King Boo.

Luigi’s Mansion fits the Halloween theme with its haunted vibe — think Casper the Friendly Ghost. It’s not meant to terrify players as much as it is to tell a story that happens to have ghosts. Plus, its cartoonish graphics lighten the blow when it comes to scares. It’s easier to downplay any spookiness when a ball-nosed cartoon plumber is running around vacuuming blob-bodied ghosts. No eerily realistic rotting skin, chilling background music, or blood baths to be seen in these haunted halls.

Luigi’s Mansion 3 is available on the Nintendo Switch. Other Luigi’s Mansion games are fine picks for Halloween too, but Luigi’s Mansion 3 is the most modern one. As Digital Trends’ review put it, “Exploring all the different rooms with all the carefully added details and clever ghost encounters has a greater impact than it did in previous games.”

Little Nightmares 2

Little Nightmares 2 takes place in a dark, cluttered universe eerily similar to our own. Mono, a young boy wearing a paper bag over his head, finds himself trapped in this world that’s been distorted by a mysterious signal tower. He meets Six, the little girl wearing a yellow raincoat from Little Nightmares, and the two work together to uncover the secrets of the tower and save Six from her fate.

This prequel to Little Nightmares scares players in an unsettling, bubbling-at-the-pit-of-your-stomach kind of way. Its shadowy, bleak setting and silent protagonists moving through a dangerous world full of spooky residents stir up a sense of unease. I kept it on this list because its suspense-filled story convinced me to press on through cryptic corridors, even though I found it scarier than what I’m used to.

Jump into this realm of nightmares on PC, PS4, PS5, Xbox One, Xbox Series X/S, Nintendo Switch, and Google Stadia. Alternatively, you can play Very Little Nightmares on mobile devices, which changes gameplay and graphics for a more easygoing spinoff adventure.

Famicom Detective Club

Famicom Detective Club is a series, not just one title. Famicom Detective Club: The Missing Heir introduces an amnesiac protagonist who discovers that he’s a detective in the middle of solving a murder related to the wealthy Ayashiro family. On the other hand, Famicom Detective Club: The Girl Who Stands Behind stars the same protagonist before the events of Missing Heir. He investigates the murder of a schoolgirl alongside her best friend and leader of the Detective Club, Ayumi Tachibana.

Both games count as murder mystery visual novels. They aren’t made to be horror in a way that invokes a sense of creeping unease like some other titles on this list. However, they still involve murder and dead people in a way that might be entertaining for a late evening playthrough. The murders are also tied to urban legend ghost stories, which match the Halloween spirit. Overall, the series checks off most elements of a scary story while keeping things light.

Both Famicom Detective Club games are available on the Nintendo Switch. You can buy one first to try out the series, or buy the entire bundle upfront, which is slightly cheaper than buying each game separately.

What Remains of Edith Finch

What Remains of Edith Finch takes place through the eyes of Edith Finch, the last surviving member of the Finch family. Edith explores the abandoned Finch mansion to find out why she’s the only one left. It’s essentially an anthology of short stories about each Finch family member. You play through each Finch’s life through various interactive means until their untimely deaths.

Edith Finch has a rainy night vibe to it. Edith doesn’t seem to be in any immediate, bloody danger, but she is investigating her spooky family curse. The story explores themes of what people leave behind, but in an unnerving way that reminds us life is fleeting and our actions have consequences. As Creative Director Ian Dallas told Digital Trends, “What we’re really interested in is exploring a moment that feels very beautiful, but also a little unsettling.”

Find out what exactly remains of Edith Finch on PC, PlayStation 4, Xbox One, Nintendo Switch, and iOS.

Lost in Random

Lost in Random tells the story of one sister’s quest to save her sibling from a twisted fairytale. In the Kingdom of Random, children roll a magical die on their twelfth birthday to decide where they live for the rest of their lives. Odd rolls a six, which should mean a life of luxury in the Queen’s Castle. However, one year later, her sister Even receives a signal that indicates Odd might be in danger. Even meets a sentient die named Dicey and the two fight through different districts to save Odd.

It’s a Tim Burton-esque adventure with spindly 3D characters, dreamy settings, and mean-looking monsters. Lost in Random might have a spooky premise, with the kidnapped sister and all, but it’s also a heartwarming tale that never strays into outright nightmare territory. Even the promotional materials, like the trailer, scream storybook come to life. Digital Trends’ reviewer mentioned some cons, but it succeeded in entertaining them with well-developed (if long-winded) characters and an intriguing mystery.

Lost in Random is available on PC, PS4, PS5, Xbox One, Xbox Series X/S, and the Nintendo Switch.

Night in the Woods

Night in the Woods stars college dropout Mae Benson in her return to her run-down hometown of Possum Springs. Players help Mae cope with her feelings of aimlessness while uncovering something sinister brewing behind the suburban normalcy of the Western Pennsylvania-based town. It’s a hybrid genre adventure game that’s sure to entertain with its variety of mini-games and humorous, thoughtful dialogue.

It takes place in the fall, but that’s not the only reason why it’s a Halloween game. There’s a murder mystery subplot underneath this coming-of-age story. Play through the events from Halloween (or the game’s version of it) all the way to winter to reconnect with old friends and uncover the shady happenings that might have to do with missing people in Possum Springs.

This game might be for you if you’re looking for a young adult novel in the form of an indie hybrid adventure game with ghostly undertones. It’s available on basically every gaming platform now including PC, PS4, Xbox One, Nintendo Switch, and mobile devices. It’s definitely one of the better indie games still worth playing in 2021 and beyond.

Doki Doki Literature Club

Doki Doki Literature Club seems like an innocent high school dating sim, but it’s actually a psychological horror game that subverts the genre. You play as a faceless protagonist who has the option of picking between three girls: Sayori, your cheerful childhood friend; Yuri, the shy beauty with a possessive side; and Natsuki, the small, feisty girl with a temper. There’s also Monika, the non-romanceable club president.

It starts off as a fairly standard sim where the player composes poems with words that represent each girl to strengthen their bonds with them. Then, everything changes when a certain cataclysmic event corrupts the entire game. Players then witness a darkening narrative with each scene until the big reveal at “the end.” This so-called sim might be for you if you like anime tropes, philosophical discussions, and creepypasta.

Doki Doki Literature Club is one of the best free-to-play games you can get on PC, PS4, and Xbox One. It also has an expansion called Doki Doki Literature Club Plus!, which includes additional content. In addition to the aforementioned platforms, the expansion is also available on the Nintendo Switch.

Oxenfree

Oxenfree starts as what seems like a typical coming-of-age story before the main characters discover a ghostly rift. You play as a teenage girl named Alex, who travels to Edwards Island with her friend Ren and new stepbrother Jonas to meet up for a weekend trip. There, they meet Clarissa, the former girlfriend of Alex’s dead brother, and Nona, Clarissa’s best friend. But just as these friends start to explore the abandoned island, their weekend getaway shifts into something spooky.

Oxenfree relies on Alex’s decisions to drive the narrative to one of the multiple endings. Players uncover Edwards Island’s dark past and determine what ultimately happens to this band of friends. Decisions can get complicated, especially with the supernatural elements like time travel, pocket dimensions, and ghosts in the story. It’s never really the bloody kind of horror, but jump scares and suspenseful moments can get a rise out of players.

Oxenfree is available on PC, PS4, PS5, Xbox One, Xbox Series X/S, and Nintendo Switch. Its sequel doesn’t come out until 2022, so you have plenty of time to catch up on the original game on your platform of choice.


Categories
Security

T-Mobile data breach exposed the personal info of more than 47 million people

T-Mobile has released more information about its most recent data breach, and while the company’s findings fall short of the reported 100 million records, the numbers are staggering.

While saying its investigation is still ongoing, the company confirmed that records of over 40 million “former or prospective customers” who had previously applied for credit and 7.8 million postpaid customers (those who currently have a contract) were stolen. In its last earnings report (PDF), T-Mobile said it had over 104 million customers.

The data in the stolen files contained critical personal information, including first and last names, dates of birth, Social Security numbers, and driver’s license / ID numbers — the kind of information you could use to set up an account in someone else’s name or hijack an existing one. It apparently did not include “phone numbers, account numbers, PINs or passwords.”

That isn’t the end of it, either, as over 850,000 prepaid T-Mobile customers were also victims of the breach, and for them, the exposed data includes “names, phone numbers, and account PINs.” Affected customers have already had their PINs reset and will receive a notification “right away.” There was also unspecified information accessed for inactive prepaid accounts. However, T-Mobile says, “No customer financial information, credit card information, debit or other payment information or SSN was in this inactive file.”

T-Mobile:

Customers trust us with their private information and we safeguard it with the utmost concern. A recent cybersecurity incident put some of that data in harm’s way, and we apologize for that. We take this very seriously, and we strive for transparency in the status of our investigation and what we’re doing to help protect you.

The notice includes boilerplate language saying that “We take our customers’ protection very seriously,” but it rings especially hollow from T-Mobile considering that this is at least the fourth data breach exposed in the last few years, including one in January. According to the company’s statement, its investigation began based on a report of someone claiming in an online forum that they had compromised T-Mobile’s servers. A spokesperson for the FCC says in a statement that “Telecommunications companies have a duty to protect their customers’ information. The FCC is aware of reports of a data breach affecting T-Mobile customers and we are investigating.”

A Twitter account advertising stolen data for sale claimed the attack affected all 100 million customers and included IMEI / IMSI data for 36 million customers that could uniquely identify specific devices or SIM cards, but T-Mobile’s announcement does not confirm that is the case.

T-Mobile has added a page on its site where customers can go for information as well as shortcuts to change their PINs and passwords. It’s offering two years of free identity protection services from McAfee, recommends postpaid customers change their PIN, and mentions its Account Takeover Protection capabilities to prevent SIM-swapping attacks.

Update August 18th, 4:49PM ET: Added link and information regarding T-Mobile’s dedicated site, and its apology statement.

Update August 19th, 10:30AM ET: Added statement from the FCC.


Categories
AI

AmplifAI’s data-powered people enablement platform gets an $18.5M investment

Can artificial intelligence and data inspire employees to perform at their very best? If you ask Sean Minter, CEO and founder of the data-driven people enablement platform AmplifAI, the answer is an unequivocal yes. And as of today, the six-year-old startup has a lot more cash on hand to help it make the case for an AI-enhanced human workforce.

“Our view of AI is that it isn’t here to replace people,” Minter said. “We use AI to enable people to become better at what they do, and let AI become their trainer and their coach.”

AmplifAI offers a SaaS self-learning platform that plugs into a variety of company data sources to analyze employee performance and deliver personalized feedback and actionable suggestions to team members and managers at every level of the org chart. Using a company’s highest-performing employees as a benchmark, AmplifAI uses its own proprietary data smarts to generate performance-related recommendations aimed at elevating others in the organization. In short, it’s data-powered professional development.

After expanding its user base tenfold over the last year, AmplifAI just announced an $18.5 million Series A funding round led by Greycroft, with LiveOak Venture Partners, Dallas Venture Partners, and Capital Factory chipping in. AmplifAI, which is utilized by teams at brands like The Home Depot and Omnicare365, plans to use these new funds to scale its AI-driven platform and put more sales and support resources in place internally as it continues to grow.

AI for a people-focused problem

Minter, a self-described serial entrepreneur, knew there was a problem that needed solving when he was running an organization with over 15,000 employees around the world. For him, one of the biggest challenges of managing such a sizable workforce was explaining variances in performance among employees and clients, especially when a particular department — say, tech support or customer service — was inexplicably performing well in one locale, but not another.

“It’s an age-old problem,” Minter said. “When you have a big group of people, you’re going to have some people that do really good and some that don’t do so good. Why does that happen? How do I enable everybody else to become a better performer?”

To solve this mystery, Minter started with data. His team built out an engine for ingesting and aggregating data from a variety of company sources, like CRM software, collaboration platforms, and pretty much any employee-facing tool from which useful data could be extracted easily — or at least, somewhat easily.

“The number one complexity of implementing any new client [on AmplifAI] is that there’s no standardization of data,” Minter said. “Every client has different datasets, different systems, different capabilities, different needs.”

To rectify this, AmplifAI built its own proprietary data ingestion system that integrates with dozens of platforms and accounts for the wide range of data formats, schemas, and APIs flowing out of various other enterprise tools. Once ingested and aggregated, this multi-faceted pool of workforce data can then be analyzed and put to good use.
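
AmplifAI’s ingestion layer is proprietary, but the general pattern the passage describes, mapping differently shaped source records into one common schema via per-source adapters, can be sketched roughly like this (the sources, field names, and metrics below are all hypothetical):

```python
# Hypothetical sketch of normalizing heterogeneous source records into
# one common schema. The sources, fields, and metrics are invented;
# AmplifAI's real ingestion system is proprietary.
from dataclasses import dataclass

@dataclass
class PerformanceRecord:
    employee_id: str
    metric: str
    value: float

# One small adapter per source translates that source's row shape
# into the common schema.
def from_crm(row: dict) -> PerformanceRecord:
    return PerformanceRecord(row["agent"], "deals_closed", float(row["closed"]))

def from_helpdesk(row: dict) -> PerformanceRecord:
    return PerformanceRecord(row["assignee_id"], "tickets_resolved", float(row["resolved_count"]))

ADAPTERS = {"crm": from_crm, "helpdesk": from_helpdesk}

def ingest(source: str, rows: list) -> list:
    """Route each raw row through the adapter registered for its source."""
    return [ADAPTERS[source](r) for r in rows]

records = ingest("crm", [{"agent": "e1", "closed": "7"}])
print(records[0].metric)  # deals_closed
```

Each new client system then needs only one adapter, which keeps the lack of standardization Minter mentions contained at the edge of the pipeline.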

AmplifAI uses unsupervised learning to read this data, understand the organization’s workforce overall, and suss out which employees are high-performing, which are average, and which are performing below average. With that crucial nugget of intel, AmplifAI is then able to create a persona of what a high-performing employee looks like and generate and deliver what it calls “next best actions” for managers, coaches, and other employees who aspire to reach the level, if not the pay grade, of their high-performing colleagues.
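
AmplifAI has not published its models, so purely as an illustrative stand-in, an unsupervised tiering step like the one described might look like a minimal one-dimensional k-means (k=3) over a composite performance score:

```python
# Illustrative only: AmplifAI's real models are unpublished. A tiny
# 1-D k-means (k=3) over composite performance scores shows the shape
# of an unsupervised high/average/low split.

def kmeans_1d(values, k=3, iters=50):
    # Seed the three centers at the min, mean, and max of the scores.
    centers = [min(values), sum(values) / len(values), max(values)]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for v in values:
            nearest = min(range(k), key=lambda i: abs(v - centers[i]))
            clusters[nearest].append(v)
        centers = [sum(c) / len(c) if c else centers[i] for i, c in enumerate(clusters)]
    return sorted(centers)

scores = {"ana": 92, "bo": 55, "cy": 88, "di": 40, "ed": 70}  # invented scores
centers = kmeans_1d(list(scores.values()))
tiers = {name: ("low", "average", "high")[min(range(3), key=lambda i: abs(s - centers[i]))]
         for name, s in scores.items()}
print(tiers["ana"])  # high
```

The members of the top cluster would then serve as the “persona” benchmark against which recommendations for everyone else are generated.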

Growing people as technology grows

AmplifAI isn’t the first company to take a software-focused approach to workforce management and development. Offerings from bigger players like Oracle, SAP, and Workday, to name just a few, have taken various cracks at this problem over the years. What sets AmplifAI apart for its growing roster of enterprise customers (Minter says the company has yet to lose an RFP) is its unique, homegrown approach to wrangling complex and varied datasets and using the latest in AI technology to make sense of it all.

“What’s enabling us to do this now, as opposed to 10 years ago, is … a lot of the technology that has been built around this, like the cloud capabilities and the AI models that are now available,” Minter said.

As complex and sophisticated as these things can get, the company’s biggest challenge has nothing to do with databases or machine learning at all.

“The most challenging part is not the technology,” Minter said. “It’s changing human behavior. What we’re trying to do is get people to work differently compared to how they used to work.”

Correction: An earlier version of this post said AmplifAI raised $16.5 million when the correct figure was $18.5 million. We regret the error.

VentureBeat

VentureBeat’s mission is to be a digital town square for technical decision-makers to gain knowledge about transformative technology and transact.

Our site delivers essential information on data technologies and strategies to guide you as you lead your organizations. We invite you to become a member of our community, to access:

  • up-to-date information on the subjects of interest to you
  • our newsletters
  • gated thought-leader content and discounted access to our prized events, such as Transform 2021: Learn More
  • networking features, and more

Become a member


Categories
Security

Instagram scammers figured out a way to get paid for banning people

Instagram scammers have developed a lucrative “banning” racket, according to a new report from Motherboard. For around $60, some scammers will get whatever Instagram account you choose banned, friend or foe, and they often make even more money on the back end by helping the targeted users regain access to their accounts.

The process, according to interviews and material reviewed by Motherboard, involves using a verified account to impersonate a target (their name, photo, bio), and then reporting the target as an impersonator to get them banned. Apparently, as long as the target has a human in their profile picture, the method works.

Motherboard writes that other users it spoke to had their accounts banned after being reported for violating Instagram’s policies around suicide and self-harm, a type of content the company has tried to be more proactive about addressing in recent years. These bans could have been caused by any of a variety of scripts that can spam Instagram’s reporting tools without hitting the app’s limits (around 40 reports, apparently).

The business of banning people is very lucrative, according to at least one of the people Motherboard spoke to:

War, the pseudonymous user offering the ban service, told Motherboard in a Telegram message that banning “is pretty much a full time job lol.” They claimed to have made a five-figure sum from selling Instagram bans in under a month.

The fact that many of the businesses offering banning services also offered help getting accounts back, sometimes for anywhere from $3,500 to $4,000, probably doesn’t hurt either. Some users noted that they received offers of account help immediately after their accounts were disabled, and that often the Instagram account that reported them was following the Instagram account that offered help.

Instagram did not immediately reply to a request for comment, but the company told Motherboard that it was investigating sites that offered banning services, and that users should report people they suspect are guilty of that kind of activity.

If you believe your account has been disabled or banned, Instagram offers instructions in its Help Center on how to get it back.




Categories
AI

Kohl’s CTO on empowering people and optimizing supply chain with AI

Say you have a product at a store you want customers to be able to grab off the shelf and buy. How do you balance that with selling the same inventory to people ordering online? How do you decide how much of your inventory you sell online? When someone makes an order online, how do you decide whether to fulfill the order with inventory from stores or your company’s warehouses?

These are just some of the many questions that retail giant Kohl’s wrestled with. The answer the retailer came to, according to Paul Gaffney, Kohl’s chief technology and supply chain officer, was to let AI take a shot at the decision-making.

“When you start allowing machine learning algorithms to make decisions, they sometimes make decisions that aren’t intuitive. They aren’t what the people would make,” Gaffney said.

AI makes a decision

Usually, the deciding factor when picking where to ship from would be shipping cost, Gaffney said at VentureBeat’s Transform 2021 virtual summit. However, it also became clear to the company that when an item is left in inventory at a location where it takes longer to sell, it eventually winds up being marked down, and that hurts the bottom line.

“We had this nagging suspicion we were incurring more markdowns than we needed to. Could we be smarter and say, ‘Hey, how about if we sell the merchandise that we might have placed months ago in a spot where we now know it’s probably not going to sell in that store … so let’s pick it from that store and avoid the future markdown,’” Gaffney said.
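
Kohl’s has not disclosed its actual algorithm, but the trade-off Gaffney describes, shipping cost versus the expected loss from a future markdown, can be sketched as a simple expected-cost comparison (all figures below are invented):

```python
# Invented figures: a toy expected-cost comparison between fulfilling an
# online order from a warehouse or from a store where the item is
# unlikely to sell (and would therefore be marked down later).

def fulfillment_cost(ship_cost, sell_through_prob, markdown_loss):
    # If the item stays put, it gets marked down with probability
    # (1 - sell_through_prob); shipping it from here avoids that loss.
    expected_markdown = (1 - sell_through_prob) * markdown_loss
    return ship_cost - expected_markdown

locations = {
    "warehouse": fulfillment_cost(ship_cost=4.00, sell_through_prob=0.90, markdown_loss=10.00),
    "slow_store": fulfillment_cost(ship_cost=6.50, sell_through_prob=0.20, markdown_loss=10.00),
}
best = min(locations, key=locations.get)
print(best)  # slow_store
```

Despite the higher shipping cost, pulling the unit from the slow store wins because it preempts a likely markdown, which mirrors the counterintuitive machine decisions Gaffney describes.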

Kohl’s turned to partners to develop solutions for its supply chain optimization. Then came the leap of faith.

“What opened a bunch of doors for us was the willingness to say, ‘OK, we’re willing to risk a certain amount of money in the belief in the algorithm, and even if it doesn’t work, that investment in learning was good enough,’” Gaffney said. “And it turned out that it paid off.”

With successes in hand, Kohl’s is reflecting on its use of AI, developing in-house capability to exercise more control over its AI tools, and considering further ways to optimize its stores beyond backend inventory management. For example, the data showed that each store has a different makeup of customers, so the AI decides what kinds of things to display to account for those different groups. Allowing the algorithm to suggest changes to the products on sale at different stores based on customer data resulted in an “enormous positive upside,” Gaffney said.

Human experience

People should “educate themselves” on what machine learning can do, but also understand how these advanced technologies can disrupt people’s work patterns. Enterprises need to think about ways to “purposefully re-engage” people in activities that aren’t conducive to machine learning.

“It’s tempting to treat the adoption of machine learning AI and big data as a technical problem,” Gaffney said. “But it is much more so a human change management problem as well.”



Categories
AI

Zillow utilizes explainer AI, data, to revolutionize how people sell houses

Zillow has been a big name for online home seekers. More than 135 million homes have been listed on the platform, and the company has streamlined the real estate transaction process across home loans, title, and buying. Its success in providing customized search functions, product offerings, and accurate home valuations, with a median error rate of less than 2%, has been thanks to the power of AI.

Zillow’s initial forays into AI in 2005 centered on blackbox models for prediction and accuracy, Stan Humphries, chief analytics officer at Zillow, said at VentureBeat’s virtual Transform 2021 conference on Tuesday. Over the past three or four years, as Zillow started purchasing homes directly from sellers, the company shifted toward explainable frameworks that add context while still delivering the same levels of accuracy as its blackbox models. “That’s been a kind of a fun odyssey,” Humphries said, noting that the results needed to be “understandable and intelligible” to a consumer in the same way as a conversation with a real estate agent. Zillow took inspiration from Comparative Market Assessments (CMAs), the estimated appraisals of a property provided by realtors, to create an algorithm that analyzes three to five similar homes.

“Humans can wrap their heads around [that], and say, ‘Okay, I see that home’s pretty similar, but it’s got an extra bedroom, and now, there’s been some adjustment for that extra bedroom,’ [compared to] a fully ensemble model approach using a ton of different methodologies,” Humphries said.
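
Zillow has not published its comp-selection model, but the CMA-style logic Humphries describes, picking a handful of similar sold homes and adjusting for feature differences such as an extra bedroom, might be sketched like this (the homes, prices, and per-bedroom adjustment are invented):

```python
# Invented homes, prices, and adjustment values: a toy CMA-style
# estimate that picks the k most similar sold homes and adjusts each
# sale price for feature differences, as a realtor would.

BEDROOM_ADJ = 15_000  # assumed dollar value of one extra bedroom

def similarity(target, comp):
    # Smaller is more similar; bedroom differences are weighted heavily.
    return abs(target["sqft"] - comp["sqft"]) + 500 * abs(target["beds"] - comp["beds"])

def cma_estimate(target, sold_homes, k=3):
    comps = sorted(sold_homes, key=lambda h: similarity(target, h))[:k]
    adjusted = [h["price"] + (target["beds"] - h["beds"]) * BEDROOM_ADJ for h in comps]
    return sum(adjusted) / len(adjusted)

target = {"sqft": 1500, "beds": 3}
sold = [
    {"sqft": 1480, "beds": 3, "price": 300_000},
    {"sqft": 1520, "beds": 4, "price": 320_000},  # extra bedroom, adjusted down
    {"sqft": 1510, "beds": 3, "price": 305_000},
    {"sqft": 2400, "beds": 5, "price": 450_000},  # too dissimilar to be a comp
]
print(round(cma_estimate(target, sold)))  # 303333
```

Here the distant five-bedroom home is never selected as a comp, and the four-bedroom comp’s price is adjusted down for its extra bedroom, exactly the kind of adjustment a consumer can follow.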

The move into explainable models helped consumers understand the value of their homes, but also let them inspect the estimate and get a “gut check” against their own intuition, Humphries said. Now that he has seen it is possible to have “the best of both worlds,” with accuracy from blackbox models and intuitiveness from explainable models, Humphries said he wishes Zillow had shifted approaches sooner.

Improving the appraisal model

Zestimate, the AI tool with which Zillow estimates the market value of homes, has improved largely through gains in both the data it draws on and the algorithms behind it.

“We think about our gains that we’re going to make as being data-related, [which include] getting new data features out of that data or their algorithm, or algorithm-related, which is new ways to combine and utilize the features that we’re doing,” Humphries said.

Zillow only used public record data for Zestimate in the past, but now it incorporates information associated with previous sales of comparable homes. Using natural language processing, Zestimate can pull information from what people wrote and said about a property when interacting with Zillow’s representatives. Computer vision has become another rich source of data, mining the images associated with the homes. It makes sense: people look at the appraisals, then look at the homes, and make judgments about which house looks nicer. Zillow had to teach computers to do that same type of work, Humphries said.

In February 2006, Zestimate required 35,000 statistical models to estimate the market value of 2.4 million homes. Now, the tool generates 7 million machine learning models to estimate 110 million homes nationwide.

“There’s been a lot of algorithmic advances in what we’re doing. But behind the scenes, there’s also been a huge amount of additional data that we take in now that we just didn’t back then,” Humphries said.

Zillow recently announced a new release of the Zestimate algorithm, version 6.7. This update introduces a new framework that leverages neural networks within the ensemble approach, making the algorithm much more accurate and decreasing Zillow’s median absolute percent error from 7.6% to 6.9%.

Zillow’s AI journey

The company’s technological innovation has to strike a balance between consumer interest and technological limitations. The team thinks about the pain points for consumers and the products to solve those challenges, but also has to consider what can actually be built. In the case of Zestimate, the context behind the appraisal was what the customer was asking for. The representatives using the tool didn’t ask for insights generated by natural language processing and computer vision because they didn’t even know that would be possible, Humphries said.

Currently, the company is working on having users close their own deals with a human agent. The goal is for this evaluation eventually to be completely machine-generated.

“The customer is kind of our North Star,” Humphries said.



Categories
Tech News

This manual for a face recognition tool shows how much it tracks people

In 2019, the Santa Fe Independent School District in Texas ran a weeklong pilot program with the facial recognition firm AnyVision in its school hallways. With more than 5,000 student photos uploaded for the test run, AnyVision called the results “impressive” and expressed excitement at the results to school administrators.

“Overall, we had over 164,000 detections the last 7 days running the pilot. We were able to detect students on multiple cameras and even detected one student 1100 times!” Taylor May, then a regional sales manager for AnyVision, said in an email to the school’s administrators.

The number gives a rare glimpse into how often people can be identified through facial recognition, as the technology finds its way into more schools, stores, and public spaces like sports arenas and casinos.

May’s email was among hundreds of public records reviewed by The Markup of exchanges between the school district and AnyVision, a fast-growing facial recognition firm based in Israel that boasts hundreds of customers around the world, including schools, hospitals, casinos, sports stadiums, banks, and retail stores. One of those retail stores is Macy’s, which uses facial recognition to detect known shoplifters, according to Reuters. Facial recognition, purportedly AnyVision’s, is also being used by a supermarket chain in Spain to detect people with prior convictions or restraining orders and prevent them from entering 40 of its stores, according to research published by the European Network of Corporate Observatories.

Neither Macy’s nor supermarket chain Mercadona responded to requests for comment.

The public records The Markup reviewed included a 2019 user guide for AnyVision’s software called “Better Tomorrow.” The manual contains details on AnyVision’s tracking capabilities and provides insight into just how people can be identified and followed through its facial recognition software.

The growth of facial recognition has raised privacy and civil liberties concerns over the technology’s ability to constantly monitor people and track their movements. In June, the European Data Protection Board and the European Data Protection Supervisor called for a facial recognition ban in public spaces, warning that “deploying remote biometric identification in publicly accessible spaces means the end of anonymity in those places.”

Lawmakers, privacy advocates, and civil rights organizations have also pushed against facial recognition because of error rates that disproportionately hurt people of color. A 2018 research paper from Joy Buolamwini and Timnit Gebru highlighted how facial recognition technology from companies like Microsoft and IBM is consistently less accurate in identifying people of color and women.

In December 2019, the National Institute of Standards and Technology also found that the majority of facial recognition algorithms exhibit more false positives against people of color. There have been at least three known cases of a Black man being wrongfully arrested based on facial recognition.

“Better Tomorrow” is marketed as a watchlist-based facial recognition program that only detects people who are a known concern. Stores can buy it to detect suspected shoplifters, while schools can upload sexual predator databases to their watchlists, for example.

But AnyVision’s user guide shows that its software is logging all faces that appear on camera, not just people of interest. For students, that can mean having their faces captured more than 1,000 times a week.

And they’re not just logged. Faces that are detected but aren’t on any watchlists are still analyzed by AnyVision’s algorithms, the manual noted. The algorithm groups faces it believes belong to the same person, which can be added to watchlists for the future.

AnyVision’s user guide said it keeps all records of detections for 30 days by default and allows customers to run reverse image searches against that database. That means that you can upload photos of a known person and figure out if they were caught on camera at any time during the last 30 days.
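
Mechanically, both grouping unknown faces and reverse-searching a 30-day detection log come down to nearest-neighbor search over face embeddings. The sketch below is a generic illustration of that technique, not AnyVision’s code; the short vectors stand in for real face embeddings:

```python
import math

# Generic illustration of embedding similarity search, the mechanism
# behind grouping repeat detections of one face and behind reverse
# image search. These vectors are made up; this is not AnyVision code.

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
    return dot / norm

detection_log = {  # 30-day log: detection id -> face embedding
    "cam1_mon": [0.90, 0.10, 0.20],
    "cam3_wed": [0.88, 0.12, 0.22],  # close to cam1_mon: likely the same face
    "cam2_tue": [0.10, 0.90, 0.30],
}

def reverse_search(query_embedding, log, threshold=0.95):
    """Return ids of detections whose embedding is close enough to the query."""
    return [d for d, emb in log.items() if cosine(query_embedding, emb) >= threshold]

matches = reverse_search([0.90, 0.10, 0.20], detection_log)
print(matches)  # ['cam1_mon', 'cam3_wed']
```

Grouping works the same way: detections whose embeddings clear the similarity threshold are merged into one candidate identity, which is why every captured face, watchlisted or not, is useful to such a system.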

The software offers a “Privacy Mode” feature in which it ignores all faces not on a watchlist, while another feature called “GDPR Mode” blurs non-watchlist faces on video playback and downloads. The Santa Fe Independent School District didn’t respond to a request for comment, including on whether it enabled the Privacy Mode feature.

“We do not activate these modes by default but we do educate our customers about them,” AnyVision’s chief marketing officer, Dean Nicolls, said in an email. “Their decision to activate or not activate is largely based on their particular use case, industry, geography, and the prevailing privacy regulations.”

AnyVision boasted of its grouping feature in a “Use Cases” document for smart cities, stating that it was capable of collecting face images of all individuals who pass by the camera. It also said that this could be used to “track [a] suspect’s route throughout multiple cameras in the city.”

The Santa Fe Independent School District’s police department wanted to do just that in October 2019, according to public records.

In an email obtained through a public records request, the school district police department’s Sgt. Ruben Espinoza said officers were having trouble identifying a suspected drug dealer who was also a high school student. AnyVision’s May responded, “Let’s upload the screenshots of the students and do a search through our software for any matches for the last week.”

The school district originally purchased AnyVision after a mass shooting in 2018, with hopes that the technology would prevent another tragedy. By January 2020, the school district had uploaded 2,967 photos of students for AnyVision’s database.

James Grassmuck, a member of the school district’s board of trustees who supported using facial recognition, said he hasn’t heard any complaints about privacy or misidentifications since it’s been installed.

“They’re not using the information to go through and invade people’s privacy on a daily basis,” Grassmuck said. “It’s another layer in our security, and after what we’ve been through, we’ll take every layer of security we can get.”

The Santa Fe Independent School District’s neighbor, the Texas City Independent School District, also purchased AnyVision as a protective measure against school shootings. It has since been used in attempts to identify a kid who had been licking a neighborhood surveillance camera, to kick out an expelled student from his sister’s graduation, and to ban a woman from showing up on school grounds after an argument with the district’s head of security, according to WIRED.

“The mission creep issue is a real concern when you initially build out a system to find that one person who’s been suspended and is incredibly dangerous, and all of a sudden you’ve enrolled all student photos and can track them wherever they go,” Clare Garvie, a senior associate at the Georgetown University Law Center’s Center on Privacy & Technology, said. “You’ve built a system that’s essentially like putting an ankle monitor on all your kids.”

This article by Alfred Ng was originally published on The Markup and was republished under the Creative Commons Attribution-NonCommercial-NoDerivatives license.
