Did data drift in AI models cause the Equifax credit score glitch?


Earlier this year, from March 17 to April 6, 2022, credit reporting agency Equifax had an issue with its systems that led to incorrect credit scores for consumers being reported.

The issue was described by Equifax as a ‘coding issue’ and has led to legal claims and a class action lawsuit against the company. There has been speculation that the issue was somehow related to the company’s AI systems that help to calculate credit scores. Equifax did not respond to a request for comment on the issue from VentureBeat.

“When it comes to Equifax, there is no shortage of finger-pointing,” Thomas Robinson, vice president of strategic partnerships and corporate development at Domino Data Lab, told VentureBeat. “But from an artificial intelligence perspective, what went wrong appears to be a classic issue: errors were made in the data feeding the machine learning model.”

Robinson added that the errors could have come from any number of situations, including labels that were updated incorrectly, data that was manually ingested incorrectly from the source, or an inaccurate data source.


The risks of data drift on AI models

Krishna Gade, cofounder and CEO of Fiddler AI, speculated that another possible cause was a phenomenon known as data drift. Gade noted that, according to reports, the credit scores were sometimes off by 20 points or more in either direction, enough to alter the interest rates consumers were offered or to result in their applications being rejected altogether.

Gade explained that data drift can be defined as the unexpected and undocumented changes to the data structure, semantics and distribution in a model.

He noted that drift can be caused by changes in the world, changes in the usage of a product, or data integrity issues, such as bugs and degraded application performance. Data integrity issues can occur at any stage of a product’s pipeline. Gade commented that, for example, a bug in the front-end might permit a user to input data in an incorrect format and skew the results. Alternatively, a bug in the backend might affect how that data gets transformed or loaded into the model.
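One common way to make the kind of drift Gade describes visible is to compare the distribution of a feature at training time against what the model sees in production. The sketch below uses the population stability index (PSI), a widely used drift metric; the score-like numbers and the 0.2 alert threshold are illustrative assumptions, not anything from the Equifax case:

```python
import numpy as np

def population_stability_index(expected, actual, bins=10):
    """Bin both samples on the reference distribution's quantiles and
    sum the PSI terms. A PSI above ~0.2 is a common rule-of-thumb
    signal that the production distribution has drifted."""
    # Bin edges come from the reference (training) sample.
    edges = np.quantile(expected, np.linspace(0, 1, bins + 1))
    edges[0], edges[-1] = -np.inf, np.inf  # catch out-of-range production values
    exp_counts, _ = np.histogram(expected, bins=edges)
    act_counts, _ = np.histogram(actual, bins=edges)
    # Clip avoids log(0) and division by zero for empty bins.
    exp_pct = np.clip(exp_counts / len(expected), 1e-6, None)
    act_pct = np.clip(act_counts / len(actual), 1e-6, None)
    return float(np.sum((act_pct - exp_pct) * np.log(act_pct / exp_pct)))

rng = np.random.default_rng(0)
training = rng.normal(650, 50, 10_000)    # synthetic "training-time" scores
production = rng.normal(670, 50, 10_000)  # synthetic production scores, shifted up
print(population_stability_index(training, production))
```

A shift of roughly 20 points, as in the reported Equifax discrepancies, is exactly the kind of change a per-feature PSI check run on a schedule would surface before it reaches consumers.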

Data drift is not an entirely uncommon phenomenon, either.

“We believe this happened in the case of the Zillow incident, where they failed to forecast house prices accurately and ended up investing hundreds of millions of dollars,” Gade told VentureBeat.

Gade explained that from his perspective, data drift incidents happen because implicit in the machine learning process of dataset construction, model training and model evaluation is the assumption that the future will be the same as the past.

“In effect, ML algorithms search through the past for patterns that might generalize to the future,” Gade said. “But the future is subject to constant change, and production models can deteriorate in accuracy over time due to data drift.”

Gade suggests that if an organization notices data drift, a good place to start remediation is to check for data integrity issues. The next step is to dive deeper into model performance logs to pinpoint when the change happened and what type of drift is occurring.

“Model explainability measures can be very useful at this stage for generating hypotheses,” Gade said. “Depending on the root cause, resolving a feature drift or label drift issue might involve fixing a bug, updating a pipeline, or simply refreshing your data.”
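The log-diving step Gade recommends can start very simply: scan a daily model-performance series for the first window that falls meaningfully below the training baseline. A minimal sketch, with hypothetical accuracy numbers and thresholds:

```python
from statistics import mean

def find_drift_onset(daily_accuracy, baseline, tolerance=0.05, window=3):
    """Return the index of the first day whose trailing `window`-day
    average accuracy falls more than `tolerance` below `baseline`,
    or None if performance never degrades that far."""
    for i in range(window - 1, len(daily_accuracy)):
        recent = mean(daily_accuracy[i - window + 1 : i + 1])
        if recent < baseline - tolerance:
            return i
    return None

# Hypothetical daily accuracy log: steady at first, then degrading.
log = [0.91, 0.90, 0.92, 0.91, 0.84, 0.82, 0.80, 0.79]
print(find_drift_onset(log, baseline=0.91))  # -> 5
```

Pinpointing the onset date this way narrows the search for a root cause: whatever pipeline change, bug, or upstream data update landed around that day becomes the first suspect.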

Playtime is over for data science

There is also a need for the management and monitoring of AI models. Gade said that robust model performance management techniques and tools are important for every company operationalizing AI in their critical business workflows.

The need for companies to be able to keep track of their ML models and ensure they are working as intended was also emphasized by Robinson.

“Playtime is over for data science,” Robinson said. “More specifically, for organizations that create products with models that are making decisions impacting people’s financial lives, health outcomes and privacy, it is now irresponsible for those models not to be paired with appropriate monitoring and controls.”




Server glitch allowed Eufy owners to see through other homes’ cameras

Last night, a number of Eufy home security camera owners discovered they were able to access smart camera feeds and saved videos from users they had never met, due to an apparent security glitch. First reported by 9to5Mac, the issue came to light in an extended Reddit thread, in which users from around the world detailed their experiences.

“Basically I could see every camera, their front door and backdoor bells, master bedroom, living room, garage, kitchen, their motion recordings, everything,” one Eufy owner noted. “I was wondering what was going on as it still had my email and name as signed in and noticed that some unknown email, I’m guessing of the Hawaii owner, was in my shared guest account.”

Some reported that signing out of their account and signing back in resolved the behavior; by now, whatever problem caused the behavior appears to have been fixed. Still, many users are left concerned that their own cameras and feeds might have been exposed without their knowledge.

“For a security product to become completely unsecure, it’s pretty worrying,” the user continued.

Eufy did not respond to a request for comment from The Verge, but told Android Police that the problem lasted only an hour and did not affect baby monitor products. On Reddit, users highlighted a message sent to customers attributing the issue to a server error:

Dear user,
The issue was due to a bug in one of our servers. This was quickly resolved by our engineering team and our customer service team will continue to assist those affected. We recommend all users to:
1. Please unplug and then reconnect the home base.
2. Log out of the eufy security app and log in again.
Contact for enquiries.

There’s no indication that specific individuals were targeted as part of the bug, but it’s still a troubling behavior for a service that often monitors private homes. Eufy also makes an Echo Dot-style voice assistant called the Genie, although Genie products appear to have been unaffected by the bug.

Update 1:54PM ET: Added Eufy statement to Android Police.



Software glitch causes Tui aircraft to be involved in a “serious incident”

Tui is a UK company that operates commercial airline services. Recently, a flight departing Birmingham airport for Majorca with 187 people on board was involved in what the Air Accidents Investigation Branch classifies as a “serious incident.” The incident wasn’t caused by a failure aboard the aircraft or an error by the pilots in charge.

Rather, the incident was due to a software glitch that caused the flight to take off at a heavier weight than expected. An update to the airline’s reservation system, made while its aircraft were grounded during the coronavirus pandemic, led to 38 passengers on the flight being allocated a child’s standard weight of 35 kilograms rather than the standard adult weight of 69 kilograms. While that may not sound like much of an issue, it significantly skewed the load sheet pilots use to calculate the correct aircraft settings for takeoff.

The result of the software glitch was that the pilots believed the Boeing 737 was about 1,200 kilograms lighter than it actually was, and the aircraft took off using “marginally less” thrust than it should have. While the Air Accidents Investigation Branch classifies this as a “serious incident,” it says the safety of the aircraft and its operation were not compromised.

The same software flaw affected two other Tui flights that left the UK later the same day. All 38 of the passengers who had incorrect weights applied used the prefix “Miss” on boarding paperwork. Investigators describe the software glitch as a “simple flaw” in an IT system that had been programmed in another country.

While that country is unnamed, investigators say that there, the title “Miss” is used for children and “Ms” for adult females. The airline has introduced additional manual checks to ensure that adult females are referred to as “Ms” on the relevant documentation.
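The flaw reduces to a lookup keyed on passenger title whose meaning differs between locales. A hypothetical reconstruction (not Tui’s actual code) shows how 38 adults booked as “Miss” end up counted roughly 1,300 kilograms light, close to the discrepancy reported above:

```python
# Standard passenger weights used on the load sheet (kg), per the article.
STANDARD_WEIGHTS_KG = {"child": 35, "adult": 69}

# Mapping as (reportedly) programmed abroad: "Miss" treated as a child.
TITLE_TO_CATEGORY = {"Mr": "adult", "Ms": "adult", "Mrs": "adult", "Miss": "child"}

def load_sheet_weight(titles):
    """Total passenger weight the flawed system would put on the load sheet."""
    return sum(STANDARD_WEIGHTS_KG[TITLE_TO_CATEGORY[t]] for t in titles)

# 38 adult passengers booked under "Miss" are each counted 34 kg light.
titles = ["Miss"] * 38
shortfall = 38 * (STANDARD_WEIGHTS_KG["adult"] - STANDARD_WEIGHTS_KG["child"])
print(load_sheet_weight(titles), shortfall)  # -> 1330 1292
```

The additional manual checks Tui introduced amount to validating this mapping’s output against the locale where the paperwork is actually used.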
