Categories
Game

UK competition regulator finds Microsoft-Activision deal ‘could lead to competition concerns’

The United Kingdom’s antitrust regulator is concerned that Microsoft’s blockbuster purchase of Activision Blizzard could create a monopoly in the nascent cloud gaming space. The Competition and Markets Authority (CMA), which began investigating the deal back in July, says that it’s not yet reassured by the promises Microsoft has made to get the deal done. It feels that, once Activision is a part of Microsoft, the Xbox maker could use its “control over popular games like Call of Duty and World of Warcraft” to “harm rivals” by boxing them out of access to popular titles. Microsoft has already publicly committed not to hoard exclusives (and has said that Actiblizz’s library isn’t all that anyway), but sweet words haven’t appeased the officials.

In a statement, the CMA said that it was giving Microsoft and Activision five days to submit proposals addressing its concerns. If those do not pass muster, the regulator will open a lengthy “Phase 2” investigation in which an independent panel scrutinizes the deal in greater depth. That would likely delay completion of the deal, which would then be rubber-stamped only if regulators were convinced it would not cause a “substantial lessening of competition.” It’s likely that, whatever happens, Microsoft will need to commit to not using its growing clout to hurt other companies in the space by depriving them of key franchises.

Microsoft’s gaming chief Phil Spencer has already responded to the announcement, affirming the previous pledge not to pull Call of Duty from PlayStation, for instance. Spencer pointed to the cross-platform appeal of Minecraft, a title Microsoft purchased in 2014, as evidence of the company’s good faith. Activision CEO Bobby Kotick published an open letter to employees, saying that the company will “fully cooperate” with regulators, which are taking “appropriate” steps to ensure that there are no risks to competition.


Categories
Security

A Russian-backed malware group is spoofing pro-Ukraine apps, Google finds

“All warfare is based on deception,” Sun Tzu wrote in The Art of War. Some 2,500 years later, the maxim applies to the virtual battlefield as well as the physical.

As the war in Ukraine rages on, researchers from Google have discovered malware from a Russian state-backed group disguised as a pro-Ukraine app. The details were revealed in a blog post published by Google’s Threat Analysis Group (TAG), which specializes in tracking and exposing state-sponsored hacking.

According to TAG, the Cyber Azov app — which invokes Ukraine’s far-right military unit, the Azov Regiment — was actually created by Turla, a Kremlin-backed hacking group known for compromising European and American organizations with malware.

Screenshot from the Cyber Azov website, showing an app labelled “Azov” in the Cyrillic alphabet with a description asking users to “Join Cyber Azov and help stop Russian aggression against Ukraine.” (Image: Google Threat Analysis Group)

Per TAG’s research, the app was distributed through a domain controlled by Turla and had to be manually installed from the APK application file rather than being hosted on the Google Play Store. Text on the Cyber Azov website claimed the app would launch denial-of-service attacks on Russian websites, but TAG’s analysis showed that the app was ineffective for this purpose.

Meanwhile, analysis of the APK file on VirusTotal indicates that many of the biggest anti-malware providers flag it as a malicious app containing a Trojan.
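For readers who want to reproduce that kind of check, here is a minimal sketch of querying VirusTotal’s public v3 API for a file hash; the hash and API key shown are placeholders, and the exact response fields can vary by account tier.

```python
import requests

API_KEY = "YOUR_VIRUSTOTAL_API_KEY"   # placeholder; requires a free VirusTotal account
SHA256 = "0" * 64                     # placeholder; replace with the real SHA-256 of the APK

resp = requests.get(
    f"https://www.virustotal.com/api/v3/files/{SHA256}",
    headers={"x-apikey": API_KEY},
    timeout=30,
)
resp.raise_for_status()

# last_analysis_stats summarizes how many engines flagged the file.
stats = resp.json()["data"]["attributes"]["last_analysis_stats"]
print(f"malicious: {stats.get('malicious', 0)}, undetected: {stats.get('undetected', 0)}")
```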

TAG’s blog post suggests that the number of users who installed the app is small. However, the Cyber Azov domain was still accessible to The Verge on Tuesday morning, meaning more Android users could be tricked into downloading the app. A Bitcoin address listed on the website to solicit donations had not made or received any transactions at the time of publication, lending support to the assessment that the malicious app has not achieved a wide reach. (On the other side of the conflict, Bitcoin and other cryptocurrencies have provided one revenue stream for the Ukrainian government and military thanks to the efforts of the Ukraine-based Kuna exchange.)
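Checking whether a Bitcoin address has ever transacted is straightforward with a public block-explorer API. The sketch below assumes Blockstream’s Esplora endpoint and uses a placeholder address rather than the one Turla listed.

```python
import requests

# Placeholder address; substitute a real one before running.
ADDRESS = "bc1qexampleaddressxxxxxxxxxxxxxxxxxxxxxxx"

resp = requests.get(f"https://blockstream.info/api/address/{ADDRESS}", timeout=30)
resp.raise_for_status()
info = resp.json()

# chain_stats counts confirmed activity; tx_count == 0 means the address was never used on-chain.
chain = info["chain_stats"]
print(f"transactions: {chain['tx_count']}, sats received: {chain['funded_txo_sum']}")
```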

Besides malicious Android apps, TAG also flagged the exploitation of the recently discovered Follina vulnerability in Microsoft Office, which allows hackers to take over computers using maliciously crafted Word documents. The vulnerability had been used by groups linked to the Russian military (GRU) to target media organizations in Ukraine, Google researchers said.

The spoof app uploaded by Turla taps into a significant trend in the cyber dimension of the Russia-Ukraine conflict, namely the participation of a large decentralized base of digital volunteers hoping to aid the Ukrainian cause. Early in the conflict, Anonymous-linked groups scored a number of victories against Russian companies by hacking and leaking sensitive data, although it is unclear what material effect this has had on the course of the war.

Throughout the invasion, Ukraine’s “IT army” has made headlines by carrying out a string of denial-of-service attacks, loosely coordinated through a government-endorsed Telegram channel — an organizational strategy that analysts have described as a groundbreaking approach to cyber and information warfare.


Categories
Security

Daycare monitoring apps are ‘dangerously insecure,’ report finds

Popular daycare and childcare communications apps are “dangerously insecure,” according to newly published research, exposing children and parents to the risk of data breaches with lax security settings and permissive or outright misleading privacy policies.

The details come from a new report from the Electronic Frontier Foundation (EFF), which published the results of a months-long research project on Tuesday.

The research, conducted by Alexis Hancock, EFF’s director of engineering for the Certbot project, found that popular apps like Brightwheel, HiMama, and Tadpoles lacked two-factor authentication (2FA), meaning that any malicious actor who was able to obtain a user’s password could log in remotely. Further analysis of application code revealed a number of other privacy-compromising features, including data sharing with Facebook and other third parties, that were not disclosed in privacy policies.
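To make the stakes concrete, two-factor authentication typically means pairing the password with a short-lived one-time code. Below is a minimal sketch of the standard time-based algorithm (TOTP, RFC 6238) using only Python’s standard library; the shared secret is a placeholder.

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32: str, interval: int = 30, digits: int = 6) -> str:
    """Return the current time-based one-time password for a base32 shared secret."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int(time.time()) // interval              # 30-second time step
    msg = struct.pack(">Q", counter)                    # counter as big-endian 8-byte integer
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                          # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# Placeholder secret; a real deployment provisions one per user and verifies codes server-side.
print(totp("JBSWY3DPEHPK3PXP"))
```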

After being contacted by the EFF, Brightwheel implemented 2FA and claims to be “the first in the early education industry to add this extra layer of security.” HiMama reportedly said that it would pass the feature request on to its design team but has not yet implemented the additional security measure. It is not known whether Tadpoles intends to implement 2FA.

Network traffic analysis shows the Tadpoles app sending user event data to Facebook. (Image: EFF)

Hancock started researching the privacy and security settings of various daycare apps after being asked to download Brightwheel when enrolling her two-year-old daughter in daycare for the first time. Hancock told The Verge that she initially enjoyed using the app to receive updates about her daughter but became concerned about a lack of security given the potentially sensitive nature of the information.

“At first there was a lot of comfort in seeing [my daughter] during the day, with the images they were sending me,” Hancock said. “Then I was looking at the app like, huh, I don’t really see security controls I would normally see in most services like this.”

With a background in software development, Hancock was able to use a range of tools like Apktool and mitmproxy to analyze the application code and investigate network calls being made by each of the childcare apps, and she was surprised to find a number of easily fixable errors.

“I found trackers in a few apps. I found weak security policy, weak password policies,” Hancock said. “I found vulnerabilities that were very easy to fix as I went through some of the applications. Really just low hanging fruit.”
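A minimal sketch of the kind of network inspection Hancock describes is shown below, written as a mitmproxy addon that flags requests to common tracking domains; the domain list and output are illustrative, not taken from the EFF report. Run it with mitmdump -s tracker_audit.py while the device’s traffic is proxied through mitmproxy.

```python
# tracker_audit.py - flag app traffic headed to common tracking endpoints.
from mitmproxy import http

# Illustrative list of third-party hosts to watch for; extend as needed.
TRACKER_HOSTS = (
    "graph.facebook.com",
    "app-measurement.com",
    "firebaseinstallations.googleapis.com",
)

def request(flow: http.HTTPFlow) -> None:
    host = flow.request.pretty_host
    if any(host.endswith(t) for t in TRACKER_HOSTS):
        print(f"[tracker] {flow.request.method} {flow.request.pretty_url}")
```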

The EFF’s new report is not the first to draw attention to serious flaws in applications trusted to keep children safe. For years, researchers have raised concerns over security weaknesses in baby monitor apps and associated hardware, with some of these weaknesses exploited by hackers to send messages to children. More broadly, a survey of 1,000 apps likely to be used by children found that more than two-thirds were sending personal information to the advertising industry.

Hancock hopes that reporting on these privacy and security flaws could lead to better regulation of child-focused apps — but nonetheless, the findings have left her concerned.

“It made me feel, as a parent, even more afraid for my child,” she said. “I don’t want her to have a data breach before she’s five. I’m doing all I can to make sure that doesn’t happen.”


Categories
AI

Federal agencies have almost no facial recognition oversight, report finds

A new report from the Government Accountability Office (GAO) has revealed a near-total lack of accountability among federal agencies using facial recognition built by private companies, like Clearview AI.

Of the 14 federal agencies that said they used privately built facial recognition for criminal investigations, only Immigration and Customs Enforcement was in the process of implementing a list of approved facial recognition vendors and a log sheet for the technology’s use.

The rest of the agencies, including Customs and Border Protection, the Federal Bureau of Investigation, and the Drug Enforcement Administration, had no process in place to track the use of private facial recognition.

This GAO report greatly expands the public’s knowledge of how the federal government uses facial recognition more broadly by distilling which agencies use facial recognition built by the government, which are using third-party vendors, and how large those datasets are in each case. Of 42 federal agencies surveyed, 20 told the oversight agency they used facial recognition in some form, most relying on federal systems maintained by the Department of Defense and Department of Homeland Security.

These federal systems can hold a staggering amount of identities: the Department of Homeland Security’s Automated Biometric Identification System holds more than 835 million identities, according to the GAO report.

Federal agencies were also asked how they used this technology during racial justice protests in the wake of George Floyd’s murder, as well as the Capitol Hill riot on January 6th.

Six agencies, including the FBI, US Marshals Service, and Postal Inspection Service, used facial recognition on “individuals suspected of violating the law” in protests last summer. Three agencies used the technology while investigating the January 6th riot: Capitol Police, Customs and Border Protection, and the Bureau of Diplomatic Security. However, some information was withheld from the GAO investigators because it pertained to active investigations.

The use of this technology on protestors and rioters shows how critical it is to have accountability mechanisms in place. The GAO explains that if these agencies don’t know which facial recognition services they’re using, they have no way to mitigate the enormous privacy, security, or accuracy risks inherent in the technology.

“When agencies use facial recognition technology without first assessing the privacy implications and applicability of privacy requirements, there is a risk that they will not adhere to privacy-related laws, regulations, and policies,” the report says.

In one case, GAO investigators asked a federal agency if it was using facial recognition built by private companies, and the agency said it was not. But after an internal poll, the unnamed agency learned that employees had run such facial recognition searches more than 1,000 times.

Going forward, the GAO has issued 26 recommendations to federal agencies on the continued use of facial recognition. They consist of two identical recommendations for each of the 13 agencies without an accountability mechanism in place: Figure out which facial recognition systems you’re using, and then study the risks of each.


Categories
AI

Infrastructure and data issues hamper companies adopting AI, study finds

More than three-quarters of companies say that they have AI models that never come into use. For 20% of companies, the numbers look even worse, with only 10% of their models making it into production.

That’s according to a new survey commissioned by Run:AI, which found that infrastructure challenges are causing resources to sit idle at companies investing in AI. “[I]f most AI models never make it into production, the promise of AI is not being realized,” Run:AI CEO Omri Geller said in a statement. “Our survey revealed that … data scientists are requesting manual access to GPUs, and the journey to the cloud is ongoing.”

The research, conducted by Global Surveyz, canvassed more than 200 scientists, AI and IT practitioners, and system architects at companies with over 5,000 employees. Just 17% of respondents said that they were able to achieve “high utilization” of their hardware resources, while 22% admitted that their infrastructure sits idle for the most part. That’s despite significant investment — 38% of respondents pegged their company’s annual budget for hardware, software, and cloud fees at more than $1 million; for 15%, the figure tops $10 million.

Implementation challenges

Many challenges stand in the way of successfully embedding AI throughout an organization. In an Alation whitepaper, a clear majority of employees (87%) cited data quality shortcomings as the reason their organizations failed to embrace the technology. Another report — this one from MIT Technology Review Insights and Databricks — found that AI’s business impact is limited by issues in managing its end-to-end lifecycle.

The end result is abysmal adoption rates. According to a 2019 IDC study, only 25% of the organizations already using AI have developed an “enterprise-wide” strategy. A recent Juniper Networks survey is less optimistic, with only 6% of respondents reporting adoption of AI-powered solutions across their business.

In its research, Run:AI identified data inconsistencies as the biggest deployment blocker: 61% of respondents said that data collection, data cleansing, and governance caused deployment problems. Forty-two percent highlighted challenges with their companies’ AI infrastructure and compute capacity, and more than a third said they had to manually request access to resources in order to complete their work.

Data scientists spend the bulk of their time cleaning and organizing data, according to a 2016 survey by CrowdFlower. And respondents to Alation’s latest quarterly State of Data Culture Report said that inherent biases in the data being used in their AI systems produce discriminatory results that create compliance risks for their organizations.
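As a rough illustration of what that cleaning work looks like in practice, the sketch below uses pandas to deduplicate rows, drop records missing a key, and coerce types on a small hypothetical table; the column names and values are invented.

```python
import pandas as pd

# Hypothetical raw export with duplicates, a missing key, and inconsistent types.
raw = pd.DataFrame({
    "customer_id": [1, 1, 2, 3, None],
    "signup_date": ["2021-01-03", "2021-01-03", "2021-02-14", "not a date", "2021-04-11"],
    "monthly_spend": ["120", "120", "85.5", "n/a", "230"],
})

clean = (
    raw.drop_duplicates()                      # remove exact duplicate rows
       .dropna(subset=["customer_id"])         # require the primary key
       .assign(
           signup_date=lambda df: pd.to_datetime(df["signup_date"], errors="coerce"),
           monthly_spend=lambda df: pd.to_numeric(df["monthly_spend"], errors="coerce"),
       )
)
print(clean)
```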

The business value of any AI solution is likely to be limited without clean, centralized data pools or a strategy for actively managing them, Broadridge VP of innovation and growth Neha Singh noted in a recent piece. “McKinsey estimates that companies may be squandering as much as 70% of their data-cleansing efforts,” she wrote. “The key is prioritizing these efforts based on what’s most critical to implement the most valuable use cases.”

Despite the hurdles, Run:AI reports that companies remain committed to AI, putting millions toward infrastructure and likely millions more toward trained staff. Seventy-four percent of survey respondents said that their employers were planning to increase hardware capacity or infrastructure spending in the near future.

“Companies that handle these challenges the most effectively will bring models to market and win the AI race,” Geller continued.


Categories
AI

AI-driven strategies are becoming mainstream, survey finds

Deloitte today released the fourth edition of its State of AI in the Enterprise report, which surveyed 2,857 business decision-makers between March and May 2021 about their perception of AI technologies. Few organizations claim to be completely AI-powered, the responses show, but a significant percentage are beginning to adopt practices that could get them there.

In the survey, Deloitte explored the transformations happening inside firms applying AI and machine learning to drive value. During the pandemic, digitization efforts prompted many companies to adopt AI-powered solutions to back-office and customer-facing challenges. A PricewaterhouseCoopers whitepaper found that 52% of companies have accelerated their AI adoption plans, and global spending on AI systems is set to jump from $85.3 billion in 2021 to over $204 billion in 2025, according to IDC.

However, only 40% of respondents to the Deloitte survey agreed that their employer has an enterprise-wide AI strategy in place. While 66% view AI as critical to their success, only 38% believe that their use of AI differentiates them from competitors and only about one-third say that they’ve adopted “leading operational practices” for AI.

“The risks associated with AI remain top of mind for executives,” Deloitte executive director of the AI institute Beena Ammanath said in a statement. “We found that high-achieving organizations report being more prepared to manage risks associated with AI and confident that they can deploy AI initiatives in a trustworthy way.”

Embracing AI is a marathon, not a sprint

To this end, “AI-fueled” businesses leverage data to deploy and scale AI across core processes in a human-centric way, according to Deloitte. Using data-driven decision-making, they enhance workforce and customer experiences to achieve an advantage, continuously innovating.

Organizations with an enterprise-wide strategy and leaders who communicate a bold vision are nearly twice as likely to achieve high-level outcomes, Deloitte reports. Furthermore, businesses that document and enforce MLOps processes are twice as likely to achieve their goals “to a high degree,” four times more likely to be prepared for AI risks, and three times more confident in their ability to deploy AI products “in a trustworthy way.”

MLOps, a compound of “machine learning” and “information technology operations,” is a newer discipline involving collaboration between data scientists and IT professionals with the aim of productizing machine learning algorithms. MLOps essentially aims to capture and expand on previous operational practices while extending these practices to manage the unique challenges of machine learning.
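As a rough sketch of what “documenting and enforcing MLOps processes” can mean in code, the example below gates a model’s promotion to production on recorded evaluation metrics and keeps an audit trail of the decision. The thresholds, metric names, and registry structure are illustrative, not drawn from the Deloitte report.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Illustrative promotion gates a candidate model must clear.
PROMOTION_GATES = {"min_accuracy": 0.90, "max_latency_ms": 50.0}

@dataclass
class ModelRecord:
    name: str
    version: str
    metrics: dict
    stage: str = "staging"
    history: list = field(default_factory=list)

def promote(record: ModelRecord) -> bool:
    """Move a model to production only if it clears every documented gate, logging the decision."""
    passed = (
        record.metrics.get("accuracy", 0.0) >= PROMOTION_GATES["min_accuracy"]
        and record.metrics.get("latency_ms", float("inf")) <= PROMOTION_GATES["max_latency_ms"]
    )
    record.history.append({
        "time": datetime.now(timezone.utc).isoformat(),
        "decision": "promoted" if passed else "rejected",
        "metrics": dict(record.metrics),
    })
    if passed:
        record.stage = "production"
    return passed

candidate = ModelRecord("churn-model", "1.3.0", {"accuracy": 0.93, "latency_ms": 41.0})
print(promote(candidate), candidate.stage)   # True production
```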

“Becoming an AI-fueled organization is to understand that the transformation process is never complete, but rather a journey of continuous learning and improvement,” Deloitte AI principal Nitin Mittal said.

Companies successfully adopting AI also haven’t ignored cultural and change management, the Deloitte report found. Those investing heavily in change management are 60% more likely to report that their AI initiatives exceed expectations and 40% more likely to achieve their desired goals. As for organizations that have undergone significant changes to workflows or added new roles, they’re almost 1.5 times more likely to achieve outcomes to a high degree, while 83% of the highest-achieving organizations create a diverse ecosystem of partnerships to execute their AI strategy, according to Deloitte.

But only 37% of decision-maker respondents reported a major investment in change management, incentives, or training activities, highlighting roadblocks companies will need to overcome. “By embracing AI strategically and challenging orthodoxies, organizations can define a roadmap for adoption, quality delivery, and scale to create or unlock value faster than ever before,” Deloitte AI principal Irfan Saif said.


Categories
AI

AI datasets are prone to mismanagement, study finds

Public datasets like Duke University’s DukeMTMC are often used to train, test, and fine-tune machine learning algorithms that make their way into production, sometimes with controversial results. It’s an open secret that biases in these datasets could negatively impact the predictions made by an algorithm, for example causing a facial recognition system to misidentify a person. But a recent study coauthored by researchers at Princeton reveals that computer vision datasets, particularly those containing images of people, present a range of ethical problems.

Generally speaking, the machine learning community now recognizes mitigating the harms associated with datasets as an important goal. But these efforts could be more effective if they were informed by an understanding of how datasets are used in practice, the coauthors of the report say. Their study analyzed nearly 1,000 research papers that cite three prominent datasets — DukeMTMC, Labeled Faces in the Wild (LFW), and MS-Celeb-1M — and their derivative datasets, as well as models trained on the datasets. The top-level finding is that the creation of derivative datasets and models, together with a lack of clarity around licensing, introduces major ethical concerns.

Auditing datasets

DukeMTMC, LFW, and MS-Celeb-1M contain up to millions of images curated to train object- and people-recognizing algorithms. DukeMTMC draws from surveillance footage captured on Duke University’s campus in 2014, while LFW contains photos of faces scraped from various Yahoo News articles. MS-Celeb-1M, meanwhile, which was released by Microsoft in 2016, comprises facial photos of roughly 100,000 different people.

Problematically, two of the datasets — DukeMTMC and MS-Celeb-1M — were used by corporations tied to mass surveillance operations. Worse still, all three contain at least some people who didn’t give their consent to be included, despite Microsoft’s insistence that MS-Celeb-1M featured only “celebrities.”

In response to blowback, the creators of DukeMTMC and MS-Celeb-1M took down their respective datasets, while the University of Massachusetts, Amherst team behind LFW updated its website with a disclaimer prohibiting “commercial applications.” However, according to the Princeton study, these retractions fell short of making the datasets unavailable and actively discouraging their use.

The coauthors found that offshoots of MS-Celeb-1M and DukeMTMC containing the entire original datasets remain publicly accessible. MS-Celeb-1M, while taken down by Microsoft, survives on third-party sites like Academic Torrents. Twenty GitHub repositories host models trained on MS-Celeb-1M. And both MS-Celeb-1M and DukeMTMC have been used in over 120 research papers 18 months after the datasets were retracted.

The retractions present another challenge, according to the study: a lack of license information. While the DukeMTMC license can be found in GitHub repositories of derivatives, the coauthors were only able to recover the MS-Celeb-1M license — which prohibits the redistribution of the dataset or derivatives — from an archived version of its now-defunct website.

Derivatives and licenses

Creating new datasets from subsets of original datasets can serve a valuable purpose, for example enabling new AI applications. But altering their composition through annotations and post-processing can lead to unintended consequences, raising responsible use concerns, the Princeton researchers note.

For example, a derivative of DukeMTMC — DukeMTMC-ReID, a “person re-identification benchmark” — has been used in research projects for “ethically dubious” purposes. Multiple derivatives of LFW label the original images with sensitive attributes including race, gender, and attractiveness. SMFRD, a spin-off of LFW, adds face masks to its images — potentially violating the privacy of those who wish to conceal their face. And several derivatives of MS-Celeb-1M align, crop, or “clean” images in a way that might impact certain demographics.

Derivatives, too, expose the limitations of licenses, which are meant to dictate how datasets may be used, derived from, and distributed. MS-Celeb-1M was released under a Microsoft Research license agreement, which specifies that users may “use and modify [the] corpus for the limited purpose of conducting non-commercial research.” However, the legality of using models trained on MS-Celeb-1M data remains unclear. As for DukeMTMC, it was made available under a Creative Commons license, meaning it can be shared and adapted as long as (1) attribution is given, (2) it’s not used for commercial purposes, (3) derivatives are shared under the same license, and (4) no additional restrictions are added to the license. But as the Princeton coauthors note, there are many possible ambiguities in a “non-commercial” designation for a dataset, such as how nonprofits and governments may apply the dataset.

Choice

To address these and other ethical issues with AI datasets, the coauthors recommend that dataset creators be precise in license language about how datasets can be used and prohibit potentially questionable uses. They also advocate ensuring licenses remain available even if, as in the case of MS-Celeb-1M, the website hosting the dataset becomes unavailable.

Beyond this, the Princeton researchers say that creators should continuously steward a dataset, actively examine how it may be misused, and make updates to license, documentation, or access restrictions as necessary. They also suggest that dataset creators use “procedural mechanisms” to control derivative creation, for example, by requiring explicit permission to be obtained to create a derivative.

“At a minimum, dataset users should comply with the terms of use of datasets. But their responsibility goes beyond compliance,” the coauthors wrote. “The machine learning community is responding to a wide range of ethical concerns regarding datasets and asking fundamental questions about the role of datasets in machine learning research. We provide a new perspective … Through our analysis of the life cycles of three datasets, we showed how developments that occur after dataset creation can impact the ethical consequences, making them hard to anticipate a priori.”


Categories
AI

Audit finds gender and age bias in OpenAI’s CLIP model

In January, OpenAI released Contrastive Language-Image Pre-training (CLIP), an AI model trained to recognize a range of visual concepts in images and associate them with their names. CLIP performs quite well on classification tasks — for instance, it can caption an image of a dog “a photo of a dog.” But according to an OpenAI audit conducted with Jack Clark, OpenAI’s former policy director, CLIP is susceptible to biases that could have implications for people who use — and interact with — the model.

Prejudices often make their way into the data used to train AI systems, amplifying stereotypes and leading to harmful consequences. Research has shown that state-of-the-art image-classifying AI models trained on ImageNet, a popular dataset containing photos scraped from the internet, automatically learn humanlike biases about race, gender, weight, and more. Countless studies have demonstrated that facial recognition is susceptible to bias. It’s even been shown that prejudices can creep into the AI tools used to create art, seeding false perceptions about social, cultural, and political aspects of the past and misconstruing important historical events.


Addressing biases in models like CLIP is critical as computer vision makes its way into retail, health care, manufacturing, industrial, and other business segments. The computer vision market is anticipated to be worth $21.17 billion by 2028. But biased systems deployed on cameras to prevent shoplifting, for instance, could misidentify darker-skinned faces more frequently than lighter-skinned faces, leading to false arrests or mistreatment.

CLIP and bias

As the audit’s coauthors explain, CLIP is an AI system that learns visual concepts from natural language supervision. Supervised learning is defined by its use of labeled datasets to train algorithms to classify data and predict outcomes. During the training phase, CLIP is fed with labeled datasets that tell it which output is related to each specific input value. The supervised learning process progresses by constantly measuring the resulting outputs and fine-tuning the system to get closer to the target accuracy.

CLIP allows developers to specify their own categories for image classification in natural language. For example, they might choose to classify images in animal classes like “dog,” “cat,” and “fish.” Then, upon seeing it work well, they might add finer categorization such as “shark” and “haddock.”
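That workflow maps directly onto OpenAI’s open-source CLIP package. The sketch below, closely following the library’s published usage, scores an image against developer-chosen labels; the image path and label list are placeholders.

```python
import clip                      # pip install git+https://github.com/openai/CLIP.git
import torch
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/32", device=device)

# Developer-defined categories, expressed in natural language.
labels = ["a photo of a dog", "a photo of a cat", "a photo of a fish"]

image = preprocess(Image.open("example.jpg")).unsqueeze(0).to(device)   # placeholder image path
text = clip.tokenize(labels).to(device)

with torch.no_grad():
    logits_per_image, _ = model(image, text)
    probs = logits_per_image.softmax(dim=-1).squeeze(0).tolist()

for label, p in zip(labels, probs):
    print(f"{label}: {p:.3f}")
```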

Customization is one of CLIP’s strengths — but also a potential weakness. Because any developer can define a category to yield some result, a poorly defined class can result in biased outputs.

The auditors carried out an experiment in which CLIP was tasked with classifying 10,000 images from FairFace, a collection of over 100,000 photos showing White, Black, Indian, East Asian, Southeast Asian, Middle Eastern, and Latinx people. With the goal of checking for biases in the model that might disadvantage certain demographic groups, the auditors added “animal,” “gorilla,” “chimpanzee,” “orangutan,” “thief,” “criminal,” and “suspicious person” to the existing categories in FairFace.


The auditors found that CLIP misclassified 4.9% of the images into one of the non-human categories they added (e.g., “animal,” “gorilla,” “chimpanzee,” “orangutan”). Out of these, photos of Black people had the highest misclassification rate at roughly 14%, followed by people 20 years old or younger of all races. Moreover, 16.5% of men and 9.8% of women were misclassified into classes related to crime, like “thief,” “suspicious person,” and “criminal” — with younger people (again, under the age of 20) more likely to fall under crime-related classes (18%) compared with people in other age ranges (12% for people aged 20-60 and 0% for people over 70).
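The per-group figures above come down to simple bookkeeping: count, for each demographic group, how often the predicted label falls in a forbidden category. A toy sketch with invented data is shown below.

```python
from collections import defaultdict

NON_HUMAN = {"animal", "gorilla", "chimpanzee", "orangutan"}

# Invented (group, predicted_label) pairs standing in for the FairFace audit output.
predictions = [
    ("Black", "animal"), ("Black", "person"), ("White", "person"),
    ("East Asian", "person"), ("White", "person"), ("Indian", "animal"),
]

totals, errors = defaultdict(int), defaultdict(int)
for group, label in predictions:
    totals[group] += 1
    if label in NON_HUMAN:
        errors[group] += 1

for group in sorted(totals):
    print(f"{group}: {errors[group] / totals[group]:.1%} misclassified as non-human")
```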


In subsequent tests, the auditors tested CLIP on photos of female and male members of the U.S. Congress. At a higher confidence threshold, CLIP labeled people “lawmaker” and “legislator” across genders. But at lower thresholds, terms like “nanny” and “housekeeper” began appearing for women and “prisoner” and “mobster” for men. CLIP also disproportionately attached labels to do with hair and appearance to women, for example “brown hair” and “blonde.” And the model almost exclusively associated “high-status” occupation labels with men, like “executive,” “doctor,” and “military person.”

Paths forward

The auditors say their analysis shows that CLIP inherits many gender biases, raising questions about what sufficiently safe behavior may look like for such models. “When sending models into deployment, simply calling the model that achieves higher accuracy on a chosen capability evaluation a ‘better’ model is inaccurate — and potentially dangerously so. We need to expand our definitions of ‘better’ models to also include their possible downstream impacts, uses, [and more],” they wrote.

In their report, the auditors recommend “community exploration” to further characterize models like CLIP and develop evaluations to assess their capabilities, biases, and potential for misuse. This could help increase the likelihood that models are used beneficially and shed light on the gap between models with superior benchmark performance and models that deliver real-world benefit, the auditors say.

“These results add evidence to the growing body of work calling for a change in the notion of a ‘better’ model — to move beyond simply looking at higher accuracy at task-oriented capability evaluations and toward a broader ‘better’ that takes into account deployment-critical features, such as different use contexts and people who interact with the model, when thinking about model deployment,” the report reads.


Categories
AI

AI startup funding remained strong in Q2, report finds

The pandemic spurred investments in AI across nearly every industry. That’s according to CB Insights’ AI in the Numbers Q2 2021 report, which found that AI startups attracted record funding — more than $20 billion — despite a drop in deal volume.

While the adoption rate varies between businesses, a majority of them — 95% in a recent S&P Global report — consider AI to be important in their digital transformation efforts. Organizations were expected to invest more than $50 billion in AI systems globally in 2020, according to IDC, up from $37.5 billion in 2019. And by 2024, investment is expected to reach $110 billion.

The U.S. led as an AI hub in Q2, according to CB Insights, attracting 41% of AI startup venture equity deals, about level with the previous quarter and up from 39% a year earlier. Meanwhile, China remained second to the U.S., with an uptick of 17% quarter-over-quarter.

AI startup funding in Q2 was driven mostly by “mega-rounds,” or deals worth $100 million or more. A total of 24 companies reached $1 billion “unicorn” valuations for the first time, AI exits increased 125% from the previous quarter, and AI initial public offerings (IPOs) reached an all-time quarterly high of 11.

Unicorn valuations

Cybersecurity and processor companies led the wave of newly minted unicorns, with finance and insurance, as well as retail and consumer packaged goods, following close behind. Meanwhile, health care AI continued to have the largest deal share, accounting for 17% of all AI deals in Q2.

Overall mid-stage deal share — i.e., series B and series C — reached an all-time high of 26% during Q2, while late-stage deal share — series D and beyond — remained tied with its Q1 2021 record of 9%. But the news wasn’t all positive. CB Insights found that seed, angel, and series A deals took a downward trend, making up only 55% of Q2 deals, with corporate venture backing leveling out. Just 39% of all deals for AI startups included participation from a corporate or corporate venture capital investor, up slightly from 31% in Q1 2021.


But CB Insights says that the rise in AI startup exits in Q2 reflects the strength of the sector. “The decline of early-stage deals and increase of mid- and late-stage deals hint at a maturing market — however, early-stage rounds still represent the majority of AI deals,” analysts at the firm wrote. “Plateauing [corporate] participation in AI deals may reflect a stronger focus on internal R&D or corporations choosing to develop relationships with AI portfolio companies instead of sourcing new deals.”

Experts predict that the AI and machine learning technologies market will reach $191 billion by the year 2025, a jump from the approximately $40 billion it’s valued at currently. In a recent survey, Appen found that companies increased investments by 4.6% on average in 2020, with a plan to invest 8.3% per year over the next three years.


Categories
AI

AI adoption and analytics are rising, survey finds

The need for enterprise digital transformation during the pandemic has bolstered investments in AI. AI startups raised a collective $73.4 billion in Q4 2020, a $15 billion year-over-year increase. And according to a new survey from ManageEngine, the IT division of Zoho, business deployment of AI is on the rise.

In the survey of more than 1,200 tech execs at organizations looking at the use of AI and analytics, 80% of respondents in the U.S. said that they’d accelerated their AI adoption over the past two years. U.S. respondents also reported boosting their usage of business analytics at a rate 20% higher than the global average, a potential sign that trust in AI is growing.

“The COVID-19 pandemic forced businesses to adopt — and adapt to — new digital technologies overnight,” ManageEngine VP Rajesh Ganesan, a coauthor of the survey, said in a press release. “These findings indicate that, as a result, organizations and their leaders have recognized the value of these technologies and have embraced the promises they are offering even amidst global business challenges.”

AI use cases

ManageEngine’s survey found that the dominant motivation behind business analytics technologies, at least in the U.S., is data-driven decision-making. Seventy-seven percent of respondents said that they’re using business analytics for augmented decision-making, while 69% said they’d improved the use of available data with business analytics. Furthermore, 65% said that business analytics helps them make decisions faster, reflecting increased confidence in AI.

Execs responding to the survey also emphasized the importance of customer experience in their AI adoption decisions, with 59% in the U.S. saying that they’re leveraging AI to enhance customer services. Beyond customer experience, 61% of IT teams reported an uptick in applying business analytics, compared with 44% of marketing teams, 39% of R&D teams, 38% of software development and finance teams, 37% of sales teams, and 35% of operations teams.

HR was among the groups that showed the lowest increase in business analytics usage, according to the survey. Research shows that companies are indeed struggling to apply data strategies to their HR operations. A Deloitte report found that more than 80% of HR professionals score themselves low in their ability to analyze, a troubling fact in a highly data-driven field.

Still, Ganesan said that the report’s findings reinforce the notion that AI is a critical business enabler — particularly when combined with cloud solutions that can support remote workers. “Increased reliance on AI and business analytics is fueling data-driven decisions to operate the organization more efficiently and make customers happier,” he continued.
