Categories
Computing

Steam survey hints at a GPU market recovery

Valve’s latest monthly report offers a hint that the PC industry is recovering from the component shortage that has dragged on for more than two years.

In its monthly user survey, Valve’s data shows that a growing share of gamers accessing Steam are doing so from PCs running Nvidia Ampere graphics cards. The RTX 3080 saw a 0.24% increase in usage during the month of May, while the RTX 3070 saw a 0.19% increase.

Of course, cheaper and more readily accessible cards remain the most common, but the growth is certainly encouraging to see. It coincides with recent drops in GPU pricing and increases in supply.

AMD has also made an appearance in the survey, with the Radeon RX 6800 XT climbing 0.15%, while Nvidia’s RTX 3060 rose 0.18%.

These stats are notable because they hint that more and more gamers are getting access to these high-end GPUs, both as desktop parts and in gaming laptops, after months of scarcity.

The overall graphics card gains have also seen AMD pick up some market share, rising 1.24% against consumer favorite Intel. Even so, Intel remains firmly in the lead with 67.19% of surveyed users.

On the CPU side, the survey also found that the processors in PCs used on Steam have steadily shifted from four cores to six cores over the last five years, as noted by PCWorld.

Previously, a four-core CPU was the most common, but this May survey continues to show growth in higher core count processors. Approximately 33% of PCs on Steam were running four-core CPUs, while over 50% of PCs were running six-core systems.

Both Intel and AMD have introduced CPUs with more than six cores, bringing eight, 12, and even 16 cores to their mid-range lineups. Still, six seems to be the new standard for the average PC gamer, especially now that Intel has increased core counts with its 12th-gen Alder Lake chips.

Lastly, Valve’s survey continues to show gains for Windows 11. The update is just 0.41% away from being installed on one in every five PCs surveyed. Additionally, over 50% of those surveyed had 16GB of RAM.


Categories
AI

AI-driven strategies are becoming mainstream, survey finds

Deloitte today released the fourth edition of its State of AI in the Enterprise report, which surveyed 2,857 business decision-makers between March and May 2021 about their perception of AI technologies. Few organizations claim to be completely AI-powered, the responses show, but a significant percentage are beginning to adopt practices that could get them there.

In the survey, Deloitte explored the transformations happening inside firms applying AI and machine learning to drive value. During the pandemic, digitization efforts prompted many companies to adopt AI-powered solutions to back-office and customer-facing challenges. A PricewaterhouseCoopers whitepaper found that 52% of companies have accelerated their AI adoption plans, and IDC expects global spending on AI systems to jump from $85.3 billion in 2021 to over $204 billion in 2025.

However, only 40% of respondents to the Deloitte survey agreed that their employer has an enterprise-wide AI strategy in place. While 66% view AI as critical to their success, only 38% believe that their use of AI differentiates them from competitors and only about one-third say that they’ve adopted “leading operational practices” for AI.

“The risks associated with AI remain top of mind for executives,” Deloitte executive director of the AI institute Beena Ammanath said in a statement. “We found that high-achieving organizations report being more prepared to manage risks associated with AI and confident that they can deploy AI initiatives in a trustworthy way.”

Embracing AI is a marathon, not a sprint

In Deloitte’s framing, “AI-fueled” businesses leverage data to deploy and scale AI across core processes in a human-centric way. Using data-driven decision-making, they continuously innovate and enhance workforce and customer experiences to gain an advantage.

Organizations with an enterprise-wide strategy and leaders who communicate a bold vision are nearly twice as likely to achieve high-level outcomes, Deloitte reports. Furthermore, businesses that document and enforce MLOps processes are twice as likely to achieve their goals “to a high degree,” four times more likely to be prepared for AI risks, and three times more confident in their ability to deploy AI products “in a trustworthy way.”

MLOps, a compound of “machine learning” and “information technology operations,” is a newer discipline involving collaboration between data scientists and IT professionals with the aim of productizing machine learning algorithms. MLOps essentially aims to capture and expand on previous operational practices while extending these practices to manage the unique challenges of machine learning.
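To make that concrete, one minimal MLOps-style habit (not drawn from the Deloitte report, just an illustration) is to version every trained model together with its evaluation metrics and environment details so it can be audited and reproduced later. The hedged sketch below assumes scikit-learn and joblib are installed and uses a toy dataset purely for demonstration.

```python
# Minimal sketch of an MLOps-style practice: persist a model together with
# the metadata needed to audit and reproduce it later. Illustrative only.
import json
import platform
from datetime import datetime, timezone

import joblib
import sklearn
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42
)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
accuracy = accuracy_score(y_test, model.predict(X_test))

# Save the model artifact and a metadata record side by side so a reviewer
# (or an automated pipeline) can later verify how this model was produced.
joblib.dump(model, "model.joblib")
with open("model_metadata.json", "w") as f:
    json.dump(
        {
            "trained_at": datetime.now(timezone.utc).isoformat(),
            "accuracy": float(accuracy),
            "python_version": platform.python_version(),
            "sklearn_version": sklearn.__version__,
            "random_state": 42,
        },
        f,
        indent=2,
    )
```

In a real pipeline the same record would typically be pushed to a model registry rather than written to a local file, but the principle of capturing operational context alongside the model is the same.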

“Becoming an AI-fueled organization is to understand that the transformation process is never complete, but rather a journey of continuous learning and improvement,” Deloitte AI principal Nitin Mittal said.

Companies successfully adopting AI also haven’t ignored cultural and change management, the Deloitte report found. Those investing heavily in change management are 60% more likely to report that their AI initiatives exceed expectations and 40% more likely to achieve their desired goals. As for organizations that have undergone significant changes to workflows or added new roles, they’re almost 1.5 times more likely to achieve outcomes to a high degree, while 83% of the highest-achieving organizations create a diverse ecosystem of partnerships to execute their AI strategy, according to Deloitte.

But only 37% of decision-maker respondents reported a major investment in change management, incentives, or training activities, highlighting roadblocks companies will need to overcome. “By embracing AI strategically and challenging orthodoxies, organizations can define a roadmap for adoption, quality delivery, and scale to create or unlock value faster than ever before,” Deloitte AI principal Irfan Saif said.


Categories
Game

Fortnite survey hints at SpongeBob, Witcher, Matrix, and Among Us crossovers

Epic Games has sent out another survey to Fortnite players to evaluate their interest in potential future crossovers, shedding light on some of the IP that may be in the pipeline for consideration. This isn’t the first time we’ve seen a survey like this, but it’s hard to guess what comes next.

Epic regularly surveys players about various things, including their experiences in recent matches and their perspectives on various firearms and items. Sometimes, these surveys instead focus on crossovers. We’ve seen some past potential crossovers become actual in-game characters, skins, and more.

As with previous examples, the new survey lists a huge number of potential crossovers spanning popular IP from television, video games, and cartoons, as well as celebrities and athletes. It’s hard to tell which entries are actually under consideration and which are included simply to obscure the real candidates, if that’s even what’s happening.

Listed items include popular games like Among Us, Yoshi’s Island, Final Fantasy, Clash of Clans, and Uncharted. These are joined by movies like Lord of the Rings, Nightmare on Elm Street, Scream, The Matrix, and Independence Day.

Some of the items on the list have already appeared in the game: Predator, for example, has already had a crossover, and Iron Man, listed as a potential movie tie-in, has likewise already come to Fortnite. We may never see any of these listed items become Fortnite crossovers, but the survey does indicate that an end to these tie-ins is nowhere in sight.




Categories
AI

Bias in AI isn’t an enterprise priority, but it should be, survey warns

A global survey published today finds nearly a third (31%) of respondents consider the social impact of bias in models to be AI’s biggest challenge. This is followed by concerns about the impact AI is likely to have on data privacy (21%). More troubling, only 10% of respondents said their organization has addressed bias in AI, with another 30% planning to do so sometime in the next 12 months.

Conducted by Anaconda, whose platform provides access to curated instances of open source tools for building AI models, the survey of 4,299 individuals includes IT and business professionals alongside students and academics. It suggests IT organizations are now exercising more influence over AI, with nearly a quarter of respondents (23%) noting that data science teams report up through the IT organization. When selecting AI platforms, IT approval ranked third (45%) as a consideration, behind performance (60%) and memory (46%).

Respondents said they spend about 39% of their time on data prep and data cleansing, which is more than the time spent on model training, model selection, and deployment combined.

Among respondents responsible for deploying AI models in production environments, the top challenges cited are security (27%), recoding models from Python or R to another programming language (24%), managing dependencies and environments (23%), and recoding models from other languages into Python or R (23%). Python remains the dominant language (63%) employed by data science teams, while a full 87% said they are employing open source software to some degree.
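The survey doesn’t prescribe fixes, but as one illustration of the “dependencies and environments” pain point, a common mitigation is to snapshot the exact package versions a model was built with so the serving environment can be rebuilt to match. A minimal, hypothetical sketch using only Python’s standard library might look like this (the lock-file name is arbitrary).

```python
# Minimal sketch: snapshot installed package versions so a model's
# environment can be reproduced at deployment time. Illustrative only.
from importlib.metadata import distributions

def snapshot_environment(path: str = "environment.lock") -> None:
    """Write a sorted name==version line for every installed distribution."""
    pins = sorted(
        f"{dist.metadata['Name']}=={dist.version}"
        for dist in distributions()
        if dist.metadata["Name"]  # skip entries with malformed metadata
    )
    with open(path, "w") as f:
        f.write("\n".join(pins) + "\n")

if __name__ == "__main__":
    snapshot_environment()
    print("Wrote environment.lock")
```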

The worst AI myths

The two biggest data science myths cited are 1) that having access to lots of data leads to greater accuracy (33%) and 2) the perception that data scientists don’t know how to code (31%).

The survey suggests there is also a long way to go in terms of embedding AI within business workflows. Only 39% of respondents said many decisions are based on insights surfaced by their data science efforts, and a little over a third (35%) said some decisions are influenced by their work. Just 36% said their organization’s decision-makers are very data literate and understand the stories told by visualizations and models, while slightly over half (52%) described decision-makers as mostly data literate but in need of some coaching.

However, the percentage of individuals across an organization that will be employing data science is only going to increase in the months ahead, said Anaconda CEO Peter Wang. “You don’t need to know data science to use data science,” he said.

AI spending drops over 2020

In the short term, however, the survey suggests investment in AI fell somewhat in the last year. More than a third of respondents said they saw a decline in AI investments in the wake of the economic downturn brought on by the COVID-19 pandemic. Only just over a quarter of respondents (26%) said their organization actually increased investments in AI.

Nearly half of respondents (45%) said reduced investments manifested themselves in the form of reduced budgets. Nearly half (47%) said their teams did not grow, while 39% said members of their teams were actually laid off. Just over a third (35%) said projects were put on hold or had their deadlines extended. Just under a third of respondents (32%) said they expect to be looking for a new job in the next 12 months.

There’s no doubt organizations of all sizes are engaging to the best of their ability in what is rapidly becoming an AI arms race. But not all processes lend themselves equally well to AI, so the issue is not just how to build AI models but where best to apply them.


Categories
Game

GDC Dev Survey Shows Full Impact of the Pandemic on Gaming

It probably comes as no surprise that the pandemic had a major impact on the video game industry. That’s been especially apparent in 2021, as major game delays seem to happen every week. Companies like Ubisoft have completely shifted their release schedules as studios adapt to work-from-home development.

That’s left gamers with plenty of anxiety about just how long we’ll be feeling the effects of the pandemic in the gaming industry. If games that were close to completion needed to be delayed a full year, what does that mean for games that were early in their development cycle?

We now have some clearer answers to those questions thanks to GDC’s 2021 State of the Game Industry report. GDC surveyed over 3,000 developers this year, who shed some light on how the pandemic impacted their games. While the short-term effects have been grim, a move to remote work may be a net positive for gaming in the future.

The bad news

The immediate effects of the pandemic on gaming have been fairly obvious. With studios forced to shift to remote work on a dime, game development hit a snag back in March 2020. Games like The Last of Us Part 2 had their release dates pushed back a few months to deal with that shift, but the real scope of the blowback wasn’t felt until 2021. After all, games that were scheduled to launch in 2020 were already close to the finish line; it’s the games that weren’t as far along that stood to be hit hardest.

GDC’s State of the Game Industry survey echoed that result. In 2020’s poll, only 33% of respondents said their game faced a pandemic-related delay. This year, that number shot up to 44%. While the majority of developers polled said their games hadn’t been delayed, that’s still a significant rise compared to last year.

Game developers have faced plenty of challenges across different phases of the process, from playtesting to prototyping new ideas. I spoke to GDC Content Marketing Lead Kris Graft about the survey’s findings, and he gave a specific example of the obstacles developers have faced.

“Early on in the pandemic, we were talking to the Jackbox folks and they were talking about the way that they make games,” says Graft. “Their stuff is so community-based and interactive with other issues. They had an issue with playtesting. I am an advocate for remote work, but I understand that you can’t devalue having face-to-face time.”

Logistical nightmares aren’t the only problems game developers faced in the last year and a half. Several developers cited childcare as another source of headaches. Work-life balance became challenging to juggle as developers found themselves interrupted by kids who were stuck learning from home. Others lost access to some of their contractors, who had to pivot to taking care of their kids full time.

The good news

With a year and a half of challenges, one has to wonder if gaming will continue struggling with delays (both internal and external ones) beyond 2021. Fortunately, Kris Graft believes that we may be through the worst of it as studios begin to reopen and formally adopt hybrid workflows.

“If a game is in its early stages right now, I don’t think five years from now people are going to say it’s because of COVID-19 that it got pushed back,” Graft jokes. “As people have gotten used to the new processes, things start to move more smoothly. Once offices start opening up safely and people are able to start doing hybrid or in-office work, I don’t see why the past year and a half would continue to impact game release dates.”

If anything, the survey indicates that remote work could be a net positive for the industry in the long run. Developers largely felt that their productivity didn’t suffer while working from home. In fact, 35% said their productivity either somewhat or greatly increased, while 32% said it stayed about the same. While others found it more challenging, the results indicate that developers were more than capable of working outside of a studio.

The number of hours developers worked in a week remained static as well compared to 2020’s polling. A majority of respondents said they were working somewhere between 36 and 45 hours a week, a positive sign that developers were able to maintain boundaries in their work-life balance.

There’s one aspect of remote work that’s been especially positive for gaming. Studios are less restricted in who they can hire. Rather than just recruiting local talent who can come into an office, companies can expand their talent search since work can happen remotely. That means studios can make more diverse hires, as well as nab powerhouse talent that wouldn’t be willing to move for a job.

Despite the delays, Graft finds that the forced shift to work from home is ultimately good for the industry. The long-term effects may sound troubling in theory, but survey data indicates that remote work is able to help game developers, not hinder them.

“I do think that it’s good for this to be an option for companies to have now,” says Graft. “I don’t think remote working is ever going to replace all in-person stuff, but you can strike a balance and expand the talent pool. I think that’s really good for the game industry; getting more people in and not forcing them to move out to the Bay or something.”

The video game industry was forced to change over the past year and a half. While some of those changes were tough, companies shouldn’t throw out what has worked for developers. The industry should come out of this stronger, not weaker.


Categories
AI

AI adoption and analytics are rising, survey finds

The need for enterprise digital transformation during the pandemic has bolstered investments in AI. AI startups raised a collective $73.4 billion in Q4 2020, a $15 billion year-over-year increase. And according to a new survey from ManageEngine, the IT division of Zoho, business deployment of AI is on the rise.

In the survey of more than 1,200 tech execs at organizations looking at the use of AI and analytics, 80% of respondents in the U.S. said that they’d accelerated their AI adoption over the past two years. Moreover, 20% said they’d boosted their usage of business analytics compared with the global average, a potential sign that trust in AI is growing.

“The COVID-19 pandemic forced businesses to adopt — and adapt to — new digital technologies overnight,” ManageEngine VP Rajesh Ganesan, a coauthor of the survey, said in a press release. “These findings indicate that, as a result, organizations and their leaders have recognized the value of these technologies and have embraced the promises they are offering even amidst global business challenges.”

AI use cases

ManageEngine’s survey found that the dominant motivation behind business analytics adoption, at least in the U.S., is data-driven decision-making. Seventy-seven percent of respondents said they’re using business analytics for augmented decision-making, while 69% said they’d improved the use of available data with business analytics. Furthermore, 65% said that business analytics helps them make decisions faster, reflecting increased confidence in AI.

Execs responding to the survey also emphasized the importance of customer experience in their AI adoption decisions, with 59% in the U.S. saying that they’re leveraging AI to enhance customer services. Beyond customer experience, 61% of IT teams saw an uptick in applying business analytics, while marketing leaders saw a 44% surge; R&D teams saw 39%; software development and finance saw 38%; sales saw 37%; and operations saw 35%.

HR was among the groups that showed the lowest increase in business analytics usage, according to the survey. Research shows that companies are indeed struggling to apply data strategies to their HR operations. A Deloitte report found that more than 80% of HR professionals score themselves low in their ability to analyze, a troubling fact in a highly data-driven field.

Still, Ganesan said that the report’s findings reinforce the notion that AI is a critical business enabler — particularly when combined with cloud solutions that can support remote workers. “Increased reliance on AI and business analytics is fueling data-driven decisions to operate the organization more efficiently and make customers happier,” he continued.


Categories
Tech News

OnePlus 7 and 7T OxygenOS 11 survey irks users over AOD question

OnePlus takes great pride in how it handles customer feedback, sometimes even conceding its position to accommodate user requests. It is, however, far from perfect, as this latest incident around a user survey demonstrates. It may have been a simple clerical error, but when OnePlus asked owners of the OnePlus 7 and OnePlus 7T about the Always-On Display feature that came with OxygenOS 11, those owners were up in arms because the feature was never made available on their devices, with no prior warning or explanation.

Always-On Display has been one of the most requested features among users, to the point that OnePlus eventually agreed to implement it. It arrived together with OxygenOS 11 and Android 11, and users were definitely excited about it. Unfortunately, it seems to apply only to the OnePlus 9 and OnePlus 8 generations, as the stable OxygenOS 11 update for the OnePlus 7 and 7T was silent on that feature.

Now OnePlus is asking owners to fill in a survey giving their feedback on the update, and it asks about their experience with the AOD feature. Users made their displeasure known on the OnePlus forums over what they considered an insulting question given the context. It is, of course, a single non-essential feature, and the question might have been just a clerical error, but the incident has rubbed salt into still-open wounds.

The OxygenOS 11 update for both phones has been widely regarded as buggy and, for some, almost unusable. There is, of course, no way to go back to the older Android 10 version, at least not without wiping one’s phone completely. A vocal number of users have not only called out OnePlus over it but have also announced their departure from the brand.

Some have pointed to this survey as an example of how OnePlus doesn’t listen or pay attention to user feedback, which makes the exercise moot and academic for them. To be fair, there are also instances where OnePlus has indeed listened, or at least acknowledged the community’s concerns, but its silence over the OnePlus 7 OxygenOS 11 update and the missing AOD feature isn’t helping its cause.


Categories
AI

The pandemic led to ‘significant’ increase in AI adoption across manufacturing, survey finds

New research from Google Cloud and The Harris Poll reveals that the pandemic led to a significant increase in AI use across manufacturers. According to a survey of senior executives at over 1,000 companies, two-thirds of manufacturers that use AI in their day-to-day operations report that their reliance on AI is increasing, with 74% claiming that they align with the changing work landscape.

According to a 2020 PricewaterhouseCoopers survey, companies in manufacturing expect efficiency gains over the next five years attributable to digital transformations. McKinsey’s research with the World Economic Forum puts the value creation potential of manufacturers implementing “Industry 4.0” — the automation of traditional industrial practices — at $3.7 trillion in 2025.

Seventy-six percent of respondents to the Google Cloud report say that they’ve turned to “disruptive technologies” like AI, data analytics, and the cloud to help navigate the pandemic. Manufacturers told surveyors that they’ve tapped AI to optimize their supply chains, including in supply chain management (36%), risk management (36%), and inventory management (34%). Even among firms that don’t currently use AI in their day-to-day operations, about a third believe it would make employees more efficient (37%) and be helpful for employees overall (31%), according to Google Cloud.

Manufacturing is undergoing a resurgence as business owners look to modernize their factories and speed up operations. According to ABI Research, more than 4 million commercial robots will be installed in over 50,000 warehouses around the world by 2025, up from under 4,000 warehouses as of 2018. Oxford Economics anticipates 12.5 million manufacturing jobs will be automated in China, while McKinsey projects machines will take upwards of 30% of these jobs in the U.S.

Ford is among the manufacturers using AI within its operations via a relationship with Google. Announced in February, the automaker plans to leverage Google’s expertise in data, AI, and machine learning as a part of Team Upshift, a six-year partnership and collaborative group launching in 2023. Ford says the initiative will accelerate modernization of product development, manufacturing, and supply chain management, including exploration of using vision AI for manufacturing employee training and even more reliable plant equipment performance.

“[This] will supercharge our efforts to democratize AI across our business, from the plant floor to vehicles to dealerships,” Bryan Goodman, director of AI and cloud at Ford, said in a statement. “We used to count the number of AI and machine learning projects at Ford. Now it’s so commonplace that it’s like asking how many people are using math. This includes an AI ecosystem that is fueled by data, and that powers a ‘digital network flywheel.’”

Barriers to adoption

Automotive OEMs, automotive suppliers, and heavy machinery are among the top three subsectors deploying AI, with companies in metals, industrial and assembly, and heavy machinery seeing the highest uptick. The five dominant areas where AI is currently employed in manufacturing span quality inspection (39%), supply chain management (36%), risk management (36%), production line quality checks (35%), and inventory management (34%). Manufacturers peg assisting with business continuity (38%), helping employees increase efficiency (38%), and helping employees overall (34%) as the top reasons they leverage AI.

But despite the uptick in deployment of AI in the manufacturing industry, barriers threaten to slow adoption. Twenty-five percent of respondents say that they lack the talent to properly use AI, while 23% say they don’t have the IT infrastructure and over 20% say it’s too cost-prohibitive. Nineteen percent of manufacturers told Google Cloud that they consider AI an “unproven” technology, and 16% claim that they lack the necessary stakeholder buy-in, stymieing AI implementation efforts.

The Google Cloud findings come after Alation’s latest quarterly State of Data Culture Report, which similarly discovered that only a small percentage of professionals believe AI is being used effectively across their organizations. A lack of executive buy-in was a top reason, Alation reported, with 55% of respondents to the company’s survey citing this as more important than a lack of employees with data science skills.

“Even though some barriers exist, many companies believe they have the right IT infrastructure to successfully implement AI,” the coauthors of the Google Cloud report wrote. “As AI becomes more pervasive in solving real-world problems for manufacturers, we see a shift from ‘pilot purgatory’ to the ‘golden age of AI.’ The industry is no stranger to innovation — from the days of mass production to lean manufacturing, six sigma, and more recently, enterprise resource planning. And now, AI promises to deliver even more innovation.”


Categories
AI

65% of execs can’t explain how their AI models make decisions, survey finds

Despite increasing demand for and use of AI tools, 65% of companies can’t explain how AI model decisions or predictions are made. That’s according to the results of a new survey from global analytics firm FICO and Corinium, which surveyed 100 C-level analytic and data executives to understand how organizations are deploying AI and whether they’re ensuring AI is used ethically.

“Over the past 15 months, more and more businesses have been investing in AI tools, but have not elevated the importance of AI governance and responsible AI to the boardroom level,” FICO chief analytics officer Scott Zoldi said in a press release. “Organizations are increasingly leveraging AI to automate key processes that – in some cases – are making life-altering decisions for their customers and stakeholders. Senior leadership and boards must understand and enforce auditable, immutable AI model governance and product model monitoring to ensure that the decisions are accountable, fair, transparent, and responsible.”

The study, which was conducted by Corinium and commissioned by FICO, found that 33% of executive teams have an incomplete understanding of AI ethics. While IT, analytics, and compliance staff have the highest awareness, understanding across organizations remains patchy. As a result, there are significant barriers to building support: 73% of stakeholders say they’ve struggled to get executive backing for responsible AI practices.

Implementing AI responsibly means different things to different companies. For some, “responsible” implies adopting AI in a manner that’s ethical, transparent, and accountable. For others, it means ensuring that their use of AI remains consistent with laws, regulations, norms, customer expectations, and organizational values. In any case, “responsible AI” promises to guard against the use of biased data or algorithms, providing an assurance that automated decisions are justified and explainable — at least in theory.

According to Corinium and FICO, while almost half (49%) of respondents report an increase in resources allocated to AI projects over the past year, only 39% and 28% say they’ve prioritized AI governance and model monitoring or maintenance, respectively. Potentially contributing to the ethics gap is a lack of consensus among executives about what a company’s responsibilities should be when it comes to AI. The majority of companies (55%) agree that AI used for data ingestion must meet basic ethical standards and that systems used for back-office operations must also be explainable. But almost half (43%) say they have no responsibilities beyond meeting regulations when managing AI systems whose decisions might indirectly affect people’s livelihoods.

Turning the tide

What can enterprises do to embrace responsible AI? Combating bias is an important step, but only 38% of companies say that they have bias mitigation steps built into their model development processes. In fact, only a fifth of respondents (20%) to the Corinium and FICO survey actively monitor their models in production for fairness and ethics, while just one in three (33%) have a model validation team to assess newly developed models.
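As an illustration of what such monitoring can involve (this example is not taken from the FICO report), one of the simplest fairness checks compares positive-prediction rates across groups, often called the demographic parity gap. A minimal sketch in plain Python over a hypothetical prediction log might look like this.

```python
# Minimal sketch of a production fairness check: compare the rate of positive
# predictions across groups (demographic parity gap). Illustrative only; real
# monitoring would also track drift, calibration, and other fairness metrics.
from collections import defaultdict

def demographic_parity_gap(records):
    """records: iterable of (group_label, prediction) with prediction in {0, 1}."""
    positives = defaultdict(int)
    totals = defaultdict(int)
    for group, prediction in records:
        totals[group] += 1
        positives[group] += int(prediction)
    rates = {group: positives[group] / totals[group] for group in totals}
    return max(rates.values()) - min(rates.values()), rates

# Hypothetical prediction log: (group, model decision)
log = [("A", 1), ("A", 0), ("A", 1), ("B", 0), ("B", 0), ("B", 1)]
gap, rates = demographic_parity_gap(log)
print(f"positive rates by group: {rates}, gap: {gap:.2f}")
# A monitoring job might alert when the gap exceeds an agreed threshold.
```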

The findings agree with a recent Boston Consulting Group survey of 1,000 enterprises, which found fewer than half of those that achieved AI at scale had fully mature, “responsible” AI implementations. The lagging adoption of responsible AI belies the value these practices can bring to bear. A study by Capgemini found customers and employees will reward organizations that practice ethical AI with greater loyalty, more business, and even a willingness to advocate for them — and in turn, punish those that don’t.

This being the case, businesses appear to understand the value of evaluating the fairness of model outcomes, with 59% of survey respondents saying they do this to detect model bias. Additionally, 55% say they isolate and assess latent model features for bias and half (50%) say they have a codified mathematical definition for data bias and actively check for bias in unstructured data sources.

Businesses also recognize that things need to change: the overwhelming majority (90%) agree that inefficient processes for model monitoring represent a barrier to AI adoption. Thankfully, almost two-thirds (63%) of respondents to the Corinium and FICO report believe that AI ethics and responsible AI will become a core element of their organization’s strategy within two years.

“The business community is committed to driving transformation through AI-powered automation. However, senior leaders and boards need to be aware of the risks associated with the technology and the best practices to proactively mitigate them,” Zoldi added. “AI has the power to transform the world, but as the popular saying goes – with great power, comes great responsibility.”


Categories
AI

Less than 30% of businesses have a plan to combat deepfakes, survey finds

Deepfakes, or AI-generated videos that take a person in an existing video and replace them with someone else’s likeness, are multiplying at an accelerating rate. According to startup Deeptrace, the number of deepfakes on the web increased 330% from October 2019 to June 2020, reaching over 50,000 at their peak. That’s troubling not only because these fakes might be used to sway opinion during an election or implicate a person in a crime, but because they’ve already been abused to generate pornographic material of actors and defraud a major energy producer.

While much of the discussion to date around deepfakes has focused on social media, pornography, and fraud, it’s worth noting that deepfakes pose a threat to people portrayed in manipulated videos and their circle of trust. As a result, deepfakes also represent an existential threat to businesses, particularly in industries that depend on digital media to make important decisions. The FBI earlier this year warned that deepfakes are a critical emerging threat targeting businesses.

To help promote awareness, Attestiv, a data authentication startup, surveyed U.S.-based professionals about threats to their employers related to altered or manipulated digital media. Over 130 people across various industries responded to the questionnaire, including those working in IT, data services, health care, and financial services.

Over 80% of respondents said that manipulated media poses a potential risk to their organization, according to Attestiv. However, less than 30% say they’ve taken steps to mitigate fallout from a deepfake attack. Twenty-five percent of respondents claim they’re planning to take action, but 46% say that their organization lacks a plan or that they personally lack knowledge of the plan.

Attestiv also asked respondents to consider possible defenses against the deepfake threat. When asked, “What’s the best defense organizations can take against altered digital media?”, 48% of survey takers pointed to automated detection and filtering solutions, while 38% believed that training employees to detect deepfakes was the better course of action.

“Training employees to detect deepfakes may not be a viable solution given the likelihood that they are rapidly becoming undetectable to human inspection,” the Attestiv report’s authors wrote. “It appears there may be a need for further education regarding the deepfake threat and the trajectory the technology is taking.”

Challenging road ahead

The fight against deepfakes is likely to remain challenging, especially as media generation techniques continue to improve. Earlier this year, deepfake footage of Tom Cruise posted to an unverified TikTok account racked up 11 million views on the app and millions more on other platforms. And when scanned through several of the best publicly available deepfake detection tools, they avoided discovery, according to Vice.

In an attempt to fight the spread of deepfakes, Facebook — along with Amazon and Microsoft, among others — spearheaded the Deepfake Detection Challenge, which ended last June. The challenge’s launch came after the release of a large corpus of visual deepfakes produced in collaboration with Jigsaw, Google’s internal technology incubator, which was incorporated into a benchmark made freely available to researchers for synthetic video detection system development.

More recently, Microsoft launched its own deepfake-combating solution in Video Authenticator, a tool that can analyze a still photo or video to provide a score for its level of confidence that the media hasn’t been artificially manipulated. The company also developed a technology built into Microsoft Azure that enables a content producer to add metadata to a piece of content, as well as a reader that checks the metadata to let people know that the content is authentic.
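As a conceptual illustration (not Microsoft’s actual implementation), the provenance pattern described above, attaching metadata when content is produced and checking it when content is consumed, can be sketched with hashing plus a keyed signature from Python’s standard library. Production systems typically rely on certificate-based signatures and standardized manifests rather than a shared secret.

```python
# Conceptual sketch of content provenance: a producer publishes a keyed hash
# of a media file, and a reader recomputes it to confirm the bytes have not
# been altered since signing. Hypothetical example, not a real product's API.
import hashlib
import hmac

SECRET_KEY = b"producer-signing-key"  # hypothetical key kept by the producer

def sign_media(path: str) -> str:
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return hmac.new(SECRET_KEY, digest.digest(), hashlib.sha256).hexdigest()

def verify_media(path: str, published_tag: str) -> bool:
    return hmac.compare_digest(sign_media(path), published_tag)

# Usage: tag = sign_media("clip.mp4"); later, verify_media("clip.mp4", tag)
```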
