Categories
Computing

New metaverse standards to address lack of interoperability

Big-name tech companies such as Meta, Microsoft, and Epic Games have formed a standards organization called the Metaverse Standards Forum (MSF). The group is meant to create open standards for all things metaverse, including virtual reality, augmented reality, and 3D technology.

Over 30 companies have signed on, some of them, like Meta itself, already deep in metaverse technology. Others include Nvidia, Unity (creator of the popular game engine), Qualcomm, Sony, and even the web's own standards body, the World Wide Web Consortium (W3C).


According to the official press release:

“The Forum will explore where the lack of interoperability is holding back metaverse deployment and how the work of Standards Developing Organizations (SDOs) defining and evolving needed standards may be coordinated and accelerated. Open to any organization at no cost, the Forum will focus on pragmatic, action-based projects such as implementation prototyping, hackathons, plugfests, and open-source tooling to accelerate the testing and adoption of metaverse standards, while also developing consistent terminology and deployment guidelines.”

This seems to imply that many future technologies created for the metaverse will include some level of interoperability between companies. That doesn’t mean the metaverse will be the Internet 2.0, but it may let users carry certain profiles or data across metaverse platforms. In fact, this is stated directly in the press release:

“The metaverse will bring together diverse technologies, requiring a constellation of interoperability standards, created and maintained by many standards organizations,” said Neil Trevett, Khronos president. “The Metaverse Standards Forum is a unique venue for coordination between standards organizations and industry, with a mission to foster the pragmatic and timely standardization that will be essential to an open and inclusive metaverse.”

A vision of Meta's metaverse in the work setting.

Besides the W3C, other standards organizations have also joined the Forum, such as the Open AR Cloud, the Spatial Web Foundation, and the Open Geospatial Consortium. This lends a lot of weight and much-needed legitimacy to the organization, as the metaverse is very much a burgeoning field of technology.

Interestingly, major VR/AR players are conspicuously missing at the moment. Apple, which has already invested heavily in AR technology and is planning its own headset, has not yet joined the MSF. Niantic, maker of the popular AR game Pokémon Go, is also missing from the roster. Protocol also points out that Roblox Corporation, maker of the wildly successful Roblox game, has declined to join for now.

While not considered a “metaverse” in the popular sense, Roblox in particular has built an immersive 3D world in which people can create entire games.

The absence of Apple, Niantic, and Roblox isn’t a foregone conclusion, however, as the MSF has only just begun. The good news is that most of the major players in metaverse tech are agreeing to create some kind of unified standard to make development much easier. The press release named several important technology fields, including avatars, privacy and identity management, and financial transactions.

The Metaverse Standards Forum is scheduled to begin meeting next month.

Repost: Original Source and Author Link

Categories
Game

Six state treasurers want Activision Blizzard to address its toxic workplace culture

Following scrutiny from state and federal regulators, Activision Blizzard and its CEO Bobby Kotick now face pressure from an unexpected source. Per Axios, state treasurers from California, Massachusetts, Illinois, Oregon, Delaware, and Nevada recently contacted the company’s board of directors to discuss its “response to the challenges and investment risk exposures that face Activision.” In a letter dated November 23rd, the group tells the board it would “weigh” a “call to vote against the re-election of incumbent directors.”

That call was made on November 17th by a collection of activist shareholders known as the SOC Investment Group, which holds a stake in Activision. The group has demanded Kotick resign and that two of the board’s longest-serving directors, Brian Kelly and Robert Morgado, retire by December 31st.

“We think there needs to be sweeping changes made in the company,” Illinois state treasurer Michael Frerichs told Axios. “We’re concerned that the current CEO and board directors don’t have the skillset, nor the conviction to institute these sweeping changes needed to transform their culture, to restore trust with employees and shareholders and their partners.”

Between them, the six treasurers manage about a trillion dollars in assets. But as Axios points out, it’s unclear how much they have invested in Activision, and it’s not something they disclosed to the outlet. However, Frerichs did confirm Illinois has been affected by the company’s falling stock price.

To that point, the day before the bombshell report on Activision and CEO Bobby Kotick was published, the company’s stock closed at $70.43. The day California’s fair employment agency sued the company, its stock was worth $91.88. As of this writing, it’s trading at about $58.44.

The group has asked to meet with Activision’s board by December 20th. We’ve reached out to Activision for comment.


Repost: Original Source and Author Link

Categories
AI

Why AI ethics needs to address AI literacy, not just bias


Women in the AI field are making research breakthroughs, spearheading vital ethical discussions, and inspiring the next generation of AI professionals. We created the VentureBeat Women in AI Awards to emphasize the importance of their voices, work, and experience and to shine a light on some of these leaders. In this series, publishing Fridays, we’re diving deeper into conversations with this year’s winners, whom we honored recently at Transform 2021. Check out last week’s interview with the winner of our AI research award. 

When you hear about AI ethics, it’s mostly about bias. But Noelle Silver, a winner of VentureBeat’s Women in AI responsibility and ethics award, has dedicated herself to an often overlooked part of the responsible AI equation: AI literacy.

“That’s my vision, is that we really increase literacy across the board,” she told VentureBeat of her effort to educate everyone from C-suites to teenagers about how to approach AI more thoughtfully.

After presenting to one too many boardrooms that could only see the good in AI, Silver started to see this lack of knowledge and ability to ask the important questions as a danger. Now, she’s a consistent champion for public understanding of AI, and has also established several initiatives supporting women and underrepresented communities.

We’re excited to present Silver with this much-deserved award. We recently caught up with her to chat more about the inspiration for her work, the misconceptions about responsible AI, and how enterprises can make sure AI ethics is more than a box to check.

VentureBeat: What would you say is your unique perspective when it comes to AI? What drives your work?

Noelle Silver: I’m driven by the fact that I have a house full of people who are consuming AI for various reasons. There’s my son with Down syndrome, and I’m interested in making the world accessible to him. And then my dad who is 72 and suffered a traumatic brain injury, and so he can’t use a smartphone and he doesn’t have a computer. Accessibility is a big part of it, and for the products I have the opportunity to be involved in, I want to make sure I’m representing those perspectives.

I always joke about how when we first started on Alexa, it was a pet project for Jeff Bezos. We weren’t consciously thinking about what this could do for classrooms, nursing homes, or people with speech difficulties. But all of those are really relevant use cases Amazon Alexa has now invested in. I always quote Arthur C. Clarke, who said, “Any sufficiently advanced technology is indistinguishable from magic.” And that’s true for my dad. When he uses Alexa, he’s like, “This is amazing!” You feel that it mystifies him, but the reality is there’s someone like me with fingers on a keyboard building the model that supports that magic. And I think being transparent and letting people know there are humans making them do what they do, and the more diverse and inclusive those humans can be in their development, the better. So I took that lesson and now I’ve talked to hundreds of executives and boards around the world to educate them about the questions they should be asking.

VentureBeat: You’ve created several initiatives championing women and underrepresented communities within the AI community, including AI Leadership Institute, Women in AI, and more. What led you to launch these groups? And what is your plan and hope for them in the near future and the long run? 

Silver: I launched the AI Leadership Institute six years ago because I was being asked, as part of my profession, to go and talk to executives and boards about AI. And I was selling a product, so I was there to, you know, talk about the art of the possible and get them excited, which was easy to do. But I found there was really a lack of literacy at the highest levels. And the fact that those with the budgets didn’t have that literacy, it made it dangerous that someone like me could tell a good story and tap into the optimistic feels of AI and they couldn’t recognize that’s not the only course. I tell the good and the bad, but what if it’s someone who’s trying to get them to do something without being as transparent? And so I started that leadership institute with the support of AWS, Alexa, and Microsoft to just try and educate these executives.

A couple years later, I realized there was very little diversity in the boardrooms where I was presenting, and that concerned me.  I met Dr. Safiya Noble, who had just written Algorithms of Oppression about the craziness that was Google algorithms years ago. You know, you type “CEO” and it only shows you white males — those types of things. That was a signal of a much larger problem, but I found that her work was not well known. She wasn’t a keynote speaker at the events that I was attending; she was like a sub session. And I just felt like the work was critical. And so I started Women in AI just to be a mechanism for it. I did a TikTok series on 12 African American women in AI to know, and that turned into a blog series, which turned into a community. I have a unique ability, I’ll say, to advocate for that work, and so I felt it was my mission.

VentureBeat: I’m glad you mentioned TikTok because I was going to say, even besides the boardroom discussions, I’ve seen you talking about building better models and responsible AI everywhere from TikTok to Clubhouse and so on. With that, are you hoping to reach the masses, get the average user caring, and get awareness bubbling up to decision makers that way?

Silver: Yeah, that’s right. Last year I was part of a LinkedIn learning course on how to spot deepfakes, and we ended up with three million learners. I think three or four of the videos went viral. And this wasn’t YouTube with its elaborate search model that will drive traffic or anything, right. So I started doing more AI literacy content after that because it showed me people want to know about these emerging technologies. And I have teenagers, and I know they’re going to be leading these companies. So what better way to avoid systemic bias than by educating them on these principles of inclusive engineering, asking better questions, and design justice? What if we taught that in middle or high school? And it’s funny because my executives are not the ones I’m showing my TikTok videos to, but I was on the call with one recently and I overheard her seventh grade daughter ask, “Oh my gosh. Is that the Noelle Silver?” And I was like, you know, that’s when you’ve got it — when you’ve got the seventh grader and the CEO on the same page.

VentureBeat: The idea of responsible AI and AI ethics is finally starting to receive the attention it needs. But do you fear — or already feel like — it’s becoming a buzzword? How do we make sure this work is real and not a box to check off?

Silver: It’s one of those things that companies realize they have to have an answer for, which is great. Like good, they’re creating teams. The thing that concerns me is, but like how impactful are these teams? When I see something ethically wrong with a model and I know it’s not going to serve the people it’s meant to, or I know it’s going to harm someone, when I pull the chain as a data scientist and say “we shouldn’t do this,” what happens then?  Most of these ethical organizations have no authority to actually stop production. It’s just like diversity and inclusion — everything is fine until you tell me this will delay going to market and we’ll lose $2 billion in revenue over five years. I’ve had CEOs tell me, “I’ll do everything you ask, but the second I lose money, I can’t do it anymore. I have stakeholders to serve.” So if we don’t give authority to these teams to actually do anything, they’re going to end up like many of the ethicists we’ve seen and either are going to quit or get pushed out.

VentureBeat: Are there any misconceptions about the push for responsible AI you think are important to clear up? Or anything important that often gets overlooked?

Silver: I think the biggest is that people often just think about ethical and responsible AI and bias, but it’s also about how we educate the users and communities consuming this AI. Every company is going to be data-driven, and that means everyone in the company needs to understand the impact of what that data can do and how it should be protected. These rules barely exist for the teams that create and store the data, and they definitely don’t exist for other people inside a company who might happen to run into that data. AI ethics isn’t reserved just for the practitioners; it’s much more holistic than that.

VentureBeat: What advice do you have for enterprises building or deploying AI technologies about how to approach it more responsibly?

Silver: The reason I went to Red Hat is because I actually do believe in open source communities where different companies come together to solve common problems and build better things. What happens when health care meets finance? What happens when we come together and share our challenges and ethical practices and build a solution that reaches more people? Especially when we’re looking at things like Kubernetes, which almost every company is using to launch their applications. So being part of an open source community where you can collaborate and build solutions that serve more people outside of your limited scope, I feel like that’s a good thing.


Repost: Original Source and Author Link

Categories
AI

How Mastercard is using AI to address cyber risk


As with just about every industry, AI has increasingly infiltrated the financial sector — from visual AI tools that monitor customers and workers to automating the Paycheck Protection Program (PPP) application process.

Talking at VentureBeat’s Transform 2021 event today, Johan Gerber, executive VP for security and cyber innovation at Mastercard, discussed how Mastercard is using AI to better understand and adapt to cyber risk, while keeping people’s data safe.

Lego blocks

On the one hand, consumers have never had it so easy — making payments is as frictionless as it has ever been. Ride-hail passengers can exit their cab without wasting precious minutes finalizing the transaction with the driver, while home-workers can configure their printer to automatically reorder ink when it runs empty. But behind the scenes things aren’t quite so simple. “As easy as it is for the consumer, the complexity lies in the background — we have seen the evolution of this hyper connected world in the backend just explode,” Gerber said.

Even the largest companies don’t build everything in their technology and data stacks from scratch, with countless components from different parties coming together to create the slick experiences that customers have come to expect. It’s also partly why big companies will often acquire smaller startups, as Mastercard did a few months back when it agreed to buy digital identity verification upstart Ekata for $850 million.

However, connecting all these “Lego blocks,” as Gerber calls them, is where the complexity comes in — not just from a technological standpoint (i.e. making it work), but from a data privacy perspective too.

“We’ve seen innovation happening faster than ever before, but it happens not because every company is innovating from A all the way through Z, but [because] we’ve got these third parties in the middle that are creating these wonderful experiences,” Gerber said. “Now, once I put all of this together, how do I manage security, how do I manage cyber risk, when I’ve got a hundred or thousand different third-parties connected to create that one experience for the consumer?”

In cybersecurity, there is an obvious temptation to “isolate things” to minimize the impact of cyberattacks or data leaks, but for products to work, the “Lego blocks” need to be connected. Moreover, companies need to share intelligence internally and within their industry, so that if a cyberattack is happening, all their collective systems around the world are put on alert.

“Systemic risk” is what we’re talking about here, something that major financial institutions comprised of myriad Lego blocks need to address, all the while considering compliance and data privacy issues. This is particularly pertinent for global businesses that have a plethora of regional data privacy regulations to contend with, including country-specific laws around data residency.

From Mastercard’s perspective, it leans on a philosophy it calls connected intelligence, or collaborative AI, which is about connecting the dots between systems by “sharing intelligence or outcomes, and not the underlying data,” Gerber noted.

“So by not sharing the underlying data but sharing confidence levels and outcomes, I can maintain your privacy — I don’t have to say ‘this is you’ or ‘this is your card,’ I can just say ‘this person passed the first test and passed it really well,’” he said. “So the collaborative AI is basically how AI systems can share outcomes as variables, so the output of the model becomes the input variable to another model.”
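A minimal Python sketch of that chaining pattern, with invented model names and thresholds, might look like the following: the identity check keeps its raw profile data local and passes only a confidence score downstream, and the fraud model consumes that score as one of its input variables.

```python
# Minimal sketch of "collaborative AI": downstream services receive only
# confidence scores, never the raw identity or card data that produced them.
# All names and thresholds here are hypothetical illustrations.

def identity_model(raw_profile: dict) -> float:
    """Runs inside the identity provider; raw_profile never leaves this function."""
    return 1.0 if raw_profile.get("document_verified") else 0.4

def fraud_model(tx_amount: float, identity_confidence: float) -> float:
    """Consumes only the upstream model's output, not the underlying data."""
    risk = 0.2 if tx_amount < 100 else 0.6
    return risk * (1.5 - identity_confidence)  # higher identity confidence lowers risk

# The only value crossing the boundary between systems is a number in [0, 1].
confidence = identity_model({"document_verified": True, "name": "redacted"})
print(fraud_model(tx_amount=250.0, identity_confidence=confidence))
```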

Platform approach

So how does Mastercard achieve all this, so that the data is safeguarded while the systems can still derive insights from the data itself? According to Gerber, the company takes a platform approach — at the bottom end is where the raw data is ingested, upon which the company uses all manner of technologies such as Hadoop and similar tools capable of processing multiple sources of data in real time. From this raw data, Mastercard creates what it refers to as “intelligence blocks,” which are variables derived from the underlying data.

“By the time you get to the derived variable, we’ve applied a layer of compliance checking, data governance checking, [and] made sure that our models are not biased,” Gerber said. “We’ve basically done all the regulatory data scrubbing to ensure that we don’t abuse anything that goes in.”

This is the data that Mastercard can now freely use to build its AI models and products, leading to the top-end customer access layer through which third-parties such as retail stores or card issuers can query a transaction in real time through Mastercard’s API.

Image: Mastercard’s platform approach to data security and privacy
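As a rough, hypothetical sketch of that layering, assuming invented field names and a toy scoring rule: raw records are reduced to derived “intelligence block” variables, a model consumes only those variables, and the query layer exposes just a score and a decision.

```python
# Hypothetical sketch of the layering described above: raw transaction data is
# reduced to derived "intelligence block" variables, and only those derived
# values (plus a final decision) are exposed at the query layer. Field names
# and the scoring rule are invented.

RAW_TRANSACTIONS = [
    {"card": "4111-xxxx", "merchant": "store-123", "amount": 25.0},
    {"card": "4111-xxxx", "merchant": "store-123", "amount": 900.0},
]

def derive_intelligence_blocks(raw: list[dict]) -> dict:
    """Bottom layer: raw data in, governance-checked derived variables out."""
    amounts = [tx["amount"] for tx in raw]
    return {
        "tx_count_24h": len(amounts),
        "avg_amount_24h": sum(amounts) / len(amounts),
    }

def score_transaction(blocks: dict, amount: float) -> float:
    """Model layer: works only with derived variables, never raw card data."""
    return min(1.0, amount / (10 * blocks["avg_amount_24h"] + 1))

def query_api(amount: float) -> dict:
    """Top layer: what an issuer or retailer would see for a real-time check."""
    blocks = derive_intelligence_blocks(RAW_TRANSACTIONS)
    risk = score_transaction(blocks, amount)
    return {"risk_score": round(risk, 2), "approve": risk < 0.5}

print(query_api(120.0))
```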

Through all of this, Mastercard doesn’t share any data with banks or retailers, but it can still greenlight a transaction on an individual level. And all this data in aggregate form can also give Mastercard valuable insights into possible attacks; for example, an unexpected spike in transactions coming from a particular retailer might indicate that something untoward is happening. Criminals have been known to procure a bunch of stolen card numbers and then try to imitate retail stores by running transactions against the cards.

Mastercard’s AI can also start imposing certain restrictions — for example, limiting specific types of card at specific retail stores to small-value purchases of less than $50 — or otherwise block any kind of transaction that it considers questionable.
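The sketch below illustrates both behaviors just described, with invented thresholds and identifiers: flagging a retailer whose hourly transaction volume spikes far above its recent baseline, and limiting a restricted card-and-merchant pair to purchases under $50.

```python
# Hypothetical sketch of the two behaviors described above: flagging an
# unexpected spike in a retailer's transaction volume, and applying a
# small-purchase limit to a restricted card type. Thresholds are invented.
from statistics import mean, stdev

def volume_spike(hourly_counts: list[int], latest: int, z_threshold: float = 3.0) -> bool:
    """Flag the latest hour if it sits far above the retailer's recent baseline."""
    if len(hourly_counts) < 2:
        return False
    mu, sigma = mean(hourly_counts), stdev(hourly_counts)
    return sigma > 0 and (latest - mu) / sigma > z_threshold

def allow_transaction(card_type: str, merchant_id: str, amount: float,
                      restricted: set[tuple[str, str]]) -> bool:
    """Limit restricted card/merchant pairs to small-value purchases."""
    if (card_type, merchant_id) in restricted:
        return amount < 50.0
    return True

print(volume_spike([40, 38, 45, 42, 41], latest=300))  # True: possible card testing
print(allow_transaction("prepaid", "store-123", 120.0, {("prepaid", "store-123")}))  # False
```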

So it’s clear that there is quite a lot of automation at play here — and there really needs to be, given that it would be impossible for humans alone to analyze millions of transactions in real time. The ultimate goal is to help companies improve their security and combat fraud, while ensuring that legitimate customers and retailers are affected as little as possible, as well as adhering to strict data governance rules and regulations.


Repost: Original Source and Author Link

Categories
Computing

Google to Finally Address Chrome Windows Closing Frustrations

Chrome has a new feature in the works that lets you reload all your tabs in an instant after you accidentally close your Chrome window. 

You’ve likely experienced the frustration of accidentally closing your entire Chrome window when you only wanted to minimize it. It then takes a long time to reopen the window and wait for all the tabs to load. Connection problems can make this worse, as can particularly content-heavy webpages. Fortunately, Google may soon introduce a Chrome feature that resolves the issue.

Google is working on a feature that should significantly decrease the time it takes for tabs to reload after they’ve accidentally been closed. As first noticed by Android Police, three new commits have been spotted in the Chromium Gerrit that work together to get your tabs up and running within milliseconds. Reportedly, resuming work in your Chrome window will become so fast that it will feel as if you never closed the window in the first place.

The requirement here is to reopen the window within 15 seconds of closing it. As long as you do so, Chrome will retrieve the lost data from its cache. The code that enables this feature works in much the same way as Chrome’s back/forward cache. BFCache is Google’s way of loading a webpage instantly when a user clicks the back or forward button in the browser.

With the new feature, when you close the window, Chrome will no longer erase the browser’s data from its cache. Instead, it will instantly pull it all back up when you reopen the window within the given time frame. The process should happen all but instantaneously.
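As an illustration of that behavior only (this is not Chromium code), the sketch below keeps a closed window’s tab state in memory and restores it only if the window is reopened within the 15-second retention window.

```python
# Illustrative sketch (not Chromium's implementation) of the behavior described
# above: closed-window state is kept in memory and restored only if the window
# is reopened within a short retention window, here 15 seconds.
import time

RETENTION_SECONDS = 15

class ClosedWindowCache:
    def __init__(self):
        self._entries = {}  # window_id -> (closed_at, tab_state)

    def on_window_closed(self, window_id: str, tab_state: list[str]) -> None:
        self._entries[window_id] = (time.monotonic(), tab_state)

    def try_restore(self, window_id: str):
        entry = self._entries.pop(window_id, None)
        if entry is None:
            return None
        closed_at, tab_state = entry
        if time.monotonic() - closed_at > RETENTION_SECONDS:
            return None  # too late: the cached state would have been evicted
        return tab_state

cache = ClosedWindowCache()
cache.on_window_closed("w1", ["https://example.com", "https://news.example.org"])
print(cache.try_restore("w1"))  # restored instantly if reopened within 15 seconds
```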

How Chrome will manage to keep the data of closed tabs in its cache under memory pressure is still an open question. The original report suggests that Chrome might reload some of the tabs instantly instead of all of them at once.

There isn’t a Chrome flag in the Canary channel yet, which means the update is in the works but isn’t ready to be tested at the moment.

Repost: Original Source and Author Link

Categories
AI

AI legislation must address bias in algorithmic decision-making systems


In early June, border officials “quietly deployed” the mobile app CBP One at the U.S.-Mexico border to “streamline the processing” of asylum seekers. While the app will reduce manual data entry and speed up the process, it also relies on controversial facial recognition technologies and stores sensitive information on asylum seekers prior to their entry to the U.S. The issue here is not the use of artificial intelligence per se, but what it means in relation to the Biden administration’s pre-election promise of civil rights in technology, including AI bias and data privacy.

When the Democrats took control of both House and Senate in January, onlookers were optimistic that there was an appetite for a federal privacy bill and legislation to stem bias in algorithmic decision-making systems. This is long overdue, said Ben Winters, Equal Justice Works Fellow of the Electronic Privacy Information Center (EPIC), who works on matters related to AI and the criminal justice system. “The current state of AI legislation in the U.S. is disappointing, [with] a majority of AI-related legislation focused almost solely on investment, research, and maintaining competitiveness with other countries, primarily China,” Winters said.

Legislation moves forward

But there is some promising legislation waiting in the wings. The Algorithmic Justice and Online Platform Transparency bill, introduced by Sen. Edward Markey and Rep. Doris Matsui in May, clamps down on harmful algorithms, encourages transparency of websites’ content amplification and moderation practices, and proposes a cross-government investigation into discriminatory algorithmic processes throughout the economy.

Local bans on facial recognition are also picking up steam across the U.S. So far this year, bills or resolutions related to AI have been introduced in at least 16 states. They include California and Washington (accountability from automated decision-making apps); Massachusetts (data privacy and transparency in AI use in government); Missouri and Nevada (technology task force); and New Jersey (prohibiting “certain discrimination” by automated decision-making tech). Most of these bills are still pending, though some have already failed, such as Maryland’s Algorithmic Decision Systems: Procurement and Discriminatory Acts.

The Wyden Bill from 2019 and more recent proposals, such as the one from Markey and Matsui, provide much-needed direction, said Patrick Lin, director of the Ethics + Emerging Sciences Group at California Polytechnic State University. “Companies are looking to the federal government for guidance and standards-setting,” Lin said. “Likewise, AI laws can protect technology developers in the new and tricky cases of liability that will inevitably arise.”

Transparency is still a huge challenge in AI, Lin added: “They’re black boxes that seem to work OK even if we don’t know how … but when they fail, they can fail spectacularly, and real human lives could be at stake.”

Compliance standards and policies expand

Though the Wyden Bill is a good starting point to give the Federal Trade Commission broader authority, requiring impact assessments that include considerations about data sources, bias, fairness, privacy, and more, it would help to expand compliance standards and policies, said Winters. “The main benefit to [industry] would be some clarity about what their obligations are and what resources they need to devote to complying with appropriate regulations,” he said. But there are drawbacks too, especially for companies that rely on fundamentally flawed or discriminatory data, as “it would be hard to accurately comply without endangering their business or inviting regulatory intervention,” Winters added.

Another drawback, Lin said, is that even if established players support a law to prevent AI bias, it isn’t clear what bias looks like in terms of machine learning. “It’s not just about treating people differently because of their race, gender, age, or whatever, even if these are legally protected categories,” Lin said. “Imagine if I were casting for a movie about Martin Luther King, Jr. I would reject every actor who is a teenage Asian girl, even though I’m rejecting them precisely because of age, ethnicity, and gender.” Algorithms, however, don’t understand context.

The EU’s General Data Protection Regulation (GDPR) is a good example to emulate, even though it’s aimed not at AI specifically but at underlying data practices. “GDPR was fiercely resisted at first … but it’s now generally regarded as a very beneficial regulation for individual, business, and societal interests,” Lin said. “There is also the coercive effect of other countries signing an international law, making a country think twice or three times before it acts against the treaty and elicits international condemnation. … Even if the US is too laissez-faire in its general approach to embrace guidelines [like the EU’s], they still will want to consider regulations in other major markets.”


Repost: Original Source and Author Link

Categories
AI

AI Weekly: NIST proposes ways to identify and address AI bias

The National Institute of Standards and Technology (NIST), the U.S. agency responsible for developing technical metrics to promote “innovation and industrial competitiveness,” this week published a document outlining feedback and recommendations for mitigating the risk of bias in AI. The paper, about which NIST is accepting comments until August, proposes an approach for identifying and managing “pernicious” biases that can damage public trust in AI.

As NIST scientist Reva Schwartz, who coauthored the paper, points out, AI is transformative in its ability to make sense of data more quickly than humans. But as AI pervades the world, it’s becoming clear that its predictions can be affected by algorithmic and data biases. Making matters worse, some AI systems are built to model complex concepts that can’t be directly measured by data in the first place. For example, hiring algorithms use proxies for the concepts they attempt to capture, and some of those proxies, like “area of residence” or “education level,” are dangerously imprecise.

The effects are often catastrophic. Biases in AI have yielded wrongful arrests, racist recidivism scores, sexist recruitment, erroneous high school grades, offensive and exclusionary language generators, and underperforming speech recognition systems, to name a few injustices. Unsurprisingly, trust in AI systems is eroding. According to a survey conducted by KPMG across five countries — the U.S., the U.K., Germany, Canada, and Australia — over a third of the general public is unwilling to trust AI systems in general.

Proposed framework

The NIST document lays out a framework to spot and address AI biases at different points in a system’s lifecycle, from conception, iteration, and debugging to release. It starts at the pre-design or ideation stage before moving onto design and development and, finally, deployment.

At the pre-design phase, since many of the downstream processes hinge on decisions made here, there’s a lot of pressure to “get things right,” the NIST coauthors note. Central to these decisions is who makes them and which people or teams have the most power or control over them, which can reflect limited points of view, affect later stages and decisions, and lead to biased outcomes.

For example, it’s an obvious risk to build predictive models for scenarios already known to be discriminatory, like hiring. Yet developers often don’t address the possibility of inflated expectations related to AI. Indeed, current assumptions in development often revolve around the idea of technological solutionism, the perception that technology will lead to only positive solutions.

The design and development phases present other, related sets of challenges. Here, data scientists are often singularly focused on performance and optimization, which can be sources of bias in their own right. For instance, modelers will almost always select the most accurate machine learning models. But not taking context into consideration can lead to biased results for certain populations, as can the use of aggregated data about groups to make predictions about individual behavior. This latter type of bias, known as an “ecological fallacy,” unintentionally weights certain factors such that societal inequities are exacerbated.
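A toy example, with invented numbers, shows how the ecological fallacy plays out: scoring individuals from their group’s aggregate statistics bakes group-level disparities into individual predictions.

```python
# Toy illustration of the ecological fallacy: predicting an individual's
# outcome from their group's average. All numbers are invented.

group_average_income = {"neighborhood_a": 85_000, "neighborhood_b": 32_000}

def naive_credit_score(neighborhood: str) -> float:
    """Scores a person purely from aggregate data about their group."""
    return min(group_average_income[neighborhood] / 100_000, 1.0)

# Two individuals with identical personal finances receive different scores
# solely because of where they live, which is how aggregate features can
# encode and amplify existing inequities.
print(naive_credit_score("neighborhood_a"))  # 0.85
print(naive_credit_score("neighborhood_b"))  # 0.32
```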

The ecological fallacy is widespread in health care modeling, where much of the data used to train algorithms for diagnosing and treating diseases has been shown to perpetuate inequalities. Recently, a team of U.K. scientists found that almost all eye disease datasets come from patients in North America, Europe, and China, meaning eye disease-diagnosing algorithms are less certain to work well for racial groups from underrepresented countries. In another study, Stanford University researchers claimed that most of the U.S. data for studies involving medical uses of AI are sourced from New York, California, and Massachusetts.

When AI systems reach the deployment phase — i.e., where people start interacting with them — poor decisions in the earlier phases start to have an impact, typically unbeknownst to the affected people. For example, by not designing to compensate for activity biases, algorithmic models may be built on data only from the most active users. The NIST coauthors peg the problem on the fact that groups who invent the algorithms are unlikely to be aware — sometimes willfully — of all the potentially problematic ways they’ll be repurposed. Beyond this, there are individual differences in how people interpret AI models’ predictions, which could cause the “offloading” of decisions to coarse, imprecise automated tools.

This is particularly evident in the language domain, where model behavior can’t be reduced to universal standards because “desirable” behavior differs by application and social context. A study by researchers at the University of California, Berkeley, and the University of Washington illustrates the point, showing that language models deployed into production might struggle to understand aspects of minority languages and dialects. This could force people using the models to switch to “white-aligned English” to ensure that the models work better for them, for instance, which could discourage minority speakers from engaging with the models to begin with.

Tackling bias in AI

What’s to be done about the pitfalls? The NIST coauthors recommend pinpointing biases early in the AI development process by maintaining “diversity” along the social lines where bias is a concern, including race, gender, and age. While they acknowledge that identifying impacts may take time and require the involvement of end users, practitioners, subject matter experts, and professionals from law and the social sciences, the coauthors say that these stakeholders can bring experience to bear on the challenge of considering all possible outcomes.

The suggestions are aligned with a paper published last June by a group of researchers at Microsoft. It advocated for a closer examination and exploration of the relationships between language, power, and prejudice in their work, concluding that the machine learning research field generally lacks clear descriptions of bias and fails to explain how, why, and to whom that bias is harmful.

“Technology or datasets that seem non-problematic to one group may be deemed disastrous by others. The manner in which different user groups can game certain applications or tools may also not be so obvious to the teams charged with bringing an AI-based technology to market,” the NIST paper reads. “These kinds of impacts can sometimes be identified in early testing stages, but are usually very specific to the contextual end-use and will change over time.”

Beyond this, the coauthors advocate for “cultural effective challenge,” a practice that seeks to create an environment where developers can question steps in engineering to help root out biases. Requiring AI practitioners to defend their techniques, the coauthors posit, can incentivize new ways of thinking and help create change in approaches by organizations and industries.

Many organizations fall short of the mark. After a 2019 research paper demonstrated that commercially available facial analysis tools fail to work for women with dark skin, Amazon Web Services executives attempted to discredit study coauthors Joy Buolamwini and Deb Raji in multiple blog posts. More recently, Google fired leading AI researcher Timnit Gebru from her position on an AI ethics team in what she claims was retaliation for sending colleagues an email critical of the company’s managerial practices.

But others, particularly in academia, have taken preliminary steps. For instance, a new program at Stanford — the Ethics and Society Review (ESR) — is requiring AI researchers to evaluate their proposals for any potential negative impact on society before being green-lighted for funding. Starting in 2020, Stanford ran the ESR across 41 proposals seeking Stanford HAI grant funding. The panel most commonly identified issues of harm to minority groups, inclusion of diverse stakeholders in the research plan, dual use, and representation in data. One research team that examined the use of ambient AI for in-home care for elderly adults wrote an ESR statement that considered privacy ethics in their research, outlining recommendations for the use of face blurring, body masking, and other methods to ensure participants were protected.

Finally, at the deployment phase, the coauthors make the case that monitoring and auditing are key ways to manage bias risks. There’s a limit to what this can accomplish — for example, it’s not clear whether “detoxification” methods can thoroughly debias language models of a certain size. However, techniques like counterfactual fairness, which uses causal methods to produce “fair” algorithms, can perhaps begin to bridge gaps between lab and real-world environments.
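As a much-simplified illustration of the check behind counterfactual fairness (the full method reasons through a causal model rather than merely flipping an attribute), the hypothetical sketch below measures how much a toy model’s output changes when only the protected attribute is altered.

```python
# Rough sketch of the intuition behind "counterfactual fairness": a decision
# should not change when only the protected attribute is flipped. The model,
# feature names, and numbers are hypothetical, and a real implementation would
# intervene through a causal model rather than editing the attribute directly.

def loan_model(features: dict) -> float:
    # A deliberately biased toy model: it peeks at the protected attribute.
    base = 0.5 + 0.3 * features["income_percentile"]
    return base - (0.2 if features["group"] == "b" else 0.0)

def counterfactual_gap(model, features: dict, attr: str, alt_value) -> float:
    """Compare the prediction with the prediction after flipping one attribute."""
    counterfactual = dict(features, **{attr: alt_value})
    return abs(model(features) - model(counterfactual))

applicant = {"income_percentile": 0.7, "group": "a"}
gap = counterfactual_gap(loan_model, applicant, "group", "b")
print(f"counterfactual gap: {gap:.2f}")  # 0.20 here; a fair model would give ~0.00
```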

Comments on NIST’s proposed approach can be submitted by August 5, 2021, by downloading and completing a template form and sending it to NIST’s dedicated email account. The coauthors say that they’ll use the responses to help shape the agenda of virtual events NIST will hold in coming months, a part of the agency’s broader effort to support the development of trustworthy and responsible AI.

“Managing the risk of bias in AI is a critical part of developing trustworthy AI systems, but the path to achieving this remains unclear. We want to engage the community in developing voluntary, consensus-based standards for managing AI bias and reducing the risk of harmful outcomes that it can cause,” Schwartz said in a statement. “An AI tool is often developed for one purpose, but then it gets used in other very different contexts. Many AI applications also have been insufficiently tested, or not tested at all in the context for which they are intended. All these factors can allow bias to go undetected … [Because] we know that bias is prevalent throughout the AI lifecycle … [not] knowing where your model is biased, or presuming that there is no bias, would be dangerous. Determining methods for identifying and managing it is a vital next step.”

Repost: Original Source and Author Link

Categories
AI

G2 expands software research categories to address data boom


Earlier this week, software marketplace G2 released results from its summer 2021 research project, which invites enterprises to compare aggregated software review scores and discover solutions. In total, the research includes a whopping 7,000 reports and market grids across 2,000 categories. The company also announced it had raised $157 million in funding.

In addition to traditional topics like analytics, customer service, and IT management, this quarter’s research includes 32 new categories. The company says the new categories are largely driven by travel and digital marketing software, including CMS tools, ecommerce tools, and event stream processing. Additional fast-growing verticals included for the first time are revenue operations, hybrid cloud storage, and conversational support.

“This is the largest number of new reports since our fall 2020 reports, which signals strong growth and reflects what we’re seeing in the market. Companies are investing in digital transformation and software tools that enable nimble growth in the post-pandemic age,” G2 market research manager Emily Malis told VentureBeat.

The reports evaluate software based on customer satisfaction and market presence, providing scores for each, as well as a “G2 score.” Based on these factors, the evaluated offerings are also compared to each other for a view of each category’s competitive landscape. Products described as “leaders,” for example, are rated highly by G2 users and show a substantial market presence. There are also “high-performing” products, which have high customer satisfaction but low market presence compared to competitors. “Contender” products, on the other hand, may have positive reviews but not enough total reviews to validate the ratings. G2 also considers “niche” products, which have low satisfaction and market presence scores.
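As a rough sketch of how such a grid sorts products (the single cutoff below is an invented simplification, not G2’s actual methodology), the two scores map onto the four quadrants roughly like this:

```python
# Hypothetical sketch of the grid logic described above: products are placed
# into Leader / High Performer / Contender / Niche quadrants based on their
# satisfaction and market-presence scores. The cutoff of 50 is invented.

def grid_quadrant(satisfaction: float, market_presence: float, cutoff: float = 50.0) -> str:
    if satisfaction >= cutoff and market_presence >= cutoff:
        return "Leader"
    if satisfaction >= cutoff:
        return "High Performer"
    if market_presence >= cutoff:
        return "Contender"
    return "Niche"

print(grid_quadrant(98, 99))  # Leader
print(grid_quadrant(90, 30))  # High Performer
```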

CMS tools

G2 said total visits to the CMS tools category on its platform increased 38% between June 2020 and June 2021, showing strong growth. Additionally, the number of unique visits increased 48%.

The research in this category shows WeTransfer is a clear leader, with more than four times as many reviews as its competitors. The company, which provides file-sharing tools, achieved near-perfect scores in both customer satisfaction and market presence — 98 and 99, respectively. G2’s report also lists Beyond Compare, Patreon, and TubeBuddy as leaders in the CMS tools category, though most of their scores were significantly lower than WeTransfer’s.

While several additional CMS tool providers scored well in customer satisfaction, they generally demonstrated low market presence. Overall, the findings indicate that WeTransfer is edging out competitors, even those with satisfied users.

Revenue operations software

In the revenue operations software report, the landscape looks much more distributed, with Clari, InsightSquared, Gong, and Aviso emerging as leaders. And while Clari ranks highest — as the only one with a G2 score above 90 — the others aren’t far behind. Tableau CRM, formerly Einstein Analytics, is the only company currently considered a contender. BoostUp.ai and Boost showed high market presence, while Gainsight, TopOPPS, and SalesDirector.ai came in as “niche.”

G2 told VentureBeat it considers software categories robust enough for a grid report when they have six or more products with more than 10 reviews on a platform. Additionally, the category as a whole must have at least 150 reviews. G2 just added revenue operations to its platform last month, so the fact that the category was already eligible for inclusion speaks to its growth.

“We believe this category will continue to expand as revenue operations software continues to grow in popularity,” said Malis, who added that the category’s boost stems from a need for better alignment with customer data across departments involved with revenue. Customer success, marketing, and sales teams have long operated in silos, but revenue software can combine customer data across various tech stacks into one unified platform and allow businesses to improve efficiency, drive revenue predictability, and achieve higher revenue growth.

G2 said monthly Google search volumes for the term “revenue operations” have increased over 500% since May of 2018, according to data from Ahrefs. What’s more, over 25% of the revenue operations vendors on G2 have secured additional funding rounds in the last two years.

AI and data

The pandemic “turbocharged” digital transformation, so it’s not surprising to see interest spiking in AI and data-related solutions.

“Established companies ended up scrambling for software to ensure business continuity during the past year, and SMB companies struggled to stay afloat. Many companies realized they had tons of unstructured data but no plan or strategy on how to use or manage it,” Malis said.

She added that G2 has seen an influx in buyer interest in new data-related categories and has recently added categories like data fabric software to meet that demand. Soon it will launch additional data-related categories, including DataOps platforms and data warehouses.

Along with the rise in interest, G2 has observed changes in the data marketplace, including an increase in mergers and acquisitions (M&A). Malis said since February 2021 G2 has addressed over 100 M&As, and the company is seeing an increase in consolidation in various software markets. Predictably, the tech giants of the world, including Amazon, Google, and Alibaba, are getting bigger, due to acquisitions and the rewards of the cloud boom. G2 has also seen established companies become more entrenched by launching new products or partnering with other vendors. For example, Mailchimp moved into the ecommerce space, and Celonis announced a partnership with IBM to sell its process mining software. But G2 says even small and medium-sized businesses are focusing on data more than ever.


Repost: Original Source and Author Link

Categories
Tech News

Peloton made a free feature subscription-only to address safety concerns

The sudden obsession with indoor fitness, especially during the pandemic, has catapulted Peloton’s name into mainstream media. Unfortunately, it stayed under the spotlight not because of its success but because of the accidents, and even a death, related to its super-expensive exercise equipment. After initially refusing to recall its products, Peloton eventually announced a voluntary recall of its Tread+ and Tread treadmills. Now it is going one step further by locking what used to be a free feature of the treadmills behind a monthly subscription, in a move that some have characterized as effectively bricking the equipment.

Peloton’s popularity isn’t actually based on mass appeal and ubiquity. On the contrary, Peloton is notorious for selling exercise treadmills and bikes that cost around $4,000. And that’s just the upfront price of the equipment, as Peloton also sells exercise programs, services, and add-ons through a subscription that costs at least $39 a month.

Not everyone wants to pay that monthly fee, and some owners have opted to use the Tread+ and Tread as plain treadmills by taking advantage of Peloton’s “Just Run” feature, a simple virtual button on the treadmill’s screen that allowed the equipment to be used as a regular treadmill. Now Peloton requires a four-digit passcode to unlock that functionality. The problem is that this Tread Lock feature is available only as part of the Peloton Membership subscription.

What this effectively means is that owners will have to pay that $39-a-month fee to be able to use the Tread+ at all, whether or not they follow Peloton’s exercise programs. Peloton says this is to prevent the unauthorized access and activation of the treadmills that resulted in those accidents. In other words, it is locking users out of their $4,000 purchase in order to address what may have been the products’ faulty design.

Reactions to this news are surprisingly split, with some chiding Peloton owners as cheap for not being willing to pay $39 a month after buying a $3,000 piece of equipment. There are, of course, also legal ramifications to Peloton’s decision, and the company claims it is working to restore free access to Just Run so that owners can use their treadmills to, well, just run.



Repost: Original Source and Author Link

Categories
Tech News

Chrome will no longer try to hide the full address of websites

As the maker of the world’s most-used web browser, Google has both a moral and perhaps even a legal obligation to protect the privacy and security of its users. Not all of its efforts have been welcomed without scrutiny, however, as shown by the Privacy Sandbox and FLoC, short for Federated Learning of Cohorts. Even before those, Google had been trying to fight off phishing scams by modifying what users see in Chrome’s address bar. It turns out that strategy wasn’t as effective as presumed, and Google is now backtracking on a position it strongly defended last year.

Many phishing scams rely on people’s tendency not to double-check things, be it the number that’s calling them or a website’s address. The latter can be even trickier when phishing sites use URLs that look or sound close to the original, use extra-long strings of text to deter inspection, or use other tricks to hide their true source. Google’s proposed solution was to hide those URLs altogether and show only the real domain name of the web page.

Last year, Google started an experiment in which it would hide all but the domain name of a site, in the hopes that this would help users more easily distinguish “google.com” from “gooogle.com”. It is a far tamer option compared to an even older proposal in which Chrome would show not URLs but only search terms. That, of course, presumed everyone uses the address bar to search directly on Google or other search engines.
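For illustration only (this is not Chrome’s implementation), the snippet below shows the basic idea of displaying just a URL’s hostname; a real browser would additionally consult the public-suffix list to determine the registrable domain.

```python
# Illustration (not Chrome's code) of the idea behind the experiment: strip a
# long, distracting URL down to its hostname so spoofed domains are easier to
# spot. Note that urlsplit exposes the full hostname, not the registrable
# domain a browser would compute from the public-suffix list.
from urllib.parse import urlsplit

def simplified_display(url: str) -> str:
    return urlsplit(url).hostname or url

print(simplified_display("https://accounts.google.com/signin?continue=https%3A%2F%2Fmail.google.com"))
# accounts.google.com
print(simplified_display("https://gooogle.com.example-login.net/secure/update"))
# gooogle.com.example-login.net
```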

Now Google is apparently ending this “simplified domain experiment,” which means it will not be landing on end users’ Chrome browsers. The company simply said that the strategy didn’t move relevant security metrics, which is probably another way of saying it wasn’t actually effective in combating spoof sites. There is arguably an even bigger risk that people won’t give a simplified URL a second look because it looks more legitimate by virtue of being simpler.

Beyond doubts about the effectiveness of the approach, Google also drew criticism for favoring its own apps and services with this strategy. In particular, it would have hidden the fact that pages were being served as Google AMP pages, driving more traffic to Google’s servers rather than to the actual sources of those sites.

Repost: Original Source and Author Link